linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 0/2] mm: using CMA for 1 GB hugepages allocation
@ 2020-04-07  1:04 Roman Gushchin
  2020-04-07  1:04 ` [PATCH v4 1/2] mm: cma: NUMA node interface Roman Gushchin
       [not found] ` <20200407010431.1286488-3-guro@fb.com>
  0 siblings, 2 replies; 7+ messages in thread
From: Roman Gushchin @ 2020-04-07  1:04 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Aslan Bakirov, Michal Hocko, linux-mm, kernel-team, linux-kernel,
	Rik van Riel, Mike Kravetz, Roman Gushchin

The patchset adds a hugetlb_cma boot option, which allows
reserving a CMA area that can later be used for 1 GB
hugepage allocations.
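For example (a hypothetical invocation; the size and hugepage count are
illustrative, and the option accepts the usual size suffixes):

```
# On the kernel command line: reserve 4 GB of CMA for gigantic pages
hugetlb_cma=4G

# After boot: allocate two 1 GB hugepages from the reserved area
echo 2 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
```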

This is v4 of the patchset. It contains a patch from Aslan,
which adds a useful function on the CMA side, and the previous
version of the hugetlb_cma patch (v3) with all subsequent cleanups
and fixes squashed, plus the following changes:
1) removed the hard-coded archs list from docs
2) added a warning printing on non-supported archs
3) hugetlb_lock is temporarily dropped in update_and_free_page()

I've retained Michal's and Mike's acks, because the changes are
not significant. Please let me know if there is something
wrong.

Thanks!


Aslan Bakirov (1):
  mm: cma: NUMA node interface

Roman Gushchin (1):
  mm: hugetlb: optionally allocate gigantic hugepages using cma

 .../admin-guide/kernel-parameters.txt         |   8 ++
 arch/arm64/mm/init.c                          |   6 +
 arch/x86/kernel/setup.c                       |   4 +
 include/linux/cma.h                           |  13 ++-
 include/linux/hugetlb.h                       |  12 ++
 include/linux/memblock.h                      |   3 +
 mm/cma.c                                      |  16 +--
 mm/hugetlb.c                                  | 109 ++++++++++++++++++
 mm/memblock.c                                 |   2 +-
 9 files changed, 163 insertions(+), 10 deletions(-)

-- 
2.25.1



* [PATCH v4 1/2] mm: cma: NUMA node interface
  2020-04-07  1:04 [PATCH v4 0/2] mm: using CMA for 1 GB hugepages allocation Roman Gushchin
@ 2020-04-07  1:04 ` Roman Gushchin
       [not found] ` <20200407010431.1286488-3-guro@fb.com>
  1 sibling, 0 replies; 7+ messages in thread
From: Roman Gushchin @ 2020-04-07  1:04 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Aslan Bakirov, Michal Hocko, linux-mm, kernel-team, linux-kernel,
	Rik van Riel, Mike Kravetz

From: Aslan Bakirov <aslan@fb.com>

I've noticed that there is no interface exposed by CMA which would let me
declare contiguous memory on a particular NUMA node.

This patchset adds the ability to try to allocate contiguous memory on a
specific node. It will fall back to other nodes if the specified one
doesn't work.

Implement a new method for declaring contiguous memory on a particular
node and keep cma_declare_contiguous() as a wrapper.

Signed-off-by: Aslan Bakirov <aslan@fb.com>
---
 include/linux/cma.h      | 13 +++++++++++--
 include/linux/memblock.h |  3 +++
 mm/cma.c                 | 16 +++++++++-------
 mm/memblock.c            |  2 +-
 4 files changed, 24 insertions(+), 10 deletions(-)

diff --git a/include/linux/cma.h b/include/linux/cma.h
index 190184b5ff32..eae834c2162f 100644
--- a/include/linux/cma.h
+++ b/include/linux/cma.h
@@ -24,10 +24,19 @@ extern phys_addr_t cma_get_base(const struct cma *cma);
 extern unsigned long cma_get_size(const struct cma *cma);
 extern const char *cma_get_name(const struct cma *cma);
 
-extern int __init cma_declare_contiguous(phys_addr_t base,
+extern int __init cma_declare_contiguous_nid(phys_addr_t base,
 			phys_addr_t size, phys_addr_t limit,
 			phys_addr_t alignment, unsigned int order_per_bit,
-			bool fixed, const char *name, struct cma **res_cma);
+			bool fixed, const char *name, struct cma **res_cma,
+			int nid);
+static inline int __init cma_declare_contiguous(phys_addr_t base,
+			phys_addr_t size, phys_addr_t limit,
+			phys_addr_t alignment, unsigned int order_per_bit,
+			bool fixed, const char *name, struct cma **res_cma)
+{
+	return cma_declare_contiguous_nid(base, size, limit, alignment,
+			order_per_bit, fixed, name, res_cma, NUMA_NO_NODE);
+}
 extern int cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 					unsigned int order_per_bit,
 					const char *name,
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 079d17d96410..6bc37a731d27 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -348,6 +348,9 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
 
 phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
 				      phys_addr_t start, phys_addr_t end);
+phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
+				      phys_addr_t align, phys_addr_t start,
+				      phys_addr_t end, int nid, bool exact_nid);
 phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
 
 static inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
diff --git a/mm/cma.c b/mm/cma.c
index be55d1988c67..0463ad2ce06b 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -220,7 +220,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
 }
 
 /**
- * cma_declare_contiguous() - reserve custom contiguous area
+ * cma_declare_contiguous_nid() - reserve custom contiguous area
  * @base: Base address of the reserved area optional, use 0 for any
  * @size: Size of the reserved area (in bytes),
  * @limit: End address of the reserved memory (optional, 0 for any).
@@ -229,6 +229,7 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
  * @fixed: hint about where to place the reserved area
  * @name: The name of the area. See function cma_init_reserved_mem()
  * @res_cma: Pointer to store the created cma region.
+ * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
  *
  * This function reserves memory from early allocator. It should be
  * called by arch specific code once the early allocator (memblock or bootmem)
@@ -238,10 +239,11 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
  * If @fixed is true, reserve contiguous area at exactly @base.  If false,
  * reserve in range from @base to @limit.
  */
-int __init cma_declare_contiguous(phys_addr_t base,
+int __init cma_declare_contiguous_nid(phys_addr_t base,
 			phys_addr_t size, phys_addr_t limit,
 			phys_addr_t alignment, unsigned int order_per_bit,
-			bool fixed, const char *name, struct cma **res_cma)
+			bool fixed, const char *name, struct cma **res_cma,
+			int nid)
 {
 	phys_addr_t memblock_end = memblock_end_of_DRAM();
 	phys_addr_t highmem_start;
@@ -336,14 +338,14 @@ int __init cma_declare_contiguous(phys_addr_t base,
 		 * memory in case of failure.
 		 */
 		if (base < highmem_start && limit > highmem_start) {
-			addr = memblock_phys_alloc_range(size, alignment,
-							 highmem_start, limit);
+			addr = memblock_alloc_range_nid(size, alignment,
+					highmem_start, limit, nid, false);
 			limit = highmem_start;
 		}
 
 		if (!addr) {
-			addr = memblock_phys_alloc_range(size, alignment, base,
-							 limit);
+			addr = memblock_alloc_range_nid(size, alignment, base,
+					limit, nid, false);
 			if (!addr) {
 				ret = -ENOMEM;
 				goto err;
diff --git a/mm/memblock.c b/mm/memblock.c
index 4d06bbaded0f..c79ba6f9920c 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1349,7 +1349,7 @@ __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
  * Return:
  * Physical address of allocated memory block on success, %0 on failure.
  */
-static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
+phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 					phys_addr_t align, phys_addr_t start,
 					phys_addr_t end, int nid,
 					bool exact_nid)
-- 
2.25.1



* Re: [PATCH v4 2/2] mm: hugetlb: optionally allocate gigantic hugepages using cma
       [not found] ` <20200407010431.1286488-3-guro@fb.com>
@ 2020-04-07  7:03   ` Michal Hocko
  2020-04-07 15:25     ` Roman Gushchin
  0 siblings, 1 reply; 7+ messages in thread
From: Michal Hocko @ 2020-04-07  7:03 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Aslan Bakirov, linux-mm, kernel-team,
	linux-kernel, Rik van Riel, Mike Kravetz, Andreas Schaufler,
	Randy Dunlap, Joonsoo Kim

On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
[...]
My ack still applies but I have only noticed two minor things now.

[...]
> @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
>  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
>  	set_page_refcounted(page);
>  	if (hstate_is_gigantic(h)) {
> +		/*
> +		 * Temporarily drop the hugetlb_lock, because
> +		 * we might block in free_gigantic_page().
> +		 */
> +		spin_unlock(&hugetlb_lock);
>  		destroy_compound_gigantic_page(page, huge_page_order(h));
>  		free_gigantic_page(page, huge_page_order(h));
> +		spin_lock(&hugetlb_lock);

This is OK with the current code because existing paths do not have to
revalidate the state AFAICS but it is a bit subtle. I have checked the
cma_free path and it can only sleep on the cma->lock unless I am missing
something. This lock is only used for cma bitmap manipulation and the
mutex sounds like an overkill there and it can be replaced by a
spinlock.

Sounds like a follow up patch material to me.

[...]
> +	for_each_node_state(nid, N_ONLINE) {
> +		int res;
> +
> +		size = min(per_node, hugetlb_cma_size - reserved);
> +		size = round_up(size, PAGE_SIZE << order);
> +
> +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> +						 0, false, "hugetlb",
> +						 &hugetlb_cma[nid], nid);
> +		if (res) {
> +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> +				res, nid);
> +			break;

Do we really have to break out after a single node failure? There might
be other nodes that can satisfy the allocation. You are not cleaning up
previous allocations so there is a partial state and then it would make
more sense to me to simply s@break@continue@ here.

> +		}
> +
> +		reserved += size;
> +		pr_info("hugetlb_cma: reserved %lu MiB on node %d\n",
> +			size / SZ_1M, nid);
> +
> +		if (reserved >= hugetlb_cma_size)
> +			break;
> +	}
> +}
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v4 2/2] mm: hugetlb: optionally allocate gigantic hugepages using cma
  2020-04-07  7:03   ` [PATCH v4 2/2] mm: hugetlb: optionally allocate gigantic hugepages using cma Michal Hocko
@ 2020-04-07 15:25     ` Roman Gushchin
  2020-04-07 15:40       ` Michal Hocko
  0 siblings, 1 reply; 7+ messages in thread
From: Roman Gushchin @ 2020-04-07 15:25 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Aslan Bakirov, linux-mm, kernel-team,
	linux-kernel, Rik van Riel, Mike Kravetz, Andreas Schaufler,
	Randy Dunlap, Joonsoo Kim

On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> [...]
> My ack still applies but I have only noticed two minor things now.

Hello, Michal!

> 
> [...]
> > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> >  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> >  	set_page_refcounted(page);
> >  	if (hstate_is_gigantic(h)) {
> > +		/*
> > +		 * Temporarily drop the hugetlb_lock, because
> > +		 * we might block in free_gigantic_page().
> > +		 */
> > +		spin_unlock(&hugetlb_lock);
> >  		destroy_compound_gigantic_page(page, huge_page_order(h));
> >  		free_gigantic_page(page, huge_page_order(h));
> > +		spin_lock(&hugetlb_lock);
> 
> This is OK with the current code because existing paths do not have to
> revalidate the state AFAICS but it is a bit subtle. I have checked the
> cma_free path and it can only sleep on the cma->lock unless I am missing
> something. This lock is only used for cma bitmap manipulation and the
> mutex sounds like an overkill there and it can be replaced by a
> spinlock.
> 
> Sounds like a follow up patch material to me.

I had the same idea and even posted a patch:
https://lore.kernel.org/linux-mm/20200403174559.GC220160@carbon.lan/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0

However, Joonsoo pointed out that in some cases the bitmap operation might
be too long for a spinlock.

Alternatively, we can implement an asynchronous delayed release on the cma side,
I just don't know if it's worth it (I mean adding code/complexity).

> 
> [...]
> > +	for_each_node_state(nid, N_ONLINE) {
> > +		int res;
> > +
> > +		size = min(per_node, hugetlb_cma_size - reserved);
> > +		size = round_up(size, PAGE_SIZE << order);
> > +
> > +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > +						 0, false, "hugetlb",
> > +						 &hugetlb_cma[nid], nid);
> > +		if (res) {
> > +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > +				res, nid);
> > +			break;
> 
> Do we really have to break out after a single node failure? There might
> be other nodes that can satisfy the allocation. You are not cleaning up
> previous allocations so there is a partial state and then it would make
> more sense to me to simply s@break@continue@ here.

But then we should iterate over all nodes in alloc_gigantic_page()?
Currently if hugetlb_cma[0] is NULL it will immediately switch back
to the fallback approach.

Actually, I don't know how realistic use cases with complex node
configurations are, where hugetlb_cma areas can be allocated only on some
of the nodes. I'd leave it until we have a real-world example.
Then we'll probably want something more sophisticated anyway...

I have no strong opinion here, so if you really think we should s/break/continue,
I'm fine with it too.

Thanks!


* Re: [PATCH v4 2/2] mm: hugetlb: optionally allocate gigantic hugepages using cma
  2020-04-07 15:25     ` Roman Gushchin
@ 2020-04-07 15:40       ` Michal Hocko
  2020-04-07 16:06         ` Roman Gushchin
  0 siblings, 1 reply; 7+ messages in thread
From: Michal Hocko @ 2020-04-07 15:40 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Aslan Bakirov, linux-mm, kernel-team,
	linux-kernel, Rik van Riel, Mike Kravetz, Andreas Schaufler,
	Randy Dunlap, Joonsoo Kim

On Tue 07-04-20 08:25:44, Roman Gushchin wrote:
> On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> > On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> > [...]
> > My ack still applies but I have only noticed two minor things now.
> 
> Hello, Michal!
> 
> > 
> > [...]
> > > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> > >  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > >  	set_page_refcounted(page);
> > >  	if (hstate_is_gigantic(h)) {
> > > +		/*
> > > +		 * Temporarily drop the hugetlb_lock, because
> > > +		 * we might block in free_gigantic_page().
> > > +		 */
> > > +		spin_unlock(&hugetlb_lock);
> > >  		destroy_compound_gigantic_page(page, huge_page_order(h));
> > >  		free_gigantic_page(page, huge_page_order(h));
> > > +		spin_lock(&hugetlb_lock);
> > 
> > This is OK with the current code because existing paths do not have to
> > revalidate the state AFAICS but it is a bit subtle. I have checked the
> > cma_free path and it can only sleep on the cma->lock unless I am missing
> > something. This lock is only used for cma bitmap manipulation and the
> > mutex sounds like an overkill there and it can be replaced by a
> > spinlock.
> > 
> > Sounds like a follow up patch material to me.
> 
> I had the same idea and even posted a patch:
> https://lore.kernel.org/linux-mm/20200403174559.GC220160@carbon.lan/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0
> 
> However, Joonsoo pointed out that in some cases the bitmap operation might
> be too long for a spinlock.

I was not aware of this email thread. I will have a look. Thanks!
 
> Alternatively, we can implement an asynchronous delayed release on the cma side,
> I just don't know if it's worth it (I mean adding code/complexity).
> 
> > 
> > [...]
> > > +	for_each_node_state(nid, N_ONLINE) {
> > > +		int res;
> > > +
> > > +		size = min(per_node, hugetlb_cma_size - reserved);
> > > +		size = round_up(size, PAGE_SIZE << order);
> > > +
> > > +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > > +						 0, false, "hugetlb",
> > > +						 &hugetlb_cma[nid], nid);
> > > +		if (res) {
> > > +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > > +				res, nid);
> > > +			break;
> > 
> > Do we really have to break out after a single node failure? There might
> > be other nodes that can satisfy the allocation. You are not cleaning up
> > previous allocations so there is a partial state and then it would make
> > more sense to me to simply s@break@continue@ here.
> 
> But then we should iterate over all nodes in alloc_gigantic_page()?

OK, I've managed to miss the early break on hugetlb_cma[node] == NULL
there as well. I do not think this makes much sense. Just consider a
setup with one node much smaller than others (not unseen on LPAR
configurations) and then you are potentially using CMA areas on some
nodes without a good reason.

> Currently if hugetlb_cma[0] is NULL it will immediately switch back
> to the fallback approach.
> 
> Actually, I don't know how realistic use cases with complex node
> configurations are, where hugetlb_cma areas can be allocated only on some
> of the nodes. I'd leave it until we have a real-world example.
> Then we'll probably want something more sophisticated anyway...

I do not follow. Isn't the s@break@continue@ in this and
alloc_gigantic_page path enough to make it work?
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v4 2/2] mm: hugetlb: optionally allocate gigantic hugepages using cma
  2020-04-07 15:40       ` Michal Hocko
@ 2020-04-07 16:06         ` Roman Gushchin
  2020-04-07 16:23           ` Michal Hocko
  0 siblings, 1 reply; 7+ messages in thread
From: Roman Gushchin @ 2020-04-07 16:06 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Aslan Bakirov, linux-mm, kernel-team,
	linux-kernel, Rik van Riel, Mike Kravetz, Andreas Schaufler,
	Randy Dunlap, Joonsoo Kim

On Tue, Apr 07, 2020 at 05:40:05PM +0200, Michal Hocko wrote:
> On Tue 07-04-20 08:25:44, Roman Gushchin wrote:
> > On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> > > On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> > > [...]
> > > My ack still applies but I have only noticed two minor things now.
> > 
> > Hello, Michal!
> > 
> > > 
> > > [...]
> > > > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> > > >  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > > >  	set_page_refcounted(page);
> > > >  	if (hstate_is_gigantic(h)) {
> > > > +		/*
> > > > +		 * Temporarily drop the hugetlb_lock, because
> > > > +		 * we might block in free_gigantic_page().
> > > > +		 */
> > > > +		spin_unlock(&hugetlb_lock);
> > > >  		destroy_compound_gigantic_page(page, huge_page_order(h));
> > > >  		free_gigantic_page(page, huge_page_order(h));
> > > > +		spin_lock(&hugetlb_lock);
> > > 
> > > This is OK with the current code because existing paths do not have to
> > > revalidate the state AFAICS but it is a bit subtle. I have checked the
> > > cma_free path and it can only sleep on the cma->lock unless I am missing
> > > something. This lock is only used for cma bitmap manipulation and the
> > > mutex sounds like an overkill there and it can be replaced by a
> > > spinlock.
> > > 
> > > Sounds like a follow up patch material to me.
> > 
> > I had the same idea and even posted a patch:
> > https://lore.kernel.org/linux-mm/20200403174559.GC220160@carbon.lan/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0
> > 
> > However, Joonsoo pointed out that in some cases the bitmap operation might
> > be too long for a spinlock.
> 
> I was not aware of this email thread. I will have a look. Thanks!
>  
> > Alternatively, we can implement an asynchronous delayed release on the cma side,
> > I just don't know if it's worth it (I mean adding code/complexity).
> > 
> > > 
> > > [...]
> > > > +	for_each_node_state(nid, N_ONLINE) {
> > > > +		int res;
> > > > +
> > > > +		size = min(per_node, hugetlb_cma_size - reserved);
> > > > +		size = round_up(size, PAGE_SIZE << order);
> > > > +
> > > > +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > > > +						 0, false, "hugetlb",
> > > > +						 &hugetlb_cma[nid], nid);
> > > > +		if (res) {
> > > > +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > > > +				res, nid);
> > > > +			break;
> > > 
> > > Do we really have to break out after a single node failure? There might
> > > be other nodes that can satisfy the allocation. You are not cleaning up
> > > previous allocations so there is a partial state and then it would make
> > > more sense to me to simply s@break@continue@ here.
> > 
> > But then we should iterate over all nodes in alloc_gigantic_page()?
> 
> OK, I've managed to miss the early break on hugetlb_cma[node] == NULL
> there as well. I do not think this makes much sense. Just consider a
> setup with one node much smaller than others (not unseen on LPAR
> configurations) and then you are potentially using CMA areas on some
> nodes without a good reason.
> 
> > Currently if hugetlb_cma[0] is NULL it will immediately switch back
> > to the fallback approach.
> > 
> > Actually, I don't know how realistic use cases with complex node
> > configurations are, where hugetlb_cma areas can be allocated only on some
> > of the nodes. I'd leave it until we have a real-world example.
> > Then we'll probably want something more sophisticated anyway...
> 
> I do not follow. Isn't the s@break@continue@ in this and
> alloc_gigantic_page path enough to make it work?

Well, of course it will. But for a highly asymmetrical configuration
there is probably not much sense in trying to allocate CMA areas of a
similar size on each node and relying on allocation failures on some of them.

But, again, if you strictly prefer s/break/continue, I can send a v5.
Just let me know.

Thanks!


* Re: [PATCH v4 2/2] mm: hugetlb: optionally allocate gigantic hugepages using cma
  2020-04-07 16:06         ` Roman Gushchin
@ 2020-04-07 16:23           ` Michal Hocko
  0 siblings, 0 replies; 7+ messages in thread
From: Michal Hocko @ 2020-04-07 16:23 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Aslan Bakirov, linux-mm, kernel-team,
	linux-kernel, Rik van Riel, Mike Kravetz, Andreas Schaufler,
	Randy Dunlap, Joonsoo Kim

On Tue 07-04-20 09:06:40, Roman Gushchin wrote:
> On Tue, Apr 07, 2020 at 05:40:05PM +0200, Michal Hocko wrote:
> > On Tue 07-04-20 08:25:44, Roman Gushchin wrote:
> > > On Tue, Apr 07, 2020 at 09:03:31AM +0200, Michal Hocko wrote:
> > > > On Mon 06-04-20 18:04:31, Roman Gushchin wrote:
> > > > [...]
> > > > My ack still applies but I have only noticed two minor things now.
> > > 
> > > Hello, Michal!
> > > 
> > > > 
> > > > [...]
> > > > > @@ -1281,8 +1308,14 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> > > > >  	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > > > >  	set_page_refcounted(page);
> > > > >  	if (hstate_is_gigantic(h)) {
> > > > > +		/*
> > > > > +		 * Temporarily drop the hugetlb_lock, because
> > > > > +		 * we might block in free_gigantic_page().
> > > > > +		 */
> > > > > +		spin_unlock(&hugetlb_lock);
> > > > >  		destroy_compound_gigantic_page(page, huge_page_order(h));
> > > > >  		free_gigantic_page(page, huge_page_order(h));
> > > > > +		spin_lock(&hugetlb_lock);
> > > > 
> > > > This is OK with the current code because existing paths do not have to
> > > > revalidate the state AFAICS but it is a bit subtle. I have checked the
> > > > cma_free path and it can only sleep on the cma->lock unless I am missing
> > > > something. This lock is only used for cma bitmap manipulation and the
> > > > mutex sounds like an overkill there and it can be replaced by a
> > > > spinlock.
> > > > 
> > > > Sounds like a follow up patch material to me.
> > > 
> > > I had the same idea and even posted a patch:
> > > https://lore.kernel.org/linux-mm/20200403174559.GC220160@carbon.lan/T/#m87be98bdacda02cea3dd6759b48a28bd23f29ff0
> > > 
> > > However, Joonsoo pointed out that in some cases the bitmap operation might
> > > be too long for a spinlock.
> > 
> > I was not aware of this email thread. I will have a look. Thanks!
> >  
> > > Alternatively, we can implement an asynchronous delayed release on the cma side,
> > > I just don't know if it's worth it (I mean adding code/complexity).
> > > 
> > > > 
> > > > [...]
> > > > > +	for_each_node_state(nid, N_ONLINE) {
> > > > > +		int res;
> > > > > +
> > > > > +		size = min(per_node, hugetlb_cma_size - reserved);
> > > > > +		size = round_up(size, PAGE_SIZE << order);
> > > > > +
> > > > > +		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
> > > > > +						 0, false, "hugetlb",
> > > > > +						 &hugetlb_cma[nid], nid);
> > > > > +		if (res) {
> > > > > +			pr_warn("hugetlb_cma: reservation failed: err %d, node %d",
> > > > > +				res, nid);
> > > > > +			break;
> > > > 
> > > > Do we really have to break out after a single node failure? There might
> > > > be other nodes that can satisfy the allocation. You are not cleaning up
> > > > previous allocations so there is a partial state and then it would make
> > > > more sense to me to simply s@break@continue@ here.
> > > 
> > > But then we should iterate over all nodes in alloc_gigantic_page()?
> > 
> > OK, I've managed to miss the early break on hugetlb_cma[node] == NULL
> > there as well. I do not think this makes much sense. Just consider a
> > setup with one node much smaller than others (not unseen on LPAR
> > configurations) and then you are potentially using CMA areas on some
> > nodes without a good reason.
> > 
> > > Currently if hugetlb_cma[0] is NULL it will immediately switch back
> > > to the fallback approach.
> > > 
> > > Actually, I don't know how realistic use cases with complex node
> > > configurations are, where hugetlb_cma areas can be allocated only on some
> > > of the nodes. I'd leave it until we have a real-world example.
> > > Then we'll probably want something more sophisticated anyway...
> > 
> > I do not follow. Isn't the s@break@continue@ in this and
> > alloc_gigantic_page path enough to make it work?
> 
> Well, of course it will. But for a highly asymmetrical configuration
> there is probably not much sense in trying to allocate CMA areas of a
> similar size on each node and relying on allocation failures on some of them.
> 
> But, again, if you strictly prefer s/break/continue, I can send a v5.
> Just let me know.

There is no real reason to have such a restriction. I can follow up with
a separate patch if you want me to, but it should be "fixed".

Thanks

-- 
Michal Hocko
SUSE Labs

