* [PATCH] mm/vmalloc: Fallback to a single page allocator
@ 2021-05-21 11:10 Uladzislau Rezki (Sony)
  2021-05-21 11:43 ` Matthew Wilcox
  0 siblings, 1 reply; 6+ messages in thread
From: Uladzislau Rezki (Sony) @ 2021-05-21 11:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, LKML, Mel Gorman, Christoph Hellwig, Matthew Wilcox,
	Nicholas Piggin, Uladzislau Rezki, Hillf Danton, Michal Hocko,
	Oleksiy Avramchenko, Steven Rostedt

Currently for order-0 pages we use the bulk-page allocator to
get a set of pages. On the other hand, it can happen that not
all requested pages are allocated. In that case we should fall
back to the single-page allocator and try to get the missing
pages, because it is more permissive (direct reclaim, etc.).

Introduce a vm_area_alloc_pages() function that implements the
described logic.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 81 +++++++++++++++++++++++++++++++++-------------------
 1 file changed, 52 insertions(+), 29 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b2a0cbfa37c1..4d9c422124d3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2756,6 +2756,54 @@ void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
 EXPORT_SYMBOL_GPL(vmap_pfn);
 #endif /* CONFIG_VMAP_PFN */
 
+static inline unsigned int
+vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int page_order,
+	unsigned long nr_small_pages, struct page **pages)
+{
+	unsigned int nr_allocated = 0;
+
+	/*
+	 * For order-0 pages we make use of the bulk allocator.
+	 * If the page array ends up only partly populated, or
+	 * not populated at all, fall back to the single-page
+	 * allocator, which is more permissive.
+	 */
+	if (!page_order)
+		nr_allocated = alloc_pages_bulk_array_node(
+			gfp, nid, nr_small_pages, pages);
+	else
+		/*
+		 * Compound pages are required for remap_vmalloc_page
+		 * when using high-order pages.
+		 */
+		gfp |= __GFP_COMP;
+
+	/* High-order pages or fallback path if "bulk" fails. */
+	while (nr_allocated < nr_small_pages) {
+		struct page *page;
+		int i;
+
+		page = alloc_pages_node(nid, gfp, page_order);
+		if (unlikely(!page))
+			break;
+
+		/*
+		 * Careful, we allocate and map page_order pages, but
+		 * tracking is done per PAGE_SIZE page so as to keep the
+		 * vm_struct APIs independent of the physical/mapped size.
+		 */
+		for (i = 0; i < (1U << page_order); i++)
+			pages[nr_allocated + i] = page + i;
+
+		if (gfpflags_allow_blocking(gfp))
+			cond_resched();
+
+		nr_allocated += 1U << page_order;
+	}
+
+	return nr_allocated;
+}
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, unsigned int page_shift,
 				 int node)
@@ -2789,37 +2837,11 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		return NULL;
 	}
 
-	area->nr_pages = 0;
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
 	page_order = vm_area_page_order(area);
 
-	if (!page_order) {
-		area->nr_pages = alloc_pages_bulk_array_node(
-			gfp_mask, node, nr_small_pages, area->pages);
-	} else {
-		/*
-		 * Careful, we allocate and map page_order pages, but tracking is done
-		 * per PAGE_SIZE page so as to keep the vm_struct APIs independent of
-		 * the physical/mapped size.
-		 */
-		while (area->nr_pages < nr_small_pages) {
-			struct page *page;
-			int i;
-
-			/* Compound pages required for remap_vmalloc_page */
-			page = alloc_pages_node(node, gfp_mask | __GFP_COMP, page_order);
-			if (unlikely(!page))
-				break;
-
-			for (i = 0; i < (1U << page_order); i++)
-				area->pages[area->nr_pages + i] = page + i;
-
-			if (gfpflags_allow_blocking(gfp_mask))
-				cond_resched();
-
-			area->nr_pages += 1U << page_order;
-		}
-	}
+	area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
+		page_order, nr_small_pages, area->pages);
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
@@ -2835,7 +2857,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
-	if (vmap_pages_range(addr, addr + size, prot, area->pages, page_shift) < 0) {
+	if (vmap_pages_range(addr, addr + size, prot, area->pages,
+			page_shift) < 0) {
 		warn_alloc(gfp_mask, NULL,
 			   "vmalloc size %lu allocation failure: "
 			   "failed to map pages",
-- 
2.20.1
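
A subtle point in vm_area_alloc_pages() above: the while loop does
double duty. For high-order requests it is the only allocation path;
for order-0 requests it resumes wherever the bulk allocator stopped,
since nr_allocated already holds the bulk count. A toy walk-through
with invented numbers, condensed from the helper above:

	/* Ask for 8 order-0 pages; suppose bulk only manages 5. */
	nr_allocated = alloc_pages_bulk_array_node(gfp, nid, 8, pages);

	/*
	 * Loop entry with nr_allocated == 5: slots 5..7 are retried
	 * one page at a time; alloc_pages_node() may use direct
	 * reclaim and other slow paths the bulk allocator avoids.
	 */
	while (nr_allocated < 8) {
		struct page *page = alloc_pages_node(nid, gfp, 0);

		if (unlikely(!page))
			break;

		pages[nr_allocated++] = page;
	}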



* Re: [PATCH] mm/vmalloc: Fallback to a single page allocator
  2021-05-21 11:10 [PATCH] mm/vmalloc: Fallback to a single page allocator Uladzislau Rezki (Sony)
@ 2021-05-21 11:43 ` Matthew Wilcox
  2021-05-21 12:55   ` Uladzislau Rezki
  0 siblings, 1 reply; 6+ messages in thread
From: Matthew Wilcox @ 2021-05-21 11:43 UTC (permalink / raw)
  To: Uladzislau Rezki (Sony)
  Cc: Andrew Morton, linux-mm, LKML, Mel Gorman, Christoph Hellwig,
	Nicholas Piggin, Hillf Danton, Michal Hocko, Oleksiy Avramchenko,
	Steven Rostedt

On Fri, May 21, 2021 at 01:10:33PM +0200, Uladzislau Rezki (Sony) wrote:
> +static inline unsigned int
> +vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int page_order,
> +	unsigned long nr_small_pages, struct page **pages)

(at least) two tabs here, please, otherwise the argument list is at
the same indentation as the code which trips up my parser.  some people
like to match the opening bracket, but that always feels like more work
than it's worth.  fwiw, i'd format it like this:

static inline unsigned int vm_area_alloc_pages(gfp_t gfp, int nid,
		unsigned int order, unsigned long nr_pages, struct page **pages)
{
...

(yes, i renamed some of the variables there; overly long variable names
are painful)

The rest of the patch looks good.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>


* Re: [PATCH] mm/vmalloc: Fallback to a single page allocator
  2021-05-21 11:43 ` Matthew Wilcox
@ 2021-05-21 12:55   ` Uladzislau Rezki
  2021-05-21 13:07     ` Uladzislau Rezki
  0 siblings, 1 reply; 6+ messages in thread
From: Uladzislau Rezki @ 2021-05-21 12:55 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Uladzislau Rezki (Sony),
	Andrew Morton, linux-mm, LKML, Mel Gorman, Christoph Hellwig,
	Nicholas Piggin, Hillf Danton, Michal Hocko, Oleksiy Avramchenko,
	Steven Rostedt

> On Fri, May 21, 2021 at 01:10:33PM +0200, Uladzislau Rezki (Sony) wrote:
> > +static inline unsigned int
> > +vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int page_order,
> > +	unsigned long nr_small_pages, struct page **pages)
> 
> (at least) two tabs here, please, otherwise the argument list is at
> the same indentation as the code which trips up my parser.  some people
> like to match the opening bracket, but that always feels like more work
> than it's worth.  fwiw, i'd format it like this:
> 
> static inline unsigned int vm_area_alloc_pages(gfp_t gfp, int nid,
> 		unsigned int order, unsigned long nr_pages, struct page **pages)
> {
> ...
>
No problem. Will fix it.

> 
> (yes, i renamed some of the variables there; overly long variable names
> are painful)
> 
> The rest of the patch looks good.
> 
> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Thank you!

I will re-spin the patch and send a v2.

--
Vlad Rezki


* Re: [PATCH] mm/vmalloc: Fallback to a single page allocator
  2021-05-21 12:55   ` Uladzislau Rezki
@ 2021-05-21 13:07     ` Uladzislau Rezki
  2021-05-24 13:58       ` Christoph Hellwig
  0 siblings, 1 reply; 6+ messages in thread
From: Uladzislau Rezki @ 2021-05-21 13:07 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Matthew Wilcox, Andrew Morton, linux-mm, LKML, Mel Gorman,
	Christoph Hellwig, Nicholas Piggin, Hillf Danton, Michal Hocko,
	Oleksiy Avramchenko, Steven Rostedt

On Fri, May 21, 2021 at 02:55:09PM +0200, Uladzislau Rezki wrote:
> > On Fri, May 21, 2021 at 01:10:33PM +0200, Uladzislau Rezki (Sony) wrote:
> > > +static inline unsigned int
> > > +vm_area_alloc_pages(gfp_t gfp, int nid, unsigned int page_order,
> > > +	unsigned long nr_small_pages, struct page **pages)
> > 
> > (at least) two tabs here, please, otherwise the argument list is at
> > the same indentation as the code which trips up my parser.  some people
> > like to match the opening bracket, but that always feels like more work
> > than it's worth.  fwiw, i'd format it like this:
> > 
> > static inline unsigned int vm_area_alloc_pages(gfp_t gfp, int nid,
> > 		unsigned int order, unsigned long nr_pages, struct page **pages)
> > {
> > ...
> >
> No problem. Will fix it.
> 
> > 
> > (yes, i renamed some of the variables there; overly long variable names
> > are painful)
> > 
> > The rest of the patch looks good.
> > 
> > Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Thank you!
> 
> I will re-spin the patch and send a v2.
> 

From 6537bc97b5550f17b0813caf02ce0ec1865fa94e Mon Sep 17 00:00:00 2001
From: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
Date: Thu, 20 May 2021 14:13:23 +0200
Subject: [PATCH v2] mm/vmalloc: Fallback to a single page allocator

Currently for order-0 pages we use the bulk-page allocator to
get a set of pages. On the other hand, it can happen that not
all requested pages are allocated. In that case we should fall
back to the single-page allocator and try to get the missing
pages, because it is more permissive (direct reclaim, etc.).

Introduce a vm_area_alloc_pages() function that implements the
described logic.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
---
 mm/vmalloc.c | 81 +++++++++++++++++++++++++++++++++-------------------
 1 file changed, 52 insertions(+), 29 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b2a0cbfa37c1..7765af7b1e9c 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2756,6 +2756,54 @@ void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
 EXPORT_SYMBOL_GPL(vmap_pfn);
 #endif /* CONFIG_VMAP_PFN */
 
+static inline unsigned int
+vm_area_alloc_pages(gfp_t gfp, int nid,
+		unsigned int order, unsigned long nr_pages, struct page **pages)
+{
+	unsigned int nr_allocated = 0;
+
+	/*
+	 * For order-0 pages we make use of the bulk allocator.
+	 * If the page array ends up only partly populated, or
+	 * not populated at all, fall back to the single-page
+	 * allocator, which is more permissive.
+	 */
+	if (!order)
+		nr_allocated = alloc_pages_bulk_array_node(
+			gfp, nid, nr_pages, pages);
+	else
+		/*
+		 * Compound pages are required for remap_vmalloc_page
+		 * when using high-order pages.
+		 */
+		gfp |= __GFP_COMP;
+
+	/* High-order pages or fallback path if "bulk" fails. */
+	while (nr_allocated < nr_pages) {
+		struct page *page;
+		int i;
+
+		page = alloc_pages_node(nid, gfp, order);
+		if (unlikely(!page))
+			break;
+
+		/*
+		 * Careful, we allocate and map page-order pages, but
+		 * tracking is done per PAGE_SIZE page so as to keep the
+		 * vm_struct APIs independent of the physical/mapped size.
+		 */
+		for (i = 0; i < (1U << order); i++)
+			pages[nr_allocated + i] = page + i;
+
+		if (gfpflags_allow_blocking(gfp))
+			cond_resched();
+
+		nr_allocated += 1U << order;
+	}
+
+	return nr_allocated;
+}
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, unsigned int page_shift,
 				 int node)
@@ -2789,37 +2837,11 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		return NULL;
 	}
 
-	area->nr_pages = 0;
 	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
 	page_order = vm_area_page_order(area);
 
-	if (!page_order) {
-		area->nr_pages = alloc_pages_bulk_array_node(
-			gfp_mask, node, nr_small_pages, area->pages);
-	} else {
-		/*
-		 * Careful, we allocate and map page_order pages, but tracking is done
-		 * per PAGE_SIZE page so as to keep the vm_struct APIs independent of
-		 * the physical/mapped size.
-		 */
-		while (area->nr_pages < nr_small_pages) {
-			struct page *page;
-			int i;
-
-			/* Compound pages required for remap_vmalloc_page */
-			page = alloc_pages_node(node, gfp_mask | __GFP_COMP, page_order);
-			if (unlikely(!page))
-				break;
-
-			for (i = 0; i < (1U << page_order); i++)
-				area->pages[area->nr_pages + i] = page + i;
-
-			if (gfpflags_allow_blocking(gfp_mask))
-				cond_resched();
-
-			area->nr_pages += 1U << page_order;
-		}
-	}
+	area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
+		page_order, nr_small_pages, area->pages);
 
 	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
 
@@ -2835,7 +2857,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		goto fail;
 	}
 
-	if (vmap_pages_range(addr, addr + size, prot, area->pages, page_shift) < 0) {
+	if (vmap_pages_range(addr, addr + size, prot, area->pages,
+			page_shift) < 0) {
 		warn_alloc(gfp_mask, NULL,
 			   "vmalloc size %lu allocation failure: "
 			   "failed to map pages",
-- 
2.20.1

--
Vlad Rezki
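
To make the per-PAGE_SIZE bookkeeping in the patch concrete: a single
order-2 compound page backs four consecutive slots of the pages[]
array, so nr_allocated, and later area->nr_pages, always count
PAGE_SIZE units regardless of the granularity that was actually
allocated and mapped. A minimal sketch of one such step, with the
order fixed to 2 purely for illustration:

	/* One order-2 page == 1U << 2 == 4 small pages == 4 slots. */
	page = alloc_pages_node(nid, gfp | __GFP_COMP, 2);
	if (page) {
		for (i = 0; i < 4; i++)
			pages[nr_allocated + i] = page + i;
		nr_allocated += 4;
	}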


* Re: [PATCH] mm/vmalloc: Fallback to a single page allocator
  2021-05-21 13:07     ` Uladzislau Rezki
@ 2021-05-24 13:58       ` Christoph Hellwig
  2021-05-24 15:49         ` Uladzislau Rezki
  0 siblings, 1 reply; 6+ messages in thread
From: Christoph Hellwig @ 2021-05-24 13:58 UTC (permalink / raw)
  To: Uladzislau Rezki
  Cc: Andrew Morton, Matthew Wilcox, linux-mm, LKML, Mel Gorman,
	Christoph Hellwig, Nicholas Piggin, Hillf Danton, Michal Hocko,
	Oleksiy Avramchenko, Steven Rostedt

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


* Re: [PATCH] mm/vmalloc: Fallback to a single page allocator
  2021-05-24 13:58       ` Christoph Hellwig
@ 2021-05-24 15:49         ` Uladzislau Rezki
  0 siblings, 0 replies; 6+ messages in thread
From: Uladzislau Rezki @ 2021-05-24 15:49 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Andrew Morton, Matthew Wilcox, Linux Memory Management List,
	LKML, Mel Gorman, Nicholas Piggin, Hillf Danton, Michal Hocko,
	Oleksiy Avramchenko, Steven Rostedt


> Looks good,
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
>
Thanks!


