* [PATCH 1/3] mm/util: Add kvmalloc_node_caller
@ 2021-05-10 15:02 Matthew Wilcox (Oracle)
From: Matthew Wilcox (Oracle) @ 2021-05-10 15:02 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), David Rientjes

Allow the caller of kvmalloc to specify which function should be
recorded as the allocator of the memory, instead of assuming it is the
immediate caller.  Also reword the kernel-doc for kvmalloc_node() to
document the semantics of the function rather than its implementation.
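
As a minimal sketch of the intended use (the helper name and sizing
below are illustrative, not part of this patch), a wrapper attributes
its allocations to its own caller by passing _RET_IP_ through:

	/* Illustrative only: charge the allocation to the function
	 * that called my_alloc_table(), not to my_alloc_table().
	 */
	static void *my_alloc_table(size_t entries)
	{
		return kvmalloc_node_caller(entries * sizeof(void *),
					    GFP_KERNEL, NUMA_NO_NODE,
					    _RET_IP_);
	}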

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: David Rientjes <rientjes@google.com>
---
 include/linux/mm.h   |  4 +++-
 include/linux/slab.h |  2 ++
 mm/util.c            | 51 ++++++++++++++++++++++++--------------------
 3 files changed, 33 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 48268d2d0282..4f9b2007efad 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -798,7 +798,9 @@ static inline int is_vmalloc_or_module_addr(const void *x)
 }
 #endif
 
-extern void *kvmalloc_node(size_t size, gfp_t flags, int node);
+void *kvmalloc_node_caller(size_t size, gfp_t flags, int node,
+		unsigned long caller);
+void *kvmalloc_node(size_t size, gfp_t flags, int node);
 static inline void *kvmalloc(size_t size, gfp_t flags)
 {
 	return kvmalloc_node(size, flags, NUMA_NO_NODE);
diff --git a/include/linux/slab.h b/include/linux/slab.h
index 0c97d788762c..6611b8ee55ee 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -663,6 +663,8 @@ extern void *__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long);
 
 #else /* CONFIG_NUMA */
 
+#define __kmalloc_node_track_caller(size, flags, node, caller) \
+	__kmalloc_track_caller(size, flags, caller)
 #define kmalloc_node_track_caller(size, flags, node) \
 	kmalloc_track_caller(size, flags)
 
diff --git a/mm/util.c b/mm/util.c
index a8bf17f18a81..ee4422be86a2 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -539,26 +539,8 @@ unsigned long vm_mmap(struct file *file, unsigned long addr,
 }
 EXPORT_SYMBOL(vm_mmap);
 
-/**
- * kvmalloc_node - attempt to allocate physically contiguous memory, but upon
- * failure, fall back to non-contiguous (vmalloc) allocation.
- * @size: size of the request.
- * @flags: gfp mask for the allocation - must be compatible (superset) with GFP_KERNEL.
- * @node: numa node to allocate from
- *
- * Uses kmalloc to get the memory but if the allocation fails then falls back
- * to the vmalloc allocator. Use kvfree for freeing the memory.
- *
- * Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL are not supported.
- * __GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
- * preferable to the vmalloc fallback, due to visible performance drawbacks.
- *
- * Please note that any use of gfp flags outside of GFP_KERNEL is careful to not
- * fall back to vmalloc.
- *
- * Return: pointer to the allocated memory of %NULL in case of failure
- */
-void *kvmalloc_node(size_t size, gfp_t flags, int node)
+void *kvmalloc_node_caller(size_t size, gfp_t flags, int node,
+		unsigned long caller)
 {
 	gfp_t kmalloc_flags = flags;
 	void *ret;
@@ -584,7 +566,7 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 			kmalloc_flags |= __GFP_NORETRY;
 	}
 
-	ret = kmalloc_node(size, kmalloc_flags, node);
+	ret = __kmalloc_node_track_caller(size, kmalloc_flags, node, caller);
 
 	/*
 	 * It doesn't really make sense to fallback to vmalloc for sub page
@@ -593,8 +575,31 @@ void *kvmalloc_node(size_t size, gfp_t flags, int node)
 	if (ret || size <= PAGE_SIZE)
 		return ret;
 
-	return __vmalloc_node(size, 1, flags, node,
-			__builtin_return_address(0));
+	return __vmalloc_node(size, 1, flags, node, (void *)caller);
+}
+
+/**
+ * kvmalloc_node - Allocate memory from a particular NUMA node.
+ * @size: Number of bytes to allocate.
+ * @flags: Memory allocation (GFP) flags.
+ * @node: NUMA node to allocate from.
+ *
+ * The allocated memory may or may not be physically contiguous, and so
+ * is not suitable for DMA.  Use kvfree() to free the memory.
+ *
+ * Reclaim modifiers - __GFP_NORETRY and __GFP_NOFAIL are not supported.
+ * __GFP_RETRY_MAYFAIL is supported, and it should be used only if kmalloc is
+ * preferable to the vmalloc fallback, due to visible performance drawbacks.
+ *
+ * Any use of gfp flags outside of GFP_KERNEL is careful to not
+ * fall back to vmalloc.
+ *
+ * Return: pointer to the allocated memory or %NULL in case of failure.
+ * %ZERO_SIZE_PTR if @size is zero.
+ */
+void *kvmalloc_node(size_t size, gfp_t flags, int node)
+{
+	return kvmalloc_node_caller(size, flags, node, _RET_IP_);
 }
 EXPORT_SYMBOL(kvmalloc_node);
 
-- 
2.30.2
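
For reference, a minimal usage sketch of the contract documented in the
reworded kernel-doc above (struct foo, nr_entries and fill_table() are
hypothetical, not taken from this patch):

	int fill_table(size_t nr_entries)
	{
		struct foo **table;

		table = kvmalloc(nr_entries * sizeof(*table), GFP_KERNEL);
		if (!table)
			return -ENOMEM;
		/* ... use table; it may be kmalloc- or vmalloc-backed ... */
		kvfree(table);
		return 0;
	}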



* [PATCH 2/3] mm/vmalloc: Use kvmalloc to allocate the table of pages
From: Matthew Wilcox (Oracle) @ 2021-05-10 15:02 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), David Rientjes

If we're trying to allocate 4MB of memory, the table of pages will be
8KiB in size (1024 pointers * 8 bytes per pointer).  That can usually
be satisfied by kmalloc, which is significantly faster.  Instead of
changing this open-coded implementation, just use kvmalloc().

This improves the allocation speed of vmalloc(4MB) by approximately
5% in our benchmark.  It's still dominated by the 1024 calls to
alloc_pages_node(), which will be the subject of a later patch.
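
Spelling out the arithmetic (assuming 4KiB pages and 8-byte pointers,
as on a typical 64-bit configuration):

	nr_pages   = 4MiB / 4KiB                  = 1024
	array_size = 1024 * sizeof(struct page *) = 1024 * 8 = 8KiB

which is an order-1 allocation, comfortably within kmalloc's reach.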

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: David Rientjes <rientjes@google.com>
---
 mm/vmalloc.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index a13ac524f6ff..867c155c07e0 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2774,13 +2774,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 		gfp_mask |= __GFP_HIGHMEM;
 
 	/* Please note that the recursion is strictly bounded. */
-	if (array_size > PAGE_SIZE) {
-		pages = __vmalloc_node(array_size, 1, nested_gfp, node,
-					area->caller);
-	} else {
-		pages = kmalloc_node(array_size, nested_gfp, node);
-	}
-
+	pages = kvmalloc_node_caller(array_size, nested_gfp, node,
+					(unsigned long)area->caller);
 	if (!pages) {
 		free_vm_area(area);
 		warn_alloc(gfp_mask, NULL,
-- 
2.30.2



* [PATCH 3/3] MAINTAINERS: Add Vlad Rezki as vmalloc maintainer
From: Matthew Wilcox (Oracle) @ 2021-05-10 15:02 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

People should know to Cc Vlad on vmalloc-related patches.  With this
entry in place, get_maintainer.pl suggests:

Uladzislau Rezki <urezki@gmail.com> (maintainer:VMALLOC)
Andrew Morton <akpm@linux-foundation.org> (maintainer:MEMORY MANAGEMENT)
linux-mm@kvack.org (open list:VMALLOC)
linux-kernel@vger.kernel.org (open list)

which looks appropriate to me.
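
(That is the sort of listing produced by something like
scripts/get_maintainer.pl -f mm/vmalloc.c; the exact invocation used
here is assumed.)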

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 MAINTAINERS | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index bd7aff0c120f..68604bcb6dd0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -19467,6 +19467,13 @@ S:	Maintained
 F:	drivers/vlynq/vlynq.c
 F:	include/linux/vlynq.h
 
+VMALLOC
+M:	Uladzislau Rezki <urezki@gmail.com>
+L:	linux-mm@kvack.org
+S:	Maintained
+F:	mm/vmalloc.c
+F:	include/linux/vmalloc.h
+
 VME SUBSYSTEM
 M:	Martyn Welch <martyn@welchs.me.uk>
 M:	Manohar Vanga <manohar.vanga@gmail.com>
-- 
2.30.2


