* [PATCH v2 0/2] Mapping an entire folio
@ 2022-11-01 20:18 Matthew Wilcox (Oracle)
  2022-11-01 20:18 ` [PATCH v2 1/2] vmalloc: Factor vmap_alloc() out of vm_map_ram() Matthew Wilcox (Oracle)
  2022-11-01 20:18 ` [PATCH v2 2/2] mm: Add folio_map_local() Matthew Wilcox (Oracle)
  0 siblings, 2 replies; 7+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-11-01 20:18 UTC (permalink / raw)
  To: linux-mm
  Cc: Matthew Wilcox (Oracle),
	Uladzislau Rezki, David Howells, Dave Chinner, linux-fsdevel,
	Thomas Gleixner, Ira Weiny, Fabio M. De Francesco,
	Luis Chamberlain

I had intended to write and test one user before sending this out,
but Dave Howells says he has a user now that wants this functionality,
so here we go.  It is only compile tested.

Earlier thread on this: https://lore.kernel.org/all/YvvdFrtiW33UOkGr@casper.infradead.org/

v2:
 - Remove spurious blank line change in highmem.h (David Howells)
 - Insert missing "else" in folio_unmap_local() (Hyeonggon Yoo)
 - Use vm_unmap_ram() instead of vunmap() in folio_unmap_local() (Hyeonggon Yoo)
 - Factor vmap_alloc() out of vm_map_ram() (Uladzislau Rezki)

Matthew Wilcox (Oracle) (2):
  vmalloc: Factor vmap_alloc() out of vm_map_ram()
  mm: Add folio_map_local()

 include/linux/highmem.h | 40 ++++++++++++++++++++++
 include/linux/vmalloc.h |  6 ++--
 mm/vmalloc.c            | 73 +++++++++++++++++++++++++++++++----------
 3 files changed, 99 insertions(+), 20 deletions(-)

-- 
2.35.1



* [PATCH v2 1/2] vmalloc: Factor vmap_alloc() out of vm_map_ram()
  2022-11-01 20:18 [PATCH v2 0/2] Mapping an entire folio Matthew Wilcox (Oracle)
@ 2022-11-01 20:18 ` Matthew Wilcox (Oracle)
  2022-11-02  3:46   ` Hyeonggon Yoo
                     ` (2 more replies)
  2022-11-01 20:18 ` [PATCH v2 2/2] mm: Add folio_map_local() Matthew Wilcox (Oracle)
  1 sibling, 3 replies; 7+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-11-01 20:18 UTC (permalink / raw)
  To: linux-mm
  Cc: Matthew Wilcox (Oracle),
	Uladzislau Rezki, David Howells, Dave Chinner, linux-fsdevel,
	Thomas Gleixner, Ira Weiny, Fabio M. De Francesco,
	Luis Chamberlain

Introduce vmap_alloc() to simply get the address space.  This allows
for code sharing in the next patch.

Suggested-by: Uladzislau Rezki <urezki@gmail.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/vmalloc.c | 41 +++++++++++++++++++++++------------------
 1 file changed, 23 insertions(+), 18 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ccaa461998f3..dcab1d3cf185 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2230,6 +2230,27 @@ void vm_unmap_ram(const void *mem, unsigned int count)
 }
 EXPORT_SYMBOL(vm_unmap_ram);
 
+static void *vmap_alloc(size_t size, int node)
+{
+	void *mem;
+
+	if (likely(size <= (VMAP_MAX_ALLOC * PAGE_SIZE))) {
+		mem = vb_alloc(size, GFP_KERNEL);
+		if (IS_ERR(mem))
+			mem = NULL;
+	} else {
+		struct vmap_area *va;
+		va = alloc_vmap_area(size, PAGE_SIZE,
+				VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
+		if (IS_ERR(va))
+			mem = NULL;
+		else
+			mem = (void *)va->va_start;
+	}
+
+	return mem;
+}
+
 /**
  * vm_map_ram - map pages linearly into kernel virtual address (vmalloc space)
  * @pages: an array of pointers to the pages to be mapped
@@ -2247,24 +2268,8 @@ EXPORT_SYMBOL(vm_unmap_ram);
 void *vm_map_ram(struct page **pages, unsigned int count, int node)
 {
 	unsigned long size = (unsigned long)count << PAGE_SHIFT;
-	unsigned long addr;
-	void *mem;
-
-	if (likely(count <= VMAP_MAX_ALLOC)) {
-		mem = vb_alloc(size, GFP_KERNEL);
-		if (IS_ERR(mem))
-			return NULL;
-		addr = (unsigned long)mem;
-	} else {
-		struct vmap_area *va;
-		va = alloc_vmap_area(size, PAGE_SIZE,
-				VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
-		if (IS_ERR(va))
-			return NULL;
-
-		addr = va->va_start;
-		mem = (void *)addr;
-	}
+	void *mem = vmap_alloc(size, node);
+	unsigned long addr = (unsigned long)mem;
 
 	if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
 				pages, PAGE_SHIFT) < 0) {
-- 
2.35.1



* [PATCH v2 2/2] mm: Add folio_map_local()
  2022-11-01 20:18 [PATCH v2 0/2] Mapping an entire folio Matthew Wilcox (Oracle)
  2022-11-01 20:18 ` [PATCH v2 1/2] vmalloc: Factor vmap_alloc() out of vm_map_ram() Matthew Wilcox (Oracle)
@ 2022-11-01 20:18 ` Matthew Wilcox (Oracle)
  2022-11-02  9:13   ` Christoph Hellwig
  1 sibling, 1 reply; 7+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-11-01 20:18 UTC (permalink / raw)
  To: linux-mm
  Cc: Matthew Wilcox (Oracle),
	Uladzislau Rezki, David Howells, Dave Chinner, linux-fsdevel,
	Thomas Gleixner, Ira Weiny, Fabio M. De Francesco,
	Luis Chamberlain

Some filesystems benefit from being able to map the entire folio.
On 32-bit platforms with HIGHMEM, we fall back to using vmap, which
will be slow.  If it proves to be a performance problem, we can look at
optimising it in a number of ways.
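
As an illustration only, a caller might use the new pair of helpers roughly
like the sketch below; the checksum helper is a made-up example and is not
part of this series:

/* Hypothetical user, for illustration only. */
static u32 example_checksum_folio(struct folio *folio)
{
	size_t len = folio_size(folio);
	void *addr = folio_map_local(folio);
	u32 sum;

	if (!addr)
		return 0;	/* mapping can fail on HIGHMEM */

	sum = crc32(0, addr, len);	/* crc32() from <linux/crc32.h> */
	folio_unmap_local(addr, folio_nr_pages(folio));
	return sum;
}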

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/highmem.h | 40 ++++++++++++++++++++++++++++++++++++++++
 include/linux/vmalloc.h |  6 ++++--
 mm/vmalloc.c            | 32 ++++++++++++++++++++++++++++++++
 3 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index e9912da5441b..d56ae62db252 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -10,6 +10,7 @@
 #include <linux/mm.h>
 #include <linux/uaccess.h>
 #include <linux/hardirq.h>
+#include <linux/vmalloc.h>
 
 #include "highmem-internal.h"
 
@@ -132,6 +133,45 @@ static inline void *kmap_local_page(struct page *page);
  */
 static inline void *kmap_local_folio(struct folio *folio, size_t offset);
 
+/**
+ * folio_map_local - Map an entire folio.
+ * @folio: The folio to map.
+ *
+ * Unlike kmap_local_folio(), map an entire folio.  This should be undone
+ * with folio_unmap_local().  The address returned should be treated as
+ * stack-based, and local to this CPU, like kmap_local_folio().
+ *
+ * Context: May allocate memory using GFP_KERNEL if it takes the vmap path.
+ * Return: A kernel virtual address which can be used to access the folio,
+ * or NULL if the mapping fails.
+ */
+static inline __must_check void *folio_map_local(struct folio *folio)
+{
+	might_alloc(GFP_KERNEL);
+
+	if (!IS_ENABLED(CONFIG_HIGHMEM))
+		return folio_address(folio);
+	if (folio_test_large(folio))
+		return vm_map_folio(folio);
+	return kmap_local_page(&folio->page);
+}
+
+/**
+ * folio_unmap_local - Unmap an entire folio.
+ * @addr: Address returned from folio_map_local()
+ *
+ * Undo the result of a previous call to folio_map_local().
+ */
+static inline void folio_unmap_local(const void *addr, unsigned long nr_pages)
+{
+	if (!IS_ENABLED(CONFIG_HIGHMEM))
+		return;
+	if (is_vmalloc_addr(addr))
+		vm_unmap_ram(addr, nr_pages);
+	else
+		kunmap_local(addr);
+}
+
 /**
  * kmap_atomic - Atomically map a page for temporary usage - Deprecated!
  * @page:	Pointer to the page to be mapped
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 096d48aa3437..4bb34c939c01 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -13,6 +13,7 @@
 #include <asm/vmalloc.h>
 
 struct vm_area_struct;		/* vma defining user mapping in mm_types.h */
+struct folio;			/* also mm_types.h */
 struct notifier_block;		/* in notifier.h */
 
 /* bits in flags of vmalloc's vm_struct below */
@@ -163,8 +164,9 @@ extern void *vcalloc(size_t n, size_t size) __alloc_size(1, 2);
 extern void vfree(const void *addr);
 extern void vfree_atomic(const void *addr);
 
-extern void *vmap(struct page **pages, unsigned int count,
-			unsigned long flags, pgprot_t prot);
+void *vmap(struct page **pages, unsigned int count, unsigned long flags,
+		pgprot_t prot);
+void *vm_map_folio(struct folio *folio);
 void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
 extern void vunmap(const void *addr);
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index dcab1d3cf185..c101b09d15d3 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2288,6 +2288,38 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node)
 }
 EXPORT_SYMBOL(vm_map_ram);
 
+#ifdef CONFIG_HIGHMEM
+/**
+ * vm_map_folio() - Map an entire folio into virtually contiguous space.
+ * @folio: The folio to map.
+ *
+ * Maps all pages in @folio into contiguous kernel virtual space.  This
+ * function is only available in HIGHMEM builds; for !HIGHMEM, use
+ * folio_address().  The pages are mapped with PAGE_KERNEL permissions.
+ *
+ * Return: The address of the area or %NULL on failure
+ */
+void *vm_map_folio(struct folio *folio)
+{
+	size_t size = folio_size(folio);
+	void *mem = vmap_alloc(size, NUMA_NO_NODE);
+	unsigned long addr = (unsigned long)mem;
+
+	if (vmap_range_noflush(addr, addr + size,
+				folio_pfn(folio) << PAGE_SHIFT,
+				PAGE_KERNEL, folio_shift(folio))) {
+		vm_unmap_ram(mem, folio_nr_pages(folio));
+		return NULL;
+	}
+	flush_cache_vmap(addr, addr + size);
+
+	mem = kasan_unpoison_vmalloc(mem, size, KASAN_VMALLOC_PROT_NORMAL);
+
+	return mem;
+}
+EXPORT_SYMBOL(vm_map_folio);
+#endif
+
 static struct vm_struct *vmlist __initdata;
 
 static inline unsigned int vm_area_page_order(struct vm_struct *vm)
-- 
2.35.1



* Re: [PATCH v2 1/2] vmalloc: Factor vmap_alloc() out of vm_map_ram()
  2022-11-01 20:18 ` [PATCH v2 1/2] vmalloc: Factor vmap_alloc() out of vm_map_ram() Matthew Wilcox (Oracle)
@ 2022-11-02  3:46   ` Hyeonggon Yoo
  2022-11-02  3:59   ` Hyeonggon Yoo
  2022-11-02  9:10   ` Christoph Hellwig
  2 siblings, 0 replies; 7+ messages in thread
From: Hyeonggon Yoo @ 2022-11-02  3:46 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: linux-mm, Uladzislau Rezki, David Howells, Dave Chinner,
	linux-fsdevel, Thomas Gleixner, Ira Weiny, Fabio M. De Francesco,
	Luis Chamberlain

On Tue, Nov 01, 2022 at 08:18:27PM +0000, Matthew Wilcox (Oracle) wrote:
> Introduce vmap_alloc() to simply get the address space.  This allows
> for code sharing in the next patch.
> 
> Suggested-by: Uladzislau Rezki <urezki@gmail.com>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/vmalloc.c | 41 +++++++++++++++++++++++------------------
>  1 file changed, 23 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ccaa461998f3..dcab1d3cf185 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2230,6 +2230,27 @@ void vm_unmap_ram(const void *mem, unsigned int count)
>  }
>  EXPORT_SYMBOL(vm_unmap_ram);
>  
> +static void *vmap_alloc(size_t size, int node)
> +{
> +	void *mem;
> +
> +	if (likely(size <= (VMAP_MAX_ALLOC * PAGE_SIZE))) {
> +		mem = vb_alloc(size, GFP_KERNEL);
> +		if (IS_ERR(mem))
> +			mem = NULL;
> +	} else {
> +		struct vmap_area *va;
> +		va = alloc_vmap_area(size, PAGE_SIZE,
> +				VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
> +		if (IS_ERR(va))
> +			mem = NULL;
> +		else
> +			mem = (void *)va->va_start;
> +	}
> +
> +	return mem;
> +}
> +
>  /**
>   * vm_map_ram - map pages linearly into kernel virtual address (vmalloc space)
>   * @pages: an array of pointers to the pages to be mapped
> @@ -2247,24 +2268,8 @@ EXPORT_SYMBOL(vm_unmap_ram);
>  void *vm_map_ram(struct page **pages, unsigned int count, int node)
>  {
>  	unsigned long size = (unsigned long)count << PAGE_SHIFT;
> -	unsigned long addr;
> -	void *mem;
> -
> -	if (likely(count <= VMAP_MAX_ALLOC)) {
> -		mem = vb_alloc(size, GFP_KERNEL);
> -		if (IS_ERR(mem))
> -			return NULL;
> -		addr = (unsigned long)mem;
> -	} else {
> -		struct vmap_area *va;
> -		va = alloc_vmap_area(size, PAGE_SIZE,
> -				VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
> -		if (IS_ERR(va))
> -			return NULL;
> -
> -		addr = va->va_start;
> -		mem = (void *)addr;
> -	}
> +	void *mem = vmap_alloc(size, node);
> +	unsigned long addr = (unsigned long)mem;
>  
>  	if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
>  				pages, PAGE_SHIFT) < 0) {
> -- 
> 2.35.1

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

-- 
Thanks,
Hyeonggon


* Re: [PATCH v2 1/2] vmalloc: Factor vmap_alloc() out of vm_map_ram()
  2022-11-01 20:18 ` [PATCH v2 1/2] vmalloc: Factor vmap_alloc() out of vm_map_ram() Matthew Wilcox (Oracle)
  2022-11-02  3:46   ` Hyeonggon Yoo
@ 2022-11-02  3:59   ` Hyeonggon Yoo
  2022-11-02  9:10   ` Christoph Hellwig
  2 siblings, 0 replies; 7+ messages in thread
From: Hyeonggon Yoo @ 2022-11-02  3:59 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: linux-mm, Uladzislau Rezki, David Howells, Dave Chinner,
	linux-fsdevel, Thomas Gleixner, Ira Weiny, Fabio M. De Francesco,
	Luis Chamberlain

On Tue, Nov 01, 2022 at 08:18:27PM +0000, Matthew Wilcox (Oracle) wrote:
> Introduce vmap_alloc() to simply get the address space.  This allows
> for code sharing in the next patch.
> 
> Suggested-by: Uladzislau Rezki <urezki@gmail.com>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/vmalloc.c | 41 +++++++++++++++++++++++------------------
>  1 file changed, 23 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ccaa461998f3..dcab1d3cf185 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2230,6 +2230,27 @@ void vm_unmap_ram(const void *mem, unsigned int count)
>  }
>  EXPORT_SYMBOL(vm_unmap_ram);
>  
> +static void *vmap_alloc(size_t size, int node)
> +{
> +	void *mem;
> +
> +	if (likely(size <= (VMAP_MAX_ALLOC * PAGE_SIZE))) {
> +		mem = vb_alloc(size, GFP_KERNEL);
> +		if (IS_ERR(mem))
> +			mem = NULL;
> +	} else {
> +		struct vmap_area *va;
> +		va = alloc_vmap_area(size, PAGE_SIZE,
> +				VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
> +		if (IS_ERR(va))
> +			mem = NULL;
> +		else
> +			mem = (void *)va->va_start;
> +	}
> +
> +	return mem;
> +}
> +
>  /**
>   * vm_map_ram - map pages linearly into kernel virtual address (vmalloc space)
>   * @pages: an array of pointers to the pages to be mapped
> @@ -2247,24 +2268,8 @@ EXPORT_SYMBOL(vm_unmap_ram);
>  void *vm_map_ram(struct page **pages, unsigned int count, int node)
>  {
>  	unsigned long size = (unsigned long)count << PAGE_SHIFT;
> -	unsigned long addr;
> -	void *mem;
> -
> -	if (likely(count <= VMAP_MAX_ALLOC)) {
> -		mem = vb_alloc(size, GFP_KERNEL);
> -		if (IS_ERR(mem))
> -			return NULL;
> -		addr = (unsigned long)mem;
> -	} else {
> -		struct vmap_area *va;
> -		va = alloc_vmap_area(size, PAGE_SIZE,
> -				VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
> -		if (IS_ERR(va))
> -			return NULL;
> -
> -		addr = va->va_start;
> -		mem = (void *)addr;
> -	}
> +	void *mem = vmap_alloc(size, node);
> +	unsigned long addr = (unsigned long)mem;

I think we need to check mem != NULL.
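A minimal sketch of that check, assuming vmap_alloc() keeps returning NULL
on failure:

	void *mem = vmap_alloc(size, node);
	unsigned long addr;

	if (!mem)
		return NULL;
	addr = (unsigned long)mem;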

>  
>  	if (vmap_pages_range(addr, addr + size, PAGE_KERNEL,
>  				pages, PAGE_SHIFT) < 0) {
> -- 
> 2.35.1
> 
> 

-- 
Thanks,
Hyeonggon


* Re: [PATCH v2 1/2] vmalloc: Factor vmap_alloc() out of vm_map_ram()
  2022-11-01 20:18 ` [PATCH v2 1/2] vmalloc: Factor vmap_alloc() out of vm_map_ram() Matthew Wilcox (Oracle)
  2022-11-02  3:46   ` Hyeonggon Yoo
  2022-11-02  3:59   ` Hyeonggon Yoo
@ 2022-11-02  9:10   ` Christoph Hellwig
  2 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2022-11-02  9:10 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: linux-mm, Uladzislau Rezki, David Howells, Dave Chinner,
	linux-fsdevel, Thomas Gleixner, Ira Weiny, Fabio M. De Francesco,
	Luis Chamberlain

On Tue, Nov 01, 2022 at 08:18:27PM +0000, Matthew Wilcox (Oracle) wrote:
> Introduce vmap_alloc() to simply get the address space.  This allows
> for code sharing in the next patch.
> 
> Suggested-by: Uladzislau Rezki <urezki@gmail.com>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/vmalloc.c | 41 +++++++++++++++++++++++------------------
>  1 file changed, 23 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index ccaa461998f3..dcab1d3cf185 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2230,6 +2230,27 @@ void vm_unmap_ram(const void *mem, unsigned int count)
>  }
>  EXPORT_SYMBOL(vm_unmap_ram);
>  
> +static void *vmap_alloc(size_t size, int node)
> +{
> +	void *mem;
> +
> +	if (likely(size <= (VMAP_MAX_ALLOC * PAGE_SIZE))) {
> +		mem = vb_alloc(size, GFP_KERNEL);
> +		if (IS_ERR(mem))
> +			mem = NULL;
> +	} else {
> +		struct vmap_area *va;
> +		va = alloc_vmap_area(size, PAGE_SIZE,
> +				VMALLOC_START, VMALLOC_END, node, GFP_KERNEL);
> +		if (IS_ERR(va))
> +			mem = NULL;
> +		else
> +			mem = (void *)va->va_start;
> +	}
> +
> +	return mem;

This reads really strangely; why not return the ERR_PTR and do:

static void *vmap_alloc(size_t size, int node)
{
	if (unlikely(size > VMAP_MAX_ALLOC * PAGE_SIZE)) {
		struct vmap_area *va;

		va = alloc_vmap_area(size, PAGE_SIZE, VMALLOC_START,
				     VMALLOC_END, node, GFP_KERNEL);
		if (IS_ERR(va))
			return ERR_CAST(va);
		return (void *)va->va_start;
	}

	return vb_alloc(size, GFP_KERNEL);
}

> @@ -2247,24 +2268,8 @@ EXPORT_SYMBOL(vm_unmap_ram);
>  void *vm_map_ram(struct page **pages, unsigned int count, int node)
>  {
>  	unsigned long size = (unsigned long)count << PAGE_SHIFT;
> +	void *mem = vmap_alloc(size, node);
> +	unsigned long addr = (unsigned long)mem;

And here we still need the error check anyway, no matter whether it is for
NULL or an ERR_PTR.


* Re: [PATCH v2 2/2] mm: Add folio_map_local()
  2022-11-01 20:18 ` [PATCH v2 2/2] mm: Add folio_map_local() Matthew Wilcox (Oracle)
@ 2022-11-02  9:13   ` Christoph Hellwig
  0 siblings, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2022-11-02  9:13 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: linux-mm, Uladzislau Rezki, David Howells, Dave Chinner,
	linux-fsdevel, Thomas Gleixner, Ira Weiny, Fabio M. De Francesco,
	Luis Chamberlain

> +void *vm_map_folio(struct folio *folio)
> +{
> +	size_t size = folio_size(folio);
> +	void *mem = vmap_alloc(size, NUMA_NO_NODE);

Needs an error check here.
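For example, with vmap_alloc() still returning NULL on failure, something
like:

	if (!mem)
		return NULL;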

> +	mem = kasan_unpoison_vmalloc(mem, size, KASAN_VMALLOC_PROT_NORMAL);
> +
> +	return mem;

Why not:

	return kasan_unpoison_vmalloc(mem, size, KASAN_VMALLOC_PROT_NORMAL);

> +EXPORT_SYMBOL(vm_map_folio);

All new vmalloc/vmap functionality should be EXPORT_SYMBOL_GPL.
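i.e.:

	EXPORT_SYMBOL_GPL(vm_map_folio);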


