bpf.vger.kernel.org archive mirror
* [PATCH bpf-next] mm: Introduce vm_area_[un]map_pages().
@ 2024-02-20 19:26 Alexei Starovoitov
  2024-02-21  5:52 ` Christoph Hellwig
  0 siblings, 1 reply; 7+ messages in thread
From: Alexei Starovoitov @ 2024-02-20 19:26 UTC (permalink / raw)
  To: bpf
  Cc: daniel, andrii, torvalds, brho, hannes, lstoakes, akpm, urezki,
	hch, linux-mm, kernel-team

From: Alexei Starovoitov <ast@kernel.org>

The vmap() API is used to map a set of pages into contiguous kernel virtual space.

BPF would like to extend the vmap API to implement a lazily-populated area of
contiguous kernel virtual space whose size and start address are fixed early.

The vmap API has functions to request and release areas of kernel address space:
get_vm_area() and free_vm_area().

Introduce vm_area_map_pages(area, start_addr, count, pages)
to map a set of pages within a given area.
It has the same sanity checks as vmap() does.
In addition it checks that the area was created by get_vm_area() with the
VM_MAP flag (as all users of vmap() should be doing).

Also add vm_area_unmap_pages(), which is a safer alternative to the
existing vunmap_range() API.

The next commits will introduce bpf_arena, which is a sparsely populated shared
memory region between a bpf program and a user space process. It will map
privately-managed pages into an existing vm area with the following steps:

  area = get_vm_area(area_size, VM_MAP | VM_USERMAP); // at bpf prog verification time
  vm_area_map_pages(area, kaddr, 1, page);            // on demand
  vm_area_unmap_pages(area, kaddr, 1);
  free_vm_area(area);                                 // after bpf prog is unloaded

For the BPF use case, area_size will be 4GB plus 64KB of guard pages, and
area->addr will be known and fixed at program verification time.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
 include/linux/vmalloc.h |  3 +++
 mm/vmalloc.c            | 46 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index c720be70c8dd..7d112cc5f2a3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -232,6 +232,9 @@ static inline bool is_vm_area_hugepages(const void *addr)
 }
 
 #ifdef CONFIG_MMU
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+		      struct page **pages);
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count);
 void vunmap_range(unsigned long addr, unsigned long end);
 static inline void set_vm_flush_reset_perms(void *addr)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d12a17fc0c17..d6337d46f1d8 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -635,6 +635,52 @@ static int vmap_pages_range(unsigned long addr, unsigned long end,
 	return err;
 }
 
+/**
+ * vm_area_map_pages - map pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages
+ * @pages: pages to map (always PAGE_SIZE pages)
+ */
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+		      struct page **pages)
+{
+	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+	unsigned long end = addr + size;
+
+	might_sleep();
+	if (WARN_ON_ONCE(area->flags & VM_FLUSH_RESET_PERMS))
+		return -EINVAL;
+	if (WARN_ON_ONCE(area->flags & VM_NO_GUARD))
+		return -EINVAL;
+	if (WARN_ON_ONCE(!(area->flags & VM_MAP)))
+		return -EINVAL;
+	if (count > totalram_pages())
+		return -E2BIG;
+	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+		return -ERANGE;
+
+	return vmap_pages_range(addr, end, PAGE_KERNEL, pages, PAGE_SHIFT);
+}
+
+/**
+ * vm_area_unmap_pages - unmap pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages to unmap
+ */
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
+{
+	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+	unsigned long end = addr + size;
+
+	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+		return -ERANGE;
+
+	vunmap_range(addr, end);
+	return 0;
+}
+
 int is_vmalloc_or_module_addr(const void *x)
 {
 	/*
-- 
2.34.1



* Re: [PATCH bpf-next] mm: Introduce vm_area_[un]map_pages().
  2024-02-20 19:26 [PATCH bpf-next] mm: Introduce vm_area_[un]map_pages() Alexei Starovoitov
@ 2024-02-21  5:52 ` Christoph Hellwig
  2024-02-21 19:05   ` Alexei Starovoitov
  0 siblings, 1 reply; 7+ messages in thread
From: Christoph Hellwig @ 2024-02-21  5:52 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: bpf, daniel, andrii, torvalds, brho, hannes, lstoakes, akpm,
	urezki, hch, linux-mm, kernel-team

On Tue, Feb 20, 2024 at 11:26:13AM -0800, Alexei Starovoitov wrote:
> From: Alexei Starovoitov <ast@kernel.org>
> 
> vmap() API is used to map a set of pages into contiguous kernel virtual space.
> 
> BPF would like to extend the vmap API to implement a lazily-populated
> contiguous kernel virtual space which size and start address is fixed early.
> 
> The vmap API has functions to request and release areas of kernel address space:
> get_vm_area() and free_vm_area().

As said before, I really hate growing more get_vm_area and
free_vm_area users outside the core vmalloc code.  We have a few of those,
mostly due to ioremap (which is being consolidated) and executable code
allocation (for which there have been various attempts at consolidation,
and hopefully one finally succeeds).  So let's take a step back and
think about how we can do this without it.

For the dynamically growing part do you need a special allocator or
can we just go straight to the page allocator and implement this
in common code?

> For BPF use case the area_size will be 4Gbyte plus 64Kbyte of guard pages and
> area->addr known and fixed at the program verification time.

How is this ever going to work on 32-bit platforms?



* Re: [PATCH bpf-next] mm: Introduce vm_area_[un]map_pages().
  2024-02-21  5:52 ` Christoph Hellwig
@ 2024-02-21 19:05   ` Alexei Starovoitov
  2024-02-22 23:25     ` Alexei Starovoitov
  2024-02-23 17:14     ` Christoph Hellwig
  0 siblings, 2 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2024-02-21 19:05 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Linus Torvalds,
	Barret Rhoden, Johannes Weiner, Lorenzo Stoakes, Andrew Morton,
	Uladzislau Rezki, linux-mm, Kernel Team

On Tue, Feb 20, 2024 at 9:52 PM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Tue, Feb 20, 2024 at 11:26:13AM -0800, Alexei Starovoitov wrote:
> > From: Alexei Starovoitov <ast@kernel.org>
> >
> > vmap() API is used to map a set of pages into contiguous kernel virtual space.
> >
> > BPF would like to extend the vmap API to implement a lazily-populated
> > contiguous kernel virtual space which size and start address is fixed early.
> >
> > The vmap API has functions to request and release areas of kernel address space:
> > get_vm_area() and free_vm_area().
>
> As said before I really hate growing more get_vm_area and
> free_vm_area outside the core vmalloc code.  We have a few of those
> mostly due to ioremap (which is beeing consolidate) and executable code
> allocation (which there have been various attempts at consolidation,
> and hopefully one finally succeeds..).  So let's take a step back and
> think how we can do that without it.

There are also the Xen grant tables, which grab a range with get_vm_area()
but manage it on their own. That's not an ioremap case.
It looks to me like the vmalloc address range already has different kinds
of areas: vmalloc, vmap, ioremap, xen.

Maybe we can do:
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 7d112cc5f2a3..633c7b643daa 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -28,6 +28,7 @@ struct iov_iter;              /* in uio.h */
 #define VM_MAP_PUT_PAGES       0x00000200      /* put pages and free array in vfree */
 #define VM_ALLOW_HUGE_VMAP     0x00000400      /* Allow for huge pages on archs with HAVE_ARCH_HUGE_VMALLOC */
+#define VM_BPF                 0x00000800      /* bpf_arena pages */

+static inline struct vm_struct *get_bpf_vm_area(unsigned long size)
+{
+       return get_vm_area(size, VM_BPF);
+}

and enforce that flag in vm_area_[un]map_pages()?

vmallocinfo can display it or skip it.
Things like find_vm_area() can do something different with such an area
(if that was the concern).
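
For illustration, a minimal sketch of what enforcing such a flag inside
vm_area_map_pages() could look like (VM_BPF here is only the flag proposed in
the hunk above, not an existing vmalloc flag):

	/* reject areas that were not created via get_bpf_vm_area() */
	if (WARN_ON_ONCE(!(area->flags & VM_BPF)))
		return -EINVAL;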

> For the dynamically growing part do you need a special allocator or
> can we just go straight to the page allocator and implement this
> in common code?

It's a somewhat special allocator that uses a maple tree to manage
ranges within the 4G region and
alloc_pages_node(GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT)
to grab pages.
With an extra dance:
        memcg = bpf_map_get_memcg(map);
        old_memcg = set_active_memcg(memcg);
to make sure memcg accounting is done the common way for all bpf maps.
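
Roughly, the allocation path described above would look like the sketch below
(a sketch only, assuming the usual bpf memcg helpers are visible here; error
handling omitted):

	struct mem_cgroup *memcg, *old_memcg;
	struct page *page;

	memcg = bpf_map_get_memcg(map);
	old_memcg = set_active_memcg(memcg);
	/* __GFP_ACCOUNT charges the page to the active memcg */
	page = alloc_pages_node(NUMA_NO_NODE,
				GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT, 0);
	set_active_memcg(old_memcg);
	mem_cgroup_put(memcg);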

The tricky bpf-specific part is the computation of pgoff,
since it's a shared memory region between user space and the bpf prog.
The lower 32 bits of the pointer have to be the same for user space and bpf.
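
In other words (illustrative arithmetic only, not code from this patch): if
uaddr is a user space arena address and kern_vm_base is the start of the
kernel-side mapping, a page is mapped so that

	kaddr = kern_vm_base + (uaddr & 0xffffffffUL);	/* same lower 32 bits */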

Not much changed in the patch since the earlier thread.
Either find it in your email or here:
https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?h=arena&id=364c9b5d233d775728ec2bf3b4168fa6909e58d1

Are you suggesting an API like:

struct vm_struct *area = get_sparse_vm_area(size);
vm_area_alloc_pages(struct vm_struct *area, ulong addr, int page_cnt,
                    int numa_id);

where vm_area_alloc_pages() will allocate the pages and vmap_pages_range()
them, with all of the code in mm/vmalloc.c?

I can give it a shot.

The ugly part is that bpf_map_get_memcg() would need to be passed in somehow.

Another bpf-specific bit is the guard pages before and after the 4G range,
which such a vm_area_alloc_pages() would need to skip.

> > For BPF use case the area_size will be 4Gbyte plus 64Kbyte of guard pages and
> > area->addr known and fixed at the program verification time.
>
> How is this ever going to to work on 32-bit platforms?

bpf_arena requires 64-bit and an MMU.

ifeq ($(CONFIG_MMU)$(CONFIG_64BIT),yy)
obj-$(CONFIG_BPF_SYSCALL) += arena.o
endif

and special JIT support too.

With bpf_arena we can finally deprecate a bunch of things like bloom
filter bpf map, etc.


* Re: [PATCH bpf-next] mm: Introduce vm_area_[un]map_pages().
  2024-02-21 19:05   ` Alexei Starovoitov
@ 2024-02-22 23:25     ` Alexei Starovoitov
  2024-02-24  0:00       ` Alexei Starovoitov
  2024-02-23 17:14     ` Christoph Hellwig
  1 sibling, 1 reply; 7+ messages in thread
From: Alexei Starovoitov @ 2024-02-22 23:25 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Linus Torvalds,
	Barret Rhoden, Johannes Weiner, Lorenzo Stoakes, Andrew Morton,
	Uladzislau Rezki, linux-mm, Kernel Team

On Wed, Feb 21, 2024 at 11:05 AM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
>
> On Tue, Feb 20, 2024 at 9:52 PM Christoph Hellwig <hch@infradead.org> wrote:
> >
> > On Tue, Feb 20, 2024 at 11:26:13AM -0800, Alexei Starovoitov wrote:
> > > From: Alexei Starovoitov <ast@kernel.org>
> > >
> > > vmap() API is used to map a set of pages into contiguous kernel virtual space.
> > >
> > > BPF would like to extend the vmap API to implement a lazily-populated
> > > contiguous kernel virtual space which size and start address is fixed early.
> > >
> > > The vmap API has functions to request and release areas of kernel address space:
> > > get_vm_area() and free_vm_area().
> >
> > As said before I really hate growing more get_vm_area and
> > free_vm_area outside the core vmalloc code.  We have a few of those
> > mostly due to ioremap (which is beeing consolidate) and executable code
> > allocation (which there have been various attempts at consolidation,
> > and hopefully one finally succeeds..).  So let's take a step back and
> > think how we can do that without it.
>
> There are also xen grant tables that grab the range with get_vm_area(),
> but manage it on their own. It's not an ioremap case.
> It looks to me the vmalloc address range has different kinds of areas
> already: vmalloc, vmap, ioremap, xen.
>
> Maybe we can do:
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 7d112cc5f2a3..633c7b643daa 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -28,6 +28,7 @@ struct iov_iter;              /* in uio.h */
>  #define VM_MAP_PUT_PAGES       0x00000200      /* put pages and free
> array in vfree */
>  #define VM_ALLOW_HUGE_VMAP     0x00000400      /* Allow for huge
> pages on archs with HAVE_ARCH_HUGE_VMALLOC */
> +#define VM_BPF                 0x00000800      /* bpf_arena pages */
>
> +static inline struct vm_struct *get_bpf_vm_area(unsigned long size)
> +{
> +       return get_vm_area(size, VM_BPF);
> +}
>
> and enforce that flag in vm_area_[un]map_pages() ?
>
> vmallocinfo can display it or skip it.
> Things like find_vm_area() can do something different with such an area
> (if that was the concern).
>
> > For the dynamically growing part do you need a special allocator or
> > can we just go straight to the page allocator and implement this
> > in common code?
>
> It's a bit special allocator that is using maple tree to manage
> range within 4G region and
> alloc_pages_node(GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT)
> to grab pages.
> With extra dance:
>         memcg = bpf_map_get_memcg(map);
>         old_memcg = set_active_memcg(memcg);
> to make sure memcg accounting is done the common way for all bpf maps.
>
> The tricky bpf specific part is a computation of pgoff,
> since it's a shared memory region between user space and bpf prog.
> The lower 32-bits of the pointer have to be the same for user space and bpf.
>
> Not much changed in the patch since the earlier thread.
> Either find it in your email or here:
> https://git.kernel.org/pub/scm/linux/kernel/git/ast/bpf.git/commit/?h=arena&id=364c9b5d233d775728ec2bf3b4168fa6909e58d1
>
> Are you suggesting the api like:
>
> struct vm_struct *area = get_sparse_vm_area(size);
> vm_area_alloc_pages(struct vm_struct *area, ulong addr, int page_cnt,
> int numa_id);
>
> and vm_area_alloc_pages() will allocate pages and vmap_pages_range()
> them while all code in mm/vmalloc.c ?
>
> I can give it a shot.
>
> The ugly part is bpf_map_get_memcg() would need to be passed in somehow.
>
> Another bpf specific bit is the guard pages before and after 4G range
> and such vm_area_alloc_pages() would need to skip them.

I've looked at this approach more.
The somewhat generic-ish API for mm/vmalloc.c may look like:
struct vm_sparse_struct *area;

area = get_sparse_vm_area(vm_area_size, guard_size,
                          pgoff_offset, max_pages, memcg, ...);

vm_area_size is what get_vm_area() will reserve out of the kernel
vmalloc region. For the bpf_arena case it will be 4GB+64KB.
guard_size is the size of the guard area: 64KB for bpf_arena.
pgoff_offset is the offset at which page allocation needs to start
after the guard area.
For any normal vma, pgoff==0 is the first page at vma->vm_start.
bpf_arena is a sparse region shared between bpf and user space, and it needs
to keep the lower 32 bits of the address that user space received from mmap(),
so that the first allocated page, with pgoff==0, will be the first
page at the _user_ vma->vm_start.
Hence, for the kernel vmalloc range, the page allocator needs that
pgoff_offset.
max_pages is easy: it's the max number of pages that
this sparse vm area is allowed to allocate.
It's also driven by user space. When the user does
mmap(NULL, bpf_arena_size, ..., bpf_arena_map_fd)
it gets an address; that address determines pgoff_offset,
and arena_size determines max_pages.
That arena_size can be 1 page or 1000 pages, but always less than 4GB.
vm_area_size will be 4GB+64KB regardless.

vm_area_alloc_pages(struct vm_sparse_struct *area, ulong addr,
                    int page_cnt, int numa_id);
is semantically similar to user space mmap().
If addr == 0 the kernel will find a free range after pgoff_offset,
allocate page_cnt pages from there, and vmap them into the
kernel's vm_sparse_struct area.
If addr is specified it would have to be >= pgoff_offset,
and page_cnt <= max_pages.
All pages are accounted to the memcg specified at vm_sparse_struct
creation time.
And it will use a maple tree to track all these range allocations
within the vm_sparse_struct.
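
For concreteness, the declarations being described here might look roughly
like this (names and signatures are illustrative, mirroring the prose above;
they are not an existing mm interface):

struct vm_sparse_struct *get_sparse_vm_area(unsigned long vm_area_size,
					    unsigned long guard_size,
					    unsigned long pgoff_offset,
					    unsigned long max_pages,
					    struct mem_cgroup *memcg);
int vm_area_alloc_pages(struct vm_sparse_struct *area, unsigned long addr,
			int page_cnt, int numa_id);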

So far it looks like the bigger half of kernel/bpf/arena.c
would migrate to mm/vmalloc.c and would be very bpf-specific.

So I don't particularly like this direction. It feels like a burden
for both mm and bpf folks.

btw LWN just posted a nice article describing the motivation
https://lwn.net/Articles/961941/

So far doing:

+#define VM_BPF                 0x00000800      /* bpf_arena pages */
or VM_SPARSE?

+static inline struct vm_struct *get_bpf_vm_area(unsigned long size)
+{
+       return get_vm_area(size, VM_BPF);
+}
and enforcing that flag where appropriate in mm/vmalloc.c
is the easiest for everyone.
We probably should add
#define VM_XEN 0x00001000
and use it in xen use cases to differentiate
vmalloc vs vmap vs ioremap vs bpf vs xen users.

Please share your opinions.


* Re: [PATCH bpf-next] mm: Introduce vm_area_[un]map_pages().
  2024-02-21 19:05   ` Alexei Starovoitov
  2024-02-22 23:25     ` Alexei Starovoitov
@ 2024-02-23 17:14     ` Christoph Hellwig
  2024-02-23 17:27       ` Alexei Starovoitov
  1 sibling, 1 reply; 7+ messages in thread
From: Christoph Hellwig @ 2024-02-23 17:14 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Christoph Hellwig, bpf, Daniel Borkmann, Andrii Nakryiko,
	Linus Torvalds, Barret Rhoden, Johannes Weiner, Lorenzo Stoakes,
	Andrew Morton, Uladzislau Rezki, linux-mm, Kernel Team

On Wed, Feb 21, 2024 at 11:05:09AM -0800, Alexei Starovoitov wrote:
> +#define VM_BPF                 0x00000800      /* bpf_arena pages */
> 
> +static inline struct vm_struct *get_bpf_vm_area(unsigned long size)
> +{
> +       return get_vm_area(size, VM_BPF);
> +}
> 
> and enforce that flag in vm_area_[un]map_pages() ?
> 
> vmallocinfo can display it or skip it.
> Things like find_vm_area() can do something different with such an area
> (if that was the concern).

Well, a growing allocation is a generally useful feature.  I'd
rather not limit it to bpf if we can.

> > For the dynamically growing part do you need a special allocator or
> > can we just go straight to the page allocator and implement this
> > in common code?
> 
> It's a bit special allocator that is using maple tree to manage
> range within 4G region and
> alloc_pages_node(GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT)
> to grab pages.
> With extra dance:
>         memcg = bpf_map_get_memcg(map);
>         old_memcg = set_active_memcg(memcg);
> to make sure memcg accounting is done the common way for all bpf maps.

Ok, so it's not just a growing allocation but actually sparse and
all over the place?  That doesn't really make it easier to come
up with a good enough interface.  How do you decide what gets placed
where?

> struct vm_struct *area = get_sparse_vm_area(size);
> vm_area_alloc_pages(struct vm_struct *area, ulong addr, int page_cnt,
> int numa_id);
> 
> and vm_area_alloc_pages() will allocate pages and vmap_pages_range()
> them while all code in mm/vmalloc.c ?

My vague hope was that we could just start out with an area and
grow it.  But it sounds like you need something much more complex
than that.

But yes, a more specific API is probably a better idea.  And maybe
the cookie shouldn't be a VM area but rather a structure dedicated to
this.


* Re: [PATCH bpf-next] mm: Introduce vm_area_[un]map_pages().
  2024-02-23 17:14     ` Christoph Hellwig
@ 2024-02-23 17:27       ` Alexei Starovoitov
  0 siblings, 0 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2024-02-23 17:27 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Linus Torvalds,
	Barret Rhoden, Johannes Weiner, Lorenzo Stoakes, Andrew Morton,
	Uladzislau Rezki, linux-mm, Kernel Team

On Fri, Feb 23, 2024 at 9:14 AM Christoph Hellwig <hch@infradead.org> wrote:
>
> On Wed, Feb 21, 2024 at 11:05:09AM -0800, Alexei Starovoitov wrote:
> > +#define VM_BPF                 0x00000800      /* bpf_arena pages */
> >
> > +static inline struct vm_struct *get_bpf_vm_area(unsigned long size)
> > +{
> > +       return get_vm_area(size, VM_BPF);
> > +}
> >
> > and enforce that flag in vm_area_[un]map_pages() ?
> >
> > vmallocinfo can display it or skip it.
> > Things like find_vm_area() can do something different with such an area
> > (if that was the concern).
>
> Well, a growing allocation is a generally useful feature.  I'd
> rather not limit it to bpf if we can.

Sure. See the VM_SPARSE proposal in the other email.

> > > For the dynamically growing part do you need a special allocator or
> > > can we just go straight to the page allocator and implement this
> > > in common code?
> >
> > It's a bit special allocator that is using maple tree to manage
> > range within 4G region and
> > alloc_pages_node(GFP_KERNEL | __GFP_ZERO | __GFP_ACCOUNT)
> > to grab pages.
> > With extra dance:
> >         memcg = bpf_map_get_memcg(map);
> >         old_memcg = set_active_memcg(memcg);
> > to make sure memcg accounting is done the common way for all bpf maps.
>
> Ok, so it's not just a growing allocation but actually sparse and
> all over the place?  That doesn't really make it easier to come
> up with a good enough interface.

yep.

> How do you decide what gets placed
> where?

See the proposal in the other email in this thread.
tl;dr: it's a user space mmap()-like interface:
either give me N pages at any addr, or
give me N pages at this addr if that range is still free.
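
In terms of the vm_area_alloc_pages() sketch from the other email, those two
cases would look roughly like this (illustrative only):

	/* give me page_cnt pages at any free addr */
	vm_area_alloc_pages(area, 0, page_cnt, NUMA_NO_NODE);
	/* give me page_cnt pages at this addr, if that range is still free */
	vm_area_alloc_pages(area, addr, page_cnt, NUMA_NO_NODE);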

> > struct vm_struct *area = get_sparse_vm_area(size);
> > vm_area_alloc_pages(struct vm_struct *area, ulong addr, int page_cnt,
> > int numa_id);
> >
> > and vm_area_alloc_pages() will allocate pages and vmap_pages_range()
> > them while all code in mm/vmalloc.c ?
>
> My vague hope was that we could just start out with an area and
> grow it.  But it sounds like you need something much more complex
> that that.

Yes, with bpf-specific tricks due to the lower 32-bit wraparound.

> But yes, a more specific API is probably a better idea.  And maybe
> the cookie should be a VM area either but a structure dedicated to
> this.

Right, see the 'struct vm_sparse_struct' proposal in the other email.


* Re: [PATCH bpf-next] mm: Introduce vm_area_[un]map_pages().
  2024-02-22 23:25     ` Alexei Starovoitov
@ 2024-02-24  0:00       ` Alexei Starovoitov
  0 siblings, 0 replies; 7+ messages in thread
From: Alexei Starovoitov @ 2024-02-24  0:00 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: bpf, Daniel Borkmann, Andrii Nakryiko, Linus Torvalds,
	Barret Rhoden, Johannes Weiner, Lorenzo Stoakes, Andrew Morton,
	Uladzislau Rezki, linux-mm, Kernel Team

On Thu, Feb 22, 2024 at 3:25 PM Alexei Starovoitov
<alexei.starovoitov@gmail.com> wrote:
> >
> > I can give it a shot.
> >
> > The ugly part is bpf_map_get_memcg() would need to be passed in somehow.
> >
> > Another bpf specific bit is the guard pages before and after 4G range
> > and such vm_area_alloc_pages() would need to skip them.
>
> I've looked at this approach more.
> The somewhat generic-ish api for mm/vmalloc.c may look like:
> struct vm_sparse_struct *area;
>
> area = get_sparse_vm_area(vm_area_size, guard_size,
>                           pgoff_offset, max_pages, memcg, ...);
>
> vm_area_size is what get_vm_area() will reserve out of the kernel
> vmalloc region. For bpf_arena case it will be 4gb+64k.
> guard_size is the size of the guard area. 64k for bpf_arena.
> pgoff_offset is the offset where pages would need to start allocating
> after the guard area.
> For any normal vma the pgoff==0 is the first page after vma->vm_start.
> bpf_arena is bpf/user shared sparse region and it needs to keep lower 32-bit
> from the address that user space received from mmap().
> So that the first allocated page with pgoff=0 will be the first
> page for _user_ vma->vm_start.
> Hence for kernel vmalloc range the page allocator needs that
> pgoff_offset.
> max_pages is easy. It's the max number of pages that
> this sparse_vm_area is allowed to allocate.
> It's also driven by user space. When user does
> mmap(NULL, bpf_arena_size, ..., bpf_arena_map_fd)
> it gets an address and that address determines pgoff_offset
> and arena_size determines the max_pages.
> That arena_size can be 1 page or 1000 pages. Always less than 4Gb.
> But vm_area_size will be 4gb+64k regardless.
>
> vm_area_alloc_pages(struct vm_sparse_struct *area, ulong addr,
>                     int page_cnt, int numa_id);
> is semantically similar to user's mmap().
> If addr == 0 the kernel will find a free range after pgoff_offset
> and will allocate page_cnt pages from there and vmap to
> kernel's vm_sparse_struct area.
> If addr is specified it would have to be >= pgoff_offset
> and page_cnt <= max_pages.
> All pages are accounted into memcg specified at vm_sparse_struct
> creation time.
> And it will use maple tree to track all these range allocation
> within vm_sparse_struct.
>
> So far it looks like the bigger half of kernel/bpf/arena.c
> will migrate to mm/vmalloc.c and will be very bpf specific.
>
> So I don't particularly like this direction. Feels like a burden
> for mm and bpf folks.
>
> btw LWN just posted a nice article describing the motivation
> https://lwn.net/Articles/961941/
>
> So far doing:
>
> +#define VM_BPF                 0x00000800      /* bpf_arena pages */
> or VM_SPARSE ?
>
> and enforcing that flag where appropriate in mm/vmalloc.c
> is the easiest for everyone.
> We probably should add
> #define VM_XEN 0x00001000
> and use it in xen use cases to differentiate
> vmalloc vs vmap vs ioremap vs bpf vs xen users.

Here is what I had in mind:
https://lore.kernel.org/bpf/20240223235728.13981-1-alexei.starovoitov@gmail.com/

