From: Hao Luo <haoluo@google.com>
To: Yonghong Song <yhs@fb.com>
Cc: Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	KP Singh <kpsingh@kernel.org>, Martin KaFai Lau <kafai@fb.com>,
	Song Liu <songliubraving@fb.com>,
	bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC bpf-next 0/2] Mmapable task local storage.
Date: Mon, 28 Mar 2022 10:46:15 -0700	[thread overview]
Message-ID: <CA+khW7iq+UKsfQxdT3QpSqPUFN8gQWWDLoQ9zxB=uWTs63AZEA@mail.gmail.com> (raw)
In-Reply-To: <CA+khW7g3hy61qnvtqUizaW+qB6wk=Y9cjivhORshOk=ZzTXJ-A@mail.gmail.com>

On Mon, Mar 28, 2022 at 10:39 AM Hao Luo <haoluo@google.com> wrote:
>
> Hi Yonghong,
>
> On Fri, Mar 25, 2022 at 12:16 PM Yonghong Song <yhs@fb.com> wrote:
> >
> > On 3/24/22 4:41 PM, Hao Luo wrote:
> > > Some map types support mmap operation, which allows userspace to
> > > communicate with BPF programs directly. Currently only arraymap
> > > and ringbuf have mmap implemented.
> > >
> > > However, in some use cases, when multiple program instances can
> > > run concurrently, global mmapable memory can cause races. In that
> > > case, userspace needs to provide the necessary synchronization to
> > > coordinate the usage of mapped global data. This can become a
> > > bottleneck.
> >
> > I can see your use case here. Each calling process can get the
> > corresponding bpf program's task local storage data through the
> > mmap interface. As you mentioned, there is a tradeoff between
> > using more memory and avoiding global synchronization.
> >
> > I am thinking that another approach, based on bpf_iter, could
> > achieve a similar result. We could implement a bpf_iter for the
> > task local storage map, which could optionally take a tid to
> > retrieve the data for that particular tid. This way, user space
> > needs an explicit syscall, but does not need to allocate more
> > memory than necessary.
> >
> > WDYT?
> >
>
> Thanks for the suggestion. I have two thoughts about bpf_iter + tid and mmap:
>
> - mmap prevents the calling task from reading other tasks' values.
> Using bpf_iter, one can pass another task's tid to get its values. I
> assume there are two potential ways of passing a tid to bpf_iter: one
> is to use global data in the bpf prog, the other is adding a
> tid-parameterized iter_link. The first is not easy for unprivileged
> tasks to use. The second requires creating one iter_link object for
> each tid of interest. Neither may be easy to use.
>
> - Regarding adding an explicit syscall: I thought about adding
> write/read syscalls for task local storage maps, just like reading
> values from an iter_link. Writing to or reading from a task local
> storage map would update or read the current task's value. I think
> this could achieve the same effect as mmap.
>

Actually, my use case for mmap on task local storage is to allow
userspace to pass FDs into bpf progs. Some of the helpers I want to add
need to take an FD as a parameter, and the bpf progs can run
concurrently, so using global data is racy. Mmapable task local
storage is the best solution I have found for this purpose.

Song also mentioned to me offline that mmapable task local storage
may be useful for his use case.

I am actually open to other proposals.

> > >
> > > It would be great to have a mmapable local storage in that case.
> > > This patch adds that.
> > >
> > > Mmap isn't a BPF syscall, so unprivileged users can also use it
> > > to interact with maps.
> > >
> > > Currently the only way of allocating a mmapable map area is using
> > > vmalloc(), and it is only used at map allocation time. Vmalloc()
> > > may sleep, so it is not suitable for maps that may allocate memory
> > > in an atomic context, such as local storage. Local storage uses
> > > kmalloc() with GFP_ATOMIC, which doesn't sleep. This patch uses
> > > kmalloc() with GFP_ATOMIC for the mmapable map area as well.
> > >
> > > Allocating mmapable memory requires page alignment, so we have to
> > > deliberately allocate more memory than necessary to obtain an
> > > address at which sdata->data is aligned on a page boundary. The
> > > calculation of the mmapable allocation size, and the actual
> > > allocation/deallocation, are packaged in three functions:
> > >
> > >   - bpf_map_mmapable_alloc_size()
> > >   - bpf_map_mmapable_kzalloc()
> > >   - bpf_map_mmapable_kfree()
> > >
> > > BPF local storage uses them to provide a generic mmap API:
> > >
> > >   - bpf_local_storage_mmap()
> > >
> > > And task local storage adds the mmap callback:
> > >
> > >   - task_storage_map_mmap()
> > >
> > > When an application calls mmap on a task local storage map, it
> > > gets its own local storage.
> > >
> > > Overall, mmapable local storage trades memory for flexibility and
> > > efficiency. It introduces memory fragmentation but can make
> > > programs stateless, which is useful in some cases.
> > >
> > > Hao Luo (2):
> > >    bpf: Mmapable local storage.
> > >    selftests/bpf: Test mmapable task local storage.
> > >
> > >   include/linux/bpf.h                           |  4 +
> > >   include/linux/bpf_local_storage.h             |  5 +-
> > >   kernel/bpf/bpf_local_storage.c                | 73 +++++++++++++++++--
> > >   kernel/bpf/bpf_task_storage.c                 | 40 ++++++++++
> > >   kernel/bpf/syscall.c                          | 67 +++++++++++++++++
> > >   .../bpf/prog_tests/task_local_storage.c       | 38 ++++++++++
> > >   .../bpf/progs/task_local_storage_mmapable.c   | 38 ++++++++++
> > >   7 files changed, 257 insertions(+), 8 deletions(-)
> > >   create mode 100644 tools/testing/selftests/bpf/progs/task_local_storage_mmapable.c
> > >

Thread overview: 20+ messages
2022-03-24 23:41 [PATCH RFC bpf-next 0/2] Mmapable task local storage Hao Luo
2022-03-24 23:41 ` [PATCH RFC bpf-next 1/2] bpf: Mmapable " Hao Luo
2022-03-24 23:41 ` [PATCH RFC bpf-next 2/2] selftests/bpf: Test mmapable task " Hao Luo
2022-03-25 19:16 ` [PATCH RFC bpf-next 0/2] Mmapable " Yonghong Song
2022-03-28 17:39   ` Hao Luo
2022-03-28 17:46     ` Hao Luo [this message]
2022-03-29  9:37       ` Kumar Kartikeya Dwivedi
2022-03-29 17:43         ` Hao Luo
2022-03-29 21:45           ` Martin KaFai Lau
2022-03-30 18:05             ` Hao Luo
2022-03-29 23:29           ` Alexei Starovoitov
2022-03-30 18:06             ` Hao Luo
2022-03-30 18:16               ` Alexei Starovoitov
2022-03-30 18:26                 ` Hao Luo
2022-03-31 22:32                   ` KP Singh
2022-03-31 23:06                     ` Alexei Starovoitov
2022-04-02  0:48                       ` KP Singh
2022-03-25  7:53 [PATCH RFC bpf-next 1/2] bpf: Mmapable " kernel test robot
2022-03-25 12:40 ` [kbuild] " Dan Carpenter
2022-03-25 17:36 ` Hao Luo
