From: KP Singh <kpsingh@chromium.org>
To: Martin KaFai Lau <kafai@fb.com>
Cc: open list <linux-kernel@vger.kernel.org>,
bpf <bpf@vger.kernel.org>, Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Paul Turner <pjt@google.com>, Jann Horn <jannh@google.com>,
Hao Luo <haoluo@google.com>
Subject: Re: [PATCH bpf-next 1/5] bpf: Implement task local storage
Date: Tue, 3 Nov 2020 15:46:34 +0100 [thread overview]
Message-ID: <CACYkzJ6uzOu6YP2MQs4eYScXzATE+Ha5WLcNWW2cskObC23bEw@mail.gmail.com> (raw)
In-Reply-To: <CACYkzJ5VU2Pd2ZiY7AKJM0yZ2NsDbQOu1Y_FYwkBv6M6NFvkcw@mail.gmail.com>
On Fri, Oct 30, 2020 at 11:53 AM KP Singh <kpsingh@chromium.org> wrote:
>
> Thanks for taking a look!
>
> On Wed, Oct 28, 2020 at 2:13 AM Martin KaFai Lau <kafai@fb.com> wrote:
> >
> > On Tue, Oct 27, 2020 at 06:03:13PM +0100, KP Singh wrote:
> > [ ... ]
> >
> > > diff --git a/kernel/bpf/bpf_task_storage.c b/kernel/bpf/bpf_task_storage.c
> > > new file mode 100644
> > > index 000000000000..774140c458cc
> > > --- /dev/null
> > > +++ b/kernel/bpf/bpf_task_storage.c
> > > @@ -0,0 +1,327 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * Copyright (c) 2019 Facebook
> > > + * Copyright 2020 Google LLC.
> > > + */
> > > +
> > > +#include <linux/pid.h>
> > > +#include <linux/sched.h>
> > > +#include <linux/rculist.h>
> > > +#include <linux/list.h>
> > > +#include <linux/hash.h>
> > > +#include <linux/types.h>
> > > +#include <linux/spinlock.h>
> > > +#include <linux/bpf.h>
> > > +#include <linux/bpf_local_storage.h>
> > > +#include <net/sock.h>
> > Is this required?
>
> Nope. Removed.
>
> >
> > > +#include <uapi/linux/sock_diag.h>
> > > +#include <uapi/linux/btf.h>
> > > +#include <linux/bpf_lsm.h>
> > > +#include <linux/btf_ids.h>
> > > +#include <linux/fdtable.h>
> > > +
> > > +DEFINE_BPF_STORAGE_CACHE(task_cache);
> > > +
> > > +static struct bpf_local_storage __rcu **task_storage_ptr(void *owner)
>
> [...]
>
> > > + err = -EBADF;
> > > + goto out_fput;
> > > + }
> > > +
> > > + pid = get_pid(f->private_data);
> > n00b question. Is get_pid(f->private_data) required?
> > f->private_data could be freed while holding f->f_count?
>
> I would assume that holding a reference to the file should also
> keep the private_data alive, but I was not sure, so I grabbed the
> extra reference.
>
> >
> > > + task = get_pid_task(pid, PIDTYPE_PID);
> > Should put_task_struct() be called before returning?
>
> If we keep using get_pid_task(), then yes, put_task_struct() should be
> called, since get_pid_task() grabs a reference to the task. We could
> also call pid_task() under RCU, but it might be cleaner to just pair
> get_pid_task() with put_task_struct().
I refactored this to use pidfd_get_pid(), and it seems we can simply
call pid_task() since we are already in an RCU read-side critical
section. To be pedantic, I also added a WARN_ON_ONCE(!rcu_read_lock_held());
(although this is not strictly required, as lockdep should catch a
missing lock by default).
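A minimal sketch of what the refactor described above could look like
(not the actual patch: the function name lookup_task_storage() is made
up for illustration, and the exact pidfd_get_pid() signature may differ
across kernel versions). The key point is that pid_task() does not take
a task reference, so the task is only touched inside the RCU read-side
critical section:

```c
/* Hypothetical sketch: resolve a map key (a pidfd) to its task and do
 * the storage lookup under RCU, so no task reference is needed. */
static void *lookup_task_storage(struct bpf_map *map, int fd)
{
	struct bpf_local_storage_data *sdata = NULL;
	struct task_struct *task;
	unsigned int f_flags;
	struct pid *pid;

	/* Takes a reference on the struct pid; dropped with put_pid(). */
	pid = pidfd_get_pid(fd, &f_flags);
	if (IS_ERR(pid))
		return ERR_CAST(pid);

	rcu_read_lock();
	/* Belt and suspenders: lockdep would flag a missing lock anyway. */
	WARN_ON_ONCE(!rcu_read_lock_held());
	task = pid_task(pid, PIDTYPE_PID);
	if (task)
		sdata = task_storage_lookup(task, map, true);
	rcu_read_unlock();

	put_pid(pid);
	return sdata ?: ERR_PTR(-ENOENT);
}
```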
- KP
>
> >
> > > + if (!task || !task_storage_ptr(task)) {
> > "!task_storage_ptr(task)" is unnecessary, task_storage_lookup() should
> > have taken care of it.
> >
> >
> > > + err = -ENOENT;
> > > + goto out;
> > > + }
> > > +
> > > + sdata = task_storage_lookup(task, map, true);
> > > + put_pid(pid);
>
> [...]
>
> > > + .map_lookup_elem = bpf_pid_task_storage_lookup_elem,
> > > + .map_update_elem = bpf_pid_task_storage_update_elem,
> > > + .map_delete_elem = bpf_pid_task_storage_delete_elem,
> > Please exercise the syscall use cases also in the selftest.
>
> Will do. Thanks for the nudge :)
I also added another patch to exercise them for the other storage types too.
- KP
>
> >
> > > + .map_check_btf = bpf_local_storage_map_check_btf,
> > > + .map_btf_name = "bpf_local_storage_map",
> > > + .map_btf_id = &task_storage_map_btf_id,
> > > + .map_owner_storage_ptr = task_storage_ptr,
> > > +};
> > > +
Thread overview: 19+ messages (newest: 2020-11-03 14:47 UTC)
2020-10-27 17:03 [PATCH bpf-next 0/5] Implement task_local_storage KP Singh
2020-10-27 17:03 ` [PATCH bpf-next 1/5] bpf: Implement task local storage KP Singh
2020-10-28 1:13 ` Martin KaFai Lau
2020-10-30 10:53 ` KP Singh
2020-11-03 14:46 ` KP Singh [this message]
2020-10-28 1:22 ` Martin KaFai Lau
2020-11-03 14:52 ` KP Singh
2020-10-29 23:12 ` Andrii Nakryiko
2020-10-30 11:02 ` KP Singh
2020-10-29 23:27 ` Song Liu
2020-10-30 11:07 ` KP Singh
2020-10-31 0:02 ` Song Liu
2020-10-27 17:03 ` [PATCH bpf-next 2/5] bpf: Implement get_current_task_btf and RET_PTR_TO_BTF_ID KP Singh
2020-10-28 1:27 ` Martin KaFai Lau
2020-10-31 18:45 ` kernel test robot
2020-10-31 18:45 ` [RFC PATCH] bpf: bpf_get_current_task_btf_proto can be static kernel test robot
2020-10-27 17:03 ` [PATCH bpf-next 3/5] bpf: Fix tests for local_storage KP Singh
2020-10-27 17:03 ` [PATCH bpf-next 4/5] bpf: Update selftests for local_storage to use vmlinux.h KP Singh
2020-10-27 17:03 ` [PATCH bpf-next 5/5] bpf: Add tests for task_local_storage KP Singh