From: Stanislav Fomichev <sdf@fomichev.me>
To: Yonghong Song <yhs@fb.com>
Cc: Jakub Kicinski <jakub.kicinski@netronome.com>,
	Brian Vazquez <brianvv@google.com>,
	Alexei Starovoitov <ast@fb.com>,
	"bpf@vger.kernel.org" <bpf@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Kernel Team <Kernel-team@fb.com>
Subject: Re: [PATCH bpf-next 00/13] bpf: adding map batch processing support
Date: Fri, 30 Aug 2019 14:18:09 -0700
Message-ID: <20190830211809.GB2101@mini-arch>
In-Reply-To: <eda3c9e0-8ad6-e684-0aeb-d63b9ed60aa7@fb.com>

On 08/30, Yonghong Song wrote:
> 
> 
> On 8/30/19 1:15 PM, Stanislav Fomichev wrote:
> > On 08/29, Jakub Kicinski wrote:
> >> On Thu, 29 Aug 2019 16:13:59 -0700, Brian Vazquez wrote:
> >>>> We need a per-map implementation of the exec side, but roughly maps
> >>>> would do:
> >>>>
> >>>>          LIST_HEAD(deleted);
> >>>>
> >>>>          for entry in map {
> >>>>                  struct map_op_ctx ctx = {
> >>>>                          .key    = entry->key,
> >>>>                          .value  = entry->value,
> >>>>                  };
> >>>>
> >>>>                  act = BPF_PROG_RUN(filter, &ctx);
> >>>>                  if (act & ~ACT_BITS)
> >>>>                          return -EINVAL;
> >>>>
> >>>>                  if (act & DELETE) {
> >>>>                          /* unlink now, free only after the
> >>>>                           * RCU grace period below
> >>>>                           */
> >>>>                          map_unlink(entry);
> >>>>                          list_add(entry, &deleted);
> >>>>                  }
> >>>>                  if (act & STOP)
> >>>>                          break;
> >>>>          }
> >>>>
> >>>>          synchronize_rcu();
> >>>>
> >>>>          for entry in deleted {
> >>>>                  struct map_op_ctx ctx = {
> >>>>                          .key    = entry->key,
> >>>>                          .value  = entry->value,
> >>>>                  };
> >>>>
> >>>>                  BPF_PROG_RUN(dumper, &ctx);
> >>>>                  map_free(entry);
> >>>>          }
> >>>>   
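For illustration, the filter program driven by that loop might look
roughly like the sketch below. Everything in it is hypothetical:
struct map_op_ctx, the ACT_DELETE/ACT_STOP bits, the section name, and
the flow_stats value type are placeholders for uapi that does not
exist yet.

        #include <linux/bpf.h>
        #include <bpf/bpf_helpers.h>

        /* hypothetical context and action bits, mirroring the sketch above */
        struct map_op_ctx {
                void *key;
                void *value;
        };
        #define ACT_DELETE      (1U << 0)
        #define ACT_STOP        (1U << 1)

        /* hypothetical map value type for this example */
        struct flow_stats {
                __u64 last_seen_ns;
                __u64 packets;
        };

        SEC("map_filter")
        int drop_stale(struct map_op_ctx *ctx)
        {
                struct flow_stats *v = ctx->value;

                /* ask the kernel to unlink entries idle for over 30s;
                 * they get dumped and freed after the grace period
                 */
                if (bpf_ktime_get_ns() - v->last_seen_ns > 30ULL * 1000000000ULL)
                        return ACT_DELETE;
                return 0;
        }

        char LICENSE[] SEC("license") = "GPL";
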
> >>> Hi Jakub,
> >>>
> >>> How would that approach support percpu maps?
> >>>
> >>> I'm thinking of a scenario where you want to do some calculations on
> >>> percpu maps and you are interested in the info from all the cpus, not
> >>> just the one that is running the bpf program. Currently, on a percpu
> >>> map, the bpf_map_lookup_elem helper only returns the pointer to the
> >>> data of the executing cpu.
> >>
> >> Right, we need to have the iteration outside of the bpf program itself,
> >> and pass the element in through the context. That way we can feed each
> >> per cpu entry into the program separately.
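To make that concrete, here is a sketch of how the exec side could walk
one percpu entry; this is an assumed design in the style of the loop
above, and the .cpu context field and the pptr member are illustrative,
not existing code:

        for_each_possible_cpu(cpu) {
                struct map_op_ctx ctx = {
                        .key    = entry->key,
                        /* hand the program this CPU's copy of the value */
                        .value  = per_cpu_ptr(entry->pptr, cpu),
                        .cpu    = cpu,  /* hypothetical field */
                };

                act |= BPF_PROG_RUN(filter, &ctx);
        }

The program would then see one (key, value, cpu) triple per invocation
and could aggregate across calls or act on individual CPUs.
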
> > My 2 cents:
> > 
> > I personally like Jakub's/Quentin's proposal more. So if I get to choose
> > between this series and Jakub's filter+dump in BPF, I'd pick filter+dump
> > (pending the per-cpu issue, which we actually care about).
> > 
> > But if we can have both, I don't have any objections; this patch
> > series looks to me a lot like what Brian did, just extended to more
> > commands. If we are fine with the shortcomings raised about the
> > original series, then let's go with this version. Maybe we can also
> > look into addressing those shortcomings independently.
> > 
> > But if I pretend that we live in an ideal world, I'd just go with
> > whatever Jakub and Quentin are doing, so we don't have to support
> > two APIs that essentially do the same thing (minus batched update,
> > though it looks like there is no clear use case for that yet).
> > 
> > I guess you can hold off on this series a bit and discuss it at LPC;
> > you have a talk dedicated to that :-) (and AFAIU, you are all going).
> 
> Absolutely. We will have a discussion on map batching, and I signed
> up for that :-). One of the goals of this patch set is for me to explore
> what uapi (attr and bpf subcommands) we should expose to users.
> Hopefully by then we will have more clarity
> on Jakub's approach and we can discuss how to proceed.
Sounds good! Your series didn't have an RFC tag, so I wasn't
sure whether we'd fully committed to that approach or not.

Thread overview: 38+ messages
2019-08-29  6:45 [PATCH bpf-next 00/13] bpf: adding map batch processing support Yonghong Song
2019-08-29  6:45 ` [PATCH bpf-next 01/13] bpf: add bpf_map_value_size and bpf_map_copy_value helper functions Yonghong Song
2019-08-29 22:04   ` Song Liu
2019-08-30  6:40     ` Yonghong Song
2019-08-29  6:45 ` [PATCH bpf-next 02/13] bpf: refactor map_update_elem() Yonghong Song
2019-08-29 23:37   ` Song Liu
2019-08-29  6:45 ` [PATCH bpf-next 03/13] bpf: refactor map_delete_elem() Yonghong Song
2019-08-29 23:39   ` Song Liu
2019-08-29  6:45 ` [PATCH bpf-next 04/13] bpf: refactor map_get_next_key() Yonghong Song
2019-08-29 23:39   ` Song Liu
2019-08-29  6:45 ` [PATCH bpf-next 05/13] bpf: adding map batch processing support Yonghong Song
2019-08-29 23:01   ` Brian Vazquez
2019-08-30  6:39     ` Yonghong Song
2019-08-30  6:58       ` Alexei Starovoitov
2019-08-29  6:45 ` [PATCH bpf-next 06/13] tools/bpf: sync uapi header bpf.h Yonghong Song
2019-08-29  6:45 ` [PATCH bpf-next 07/13] tools/bpf: implement libbpf API functions for map batch operations Yonghong Song
2019-08-29  6:45 ` [PATCH bpf-next 08/13] tools/bpf: add test for bpf_map_update_batch() Yonghong Song
2019-08-29  6:45 ` [PATCH bpf-next 09/13] tools/bpf: add test for bpf_map_lookup_batch() Yonghong Song
2019-08-29  6:45 ` [PATCH bpf-next 10/13] tools/bpf: add test for bpf_map_lookup_and_delete_batch() Yonghong Song
2019-08-29  6:45 ` [PATCH bpf-next 11/13] tools/bpf: add test for bpf_map_delete_batch() Yonghong Song
2019-08-29  6:45 ` [PATCH bpf-next 12/13] tools/bpf: add a multithreaded test for map batch operations Yonghong Song
2019-08-29  6:45 ` [PATCH bpf-next 13/13] tools/bpf: measure map batching perf Yonghong Song
2019-08-29 18:39 ` [PATCH bpf-next 00/13] bpf: adding map batch processing support Jakub Kicinski
2019-08-29 23:13   ` Brian Vazquez
2019-08-30  0:15     ` Jakub Kicinski
2019-08-30 20:15       ` Stanislav Fomichev
2019-08-30 20:55         ` Yonghong Song
2019-08-30 21:10           ` Jakub Kicinski
2019-08-30 22:24             ` Yonghong Song
2019-08-30 21:18           ` Stanislav Fomichev [this message]
2019-09-03 21:01             ` Alexei Starovoitov
2019-09-03 22:30               ` Stanislav Fomichev
2019-09-03 23:07                 ` Brian Vazquez
2019-09-04  1:35                   ` Alexei Starovoitov
2019-09-03 23:07                 ` Yonghong Song
2019-08-30  7:25   ` Yonghong Song
2019-08-30 21:35     ` Jakub Kicinski
2019-08-30 22:38       ` Yonghong Song
