From: "Björn Töpel" <firstname.lastname@example.org>
To: Jakub Kicinski <email@example.com>
Cc: Netdev <firstname.lastname@example.org>,
	"Alexei Starovoitov" <email@example.com>,
	"Daniel Borkmann" <firstname.lastname@example.org>,
	"Björn Töpel" <email@example.com>,
	bpf <firstname.lastname@example.org>,
	"Magnus Karlsson" <email@example.com>,
	"Karlsson, Magnus" <firstname.lastname@example.org>,
	"Fijalkowski, Maciej" <email@example.com>,
	"Toke Høiland-Jørgensen" <firstname.lastname@example.org>
Subject: Re: [PATCH bpf-next v2 1/2] xsk: store struct xdp_sock as a flexible array member of the XSKMAP
Date: Tue, 29 Oct 2019 07:20:55 +0100
Message-ID: <CAJ+HfNjDzNg9wdNkhx7BVkK5Udd3_WP0UMT8jTyssd254M6NsQ@mail.gmail.com> (raw)
In-Reply-To: <email@example.com>

On Mon, 28 Oct 2019 at 23:26, Jakub Kicinski <firstname.lastname@example.org> wrote:
>
> On Mon, 28 Oct 2019 23:11:50 +0100, Björn Töpel wrote:
> > On Mon, 28 Oct 2019 at 18:55, Jakub Kicinski
> > <email@example.com> wrote:
> > >
> > > On Fri, 25 Oct 2019 09:18:40 +0200, Björn Töpel wrote:
> > > > From: Björn Töpel <firstname.lastname@example.org>
> > > >
> > > > Prior to this commit, the XDP socket instances were stored in a
> > > > separately allocated array of the XSKMAP. Now, we store the sockets
> > > > as a flexible array member, in a similar fashion to the arraymap.
> > > > Doing so, we do less pointer chasing in the lookup.
> > > >
> > > > Signed-off-by: Björn Töpel <email@example.com>
> > >
> > > Thanks for the re-spin.
> > >
> > > > diff --git a/kernel/bpf/xskmap.c b/kernel/bpf/xskmap.c
> > > > index 82a1ffe15dfa..a83e92fe2971 100644
> > > > --- a/kernel/bpf/xskmap.c
> > > > +++ b/kernel/bpf/xskmap.c
> > > >
> > > > @@ -92,44 +93,35 @@ static struct bpf_map *xsk_map_alloc(union bpf_attr *attr)
> > > >  	    attr->map_flags & ~(BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY))
> > > >  		return ERR_PTR(-EINVAL);
> > > >
> > > > -	m = kzalloc(sizeof(*m), GFP_USER);
> > > > -	if (!m)
> > > > +	numa_node = bpf_map_attr_numa_node(attr);
> > > > +	size = struct_size(m, xsk_map, attr->max_entries);
> > > > +	cost = size + array_size(sizeof(*m->flush_list), num_possible_cpus());
> > >
> > > Now we didn't use array_size() previously because the sum here may
> > > overflow.
> > >
> > > We could use __ab_c_size() here, the name is probably too ugly to use
> > > directly and IDK what we'd have to name such an accumulation helper...
> > >
> > > So maybe just make cost and size a u64 and we should be in the clear.
> > >
> >
> > Hmm, but both:
> >   int bpf_map_charge_init(struct bpf_map_memory *mem, size_t size);
> >   void *bpf_map_area_alloc(size_t size, int numa_node);
> > pass size as size_t, so casting to u64 doesn't really help on 32-bit
> > systems, right?
>
> Yup :( IOW looks like the overflows will not be caught on 32bit
> machines in all existing code that does the (u64) cast. I hope
> I'm wrong there.
>
> > Wdyt about simply adding:
> >   if (cost < size)
> >           return ERR_PTR(-EINVAL);
> > after the cost calculation for explicit overflow checking?
>
> We'd need that for all users of these helpers. Could it perhaps make
> sense to pass "alloc_size" and "extra_cost" as separate size_t to
> bpf_map_charge_init() and then we can do the overflow checking there,
> centrally?
>

The cost/size calculations seem to vary a bit from map to map, so I
don't know about the extra size_t arguments... but all of them do use
u64 for cost and explicit casting, in favor of u32 overflow checks.
Changing the bpf_map_charge_init()/bpf_map_area_alloc() size argument
to u64 would be the smallest change, together with a 64-to-32 overflow
check in those functions.

> We can probably get rid of all the u64 casting too at that point,
> and use standard overflow helpers, yuppie :)
>

Yeah, that's the other path, but more churn (check_add_overflow() in
every map). Preferred path?

> > So, if size's struct_size overflows, the allocation will fail.
> > And we'll catch the cost overflow with the if-statement, no?
> >
> > Another option is changing the size_t in bpf_map_... to u64. Maybe
> > that's better, since arraymap and devmap use u64 for cost/size.
Thread overview: 8+ messages

2019-10-25  7:18 [PATCH bpf-next v2 0/2] xsk: XSKMAP lookup improvements Björn Töpel
2019-10-25  7:18 ` [PATCH bpf-next v2 1/2] xsk: store struct xdp_sock as a flexible array member of the XSKMAP Björn Töpel
2019-10-28 17:55   ` Jakub Kicinski
2019-10-28 22:11     ` Björn Töpel
2019-10-28 22:26       ` Jakub Kicinski
2019-10-29  6:20         ` Björn Töpel [this message]
2019-10-29 13:44           ` Alexei Starovoitov
2019-10-25  7:18 ` [PATCH bpf-next v2 2/2] bpf: implement map_gen_lookup() callback for XSKMAP Björn Töpel