bpf.vger.kernel.org archive mirror
From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: "Toke Høiland-Jørgensen" <toke@redhat.com>
Cc: daniel@iogearbox.net, ast@fb.com, bpf@vger.kernel.org,
	netdev@vger.kernel.org
Subject: Re: [PATCH bpf] xdp: Handle device unregister for devmap_hash map type
Date: Thu, 17 Oct 2019 05:09:07 +0900	[thread overview]
Message-ID: <2d516208-8c46-707c-4484-4547e66fc128@i-love.sakura.ne.jp> (raw)
In-Reply-To: <20191016132802.2760149-1-toke@redhat.com>

On 2019/10/16 22:28, Toke Høiland-Jørgensen wrote:
> It seems I forgot to add handling of devmap_hash type maps to the device
> unregister hook for devmaps. This omission causes devices to not be
> properly released, which causes hangs.
> 
> Fix this by adding the missing handler.
> 
> Fixes: 6f9d451ab1a3 ("xdp: Add devmap_hash map type for looking up devices by hashed index")
> Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>

Well, regarding 6f9d451ab1a3, I think that we want an explicit "(u64)" cast

@@ -97,6 +123,14 @@ static int dev_map_init_map(struct bpf_dtab *dtab, union bpf_attr *attr)
        cost = (u64) dtab->map.max_entries * sizeof(struct bpf_dtab_netdev *);
        cost += sizeof(struct list_head) * num_possible_cpus();

+       if (attr->map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
+               dtab->n_buckets = roundup_pow_of_two(dtab->map.max_entries);
+
+               if (!dtab->n_buckets) /* Overflow check */
+                       return -EINVAL;
+               cost += sizeof(struct hlist_head) * dtab->n_buckets;

                                                    ^here

+       }
+
        /* if map size is larger than memlock limit, reject it */
        err = bpf_map_charge_init(&dtab->map.memory, cost);
        if (err)

like "(u64) dtab->map.max_entries * sizeof(struct bpf_dtab_netdev *)" already does.
Otherwise, on a 32-bit build, "sizeof(struct hlist_head) * dtab->n_buckets" is
computed in 32 bits and can wrap around to 0.

----------
#include <stdio.h>
#include <linux/types.h>

int main(int argc, char *argv[])
{
        /* On a 32-bit build, sizeof() yields a 32-bit size_t, so the
         * product below is computed in 32 bits and wraps around to 0
         * before being widened for the assignment to the 64-bit cost. */
        volatile __u32 i = 4294967296ULL / sizeof(unsigned long *);
        volatile __u64 cost = sizeof(unsigned long *) * i;

        printf("cost=%llu\n", (unsigned long long) cost);
        return 0;
}
----------


Thread overview: 10+ messages
2019-10-16 13:28 [PATCH bpf] xdp: Handle device unregister for devmap_hash map type Toke Høiland-Jørgensen
2019-10-16 16:24 ` Martin Lau
2019-10-17 10:27   ` Toke Høiland-Jørgensen
2019-10-16 20:09 ` Tetsuo Handa [this message]
2019-10-17 10:28   ` Toke Høiland-Jørgensen
2019-10-17 15:23     ` Alexei Starovoitov
2019-10-17 15:40       ` Toke Høiland-Jørgensen
2019-10-17 19:17 ` Andrii Nakryiko
2019-10-18 10:31   ` Toke Høiland-Jørgensen
2019-10-18 16:28     ` Andrii Nakryiko
