From: "Loftus, Ciara" <ciara.loftus@intel.com>
To: Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: Networking <netdev@vger.kernel.org>, bpf <bpf@vger.kernel.org>,
"Karlsson, Magnus" <magnus.karlsson@intel.com>,
"Björn Töpel" <bjorn@kernel.org>,
"Alexei Starovoitov" <alexei.starovoitov@gmail.com>
Subject: RE: [PATCH v4 bpf 2/3] libbpf: restore umem state after socket create failure
Date: Thu, 8 Apr 2021 05:52:58 +0000 [thread overview]
Message-ID: <1449b25b2176449394ee07fea5750469@intel.com> (raw)
In-Reply-To: <CAEf4BzayWNm=kYqKz-6-P+fuRoy2UfPG8j8FuwXh5P6HDbsW9A@mail.gmail.com>
> On Tue, Mar 30, 2021 at 11:45 PM Ciara Loftus <ciara.loftus@intel.com>
> wrote:
> >
> > If the call to xsk_socket__create fails, the user may want to retry the
> > socket creation using the same umem. Ensure that the umem is in the
> > same state on exit if the call fails by:
> > 1. ensuring the umem _save pointers are unmodified.
> > 2. not unmapping the set of umem rings that were set up with the umem
> > during xsk_umem__create, since those maps existed before the call to
> > xsk_socket__create and should remain intact even in the event of
> > failure.
> >
> > Fixes: 2f6324a3937f ("libbpf: Support shared umems between queues and devices")
> >
> > Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
> > ---
> > tools/lib/bpf/xsk.c | 41 +++++++++++++++++++++++------------------
> > 1 file changed, 23 insertions(+), 18 deletions(-)
> >
> > diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> > index 443b0cfb45e8..5098d9e3b55a 100644
> > --- a/tools/lib/bpf/xsk.c
> > +++ b/tools/lib/bpf/xsk.c
> > @@ -743,26 +743,30 @@ static struct xsk_ctx *xsk_get_ctx(struct xsk_umem *umem, int ifindex,
> > return NULL;
> > }
> >
> > -static void xsk_put_ctx(struct xsk_ctx *ctx)
> > +static void xsk_put_ctx(struct xsk_ctx *ctx, bool unmap)
> > {
> > struct xsk_umem *umem = ctx->umem;
> > struct xdp_mmap_offsets off;
> > int err;
> >
> > - if (--ctx->refcount == 0) {
> > - err = xsk_get_mmap_offsets(umem->fd, &off);
> > - if (!err) {
> > - munmap(ctx->fill->ring - off.fr.desc,
> > - off.fr.desc + umem->config.fill_size *
> > - sizeof(__u64));
> > - munmap(ctx->comp->ring - off.cr.desc,
> > - off.cr.desc + umem->config.comp_size *
> > - sizeof(__u64));
> > - }
> > + if (--ctx->refcount)
> > + return;
> >
> > - list_del(&ctx->list);
> > - free(ctx);
> > - }
> > + if (!unmap)
> > + goto out_free;
> > +
> > + err = xsk_get_mmap_offsets(umem->fd, &off);
> > + if (err)
> > + goto out_free;
> > +
> > + munmap(ctx->fill->ring - off.fr.desc, off.fr.desc + umem->config.fill_size *
> > + sizeof(__u64));
> > + munmap(ctx->comp->ring - off.cr.desc, off.cr.desc + umem->config.comp_size *
> > + sizeof(__u64));
> > +
> > +out_free:
> > + list_del(&ctx->list);
> > + free(ctx);
> > }
> >
> > static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk,
> > @@ -797,8 +801,6 @@ static struct xsk_ctx *xsk_create_ctx(struct xsk_socket *xsk,
> > memcpy(ctx->ifname, ifname, IFNAMSIZ - 1);
> > ctx->ifname[IFNAMSIZ - 1] = '\0';
> >
> > - umem->fill_save = NULL;
> > - umem->comp_save = NULL;
> > ctx->fill = fill;
> > ctx->comp = comp;
> > list_add(&ctx->list, &umem->ctx_list);
> > @@ -854,6 +856,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
> > struct xsk_socket *xsk;
> > struct xsk_ctx *ctx;
> > int err, ifindex;
> > + bool unmap = umem->fill_save != fill;
> >
>
> We are checking !umem only on the next line, so here it can still be
> NULL. Please send a fix, thanks.
Thank you for catching this. I've sent a fix.
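For reference, the idea of the follow-up is simply to delay the dereference of
umem until after the existing NULL check, roughly like this (a minimal sketch;
the actual fix patch may be shaped differently):

	struct xsk_socket *xsk;
	struct xsk_ctx *ctx;
	int err, ifindex;
	bool unmap;	/* no initializer here: umem may still be NULL */

	if (!umem || !xsk_ptr || !(rx || tx))
		return -EFAULT;

	/* umem is known to be non-NULL from this point on */
	unmap = umem->fill_save != fill;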
Ciara
>
> > if (!umem || !xsk_ptr || !(rx || tx))
> > return -EFAULT;
> > @@ -994,6 +997,8 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
> > }
> >
> > *xsk_ptr = xsk;
> > + umem->fill_save = NULL;
> > + umem->comp_save = NULL;
> > return 0;
> >
> > out_mmap_tx:
> > @@ -1005,7 +1010,7 @@ int xsk_socket__create_shared(struct xsk_socket **xsk_ptr,
> > munmap(rx_map, off.rx.desc +
> > xsk->config.rx_size * sizeof(struct xdp_desc));
> > out_put_ctx:
> > - xsk_put_ctx(ctx);
> > + xsk_put_ctx(ctx, unmap);
> > out_socket:
> > if (--umem->refcount)
> > close(xsk->fd);
> > @@ -1071,7 +1076,7 @@ void xsk_socket__delete(struct xsk_socket *xsk)
> > }
> > }
> >
> > - xsk_put_ctx(ctx);
> > + xsk_put_ctx(ctx, true);
> >
> > umem->refcount--;
> > /* Do not close an fd that also has an associated umem connected
> > --
> > 2.17.1
> >
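
For anyone following the thread, the retry flow that this series is meant to
support looks roughly like the sketch below. This is hypothetical application
code, not part of the patch; create_xsk_with_retry() and its arguments are
made up for illustration.

#include <bpf/xsk.h>

/* Hypothetical helper: retry socket creation once, reusing the same umem.
 * Safe with this fix because a failed xsk_socket__create() now leaves the
 * umem and its saved fill/comp ring pointers untouched. */
static int create_xsk_with_retry(struct xsk_socket **xsk, const char *ifname,
				 __u32 queue_id, struct xsk_umem *umem,
				 struct xsk_ring_cons *rx, struct xsk_ring_prod *tx,
				 const struct xsk_socket_config *cfg)
{
	int err;

	err = xsk_socket__create(xsk, ifname, queue_id, umem, rx, tx, cfg);
	if (!err)
		return 0;

	/* e.g. wait for the interface to come up, then try again with the
	 * same, unmodified umem */
	return xsk_socket__create(xsk, ifname, queue_id, umem, rx, tx, cfg);
}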