From: William Tu <u9012063@gmail.com>
To: Magnus Karlsson <magnus.karlsson@gmail.com>
Cc: "Magnus Karlsson" <magnus.karlsson@intel.com>,
	"Björn Töpel" <bjorn.topel@intel.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"Network Development" <netdev@vger.kernel.org>,
	"Jonathan Lemon" <jonathan.lemon@gmail.com>,
	bpf <bpf@vger.kernel.org>
Subject: Re: [PATCH bpf-next 1/5] libbpf: support XDP_SHARED_UMEM with external XDP program
Date: Fri, 8 Nov 2019 10:43:20 -0800
Message-ID: <20191108184320.GC30004@gmail.com>
In-Reply-To: <CAJ8uoz0DJx0sbsAU1GyjZcX3JvcEq7QKFRM5sYrZ_ScAHgEE=A@mail.gmail.com>

On Fri, Nov 08, 2019 at 07:19:18PM +0100, Magnus Karlsson wrote:
> On Fri, Nov 8, 2019 at 7:03 PM William Tu <u9012063@gmail.com> wrote:
> >
> > Hi Magnus,
> >
> > Thanks for the patch.
> >
> > On Thu, Nov 07, 2019 at 06:47:36PM +0100, Magnus Karlsson wrote:
> > > Add support in libbpf to create multiple sockets that share a single
> > > umem. Note that an external XDP program needs to be supplied that
> > > routes the incoming traffic to the desired sockets. So you need to
> > > supply the libbpf_flag XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD and load
> > > your own XDP program.
> > >
> > > Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
> > > ---
> > >  tools/lib/bpf/xsk.c | 27 +++++++++++++++++----------
> > >  1 file changed, 17 insertions(+), 10 deletions(-)
> > >
> > > diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> > > index 86c1b61..8ebd810 100644
> > > --- a/tools/lib/bpf/xsk.c
> > > +++ b/tools/lib/bpf/xsk.c
> > > @@ -586,15 +586,21 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> > >       if (!umem || !xsk_ptr || !rx || !tx)
> > >               return -EFAULT;
> > >
> > > -     if (umem->refcount) {
> > > -             pr_warn("Error: shared umems not supported by libbpf.\n");
> > > -             return -EBUSY;
> > > -     }
> > > -
> > >       xsk = calloc(1, sizeof(*xsk));
> > >       if (!xsk)
> > >               return -ENOMEM;
> > >
> > > +     err = xsk_set_xdp_socket_config(&xsk->config, usr_config);
> > > +     if (err)
> > > +             goto out_xsk_alloc;
> > > +
> > > +     if (umem->refcount &&
> > > +         !(xsk->config.libbpf_flags & XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD)) {
> > > +             pr_warn("Error: shared umems not supported by libbpf supplied XDP program.\n");
> >
> > Why can't we use the existing default one in libbpf?
> > If users don't want to redistribute packets to different queues,
> > then they can still use the libbpf default one.
> 
> Is there any point in creating two or more sockets tied to the same
> umem and directing all traffic to just one socket? IMHO, I believe

When using the built-in XDP program, isn't the traffic directed to the
xsk bound on each queue? (so not just one xsk socket)

So with the built-in XDP program, take for example queue1/xsk1 and
queue2/xsk2 sharing one umem: both xsk1 and xsk2 receive packets from
their own queue.
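
To make sure we are talking about the same scenario, roughly this
(untested sketch; umem creation, fill-ring setup and error handling are
left out, and "eth0" plus the queue ids are just placeholders):

#include "xsk.h"        /* tools/lib/bpf/xsk.h */

/* Two sockets on different queues of the same device, sharing one umem
 * that the caller has already set up. */
static int create_two_sockets(struct xsk_umem *umem,
                              struct xsk_socket **xsk1,
                              struct xsk_socket **xsk2,
                              struct xsk_ring_cons *rx1,
                              struct xsk_ring_prod *tx1,
                              struct xsk_ring_cons *rx2,
                              struct xsk_ring_prod *tx2)
{
        struct xsk_socket_config cfg = {
                .rx_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
                .tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
                /* No XSK_LIBBPF_FLAGS__INHIBIT_PROG_LOAD, so libbpf
                 * loads its built-in per-queue XDP program. With this
                 * patch the second call below then fails with -EBUSY. */
        };
        int err;

        err = xsk_socket__create(xsk1, "eth0", 1, umem, rx1, tx1, &cfg);
        if (err)
                return err;
        return xsk_socket__create(xsk2, "eth0", 2, umem, rx2, tx2, &cfg);
}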

> that most users in this case would want to distribute the packets over
> the sockets in some way. I also think that users might be unpleasantly
> surprised if they create multiple sockets and all packets only get to
> a single socket because libbpf loaded an XDP program that makes little
> sense in the XDP_SHARED_UMEM case. If we force them to supply an XDP

Am I misunderstanding the code?
I looked at xsk_setup_xdp_prog, xsk_load_xdp_prog, and xsk_set_bpf_maps.
The built-in program distributes packets to the different xsk sockets,
not to a single one.
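
As far as I can tell, the built-in program is essentially this per-queue
redirect (paraphrased from the comment above xsk_load_xdp_prog in xsk.c;
the map is called xsks_map there, the includes and map definition are
added from memory):

#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") xsks_map = {
        .type = BPF_MAP_TYPE_XSKMAP,
        .key_size = sizeof(int),
        .value_size = sizeof(int),
        .max_entries = 64,
};

SEC("xdp_sock")
int xdp_sock_prog(struct xdp_md *ctx)
{
        int index = ctx->rx_queue_index;

        /* An entry in xsks_map for this queue_id means an AF_XDP socket
         * is bound to it, so redirect there. */
        if (bpf_map_lookup_elem(&xsks_map, &index))
                return bpf_redirect_map(&xsks_map, index, 0);

        return XDP_PASS;
}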

> program, they need to make this decision. I also wanted to extend the
> sample with an explicit user loaded XDP program as an example of how
> to do this. What do you think?

Yes, I like it. As in the previous version, having xdpsock_kern.c as an
example for people to follow would help.
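
Something along these lines would already make the point, I think (just
a sketch of one possible distribution policy, round-robin over the
sockets sharing the umem on one queue; the names and NUM_SOCKS are made
up, and the XDP_DROP fallback in the flags argument needs a recent
kernel):

#include <linux/bpf.h>
#include "bpf_helpers.h"

#define NUM_SOCKS 4

struct bpf_map_def SEC("maps") xsks_map = {
        .type = BPF_MAP_TYPE_XSKMAP,
        .key_size = sizeof(int),
        .value_size = sizeof(int),
        .max_entries = NUM_SOCKS,
};

static unsigned int rr;

SEC("xdp_sock")
int xdp_sock_prog(struct xdp_md *ctx)
{
        /* Spread packets round-robin over the NUM_SOCKS sockets sharing
         * the umem; drop if the chosen slot has no socket bound. */
        rr = (rr + 1) & (NUM_SOCKS - 1);

        return bpf_redirect_map(&xsks_map, rr, XDP_DROP);
}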

William

> 
> /Magnus
> 
> > William
> > > +             err = -EBUSY;
> > > +             goto out_xsk_alloc;
> > > +     }
> > > +
> > >       if (umem->refcount++ > 0) {
> > >               xsk->fd = socket(AF_XDP, SOCK_RAW, 0);
> > >               if (xsk->fd < 0) {
> > > @@ -616,10 +622,6 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> > >       memcpy(xsk->ifname, ifname, IFNAMSIZ - 1);
> > >       xsk->ifname[IFNAMSIZ - 1] = '\0';
> > >
> > > -     err = xsk_set_xdp_socket_config(&xsk->config, usr_config);
> > > -     if (err)
> > > -             goto out_socket;
> > > -
> > >       if (rx) {
> > >               err = setsockopt(xsk->fd, SOL_XDP, XDP_RX_RING,
> > >                                &xsk->config.rx_size,
> > > @@ -687,7 +689,12 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> > >       sxdp.sxdp_family = PF_XDP;
> > >       sxdp.sxdp_ifindex = xsk->ifindex;
> > >       sxdp.sxdp_queue_id = xsk->queue_id;
> > > -     sxdp.sxdp_flags = xsk->config.bind_flags;
> > > +     if (umem->refcount > 1) {
> > > +             sxdp.sxdp_flags = XDP_SHARED_UMEM;
> > > +             sxdp.sxdp_shared_umem_fd = umem->fd;
> > > +     } else {
> > > +             sxdp.sxdp_flags = xsk->config.bind_flags;
> > > +     }
> > >
> > >       err = bind(xsk->fd, (struct sockaddr *)&sxdp, sizeof(sxdp));
> > >       if (err) {
> > > --
> > > 2.7.4
> > >

