From: "Björn Töpel" <bjorn.topel@gmail.com>
To: Maciej Fijalkowski <maciejromanfijalkowski@gmail.com>
Cc: "Björn Töpel" <bjorn.topel@intel.com>,
	"Karlsson, Magnus" <magnus.karlsson@intel.com>,
	Netdev <netdev@vger.kernel.org>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Daniel Borkmann" <daniel@iogearbox.net>,
	"Jakub Kicinski" <jakub.kicinski@netronome.com>,
	"Jonathan Lemon" <jonathan.lemon@gmail.com>,
	"Song Liu" <songliubraving@fb.com>, bpf <bpf@vger.kernel.org>
Subject: Re: [RFC PATCH bpf-next 1/4] libbpf: fill the AF_XDP fill queue before bind() call
Date: Wed, 5 Jun 2019 11:00:07 +0200
Message-ID: <CAJ+HfNj6NvRQcT5iS_nQEYfpWoav7LxEqLFShjP8BHjqAaopqA@mail.gmail.com>
In-Reply-To: <20190604170452.00001b29@gmail.com>

On Tue, 4 Jun 2019 at 17:06, Maciej Fijalkowski
<maciejromanfijalkowski@gmail.com> wrote:
>
> On Tue, 4 Jun 2019 10:06:36 +0200
> Björn Töpel <bjorn.topel@intel.com> wrote:
>
> > On 2019-06-03 15:19, Maciej Fijalkowski wrote:
> > > Enter the driver via ndo_bpf with the command set to XDP_SETUP_UMEM and
> > > a fill queue that already contains available entries for the Rx driver
> > > rings to use. The old version of xdpsock (which lacked libbpf support)
> > > worked this way, and there is no particular reason to postpone this
> > > preparation until after bind().
> > >
> > > Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> > > Signed-off-by: Krzysztof Kazimierczak <krzysztof.kazimierczak@intel.com>
> > > ---
> > >   samples/bpf/xdpsock_user.c | 15 ---------------
> > >   tools/lib/bpf/xsk.c        | 19 ++++++++++++++++++-
> > >   2 files changed, 18 insertions(+), 16 deletions(-)
> > >
> > > diff --git a/samples/bpf/xdpsock_user.c b/samples/bpf/xdpsock_user.c
> > > index d08ee1ab7bb4..e9dceb09b6d1 100644
> > > --- a/samples/bpf/xdpsock_user.c
> > > +++ b/samples/bpf/xdpsock_user.c
> > > @@ -296,8 +296,6 @@ static struct xsk_socket_info *xsk_configure_socket(struct xsk_umem_info *umem)
> > >     struct xsk_socket_config cfg;
> > >     struct xsk_socket_info *xsk;
> > >     int ret;
> > > -   u32 idx;
> > > -   int i;
> > >
> > >     xsk = calloc(1, sizeof(*xsk));
> > >     if (!xsk)
> > > @@ -318,19 +316,6 @@ static struct xsk_socket_info *xsk_configure_socket(struct xsk_umem_info *umem)
> > >     if (ret)
> > >             exit_with_error(-ret);
> > >
> > > -   ret = xsk_ring_prod__reserve(&xsk->umem->fq,
> > > -                                XSK_RING_PROD__DEFAULT_NUM_DESCS,
> > > -                                &idx);
> > > -   if (ret != XSK_RING_PROD__DEFAULT_NUM_DESCS)
> > > -           exit_with_error(-ret);
> > > -   for (i = 0;
> > > -        i < XSK_RING_PROD__DEFAULT_NUM_DESCS *
> > > -                XSK_UMEM__DEFAULT_FRAME_SIZE;
> > > -        i += XSK_UMEM__DEFAULT_FRAME_SIZE)
> > > -           *xsk_ring_prod__fill_addr(&xsk->umem->fq, idx++) = i;
> > > -   xsk_ring_prod__submit(&xsk->umem->fq,
> > > -                         XSK_RING_PROD__DEFAULT_NUM_DESCS);
> > > -
> > >     return xsk;
> > >   }
> > >
> > > diff --git a/tools/lib/bpf/xsk.c b/tools/lib/bpf/xsk.c
> > > index 38667b62f1fe..57dda1389870 100644
> > > --- a/tools/lib/bpf/xsk.c
> > > +++ b/tools/lib/bpf/xsk.c
> > > @@ -529,7 +529,8 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> > >     struct xdp_mmap_offsets off;
> > >     struct xsk_socket *xsk;
> > >     socklen_t optlen;
> > > -   int err;
> > > +   int err, i;
> > > +   u32 idx;
> > >
> > >     if (!umem || !xsk_ptr || !rx || !tx)
> > >             return -EFAULT;
> > > @@ -632,6 +633,22 @@ int xsk_socket__create(struct xsk_socket **xsk_ptr, const char *ifname,
> > >     }
> > >     xsk->tx = tx;
> > >
> > > +   err = xsk_ring_prod__reserve(umem->fill,
> > > +                                XSK_RING_PROD__DEFAULT_NUM_DESCS,
> > > +                                &idx);
> > > +   if (err != XSK_RING_PROD__DEFAULT_NUM_DESCS) {
> > > +           err = -errno;
> > > +           goto out_mmap_tx;
> > > +   }
> > > +
> > > +   for (i = 0;
> > > +        i < XSK_RING_PROD__DEFAULT_NUM_DESCS *
> > > +                XSK_UMEM__DEFAULT_FRAME_SIZE;
> > > +        i += XSK_UMEM__DEFAULT_FRAME_SIZE)
> > > +           *xsk_ring_prod__fill_addr(umem->fill, idx++) = i;
> > > +   xsk_ring_prod__submit(umem->fill,
> > > +                         XSK_RING_PROD__DEFAULT_NUM_DESCS);
> > > +
> >
> > Here, entries are added to the umem fill ring regardless of whether Rx is
> > being used. For a Tx-only setup, this is not what we want, right?
>
> Right, but we have this behavior even without the patch. So I see two options
> here:
> - if you agree with this patch, then I guess we would need to tell libbpf
>   what exactly we are setting up (txonly, rxdrop, l2fwd)?
> - otherwise, we should pass opt_bench onto xsk_configure_socket and decide
>   based on that whether we fill the fq or not?
>
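
As a rough sketch of the first option (hypothetical only, since no such knob
exists in libbpf's struct xsk_socket_config today), the caller's intent could
be carried in the config and acted on inside xsk_socket__create():

	struct xsk_socket_config {
		__u32 rx_size;
		__u32 tx_size;
		__u32 libbpf_flags;
		__u32 xdp_flags;
		__u16 bind_flags;
		bool populate_fq;	/* hypothetical: fill the fq in create */
	};

	/* In xsk_socket__create(), guard the new hunk from this patch: */
	if (usr_config->populate_fq) {
		/* reserve/fill/submit exactly as in the hunk quoted above */
	}
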
> >
> > Thinking out loud here: now libbpf is making the decision about which umem
> > entries are added to the fill ring. The sample application uses this
> > (naive) scheme, but I'm not sure that all applications would like that
> > policy. What do you think?
> >
>
> I find it convenient to have the fill queue in an "initialized" state when I
> am making use of it, especially when doing zero-copy (ZC), since I must give
> the buffers to the driver via the fill queue. So why should we burden other
> applications with providing it? I must admit that I haven't used AF_XDP with
> apps other than the example one, so I might not be able to elaborate further.
> Maybe other people have different feelings about it.
>

Personally, I don't think this scheme is worth pursuing. I'd just leave the
fill ring work to the application; DPDK, for example, would definitely not
use a scheme like this.
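
To make that concrete, here is a minimal sketch of what an application that
does want the old behavior could run right after xsk_socket__create(), using
the existing fill-ring API (NUM_FRAMES and FRAME_SIZE are placeholders for
the application's own buffer layout):

	#include <stdlib.h>
	#include <linux/types.h>
	#include "xsk.h"		/* libbpf's AF_XDP helpers */

	#define NUM_FRAMES	2048	/* placeholder: app's frame count */
	#define FRAME_SIZE	4096	/* placeholder: app's frame size  */

	static void app_populate_fill_ring(struct xsk_ring_prod *fq)
	{
		__u32 idx;
		int i;

		/* Grab NUM_FRAMES slots in the fill ring... */
		if (xsk_ring_prod__reserve(fq, NUM_FRAMES, &idx) != NUM_FRAMES)
			exit(EXIT_FAILURE);

		/* ...point each slot at a distinct umem frame... */
		for (i = 0; i < NUM_FRAMES; i++)
			*xsk_ring_prod__fill_addr(fq, idx++) = i * FRAME_SIZE;

		/* ...and hand the frames over to the kernel. */
		xsk_ring_prod__submit(fq, NUM_FRAMES);
	}

That keeps the posting policy in the application, where it can match the
application's actual buffer management.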

Björn

> > >     sxdp.sxdp_family = PF_XDP;
> > >     sxdp.sxdp_ifindex = xsk->ifindex;
> > >     sxdp.sxdp_queue_id = xsk->queue_id;
> > >
>

Thread overview: 18+ messages
2019-06-03 13:19 [RFC PATCH bpf-next 0/4] libbpf: xsk improvements Maciej Fijalkowski
2019-06-03 13:19 ` [RFC PATCH bpf-next 1/4] libbpf: fill the AF_XDP fill queue before bind() call Maciej Fijalkowski
2019-06-04  8:06   ` Björn Töpel
2019-06-04 15:04     ` Maciej Fijalkowski
2019-06-04 15:54       ` Jonathan Lemon
2019-06-05  9:00       ` Björn Töpel [this message]
2019-06-03 13:19 ` [RFC PATCH bpf-next 2/4] libbpf: check for channels.max_{t,r}x in xsk_get_max_queues Maciej Fijalkowski
2019-06-04  8:06   ` Björn Töpel
2019-06-04 15:05     ` Maciej Fijalkowski
2019-06-03 13:19 ` [RFC PATCH bpf-next 3/4] libbpf: move xdp program removal to libbpf Maciej Fijalkowski
2019-06-04  8:07   ` Björn Töpel
2019-06-04 15:06     ` Maciej Fijalkowski
2019-06-05  9:03       ` Björn Töpel
2019-06-03 13:19 ` [RFC PATCH bpf-next 4/4] libbpf: don't remove eBPF resources when other xsks are present Maciej Fijalkowski
2019-06-03 18:26   ` Jonathan Lemon
2019-06-04  8:08   ` Björn Töpel
2019-06-04 15:07     ` Maciej Fijalkowski
2019-06-05  9:26       ` Björn Töpel
