From: John Fastabend <john.fastabend@gmail.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>,
	John Fastabend <john.fastabend@gmail.com>
Cc: "Alexei Starovoitov" <alexei.starovoitov@gmail.com>,
	bpf@vger.kernel.org, gamemann@gflclan.com, lrizzo@google.com,
	netdev@vger.kernel.org,
	"Daniel Borkmann" <borkmann@iogearbox.net>,
	"Alexander Duyck" <alexander.duyck@gmail.com>,
	"Ilias Apalodimas" <ilias.apalodimas@linaro.org>,
	"Toke Høiland-Jørgensen" <toke@toke.dk>,
	brouer@redhat.com
Subject: Re: [bpf-next PATCH] xdp: accept that XDP headroom isn't always equal XDP_PACKET_HEADROOM
Date: Mon, 09 Mar 2020 22:49:55 -0700	[thread overview]
Message-ID: <5e672a83a5c07_6d9d2ad5365425b4a3@john-XPS-13-9370.notmuch> (raw)
In-Reply-To: <20200309093932.2a738ab1@carbon>

Jesper Dangaard Brouer wrote:
> On Fri, 06 Mar 2020 08:06:35 -0800
> John Fastabend <john.fastabend@gmail.com> wrote:
> 
> > Alexei Starovoitov wrote:
> > > On Tue, Mar 03, 2020 at 12:46:58PM +0100, Jesper Dangaard Brouer wrote:  
> [...]
> > > > 
> > > > Still, for generic-XDP, if the headroom is less, expand it to
> > > > XDP_PACKET_HEADROOM, as this is the default in most XDP drivers.
> > > > 
> > > > Tested on ixgbe with xdp_rxq_info --skb-mode and --action XDP_DROP:
> > > > - Before: 4,816,430 pps
> > > > - After : 7,749,678 pps
> > > > (Note that ixgbe in native mode XDP_DROP 14,704,539 pps)
> > > >   
> > 
> > But why do we care about generic-XDP performance? Seems users should
> > just use XDP proper on ixgbe and i40e, where it's supported.
> >
> [...]
> > 
> > Or just let ixgbe/i40e be slow? I guess I'm missing some context?
> 
> The context originates from an email thread[1] on the XDP-newbies list,
> which described a production setup (anycast routing of gaming
> traffic[3]) that used XDP, and they were using XDP-generic (actually
> without realizing it).  They were using the Intel igb driver (which
> doesn't have native XDP), and changing to e.g. ixgbe (or i40e) is
> challenging, given it requires physical access to the PoP (Point of
> Presence), and upgrading to a 10G port at the PoP also has associated
> costs.

OK, maybe igb should get XDP...

I get wanting to run the same XDP program across multiple cards even
when they don't have native XDP support.
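
For reference, forcing the generic (skb-mode) path is just an
attach-time flag. A minimal sketch with libbpf (helper name and error
handling are mine, prog_fd is whatever program you loaded):

#include <net/if.h>
#include <linux/if_link.h>
#include <bpf/libbpf.h>

/* Attach an already-loaded XDP program in generic (skb) mode,
 * so it also runs on drivers without native XDP, e.g. igb. */
static int attach_generic_xdp(const char *ifname, int prog_fd)
{
	int ifindex = if_nametoindex(ifname);

	if (!ifindex)
		return -1;

	/* XDP_FLAGS_SKB_MODE forces generic XDP even when the
	 * driver has a native implementation. */
	return bpf_set_link_xdp_fd(ifindex, prog_fd, XDP_FLAGS_SKB_MODE);
}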

> 
> Why not simply use TC-BPF (cls_bpf) instead of XDP?  I've actually been
> promoting that more people should use TC-BPF, also in combination with
> XDP[2].  The reason it makes sense to stick with XDP here is to allow
> them to deploy the same software on their PoP servers, regardless of
> which NIC driver is available.

Sounds reasonable to me.
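
Right, and in that case the attach point is about the only thing that
changes. Rough sketch (made-up program, just showing the shape) of how
the same verdict logic would need two entry points, one per hook, which
is exactly the duplication they avoid by sticking with XDP:

#include <linux/bpf.h>
#include <linux/pkt_cls.h>
#include <bpf/bpf_helpers.h>

/* Shared verdict logic would live here (parse headers, decide). */
static __always_inline int want_drop(void *data, void *data_end)
{
	/* Real parse-and-decide logic goes here. */
	return 0;
}

SEC("xdp")
int xdp_filter(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;

	return want_drop(data, data_end) ? XDP_DROP : XDP_PASS;
}

SEC("classifier")
int tc_filter(struct __sk_buff *skb)
{
	void *data = (void *)(long)skb->data;
	void *data_end = (void *)(long)skb->data_end;

	return want_drop(data, data_end) ? TC_ACT_SHOT : TC_ACT_OK;
}

char _license[] SEC("license") = "GPL";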

> 
> Performance-wise, I will admit that I've explicitly chosen not to
> optimize XDP-generic, and I've even seen it as a good thing that we
> have this reallocation penalty.  Given the uniform software deployment
> argument and my measurements in [1], I've changed my mind.  For the igb
> driver I'm not motivated to implement XDP-native, because a newer Intel
> CPU can handle wire speed even with the reallocations; the reallocations
> are just wasteful.  "Allowing" these 1Gbit/s NICs to work more optimally
> with XDP-generic lets us skip converting these drivers to XDP-native,
> and as the HW gets upgraded they will transition seamlessly to
> XDP-native.

OK, might be nice to put those details in the commit message? The
original patch seemed to be mostly about i40e and ixgbe, but in that
case the user (from the XDP example above) would just use native XDP.
So this is more about 1Gbps NICs and missing native XDP support?
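
If it helps the commit message, spelling out where the cycles go would
be nice too. My rough recollection of the shape of the check on the
generic-XDP path (paraphrased from memory, helper name is mine, not the
exact kernel code):

#include <linux/bpf.h>
#include <linux/skbuff.h>

/* Paraphrased sketch of the generic-XDP entry check: if the skb the
 * driver handed us has less headroom than XDP_PACKET_HEADROOM (or is
 * cloned/non-linear), the head gets reallocated, which is the
 * per-packet reallocation penalty discussed above. */
static int generic_xdp_headroom_fixup(struct sk_buff *skb)
{
	int hroom = XDP_PACKET_HEADROOM - skb_headroom(skb);

	if (hroom > 0 || skb_cloned(skb) || skb_is_nonlinear(skb)) {
		/* Expensive copy on every packet when the driver's skbs
		 * arrive with less than 256 bytes of headroom. */
		if (pskb_expand_head(skb, hroom > 0 ? hroom : 0, 0,
				     GFP_ATOMIC))
			return -ENOMEM;
	}
	return 0;
}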

> 
> 
> [1] https://www.spinics.net/lists/xdp-newbies/msg01548.html
> [2] https://github.com/xdp-project/xdp-cpumap-tc
> [3] https://gitlab.com/Dreae/compressor/
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
> 


