* XDP_REDIRECT forwarding speed
From: Denis Salopek @ 2020-05-26 7:00 UTC (permalink / raw)
To: xdp-newbies
Hi!
I want to make sure I did everything right to make my XDP program
(simple forwarding with bpf_redirect_map) as fast as possible. Is following
the advice and gotchas from this:
https://www.mail-archive.com/netdev@vger.kernel.org/msg184139.html enough,
or are there additional/newer recommendations? I managed to get near
line rate on my Intel X520s (on a Ryzen 3700X with one queue/CPU), but not
quite 14.88 Mpps, so I was wondering whether there is something else to
speed things up even more.
Also, are there any recommended settings/tweaks for bidirectional
forwarding? I suppose there would be a drop in performance compared to
single direction, but has anyone done any benchmarks?
Regards,
Denis
* Re: XDP_REDIRECT forwarding speed
From: Jesper Dangaard Brouer @ 2020-05-26 8:04 UTC (permalink / raw)
To: Denis Salopek; +Cc: xdp-newbies, Alexander Duyck
On Tue, 26 May 2020 07:00:30 +0000
Denis Salopek <Denis.Salopek@fer.hr> wrote:
> I want to make sure I did everything right to make my XDP program
> (simple forwarding with bpf_redirect_map) as fast as possible. Is following
> advices and gotchas from this:
> https://www.mail-archive.com/netdev@vger.kernel.org/msg184139.html
I prefer links to lore.kernel.org:
[1] https://lore.kernel.org/netdev/20170821212506.1cb0d5d6@redhat.com/
Do notice that my results in [1] are for a single queue and a single CPU.
In production I assume that you can likely scale this across more CPUs ;-)
> enough or are there some additional/newer recommendations? I managed
> to get near line-rate on my Intel X520s (on Ryzen 3700X and one
> queue/CPU), but not quite 14.88 Mpps so I was wondering is there
> something else to speed things up even more.
In [1] I mention the need to tune the TX-queue to keep up, either by
adjusting the TX-DMA completion interrupt interval:
Tuned with rx-usecs 25:
ethtool -C ixgbe1 rx-usecs 25 ;\
ethtool -C ixgbe2 rx-usecs 25
Or increasing the size of the TX-queue, so it doesn't overrun:
Tuned with adjusting ring-queue sizes:
ethtool -G ixgbe1 rx 1024 tx 1024 ;\
ethtool -G ixgbe2 rx 1024 tx 1024
This might not be needed any longer, as I think it was Alexander who
implemented an improved interrupt adjustment scheme for ixgbe.
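To confirm what the driver actually applied after the above commands, you
can read the settings back; a sketch, using the same example device names:

```shell
# Read back interrupt coalescing and ring sizes after tuning;
# 'ixgbe1' and 'ixgbe2' are the example device names used above.
for dev in ixgbe1 ixgbe2; do
    echo "== $dev =="
    ethtool -c "$dev" | grep '^rx-usecs:'    # coalescing interval
    ethtool -g "$dev" | grep -A4 '^Current'  # current RX/TX ring sizes
done
```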
> Also, are there any recommended settings/tweaks for bidirectional
> forwarding? I suppose there would be a drop in performance compared to
> single direction, but has anyone done any benchmarks?
As this was 1-CPU, you can just run the other direction on another CPU.
That said, it can still be an advantage to run the bidirectional
traffic on the same CPU and RX/TX-queue pair, as the above issue with
TX-queue DMA cleanups/completions goes away, because the ixgbe driver
does TX-cleanups as part of (i.e. before) the RX-processing.
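A sketch of how to set up that pairing (the device names, IRQ naming
pattern, and CPU mask are examples; your IRQ names in /proc/interrupts
may differ):

```shell
# Force a single combined RX/TX queue pair per NIC, then pin both
# NICs' queue-0 IRQs to the same CPU so each direction's TX cleanup
# runs just before that CPU's RX processing.
ethtool -L ixgbe1 combined 1
ethtool -L ixgbe2 combined 1

for irq in $(awk '/ixgbe[12]-TxRx-0/ {sub(":","",$1); print $1}' \
             /proc/interrupts); do
    echo 2 > /proc/irq/$irq/smp_affinity   # mask 0x2 = CPU 1
done
```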
What is your use-case?
e.g. building an IPv4 router?
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: XDP_REDIRECT forwarding speed
From: Denis Salopek @ 2020-05-26 11:09 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: xdp-newbies, Alexander Duyck
> From: Jesper Dangaard Brouer <jbrouer@redhat.com>
> Sent: Tuesday, May 26, 2020 10:04 AM
> To: Denis Salopek
> Cc: xdp-newbies@vger.kernel.org; Alexander Duyck
> Subject: Re: XDP_REDIRECT forwarding speed
>
> On Tue, 26 May 2020 07:00:30 +0000
> Denis Salopek <Denis.Salopek@fer.hr> wrote:
>
> > I want to make sure I did everything right to make my XDP program
> > (simple forwarding with bpf_redirect_map) as fast as possible. Is following
> > the advice and gotchas from this:
> > https://www.mail-archive.com/netdev@vger.kernel.org/msg184139.html
>
> I prefer links to lore.kernel.org:
> [1] https://lore.kernel.org/netdev/20170821212506.1cb0d5d6@redhat.com/
>
> Do notice that my results in [1] are for a single queue and a single CPU.
> In production I assume that you can likely scale this across more CPUs ;-)
>
> > enough, or are there additional/newer recommendations? I managed
> > to get near line rate on my Intel X520s (on a Ryzen 3700X with one
> > queue/CPU), but not quite 14.88 Mpps, so I was wondering whether
> > there is something else to speed things up even more.
>
> In [1] I mention the need to tune the TX-queue to keep up, either by
> adjusting the TX-DMA completion interrupt interval:
>
> Tuned with rx-usecs 25:
> ethtool -C ixgbe1 rx-usecs 25 ;\
> ethtool -C ixgbe2 rx-usecs 25
>
> Or increasing the size of the TX-queue, so it doesn't overrun:
>
> Tuned with adjusting ring-queue sizes:
> ethtool -G ixgbe1 rx 1024 tx 1024 ;\
> ethtool -G ixgbe2 rx 1024 tx 1024
>
> This might not be needed any longer, as I think it was Alexander who
> implemented an improved interrupt adjustment scheme for ixgbe.
Thank you for the info.
> > Also, are there any recommended settings/tweaks for bidirectional
> > forwarding? I suppose there would be a drop in performance compared to
> > single direction, but has anyone done any benchmarks?
>
> As this was 1-CPU, you can just run the other direction on another CPU.
> That said, it can still be an advantage to run the bidirectional
> traffic on the same CPU and RX/TX-queue pair, as the above issue with
> TX-queue DMA cleanups/completions goes away, because the ixgbe driver
> does TX-cleanups as part of (i.e. before) the RX-processing.
Yeah, you are right: bidirectional yields 10-15% more pps.
> What is your use-case?
> e.g. building an IPv4 router?
Just exploring the performance potential of userspace vs XDP for
DDoS-scrubbing type middlebox (i.e. forwarding) tasks.
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>
>
Denis