From: Paolo Abeni <pabeni@redhat.com>
To: "Paweł Staszewski" <pstaszewski@itcare.pl>,
"David Ahern" <dsahern@gmail.com>,
netdev@vger.kernel.org,
"Jesper Dangaard Brouer" <brouer@redhat.com>
Subject: Re: Linux kernel - 5.4.0+ (net-next from 27.11.2019) routing/network performance
Date: Mon, 02 Dec 2019 11:53:22 +0100 [thread overview]
Message-ID: <9c5c6dc9b7eb78c257d67c85ed2a6e0998ec8907.camel@redhat.com> (raw)
In-Reply-To: <8e17a844-e98b-59b1-5a0e-669562b3178c@itcare.pl>
On Mon, 2019-12-02 at 11:09 +0100, Paweł Staszewski wrote:
> W dniu 01.12.2019 o 17:05, David Ahern pisze:
> > On 11/29/19 4:00 PM, Paweł Staszewski wrote:
> > > As always - each year i need to summarize network performance for
> > > routing applications like linux router on native Linux kernel (without
> > > xdp/dpdk/vpp etc) :)
> > >
> > Do you keep past profiles? How does this profile (and traffic rates)
> > compare to older kernels - e.g., 5.0 or 4.19?
> >
> >
> Yes - so for 4.19:
>
> Max bandwidth was about 40-42Gbit/s RX / 40-42Gbit/s TX of
> forwarded(routed) traffic
>
> And after "order-0 pages" patches - max was 50Gbit/s RX + 50Gbit/s TX
> (forwarding - bandwidth max)
>
> (current kernel almost doubled this)
Looks like we are on the right track ;)
[...]
> After "order-0 pages" patch
>
> PerfTop: 104692 irqs/sec kernel:99.5% exact: 0.0% [4000Hz
> cycles], (all, 56 CPUs)
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>
> 9.06% [kernel] [k] mlx5e_skb_from_cqe_mpwrq_linear
> 6.43% [kernel] [k] tasklet_action_common.isra.21
> 5.68% [kernel] [k] fib_table_lookup
> 4.89% [kernel] [k] irq_entries_start
> 4.53% [kernel] [k] mlx5_eq_int
> 4.10% [kernel] [k] build_skb
> 3.39% [kernel] [k] mlx5e_poll_tx_cq
> 3.38% [kernel] [k] mlx5e_sq_xmit
> 2.73% [kernel] [k] mlx5e_poll_rx_cq
Compared to the current kernel perf figures, it looks like most of the
gains come from driver changes.
[... current perf figures follow ...]
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>
> 7.56% [kernel] [k] __dev_queue_xmit
This is a bit surprising to me. I guess this is due to
'__dev_queue_xmit()' being called twice per packet (once for the team
device, once for the underlying NIC) and to the retpoline overhead.
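If retpoline overhead is a suspect, one quick sanity check (not part of the original thread; the sysfs path below is the standard kernel mitigation-reporting interface and may be absent on very old kernels) is to confirm the Spectre v2 mitigation mode the running kernel selected:

```shell
# Report the active Spectre v2 mitigation; on affected CPUs the
# output typically mentions "Retpolines" when that mitigation is
# in use, which is the case where the indirect-call overhead in
# __dev_queue_xmit() would show up in the profile.
cat /sys/devices/system/cpu/vulnerabilities/spectre_v2
```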
> 1.74% [kernel] [k] tcp_gro_receive
If the reference use-case involves a fairly large number of concurrent
flows, I guess you can try disabling GRO.
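Disabling GRO for a test run is a standard ethtool operation; a minimal sketch follows (the interface name eth0 is a placeholder, not the interface from the thread):

```shell
# Show the current GRO state for the interface (placeholder name):
ethtool -k eth0 | grep generic-receive-offload

# Turn GRO off to see whether the tcp_gro_receive hot spot
# disappears from the perf profile under many-flow forwarding:
ethtool -K eth0 gro off

# Re-enable it afterwards, since GRO usually helps with fewer,
# larger flows:
ethtool -K eth0 gro on
```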
Cheers,
Paolo
Thread overview: 6+ messages
2019-11-29 22:00 Linux kernel - 5.4.0+ (net-next from 27.11.2019) routing/network performance Paweł Staszewski
2019-11-29 22:13 ` Paweł Staszewski
2019-12-01 16:05 ` David Ahern
2019-12-02 10:09 ` Paweł Staszewski
2019-12-02 10:53 ` Paolo Abeni [this message]
2019-12-02 16:23 ` Paweł Staszewski