From: David Woodhouse <dwmw2@infradead.org>
To: Jason Wang <jasowang@redhat.com>, netdev@vger.kernel.org
Cc: "Eugenio Pérez" <eperezma@redhat.com>,
	"Willem de Bruijn" <willemb@google.com>,
	"Michael S.Tsirkin" <mst@redhat.com>
Subject: Re: [PATCH v3 3/5] vhost_net: remove virtio_net_hdr validation, let tun/tap do it themselves
Date: Tue, 29 Jun 2021 14:15:45 +0100	[thread overview]
Message-ID: <5db593687d2adbecc2f084d17de6d3d3c7deaef5.camel@infradead.org> (raw)
In-Reply-To: <cdf3fe3ceff17bc2a5aaf006577c1cb0bef40f3a.camel@infradead.org>


On Tue, 2021-06-29 at 11:49 +0100, David Woodhouse wrote:
> On Tue, 2021-06-29 at 11:43 +0800, Jason Wang wrote:
> > > The kernel on a c5.metal can transmit (AES128-SHA1) ESP at about
> > > 1.2Gb/s from iperf, as it seems to be doing it all from the iperf
> > > thread.
> > > 
> > > Before I started messing with OpenConnect, it could transmit 1.6Gb/s.
> > > 
> > > When I pull in the 'stitched' AES+SHA code from OpenSSL instead of
> > > doing the encryption and the HMAC in separate passes, I get to 2.1Gb/s.
> > > 
> > > Adding vhost support on top of that takes me to 2.46Gb/s, which is a
> > > decent enough win.
> > 
> > 
> > Interesting; I think the latency should be improved in this case
> > as well.
> 
> I tried using 'ping -i 0.1' to get an idea of latency for the
> interesting VoIP-like case, where we have to wake up for each
> packet.
> 
> For the *inbound* case, RX on the tun device followed by TX of the
> replies, I see results like this:
> 
>      --- 172.16.0.2 ping statistics ---
>      141 packets transmitted, 141 received, 0% packet loss, time 14557ms
>      rtt min/avg/max/mdev = 0.380/0.419/0.461/0.024 ms
> 
> 
> The opposite direction (tun TX then RX) is similar:
> 
>      --- 172.16.0.1 ping statistics ---
>      295 packets transmitted, 295 received, 0% packet loss, time 30573ms
>      rtt min/avg/max/mdev = 0.454/0.545/0.718/0.024 ms
> 
> 
> Using vhost-net (and TUNSETSNDBUF of INT_MAX-1 just to avoid XDP), it
> looks like this. Inbound:
> 
>      --- 172.16.0.2 ping statistics ---
>      139 packets transmitted, 139 received, 0% packet loss, time 14350ms
>      rtt min/avg/max/mdev = 0.432/0.578/0.658/0.058 ms
> 
> Outbound:
> 
>      --- 172.16.0.1 ping statistics ---
>      149 packets transmitted, 149 received, 0% packet loss, time 15391ms
>      rtt min/avg/max/mdev = 0.496/0.682/0.935/0.036 ms
> 
> 
> So as I expected, the throughput is better with vhost-net once I get to
> the point of 100% CPU usage in my main thread, because it offloads the
> kernel←→user copies. But latency is somewhat worse.
> 
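(As an aside, the TUNSETSNDBUF trick above is a single ioctl on the tun
fd before handing it to vhost-net. A minimal sketch, assuming tun_fd is
already set up with TUNSETIFF:

     #include <limits.h>
     #include <stdio.h>
     #include <sys/ioctl.h>
     #include <linux/if_tun.h>

     int sndbuf = INT_MAX - 1;
     if (ioctl(tun_fd, TUNSETSNDBUF, &sndbuf) < 0)
             perror("TUNSETSNDBUF");

AFAICT vhost-net only takes its XDP batching path when the socket's
sndbuf is unlimited, i.e. exactly INT_MAX, so backing it off by one
byte keeps everything on the plain copy path.)
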
> I'm still using select() instead of epoll() which would give me a
> little back — but only a little, as I only poll on 3-4 fds, and more to
> the point it'll give me just as much win in the non-vhost case too, so
> it won't make much difference to the vhost vs. non-vhost comparison.
> 
> Perhaps I really should look into that trick of "if the vhost TX ring
> is already stopped and would need a kick, and I only have a few packets
> in the batch, just write them directly to /dev/net/tun".
> 
> I'm wondering how that optimisation would translate to actual guests,
> which presumably have the same problem. Perhaps it would be an
> operation on the vhost fd, which ends up processing the ring right
> there in the context of *that* process instead of doing a wakeup?

That turns out to be fairly trivial: 
https://gitlab.com/openconnect/openconnect/-/commit/668ff1399541be927
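
In outline the logic is something like the below. This is a simplified
sketch rather than the literal commit; struct pkt, ring_needs_kick()
and queue_on_ring_and_kick() are hypothetical stand-ins for the real
OpenConnect structures:

     #include <stddef.h>
     #include <unistd.h>

     struct pkt { const void *data; size_t len; };

     extern int ring_needs_kick(void);                     /* hypothetical */
     extern int queue_on_ring_and_kick(struct pkt *, int); /* hypothetical */

     #define SMALL_BATCH 2

     static int send_batch(int tun_fd, struct pkt *pkts, int n)
     {
             /* If the TX ring is idle and we'd have to kick the vhost
              * thread anyway, a couple of write() calls are cheaper
              * than the wakeup; skip the ring entirely. */
             if (n <= SMALL_BATCH && ring_needs_kick()) {
                     for (int i = 0; i < n; i++)
                             if (write(tun_fd, pkts[i].data, pkts[i].len) < 0)
                                     return -1;
                     return 0;
             }
             return queue_on_ring_and_kick(pkts, n);
     }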

It gives me back about half the latency I lost by moving to vhost-net:

     --- 172.16.0.2 ping statistics ---
     133 packets transmitted, 133 received, 0% packet loss, time 13725ms
     rtt min/avg/max/mdev = 0.437/0.510/0.621/0.035 ms

     --- 172.16.0.1 ping statistics ---
     133 packets transmitted, 133 received, 0% packet loss, time 13728ms
     rtt min/avg/max/mdev = 0.541/0.605/0.658/0.022 ms

I think it's definitely worth looking at whether we can/should do
something roughly equivalent for actual guests.
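
P.S. For context on the 'stitched' numbers above: the conventional
two-pass path walks every packet twice, once for the AES and once for
the HMAC, roughly like this (a sketch with placeholder key/iv/buffer
variables, error handling elided):

     #include <openssl/evp.h>
     #include <openssl/hmac.h>

     /* Pass 1: AES-128-CBC encrypt the payload */
     EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
     EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, enc_key, iv);
     EVP_EncryptUpdate(ctx, out, &outlen, in, inlen);

     /* Pass 2: HMAC-SHA1 over the encrypted packet */
     unsigned int maclen;
     HMAC(EVP_sha1(), mac_key, mac_key_len, out, outlen, mac, &maclen);

The stitched code interleaves the AES and SHA1 instruction streams so
the data is only walked once; OpenSSL also exposes that as the
combined EVP cipher EVP_aes_128_cbc_hmac_sha1() (TLS-oriented, and
NULL on CPUs without AES-NI). That single pass is where the 1.6Gb/s
to 2.1Gb/s gain comes from.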



Thread overview: 73+ messages
2021-06-19 13:33 [PATCH] net: tun: fix tun_xdp_one() for IFF_TUN mode David Woodhouse
2021-06-21  7:00 ` Jason Wang
2021-06-21 10:52   ` David Woodhouse
2021-06-21 14:50     ` David Woodhouse
2021-06-21 20:43       ` David Woodhouse
2021-06-22  4:52         ` Jason Wang
2021-06-22  7:24           ` David Woodhouse
2021-06-22  7:51             ` Jason Wang
2021-06-22  8:10               ` David Woodhouse
2021-06-22 11:36               ` David Woodhouse
2021-06-22  4:34       ` Jason Wang
2021-06-22  4:34     ` Jason Wang
2021-06-22  7:28       ` David Woodhouse
2021-06-22  8:00         ` Jason Wang
2021-06-22  8:29           ` David Woodhouse
2021-06-23  3:39             ` Jason Wang
2021-06-24 12:39               ` David Woodhouse
2021-06-22 16:15 ` [PATCH v2 1/4] " David Woodhouse
2021-06-22 16:15   ` [PATCH v2 2/4] net: tun: don't assume IFF_VNET_HDR in tun_xdp_one() tx path David Woodhouse
2021-06-23  3:46     ` Jason Wang
2021-06-22 16:15   ` [PATCH v2 3/4] vhost_net: validate virtio_net_hdr only if it exists David Woodhouse
2021-06-23  3:48     ` Jason Wang
2021-06-22 16:15   ` [PATCH v2 4/4] vhost_net: Add self test with tun device David Woodhouse
2021-06-23  4:02     ` Jason Wang
2021-06-23 16:12       ` David Woodhouse
2021-06-24  6:12         ` Jason Wang
2021-06-24 10:42           ` David Woodhouse
2021-06-25  2:55             ` Jason Wang
2021-06-25  7:54               ` David Woodhouse
2021-06-23  3:45   ` [PATCH v2 1/4] net: tun: fix tun_xdp_one() for IFF_TUN mode Jason Wang
2021-06-23  8:30     ` David Woodhouse
2021-06-23 13:52     ` David Woodhouse
2021-06-23 17:31       ` David Woodhouse
2021-06-23 22:52         ` David Woodhouse
2021-06-24  6:37           ` Jason Wang
2021-06-24  7:23             ` David Woodhouse
2021-06-24  6:18       ` Jason Wang
2021-06-24  7:05         ` David Woodhouse
2021-06-24 12:30 ` [PATCH v3 1/5] net: add header len parameter to tun_get_socket(), tap_get_socket() David Woodhouse
2021-06-24 12:30   ` [PATCH v3 2/5] net: tun: don't assume IFF_VNET_HDR in tun_xdp_one() tx path David Woodhouse
2021-06-25  6:58     ` Jason Wang
2021-06-24 12:30   ` [PATCH v3 3/5] vhost_net: remove virtio_net_hdr validation, let tun/tap do it themselves David Woodhouse
2021-06-25  7:33     ` Jason Wang
2021-06-25  8:37       ` David Woodhouse
2021-06-28  4:23         ` Jason Wang
2021-06-28 11:23           ` David Woodhouse
2021-06-28 23:29             ` David Woodhouse
2021-06-29  3:43               ` Jason Wang
2021-06-29  6:59                 ` David Woodhouse
2021-06-29 10:49                 ` David Woodhouse
2021-06-29 13:15                   ` David Woodhouse [this message]
2021-06-30  4:39                   ` Jason Wang
2021-06-30 10:02                     ` David Woodhouse
2021-07-01  4:13                       ` Jason Wang
2021-07-01 17:39                         ` David Woodhouse
2021-07-02  3:13                           ` Jason Wang
2021-07-02  8:08                             ` David Woodhouse
2021-07-02  8:50                               ` Jason Wang
2021-07-09 15:04                               ` Eugenio Perez Martin
2021-06-29  3:21             ` Jason Wang
2021-06-24 12:30   ` [PATCH v3 4/5] net: tun: fix tun_xdp_one() for IFF_TUN mode David Woodhouse
2021-06-25  7:41     ` Jason Wang
2021-06-25  8:51       ` David Woodhouse
2021-06-28  4:27         ` Jason Wang
2021-06-28 10:43           ` David Woodhouse
2021-06-25 18:43     ` Willem de Bruijn
2021-06-25 19:00       ` David Woodhouse
2021-06-24 12:30   ` [PATCH v3 5/5] vhost_net: Add self test with tun device David Woodhouse
2021-06-25  5:00   ` [PATCH v3 1/5] net: add header len parameter to tun_get_socket(), tap_get_socket() Jason Wang
2021-06-25  8:23     ` David Woodhouse
2021-06-28  4:22       ` Jason Wang
2021-06-25 18:13   ` Willem de Bruijn
2021-06-25 18:55     ` David Woodhouse
