From: David Woodhouse <dwmw2@infradead.org>
To: Jason Wang <jasowang@redhat.com>, netdev@vger.kernel.org
Cc: "Eugenio Pérez" <eperezma@redhat.com>,
	"Willem de Bruijn" <willemb@google.com>,
	"Michael S.Tsirkin" <mst@redhat.com>
Subject: Re: [PATCH v3 3/5] vhost_net: remove virtio_net_hdr validation, let tun/tap do it themselves
Date: Fri, 02 Jul 2021 09:08:35 +0100	[thread overview]
Message-ID: <511df01a3641c2010d875a61161d6da7093abd4a.camel@infradead.org> (raw)
In-Reply-To: <ccf524ce-17f8-3763-0f86-2afbcf6aa6fc@redhat.com>

On Fri, 2021-07-02 at 11:13 +0800, Jason Wang wrote:
> On 2021/7/2 at 1:39 AM, David Woodhouse wrote:
> > 
> > Right, but the VMM (or the guest, if we're letting the guest choose)
> > wouldn't have to use it for those cases.
> 
> 
> I'm not sure I follow here. If so, simply writing to TUN directly would work.

As noted, that works nicely for me in OpenConnect; I just write it to
the tun device *instead* of putting it in the vring. My TX latency is
now fine; it's just RX which still takes *two* scheduler wakeups (tun
wakes the vhost thread, which then wakes the guest).
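
For concreteness, the "write it to tun directly" path boils down to
something like this sketch (it assumes the fd was opened with
IFF_TUN|IFF_NO_PI and no IFF_VNET_HDR; error handling elided):

#include <unistd.h>

/* Sketch only: bypass the vring for a latency-sensitive packet by
 * writing it straight to the tun fd as a raw IP packet. */
static ssize_t send_pkt_direct(int tun_fd, const void *pkt, size_t len)
{
	/* One syscall, no vring kick, no vhost thread wakeup. */
	return write(tun_fd, pkt, len);
}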

But it's not clear to me that a VMM could use it. Because the guest has
already put that packet *into* the vring. Now if the VMM is in the path
of all wakeups for that vring, I suppose we *might* be able to contrive
some hackish way to be 'sure' that the kernel isn't servicing it, so we
could try to 'steal' that packet from the ring in order to send it
directly... but no. That's awful :)

I do think it'd be interesting to look at a way to reduce the latency
of the vring wakeup especially for that case of a virtio-net guest with
a single small packet to send. But realistically speaking, I'm unlikely
to get to it any time soon except for showing the numbers with the
userspace equivalent and observing that there's probably a similar win
to be had for guests too.

In the short term, I should focus on what we want to do to finish off
my existing fixes. Did we have a consensus on whether to bother
supporting PI? As I said, I'm mildly inclined to do so just because it
mostly comes out in the wash as we fix everything else, and making it
gracefully *refuse* that setup reliably is just as hard.
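
For reference, the PI header in question is just the four bytes from
<linux/if_tun.h>, prepended to every packet unless IFF_NO_PI is set:

struct tun_pi {
	__u16  flags;	/* TUN_PKT_STRIP is set on read if the buffer was too small */
	__be16 proto;	/* ethertype of the payload, e.g. ETH_P_IP / ETH_P_IPV6 */
};

so "supporting" it is mostly a matter of accounting for those four
bytes consistently in both directions.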

I think I'll try to make the vhost-net code much more resilient to
finding that tun_recvmsg() returns a header other than the sock_hlen it
expects, and see how much still actually needs "fixing" if we can do
that.


> I think the design is to delay the tx checksum as much as possible:
> 
> 1) host RX -> TAP TX -> Guest RX -> Guest TX -> TAP RX -> host TX
> 2) VM1 TX -> TAP RX -> switch -> TAP TX -> VM2 RX
> 
> E.g. if checksum offload is supported along the whole path, we don't
> need any software checksum at all. And if any part is not capable of
> doing the checksum, it will be done by the networking core before
> calling the hard_start_xmit of that device.

Right, but in *any* case where the 'device' is going to memcpy the data
around (like tun_put_user() does), it's a waste of time having the
networking core do a *separate* pass over the data just to checksum it.
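
The principle is easy to see in a userspace sketch (illustrative only;
in the kernel this is what the copy-and-checksum helpers are for):

#include <stdint.h>
#include <stddef.h>

/* Fold the 16-bit Internet checksum into the copy itself, so the data
 * is touched exactly once instead of once for the copy and again for
 * a separate checksum pass. */
static uint16_t copy_and_csum(uint8_t *dst, const uint8_t *src, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2) {
		dst[i] = src[i];
		dst[i + 1] = src[i + 1];
		sum += ((uint32_t)src[i] << 8) | src[i + 1];
	}
	if (i < len) {			/* trailing odd byte */
		dst[i] = src[i];
		sum += (uint32_t)src[i] << 8;
	}
	while (sum >> 16)		/* fold the carries back in */
		sum = (sum & 0xffff) + (sum >> 16);

	return (uint16_t)~sum;		/* one's complement */
}

The memory traffic is exactly that of a plain memcpy(); the adds are
noise by comparison.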

> > > > We could similarly do a partial checksum in tun_get_user() and hand it
> > > > off to the network stack with ->ip_summed == CHECKSUM_PARTIAL.
> > > 
> > > I think that is how it is expected to work (via the vnet header), see
> > > virtio_net_hdr_to_skb().
> > 
> > But only if the "guest" supports it; it doesn't get handled by the tun
> > device. It *could*, since we already have the helpers to checksum *as*
> > we copy to/from userspace.
> > 
> > It doesn't help for me to advertise that I support TX checksums in
> > userspace because I'd have to do an extra pass for that. I only do one
> > pass over the data as I encrypt it, and in many block cipher modes the
> > encryption of the early blocks affects the IV for the subsequent
> > blocks... so I can't just go back and "fix" the checksum at the start
> > of the packet, once I'm finished.
> > 
> > So doing the checksum as the packet is copied up to userspace would be
> > very useful.
> 
> 
> I think I get this, but it requires a new TUN feature (and maybe making
> it userspace-controllable via tun_set_csum()).

I don't think it's visible to userspace at all; it's purely between the
tun driver and the network stack. We *always* set NETIF_F_HW_CSUM,
regardless of what the user can cope with. And if the user *doesn't*
support checksum offload then tun will transparently do the checksum
*during* the copy_to_iter() (in either tun_put_user_xdp() or
tun_put_user()).

Userspace sees precisely what it did before. If it doesn't support
checksum offload then it gets a pre-checksummed packet just as before.
It's just that the kernel will do that checksum *while* it's already
touching the data as it copies it to userspace, instead of in a
separate pass.
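
In vnet-header terms (for the IFF_VNET_HDR case; struct quoted from
<linux/virtio_net.h> for reference): if userspace enabled TUN_F_CSUM
via TUNSETOFFLOAD it keeps getting packets flagged for it to finish,
exactly as now; if it didn't, the flag stays clear and the checksum is
already complete. The only thing that changes is *when* the kernel
computes it.

struct virtio_net_hdr {
	__u8 flags;		/* VIRTIO_NET_HDR_F_NEEDS_CSUM: reader finishes the sum */
	__u8 gso_type;
	__virtio16 hdr_len;
	__virtio16 gso_size;
	__virtio16 csum_start;	/* where checksum coverage starts */
	__virtio16 csum_offset;	/* where to store the 16-bit sum, from csum_start */
};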

Although actually, for my *benchmark* case with iperf3 sending UDP, I
spotted in the perf traces that we already do the checksum as we're
copying from userspace in the udp_sendmsg() call. There's a check in
__ip_append_data() which looks to see if the destination device has
HW_CSUM|IP_CSUM features, and does the copy-and-checksum if not. There
are definitely use cases which *don't* have that kind of optimisation
though, and packets that would reach tun_net_xmit() with CHECKSUM_NONE.
So I think it's worth looking at.

