From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Arseniy Krasnov <AVKrasnov@sberdevices.ru>
Cc: Stefan Hajnoczi <stefanha@redhat.com>,
	Stefano Garzarella <sgarzare@redhat.com>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Bobby Eshleman <bobby.eshleman@bytedance.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel@sberdevices.ru, oxffffaa@gmail.com
Subject: Re: [RFC PATCH v4 00/17] vsock: MSG_ZEROCOPY flag support
Date: Mon, 12 Jun 2023 17:20:52 +0000	[thread overview]
Message-ID: <ZIdT9Ei9C5wkHXNe@bullseye> (raw)
In-Reply-To: <20230603204939.1598818-1-AVKrasnov@sberdevices.ru>

Hey Arseniy,

Thanks for this series, very good stuff!

On Sat, Jun 03, 2023 at 11:49:22PM +0300, Arseniy Krasnov wrote:
> Hello,
> 
>                            DESCRIPTION
> 
> this is MSG_ZEROCOPY support for virtio/vsock. I tried to follow the
> current TCP implementation as closely as possible:
> 
> 1) The sender must enable the SO_ZEROCOPY socket option to use this
>    feature. Without it, data is sent in the "classic" copy manner and
>    the MSG_ZEROCOPY flag is ignored (i.e. no completion is generated).
> 
> 2) The kernel delivers completions via the socket's error queue: one
>    completion per tx syscall (or several completions may be merged
>    into one). I reused the logic already implemented for MSG_ZEROCOPY
>    support: 'msg_zerocopy_realloc()' etc.
> 
> The difference from the copy path is not significant: during packet
> allocation, a non-linear skb is created and filled with pinned user
> pages. There are also some updates for the vhost and guest parts of
> the transport - in both cases I've added non-linear skb handling for
> the virtio part. vhost copies data from such an skb into the guest's
> rx virtio buffers; in the guest, the virtio transport fills the tx
> virtio queue with pages from the skb.
> 
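If I read that right, the tx path is conceptually something like the
following. This is pseudocode only, not the actual patch:
pin_user_pages() and skb_fill_page_desc() are real kernel APIs, but the
arguments, helper names, and error handling here are simplified for
illustration:

```c
/* Pseudocode sketch: build a non-linear skb from pinned user pages
 * instead of copying into a kmalloc()'ed linear data area. */
skb = alloc_skb(header_len, GFP_KERNEL);	/* header only, no data area */
n = pin_user_pages(user_addr, nr_pages, ..., pages);
for (i = 0; i < n; i++)
	skb_fill_page_desc(skb, i, pages[i], offset_in_frag(i), frag_len(i));
/* vhost then copies from these frags into the guest's rx buffers,
 * while the guest transport puts the frag pages into the tx virtqueue. */
```
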
> Head of this patchset is:
> https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=d20dd0ea14072e8a90ff864b2c1603bd68920b4b
> 
> 
> This version has several limits/problems:
> 
> 1) As this feature depends entirely on the transport, there is no way
>    (or it is difficult) to check whether the transport can handle it
>    when SO_ZEROCOPY is set. It seems I would need to call the
>    AF_VSOCK-specific setsockopt callback from the SOL_SOCKET
>    setsockopt callback, but this leads to a locking problem, because
>    neither callback is designed to be called from the other. So in the
>    current version SO_ZEROCOPY is set successfully on any type (i.e.
>    transport) of AF_VSOCK socket, but if the transport does not
>    support MSG_ZEROCOPY, the tx routine will fail with EOPNOTSUPP.
> 
>    ^^^
>    This is still not resolved :(
> 

I think to get around this you could set SOCK_CUSTOM_SOCKOPT in the
vsock create function, handle SO_ZEROCOPY in the vsock handler, and
pass the rest of the common options through to sock_setsockopt().
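
Something like this pseudocode sketch (hypothetical and untested; the
SOCK_CUSTOM_SOCKOPT flag and sock_setsockopt() are real, but the
handler body is only illustrative):

```c
/* Hypothetical sketch: with SOCK_CUSTOM_SOCKOPT set at socket
 * creation, SOL_SOCKET options are routed to the vsock handler,
 * which can intercept SO_ZEROCOPY and delegate everything else. */
static int vsock_setsockopt(struct socket *sock, int level, int optname,
			    sockptr_t optval, unsigned int optlen)
{
	if (level == SOL_SOCKET && optname == SO_ZEROCOPY) {
		/* Transport-specific support check would go here; note
		 * the transport may still be unassigned before connect(). */
		...
		return 0;
	}

	if (level == SOL_SOCKET)
		return sock_setsockopt(sock, level, optname, optval, optlen);

	/* existing AF_VSOCK option handling continues here */
	...
}
```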

I think the next issue you would run into, though, is that users may
call setsockopt() before connect(), so the transport will still be
unknown (except for dgrams, which are weird for reasons).

What do you think about returning EOPNOTSUPP when the user selects an
incompatible transport at connect() time, instead of returning it later
in the tx path?

> 2) When MSG_ZEROCOPY is used, we need to enqueue one completion for
>    each tx system call. Each completion carries a flag showing how the
>    tx was performed: zerocopy or copy. This means the whole message
>    must be sent either in zerocopy or in copy mode - we can't send
>    part of a message by copying and the rest with zerocopy (or vice
>    versa). We also need to account for the vsock credit logic, i.e. we
>    can't always send all the data at once - only the allowed number of
>    bytes may be sent at any moment. In the copy case there is no
>    problem, as in the worst case we can send single bytes, but
>    zerocopy is more complex because its smallest transmission unit is
>    a single page. So if there is not enough space on the peer's side
>    to send an integer number of pages (at least one), we would wait,
>    thus stalling the tx side. To overcome this problem I've added a
>    simple rule: zerocopy is possible only when there is enough space
>    on the other side for the whole message (to check whether the
>    current 'msghdr' was already used in previous tx iterations, I use
>    the 'iov_offset' field of its iov iter).
> 
>    ^^^
>    Discussed as ok during v2. Link:
>    https://lore.kernel.org/netdev/23guh3txkghxpgcrcjx7h62qsoj3xgjhfzgtbmqp2slrz3rxr4@zya2z7kwt75l/
> 
> 3) The loopback transport is not supported, because it requires
>    implementing non-linear skb handling in the dequeue logic (as we
>    "send" a fragmented skb and "receive" it from the same queue). I'm
>    going to implement it in the next versions.
> 
>    ^^^ fixed in v2
> 
> 4) The current implementation sets the max packet length to 64KB.
>    IIUC this is due to the 'kmalloc()'-allocated data buffers. I think
>    in the MSG_ZEROCOPY case this value could be increased, because
>    'kmalloc()' is not touched for the data - user-space pages are used
>    as buffers. This limit also trims every message > 64KB, so such
>    messages will be sent in copy mode due to the 'iov_offset' check
>    in 2).
> 
>    ^^^ fixed in v2
> 
>                          PATCHSET STRUCTURE
> 
> Patchset has the following structure:
> 1) Handle non-linear skbuff on receive in virtio/vhost.
> 2) Handle non-linear skbuff on send in virtio/vhost.
> 3) Updates for AF_VSOCK.
> 4) Enable MSG_ZEROCOPY support on transports.
> 5) Tests/tools/docs updates.
> 
>                             PERFORMANCE
> 
> Performance: it is a little tricky to compare performance between
> copy and zerocopy transmission. With zerocopy we need to wait until
> the user buffers are released by the kernel, so it is like a
> synchronous path (wait until the device driver has processed them),
> while with copying we can feed data to the kernel as fast as we want,
> without caring about the device driver. So I compared only the time
> spent in the 'send()' syscall. Combining this value with the total
> number of transmitted bytes gives a Gbit/s figure. Also, to avoid tx
> stalls due to insufficient credit, the receiver allocates the same
> amount of space as the sender needs.
> 
> Sender:
> ./vsock_perf --sender <CID> --buf-size <buf size> --bytes 256M [--zc]
> 
> Receiver:
> ./vsock_perf --vsk-size 256M
> 
> I ran the tests on two setups: a desktop with a Core i7 - I use this
> PC for development, and in this case the guest is a nested guest and
> the host is a normal guest. The other hardware is an embedded board
> with an Atom - here I don't have nested virtualization: the host runs
> on hardware and the guest is a normal guest.
> 
> G2H transmission (values are Gbit/s):
> 
>    Core i7 with nested guest.            Atom with normal guest.
> 
> *-------------------------------*   *-------------------------------*
> |          |         |          |   |          |         |          |
> | buf size |   copy  | zerocopy |   | buf size |   copy  | zerocopy |
> |          |         |          |   |          |         |          |
> *-------------------------------*   *-------------------------------*
> |   4KB    |    3    |    10    |   |   4KB    |   0.8   |   1.9    |
> *-------------------------------*   *-------------------------------*
> |   32KB   |   20    |    61    |   |   32KB   |   6.8   |   20.2   |
> *-------------------------------*   *-------------------------------*
> |   256KB  |   33    |   244    |   |   256KB  |   7.8   |   55     |
> *-------------------------------*   *-------------------------------*
> |    1M    |   30    |   373    |   |    1M    |   7     |   95     |
> *-------------------------------*   *-------------------------------*
> |    8M    |   22    |   475    |   |    8M    |   7     |   114    |
> *-------------------------------*   *-------------------------------*
> 
> H2G:
> 
>    Core i7 with nested guest.            Atom with normal guest.
> 
> *-------------------------------*   *-------------------------------*
> |          |         |          |   |          |         |          |
> | buf size |   copy  | zerocopy |   | buf size |   copy  | zerocopy |
> |          |         |          |   |          |         |          |
> *-------------------------------*   *-------------------------------*
> |   4KB    |   20    |    10    |   |   4KB    |   4.37  |    3     |
> *-------------------------------*   *-------------------------------*
> |   32KB   |   37    |    75    |   |   32KB   |   11    |   18     |
> *-------------------------------*   *-------------------------------*
> |   256KB  |   44    |   299    |   |   256KB  |   11    |   62     |
> *-------------------------------*   *-------------------------------*
> |    1M    |   28    |   335    |   |    1M    |   9     |   77     |
> *-------------------------------*   *-------------------------------*
> |    8M    |   27    |   417    |   |    8M    |  9.35   |  115     |
> *-------------------------------*   *-------------------------------*
> 

Nice!


[...]

Thanks,
Bobby

