From: Stefano Garzarella <sgarzare@redhat.com>
To: Jason Wang <jasowang@redhat.com>,
	"Jiang Wang ." <jiang.wang@bytedance.com>
Cc: virtualization@lists.linux-foundation.org,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Arseny Krasnov" <arseny.krasnov@kaspersky.com>,
	cong.wang@bytedance.com,
	"Xiongchun Duan" <duanxiongchun@bytedance.com>,
	"Yongji Xie" <xieyongji@bytedance.com>,
	柴稳 <chaiwen.cc@bytedance.com>,
	"David S. Miller" <davem@davemloft.net>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Steven Rostedt" <rostedt@goodmis.org>,
	"Ingo Molnar" <mingo@redhat.com>,
	"Colin Ian King" <colin.king@canonical.com>,
	"Jorgen Hansen" <jhansen@vmware.com>,
	"Andra Paraschiv" <andraprs@amazon.com>,
	"Norbert Slusarek" <nslusarek@gmx.net>,
	"Lu Wei" <luwei32@huawei.com>,
	"Alexander Popov" <alex.popov@linux.com>,
	kvm@vger.kernel.org, Networking <netdev@vger.kernel.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC v1 0/6] virtio/vsock: introduce SOCK_DGRAM support
Date: Thu, 10 Jun 2021 11:51:51 +0200
Message-ID: <20210610095151.2cpyny56kbotzppp@steredhat>
In-Reply-To: <47ce307b-f95e-25c7-ed58-9cd1cbff5b57@redhat.com>

On Thu, Jun 10, 2021 at 03:46:55PM +0800, Jason Wang wrote:
>
>On 2021/6/10 3:23 PM, Stefano Garzarella wrote:
>>On Thu, Jun 10, 2021 at 12:02:35PM +0800, Jason Wang wrote:
>>>
>>>On 2021/6/10 11:43 AM, Jiang Wang . wrote:
>>>>On Wed, Jun 9, 2021 at 6:51 PM Jason Wang <jasowang@redhat.com> wrote:
>>>>>
>>>>>On 2021/6/10 7:24 AM, Jiang Wang wrote:
>>>>>>This patchset implements support of SOCK_DGRAM for virtio
>>>>>>transport.
>>>>>>
>>>>>>Datagram sockets are connectionless and unreliable. To avoid
>>>>>>unfair contention with stream and other sockets, add two more
>>>>>>virtqueues and a new feature bit to indicate whether those two
>>>>>>new queues exist.
>>>>>>
>>>>>>Dgram does not use the existing credit update mechanism for
>>>>>>stream sockets. When sending from the guest/driver, packets are
>>>>>>sent synchronously, so the sender will get an error when the
>>>>>>virtqueue is full.
>>>>>>When sending from the host/device, packets are sent asynchronously
>>>>>>because the descriptor memory belongs to the corresponding QEMU
>>>>>>process.
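
Just to make the expected guest-side flow concrete, here is a minimal
userspace sketch assuming the SOCK_DGRAM support proposed in this
series. The port number, payload, and the exact errno returned on a
full virtqueue are illustrative assumptions, not something the series
defines:

/*
 * Minimal guest-side sketch, assuming the SOCK_DGRAM support proposed
 * in this series: send a log line to the host without any connection
 * setup or credit accounting. The port number and payload are
 * illustrative assumptions; the exact errno returned when the TX
 * virtqueue is full is up to the implementation.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm host = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_HOST,	/* well-known host CID (2) */
		.svm_port = 9000,
	};
	const char msg[] = "app-perf: 42 req/s";
	int fd = socket(AF_VSOCK, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;

	/*
	 * Sending from the guest is synchronous in the driver, so a full
	 * virtqueue shows up as an error here instead of blocking on
	 * credits.
	 */
	if (sendto(fd, msg, sizeof(msg) - 1, 0,
		   (struct sockaddr *)&host, sizeof(host)) < 0)
		perror("sendto");

	return 0;
}
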
>>>>>
>>>>>What's the use case for the datagram vsock?
>>>>>
>>>>One use case is for non-critical info logging from the guest
>>>>to the host, such as the performance data of some applications.
>>>
>>>
>>>Anything that prevents you from using the stream socket?
>>>
>>>
>>>>
>>>>It can also be used to replace UDP communications between
>>>>the guest and the host.
>>>
>>>
>>>Any advantage for VSOCK in this case? Is it for performance? (I
>>>guess not, since I don't expect vsock to be faster.)
>>
>>I think the general advantage of using vsock is for guest agents
>>that potentially don't need any configuration.
>
>
>Right, I wonder if we really need datagram, considering the host-to-guest
>communication is reliable.
>
>(Note that I don't object to it, since vsock already supports that; I
>just wonder about its use cases.)

Yep, it was the same concern I had :-)
Also because we're now adding SEQPACKET, which provides reliable 
datagram support.

But IIUC the use case is logging, where you don't need reliable 
communication and you want to avoid keeping many open connections with 
different guests.

So the server on the host can be pretty simple and doesn't have to 
handle connections: it just waits for datagrams on a port.
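
For example, a minimal sketch of such a host-side receiver, assuming
the SOCK_DGRAM support from this series (the port number is an
arbitrary value chosen for illustration):

/*
 * Minimal sketch of a host-side datagram logger, assuming the
 * SOCK_DGRAM support proposed in this series. No per-guest connection
 * state: it just binds to a port and waits for datagrams. Port 9000
 * is an arbitrary example value.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_ANY,	/* datagrams from any guest */
		.svm_port = 9000,
	};
	char buf[4096];
	int fd;

	fd = socket(AF_VSOCK, SOCK_DGRAM, 0);
	if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
		return 1;

	for (;;) {
		struct sockaddr_vm src;
		socklen_t srclen = sizeof(src);
		ssize_t n = recvfrom(fd, buf, sizeof(buf) - 1, 0,
				     (struct sockaddr *)&src, &srclen);

		if (n <= 0)
			continue;
		buf[n] = '\0';
		printf("cid %u port %u: %s\n", src.svm_cid, src.svm_port, buf);
	}
}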

>
>
>>
>>>
>>>An obvious drawback is that it breaks migration. Using UDP, you 
>>>can have very rich feature support from the kernel, which vsock 
>>>can't offer.
>>>
>>
>>Thanks for bringing this up!
>>What features does UDP support that datagram on vsock could not?
>
>
>E.g. sendpage() and busy polling. And using UDP means qdiscs and 
>eBPF can work.

Thanks, I see!

>
>
>>
>>>
>>>>
>>>>>>The virtio spec patch is here:
>>>>>>https://www.spinics.net/lists/linux-virtualization/msg50027.html
>>>>>
>>>>>Having had a quick glance, I suggest splitting mergeable rx buffer
>>>>>support into a separate patch.
>>>>Sure.
>>>>
>>>>>But I think it's time to revisit the idea of unifying virtio-net
>>>>>and virtio-vsock. Otherwise we're duplicating features and bugs.
>>>>For the mergeable rxbuf related code, I think a set of common helper
>>>>functions can be used by both virtio-net and virtio-vsock. For other
>>>>parts, that may not be very beneficial. I will think about it more.
>>>>
>>>>If there is a previous email discussion about this topic, could you
>>>>send me some links? I did a quick web search but did not find any
>>>>related info. Thanks.
>>>
>>>
>>>We had a lot:
>>>
>>>[1] https://patchwork.kernel.org/project/kvm/patch/5BDFF537.3050806@huawei.com/
>>>[2] https://lists.linuxfoundation.org/pipermail/virtualization/2018-November/039798.html
>>>[3] https://www.lkml.org/lkml/2020/1/16/2043
>>>
>>
>>When I tried it, the biggest problems that blocked me were all the 
>>features strictly related to the TCP/IP stack and Ethernet devices 
>>that the vsock device doesn't know how to handle: TSO, GSO, checksums, 
>>MAC, NAPI, XDP, minimum Ethernet frame size, MTU, etc.
>
>
>It depends on which level we want to share:
>
>1) sharing code
>2) sharing devices
>3) making vsock a protocol that is understood by the network core
>
>We can start from 1): the low-level tx/rx logic can be shared with both 
>virtio-net and vhost-net. For 2) we probably need some work on the 
>spec, probably with a new feature bit to indicate that it's a vsock 
>device, not an Ethernet device. Then, if it is probed as a vsock device, 
>we won't let packets be delivered to the TCP/IP stack. For 3), it would 
>be even harder and I'm not sure it's worth doing.
>
>
>>
>>So in my opinion unifying them is not so simple, because vsock is not 
>>really an Ethernet device, but simply a socket.
>
>
>We can start from sharing code.

Yep, I agree, and maybe the mergeable buffer is a good starting point to 
share code!

@Jiang, do you want to take a look at this possibility?

Thanks,
Stefano



Thread overview: 27+ messages
2021-06-09 23:24 [RFC v1 0/6] virtio/vsock: introduce SOCK_DGRAM support Jiang Wang
2021-06-09 23:24 ` [RFC v1 1/6] virtio/vsock: add VIRTIO_VSOCK_F_DGRAM feature bit Jiang Wang
2021-06-18  9:39   ` Stefano Garzarella
2021-06-21 17:24     ` [External] " Jiang Wang .
2021-06-22 10:50       ` Stefano Garzarella
2021-06-09 23:24 ` [RFC v1 2/6] virtio/vsock: add support for virtio datagram Jiang Wang
2021-06-18  9:52   ` Stefano Garzarella
2021-06-18 10:11   ` Stefano Garzarella
2021-06-09 23:24 ` [RFC v1 3/6] vhost/vsock: add support for vhost dgram Jiang Wang
2021-06-18 10:13   ` Stefano Garzarella
2021-06-21 17:32     ` [External] " Jiang Wang .
2021-06-09 23:24 ` [RFC v1 4/6] vsock_test: add tests for vsock dgram Jiang Wang
2021-06-09 23:24 ` [RFC v1 5/6] vhost/vsock: add kconfig for vhost dgram support Jiang Wang
2021-06-18  9:54   ` Stefano Garzarella
2021-06-21 17:25     ` [External] " Jiang Wang .
2021-06-09 23:24 ` [RFC v1 6/6] virtio/vsock: add sysfs for rx buf len for dgram Jiang Wang
2021-06-18 10:04   ` Stefano Garzarella
2021-06-21 17:27     ` [External] " Jiang Wang .
2021-06-10  1:50 ` [RFC v1 0/6] virtio/vsock: introduce SOCK_DGRAM support Jason Wang
2021-06-10  3:43   ` Jiang Wang .
2021-06-10  4:02     ` Jason Wang
2021-06-10  7:23       ` Stefano Garzarella
2021-06-10  7:46         ` Jason Wang
2021-06-10  9:51           ` Stefano Garzarella [this message]
2021-06-10 16:44             ` Jiang Wang .
2021-06-18  9:35 ` Stefano Garzarella
2021-06-21 17:21   ` [External] " Jiang Wang .
