From: jiangyiwen
Subject: Re: [PATCH 0/5] VSOCK: support mergeable rx buffer in vhost-vsock
Date: Tue, 6 Nov 2018 10:17:45 +0800
Message-ID: <5BE0F9C9.2080003@huawei.com>
References: <5BDFF49C.3040603@huawei.com>
To: Jason Wang

On 2018/11/5 17:21, Jason Wang wrote:
> 
> On 2018/11/5 3:43 p.m., jiangyiwen wrote:
>> Currently vsock only supports sending/receiving small packets, so it
>> cannot achieve high performance. As previously discussed with Jason
>> Wang, I revisited vhost-net's idea of mergeable rx buffers and
>> implemented it in vhost-vsock; it allows a big packet to be scattered
>> across multiple buffers, which improves performance noticeably.
>>
>> I wrote a tool to test vhost-vsock performance, mainly sending big
>> packets (64K) in both the Guest->Host and Host->Guest directions.
>> The results are as follows:
>>
>> Before:
>>                Single socket    Multiple sockets (max bandwidth)
>> Guest->Host       ~400MB/s         ~480MB/s
>> Host->Guest      ~1450MB/s        ~1600MB/s
>>
>> After:
>>                Single socket    Multiple sockets (max bandwidth)
>> Guest->Host      ~1700MB/s        ~2900MB/s
>> Host->Guest      ~1700MB/s        ~2900MB/s
>>
>> The test results show a clear performance improvement, and guest
>> memory is no longer wasted.
> 
> 
> Hi:
> 
> Thanks for the patches; the numbers are really impressive.
> 
> But instead of duplicating code between vsock and net, I was
> considering using virtio-net as a transport for vsock. Then we would
> get all the existing features like batching, mergeable rx buffers,
> and multiqueue. Want to consider this idea? Thoughts?
> 

Hi Jason,

I am not very familiar with virtio-net, so I am afraid I cannot give
much useful advice. I do have several questions:

1. If virtio-net is used as the transport, the guest would see a
virtio-net device instead of a virtio-vsock device, right? Would vsock
then act only as a layer between the socket and the net_device? Users
would still create sockets with the AF_VSOCK type, right?

2. Has work on this idea already started, and if so, what is the
current progress?

3. And what is Stefan's opinion?

Thanks,
Yiwen.

>> 
>> ---
>> 
>> Yiwen Jiang (5):
>>    VSOCK: support fill mergeable rx buffer in guest
>>    VSOCK: support fill data to mergeable rx buffer in host
>>    VSOCK: support receive mergeable rx buffer in guest
>>    VSOCK: modify default rx buf size to improve performance
>>    VSOCK: batch sending rx buffer to increase bandwidth
>> 
>>   drivers/vhost/vsock.c                   | 135 +++++++++++++++++++++++------
>>   include/linux/virtio_vsock.h            |  15 +++-
>>   include/uapi/linux/virtio_vsock.h       |   5 ++
>>   net/vmw_vsock/virtio_transport.c        | 147 ++++++++++++++++++++++++------
>>   net/vmw_vsock/virtio_transport_common.c |  59 +++++++++++--
>>   5 files changed, 300 insertions(+), 61 deletions(-)
>> 
> 
> .
> 
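P.S. For anyone following the thread, here is a minimal sketch of the
general mergeable rx buffer idea, modeled on virtio-net's
struct virtio_net_hdr_mrg_rxbuf. The struct and function names below
are illustrative only and do not necessarily match the code in this
series:

/*
 * Sketch only: the packet header is extended with a num_buffers
 * count, as virtio-net does for VIRTIO_NET_F_MRG_RXBUF. The names
 * here are hypothetical.
 */
struct virtio_vsock_mrg_rxbuf_hdr {
	struct virtio_vsock_hdr hdr;	/* existing vsock packet header */
	__le16 num_buffers;		/* rx buffers this packet spans */
};

/* Guest rx path: one big packet may span num_buffers descriptors. */
static void receive_mergeable(struct virtqueue *vq)
{
	struct virtio_vsock_mrg_rxbuf_hdr *hdr;
	unsigned int len;
	u16 num, i;

	/* The first buffer starts with the header announcing the count. */
	hdr = virtqueue_get_buf(vq, &len);
	if (!hdr)
		return;
	num = le16_to_cpu(hdr->num_buffers);

	/* The remaining buffers carry the rest of the payload. */
	for (i = 1; i < num; i++) {
		void *buf = virtqueue_get_buf(vq, &len);

		if (!buf)
			break;	/* host announced more buffers than queued */
		/* ... append buf[0..len) to the packet under reassembly ... */
	}
}

The host-side fill path would then split a 64K packet across however
many guest rx buffers it needs and write the count into num_buffers,
which is what allows small rx buffers to carry big packets without
wasting guest memory.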