From: Jason Wang
Subject: Re: [PATCH 0/5] VSOCK: support mergeable rx buffer in vhost-vsock
Date: Mon, 5 Nov 2018 17:21:55 +0800
To: jiangyiwen, stefanha@redhat.com
Cc: netdev@vger.kernel.org, kvm@vger.kernel.org,
    virtualization@lists.linux-foundation.org
In-Reply-To: <5BDFF49C.3040603@huawei.com>

On 2018/11/5 3:43 PM, jiangyiwen wrote:
> Currently vsock only supports sending/receiving small packets, so it
> can't achieve high performance. As previously discussed with Jason
> Wang, I revisited the vhost-net idea of mergeable rx buffers and
> implemented it in vhost-vsock; it allows a big packet to be scattered
> into different buffers and improves performance noticeably.
>
> I wrote a tool to test vhost-vsock performance, mainly sending big
> packets (64K) in both the Guest->Host and Host->Guest directions. The
> results are as follows:
>
> Before:
>                Single socket    Multiple sockets (max bandwidth)
> Guest->Host    ~400MB/s         ~480MB/s
> Host->Guest    ~1450MB/s        ~1600MB/s
>
> After:
>                Single socket    Multiple sockets (max bandwidth)
> Guest->Host    ~1700MB/s        ~2900MB/s
> Host->Guest    ~1700MB/s        ~2900MB/s
>
> From the test results, performance is improved significantly, and
> guest memory is not wasted.

Hi:

Thanks for the patches, and the numbers are really impressive. But
instead of duplicating code between vsock and net, I was considering
using virtio-net as a transport for vsock. Then we would get all the
existing features like batching, mergeable rx buffers and multiqueue
for free. Want to consider this idea?

Thoughts?

>
> ---
>
> Yiwen Jiang (5):
>   VSOCK: support fill mergeable rx buffer in guest
>   VSOCK: support fill data to mergeable rx buffer in host
>   VSOCK: support receive mergeable rx buffer in guest
>   VSOCK: modify default rx buf size to improve performance
>   VSOCK: batch sending rx buffer to increase bandwidth
>
>  drivers/vhost/vsock.c                   | 135 +++++++++++++++++++------
>  include/linux/virtio_vsock.h            |  15 +++-
>  include/uapi/linux/virtio_vsock.h       |   5 ++
>  net/vmw_vsock/virtio_transport.c        | 147 ++++++++++++++++++++------
>  net/vmw_vsock/virtio_transport_common.c |  59 +++++++++++--
>  5 files changed, 300 insertions(+), 61 deletions(-)
>
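
For readers not familiar with the vhost-net trick being borrowed here: the
sender fills as many fixed-size rx buffers as the packet needs and records
the buffer count in the first buffer's header, and the receiver reads that
count back to stitch the payload together. Below is a minimal userspace
sketch of that scheme, written only to illustrate the idea; the rx_buf
layout and the nbufs field are stand-ins, not the headers these patches
actually define.

/*
 * Userspace model of the mergeable rx buffer idea from vhost-net.
 * The rx_buf layout and the nbufs field are illustrative stand-ins,
 * not the real vhost-vsock structures; only the scatter/reassembly
 * logic is meant to carry over.
 */
#include <stdio.h>
#include <string.h>

#define RX_BUF_SIZE 4096          /* size of one posted rx buffer        */

struct rx_buf {
    unsigned short nbufs;         /* buffer count, valid in first buffer */
    unsigned int   len;           /* payload bytes in this buffer        */
    unsigned char  data[RX_BUF_SIZE];
};

/* Host side: scatter a large packet across as many buffers as needed. */
static int scatter(const unsigned char *pkt, size_t pkt_len,
                   struct rx_buf *bufs, int max_bufs)
{
    int n = 0;
    size_t off = 0;

    while (off < pkt_len && n < max_bufs) {
        size_t chunk = pkt_len - off;

        if (chunk > RX_BUF_SIZE)
            chunk = RX_BUF_SIZE;
        memcpy(bufs[n].data, pkt + off, chunk);
        bufs[n].len = (unsigned int)chunk;
        off += chunk;
        n++;
    }
    if (off < pkt_len)
        return -1;                /* not enough rx buffers posted        */
    bufs[0].nbufs = (unsigned short)n;   /* tell the guest the count     */
    return n;
}

/* Guest side: read nbufs from the first buffer and reassemble. */
static size_t gather(const struct rx_buf *bufs, unsigned char *out)
{
    size_t off = 0;
    int i;

    for (i = 0; i < bufs[0].nbufs; i++) {
        memcpy(out + off, bufs[i].data, bufs[i].len);
        off += bufs[i].len;
    }
    return off;
}

int main(void)
{
    unsigned char pkt[64 * 1024];        /* a 64K packet as in the test  */
    unsigned char reassembled[64 * 1024];
    struct rx_buf bufs[32] = { 0 };
    size_t i, total;
    int n;

    for (i = 0; i < sizeof(pkt); i++)
        pkt[i] = (unsigned char)i;

    n = scatter(pkt, sizeof(pkt), bufs, 32);
    if (n < 0)
        return 1;
    total = gather(bufs, reassembled);
    printf("used %d buffers, reassembled %zu bytes, %s\n", n, total,
           memcmp(pkt, reassembled, sizeof(pkt)) ? "mismatch" : "match");
    return 0;
}

In virtio-net the count lives in the num_buffers field of struct
virtio_net_hdr_mrg_rxbuf on the first buffer; the uapi change in the
diffstat above suggests these patches add an equivalent field to the
vsock packet header, though the exact layout is defined by the patches
themselves.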