Subject: Re: [PATCH v2 7/8] vsock/virtio: increase RX buffer size to 64 KiB
From: Jason Wang
To: Stefano Garzarella
Cc: netdev@vger.kernel.org, "David S. Miller", "Michael S. Tsirkin",
    virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, Stefan Hajnoczi
Date: Tue, 14 May 2019 11:38:05 +0800
References: <20190510125843.95587-1-sgarzare@redhat.com>
            <20190510125843.95587-8-sgarzare@redhat.com>
            <20190513175138.4yycad2xi65komw6@steredhat>
In-Reply-To: <20190513175138.4yycad2xi65komw6@steredhat>

On 2019/5/14 1:51 AM, Stefano Garzarella wrote:
> On Mon, May 13, 2019 at 06:01:52PM +0800, Jason Wang wrote:
>> On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
>>> In order to increase host -> guest throughput with large packets,
>>> we can use 64 KiB RX buffers.
>>>
>>> Signed-off-by: Stefano Garzarella
>>> ---
>>>  include/linux/virtio_vsock.h | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>> index 84b72026d327..5a9d25be72df 100644
>>> --- a/include/linux/virtio_vsock.h
>>> +++ b/include/linux/virtio_vsock.h
>>> @@ -10,7 +10,7 @@
>>>  #define VIRTIO_VSOCK_DEFAULT_MIN_BUF_SIZE 128
>>>  #define VIRTIO_VSOCK_DEFAULT_BUF_SIZE (1024 * 256)
>>>  #define VIRTIO_VSOCK_DEFAULT_MAX_BUF_SIZE (1024 * 256)
>>> -#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (1024 * 4)
>>> +#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (1024 * 64)
>>>  #define VIRTIO_VSOCK_MAX_BUF_SIZE 0xFFFFFFFFUL
>>>  #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE (1024 * 64)
>>
>> We probably don't want such a high-order allocation.
>> It's better to switch to order-0 pages in this case. See
>> add_recvbuf_big() for virtio-net. If we get the datapath unified, we
>> will get more things settled.

> IIUC, you are suggesting to allocate only pages and put them in a
> scatterlist, then add them to the virtqueue.
>
> Is it correct?

Yes, since you are using:

                pkt->buf = kmalloc(buf_len, GFP_KERNEL);
                if (!pkt->buf) {
                        virtio_transport_free_pkt(pkt);
                        break;
                }

This is likely to fail when memory is fragmented, which is kind of
fragile.

> The issue that I have here is that the virtio-vsock guest driver, see
> virtio_vsock_rx_fill(), allocates a struct virtio_vsock_pkt that
> contains the room for the header, then allocates the buffer for the
> payload. At this point it fills the scatterlist with the
> &virtio_vsock_pkt.hdr and the buffer for the payload.

This part should be fine, since what is needed is just adding more
pages to sg[] and calling virtqueue_add_sgs().

> Changing this will require several modifications, and if we get the
> datapath unified, I'm not sure it's worth it.
> Of course, if we leave the datapaths separated, I'd like to do that
> later.
>
> What do you think?

For the driver itself, it should not be hard. But I think you mean the
issue of e.g. virtio_vsock_pkt itself, which doesn't support sg. For
the short term, maybe we can use a kvec instead.

Thanks

> Thanks,
> Stefano