Date: Tue, 14 May 2019 18:20:56 +0200
From: Stefano Garzarella
To: Jason Wang
Cc: netdev@vger.kernel.org, "David S. Miller", "Michael S. Tsirkin",
    virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org, Stefan Hajnoczi
Subject: Re: [PATCH v2 7/8] vsock/virtio: increase RX buffer size to 64 KiB
Message-ID: <20190514162056.5aotcuzsi6e6wya7@steredhat>
References: <20190510125843.95587-1-sgarzare@redhat.com>
 <20190510125843.95587-8-sgarzare@redhat.com>
 <20190513175138.4yycad2xi65komw6@steredhat>

On Tue, May 14, 2019 at 11:38:05AM +0800, Jason Wang wrote:
> 
> On 2019/5/14 1:51 AM, Stefano Garzarella wrote:
> > On Mon, May 13, 2019 at 06:01:52PM +0800, Jason Wang wrote:
> > > On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> > > > In order to increase host -> guest throughput with large packets,
> > > > we can use 64 KiB RX buffers.
> > > > 
> > > > Signed-off-by: Stefano Garzarella
> > > > ---
> > > >  include/linux/virtio_vsock.h | 2 +-
> > > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > > 
> > > > diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
> > > > index 84b72026d327..5a9d25be72df 100644
> > > > --- a/include/linux/virtio_vsock.h
> > > > +++ b/include/linux/virtio_vsock.h
> > > > @@ -10,7 +10,7 @@
> > > >  #define VIRTIO_VSOCK_DEFAULT_MIN_BUF_SIZE  128
> > > >  #define VIRTIO_VSOCK_DEFAULT_BUF_SIZE      (1024 * 256)
> > > >  #define VIRTIO_VSOCK_DEFAULT_MAX_BUF_SIZE  (1024 * 256)
> > > > -#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE   (1024 * 4)
> > > > +#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE   (1024 * 64)
> > > >  #define VIRTIO_VSOCK_MAX_BUF_SIZE          0xFFFFFFFFUL
> > > >  #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE      (1024 * 64)
> > > 
> > > We probably don't want such a high order allocation. It's better to switch
> > > to order 0 pages in this case. See add_recvbuf_big() for virtio-net. If we
> > > get the datapath unified, we will get more of this sorted out.
> > 
> > IIUC, you are suggesting to allocate only pages and put them in a
> > scatterlist, then add them to the virtqueue.
> > 
> > Is that correct?
> 
> Yes, since you are using:
> 
>                 pkt->buf = kmalloc(buf_len, GFP_KERNEL);
>                 if (!pkt->buf) {
>                         virtio_transport_free_pkt(pkt);
>                         break;
>                 }
> 
> This is likely to fail when the memory is fragmented, which is kind of
> fragile.
> 

Thanks for pointing that out.

> 
> > The issue that I have here is that the virtio-vsock guest driver, see
> > virtio_vsock_rx_fill(), allocates a struct virtio_vsock_pkt that
> > contains the room for the header, then allocates the buffer for the payload.
> > At this point it fills the scatterlist with the &virtio_vsock_pkt.hdr and the
> > buffer for the payload.
> 
> This part should be fine, since what is needed is just adding more pages to
> sg[] and calling virtqueue_add_sgs().
> 

Yes, I agree.

> 
> > Changing this will require several modifications, and if we get the datapath
> > unified, I'm not sure it's worth it.
> > Of course, if we leave the datapaths separated, I'd like to do that later.
> > 
> > What do you think?
> 
> For the driver itself, it should not be hard. But I think you mean the
> issue of e.g. virtio_vsock_pkt itself, which doesn't support sg. For the
> short term, maybe we can use kvec instead.

I'll try to use kvec in the virtio_vsock_pkt.

Since this struct is also shared with the host driver (vhost-vsock), I hope
the changes can be kept limited; otherwise we can drop the last 2 patches of
the series for now, leaving the RX buffer size at 4 KiB.
(A very rough, untested sketch of the direction I have in mind is in the
P.S. below.)

Thanks,
Stefano
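
P.S. Just to make the direction concrete, here is a very rough, untested
sketch of what I mean by "use kvec". None of the new field names below
exist today; they are only placeholders for the discussion. The idea is to
keep struct virtio_vsock_pkt shared between the guest driver and
vhost-vsock, but to describe the payload as a set of order-0 pages through
a kvec array (struct kvec comes from <linux/uio.h>) instead of one big
kmalloc()'ed buffer:

struct virtio_vsock_pkt {
        struct virtio_vsock_hdr hdr;
        /* ... existing fields unchanged ... */
        struct kvec *vec;       /* one entry per order-0 payload page */
        int nr_vecs;            /* number of populated entries */
};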
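
On the RX side, virtio_vsock_rx_fill() could then queue the header plus a
set of order-0 pages with virtqueue_add_sgs(), roughly like the sketch
below (again untested; VSOCK_RX_NR_PAGES and the vec[]/nr_vecs fields are
made up for the example, vec[] is assumed to be pre-allocated and pkt
zero-initialized by the caller):

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/uio.h>
#include <linux/virtio.h>
#include <linux/virtio_vsock.h>

/* 64 KiB of payload split into PAGE_SIZE chunks (16 pages with 4 KiB pages) */
#define VSOCK_RX_NR_PAGES (VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE / PAGE_SIZE)

static int virtio_vsock_rx_fill_one(struct virtqueue *vq,
                                    struct virtio_vsock_pkt *pkt)
{
        struct scatterlist hdr, bufs[VSOCK_RX_NR_PAGES];
        struct scatterlist *sgs[1 + VSOCK_RX_NR_PAGES];
        int i, ret = -ENOMEM;

        /* The first in-sg is still the packet header, as today. */
        sg_init_one(&hdr, &pkt->hdr, sizeof(pkt->hdr));
        sgs[0] = &hdr;

        /* The payload is a set of independent order-0 pages. */
        for (i = 0; i < VSOCK_RX_NR_PAGES; i++) {
                struct page *page = alloc_page(GFP_KERNEL);

                if (!page)
                        goto err_free;

                /* Hypothetical kvec array, see the struct sketch above. */
                pkt->vec[i].iov_base = page_address(page);
                pkt->vec[i].iov_len = PAGE_SIZE;
                pkt->nr_vecs = i + 1;

                sg_init_table(&bufs[i], 1);
                sg_set_page(&bufs[i], page, PAGE_SIZE, 0);
                sgs[1 + i] = &bufs[i];
        }

        /* 0 out-sgs, 1 header in-sg + VSOCK_RX_NR_PAGES payload in-sgs. */
        ret = virtqueue_add_sgs(vq, sgs, 0, 1 + VSOCK_RX_NR_PAGES,
                                pkt, GFP_KERNEL);
        if (ret < 0)
                goto err_free;

        return 0;

err_free:
        while (pkt->nr_vecs > 0) {
                pkt->nr_vecs--;
                free_page((unsigned long)pkt->vec[pkt->nr_vecs].iov_base);
        }
        return ret;
}

Each allocation is order 0, so it should not suffer from fragmentation like
the current 64 KiB kmalloc(), similarly to what add_recvbuf_big() does for
virtio-net. (In real code the scatterlists probably should not live on the
stack; this is only meant to show the shape of the change.)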