From: Stefano Garzarella <sgarzare@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: netdev@vger.kernel.org, "David S. Miller" <davem@davemloft.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	virtualization@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [PATCH v2 7/8] vsock/virtio: increase RX buffer size to 64 KiB
Date: Tue, 14 May 2019 18:20:56 +0200
Message-ID: <20190514162056.5aotcuzsi6e6wya7@steredhat>
In-Reply-To: <fd934a4c-f7d2-8a04-ed93-a3b690ed0d79@redhat.com>

On Tue, May 14, 2019 at 11:38:05AM +0800, Jason Wang wrote:
> 
> > On 2019/5/14 1:51 AM, Stefano Garzarella wrote:
> > On Mon, May 13, 2019 at 06:01:52PM +0800, Jason Wang wrote:
> > > On 2019/5/10 8:58 PM, Stefano Garzarella wrote:
> > > > In order to increase host -> guest throughput with large packets,
> > > > we can use 64 KiB RX buffers.
> > > > 
> > > > Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> > > > ---
> > > >    include/linux/virtio_vsock.h | 2 +-
> > > >    1 file changed, 1 insertion(+), 1 deletion(-)
> > > > 
> > > > diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
> > > > index 84b72026d327..5a9d25be72df 100644
> > > > --- a/include/linux/virtio_vsock.h
> > > > +++ b/include/linux/virtio_vsock.h
> > > > @@ -10,7 +10,7 @@
> > > >    #define VIRTIO_VSOCK_DEFAULT_MIN_BUF_SIZE	128
> > > >    #define VIRTIO_VSOCK_DEFAULT_BUF_SIZE		(1024 * 256)
> > > >    #define VIRTIO_VSOCK_DEFAULT_MAX_BUF_SIZE	(1024 * 256)
> > > > -#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 4)
> > > > +#define VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE	(1024 * 64)
> > > >    #define VIRTIO_VSOCK_MAX_BUF_SIZE		0xFFFFFFFFUL
> > > >    #define VIRTIO_VSOCK_MAX_PKT_BUF_SIZE		(1024 * 64)
> > > 
> > > We probably don't want such a high-order allocation. It's better to
> > > switch to using order-0 pages in this case. See add_recvbuf_big() in
> > > virtio-net. If we get the datapath unified, we will get more of this
> > > sorted out.
> > IIUC, you are suggesting that we allocate only pages, put them in a
> > scatterlist, and then add them to the virtqueue.
> > 
> > Is that correct?
> 
> 
> Yes, since you are using:
> 
>                 pkt->buf = kmalloc(buf_len, GFP_KERNEL);
>                 if (!pkt->buf) {
>                         virtio_transport_free_pkt(pkt);
>                         break;
>                 }
> 
> This is likely to fail when memory is fragmented, which makes it kind
> of fragile.
> 
> 

Thanks for pointing that out.
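
If I've understood the suggestion, the allocation side could look
roughly like this. This is an untested sketch loosely modeled on
virtio-net's add_recvbuf_big(); the helper name and error handling
are mine:

/* Sketch: build the 64 KiB payload from order-0 pages instead of one
 * high-order kmalloc(), so the allocation does not depend on finding
 * physically contiguous memory.
 */
#define RX_BUF_PAGES	(VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE / PAGE_SIZE)

static int virtio_vsock_alloc_rx_pages(struct page **pages)
{
	int i;

	for (i = 0; i < RX_BUF_PAGES; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto err;
	}
	return 0;

err:
	while (i--)
		__free_page(pages[i]);
	return -ENOMEM;
}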

> > 
> > The issue I have here is that the virtio-vsock guest driver, see
> > virtio_vsock_rx_fill(), allocates a struct virtio_vsock_pkt that
> > contains room for the header, then allocates the buffer for the
> > payload. At that point it fills the scatterlist with
> > &virtio_vsock_pkt.hdr and the payload buffer.
> 
> 
> This part should be fine, since what is needed is just adding more
> pages to sg[] and calling virtqueue_add_sgs().
> 
> 

Yes, I agree.
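
Concretely, the fill path could then become something like the sketch
below. sg_set_buf(), sg_set_page() and virtqueue_add_inbuf() are the
existing scatterlist/virtio core APIs; pkt, pages[] and vq are assumed
to be in scope, and the surrounding logic is mine:

	struct scatterlist sg[RX_BUF_PAGES + 1];
	int i, ret;

	sg_init_table(sg, RX_BUF_PAGES + 1);

	/* The header stays inline in the packet, as today. */
	sg_set_buf(&sg[0], &pkt->hdr, sizeof(pkt->hdr));

	/* One scatterlist entry per order-0 page of payload. */
	for (i = 0; i < RX_BUF_PAGES; i++)
		sg_set_page(&sg[i + 1], pages[i], PAGE_SIZE, 0);

	ret = virtqueue_add_inbuf(vq, sg, RX_BUF_PAGES + 1, pkt, GFP_KERNEL);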

> > 
> > Changing this will require several modifications, and if we get the
> > datapath unified, I'm not sure it's worth it.
> > Of course, if we leave the datapaths separate, I'd like to do that
> > later.
> > 
> > What do you think?
> 
> 
> For the driver itself, it should not be hard. But I think you mean the
> issue of e.g. virtio_vsock_pkt itself, which doesn't support sg. In the
> short term, maybe we can use kvec instead.

I'll try to use kvec in the virtio_vsock_pkt.
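
Something along these lines, perhaps. This is just a sketch of the
struct change; the new field names are mine, and I've elided the fields
that would stay as they are:

#include <linux/uio.h>	/* struct kvec */

struct virtio_vsock_pkt {
	struct virtio_vsock_hdr	hdr;
	/* ... work/list/etc. fields unchanged ... */
	struct kvec *vec;	/* replaces: void *buf; */
	int nr_vecs;		/* entries in vec[], one per page */
	u32 len;
	u32 off;
};

Each kvec entry would then point at one order-0 page's worth of
payload, so both the guest driver and vhost-vsock could walk the same
vector.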

Since this struct is also shared with the host driver (vhost-vsock),
I hope the changes can be limited; otherwise, we can remove the last 2
patches of the series for now, leaving the RX buffer size at 4 KiB.

Thanks,
Stefano


Thread overview: 38+ messages
2019-05-10 12:58 [PATCH v2 0/8] vsock/virtio: optimizations to increase the throughput Stefano Garzarella
2019-05-10 12:58 ` [PATCH v2 1/8] vsock/virtio: limit the memory used per-socket Stefano Garzarella
2019-05-12 16:57   ` Michael S. Tsirkin
2019-05-13 16:40     ` Stefano Garzarella
2019-05-13  9:58   ` Jason Wang
2019-05-13 17:23     ` Stefano Garzarella
2019-05-14  3:25       ` Jason Wang
2019-05-14  3:40         ` Jason Wang
2019-05-14 16:35         ` Stefano Garzarella
2019-05-15  2:48           ` Jason Wang
2019-05-28 16:45             ` Stefano Garzarella
2019-05-29  0:59               ` Jason Wang
2019-05-16 15:25   ` Stefan Hajnoczi
2019-05-17  8:25     ` Stefano Garzarella
2019-05-20  8:57       ` Stefan Hajnoczi
2019-05-10 12:58 ` [PATCH v2 2/8] vsock/virtio: free packets during the socket release Stefano Garzarella
2019-05-10 22:20   ` David Miller
2019-05-11  8:27     ` Stefano Garzarella
2019-05-16 15:32   ` Stefan Hajnoczi
2019-05-17  8:26     ` Stefano Garzarella
2019-05-10 12:58 ` [PATCH v2 3/8] vsock/virtio: fix locking for fwd_cnt and buf_alloc Stefano Garzarella
2019-05-10 12:58 ` [PATCH v2 4/8] vsock/virtio: reduce credit update messages Stefano Garzarella
2019-05-10 12:58 ` [PATCH v2 5/8] vhost/vsock: split packets to send using multiple buffers Stefano Garzarella
2019-05-10 12:58 ` [PATCH v2 6/8] vsock/virtio: change the maximum packet size allowed Stefano Garzarella
2019-05-10 12:58 ` [PATCH v2 7/8] vsock/virtio: increase RX buffer size to 64 KiB Stefano Garzarella
2019-05-13 10:01   ` Jason Wang
2019-05-13 17:51     ` Stefano Garzarella
2019-05-14  3:38       ` Jason Wang
2019-05-14 16:20         ` Stefano Garzarella [this message]
2019-05-15  2:50           ` Jason Wang
2019-05-15  8:22             ` Stefano Garzarella
2019-05-10 12:58 ` [PATCH v2 8/8] vsock/virtio: make the RX buffer size tunable Stefano Garzarella
2019-05-13 10:05   ` Jason Wang
2019-05-13 12:46     ` Jason Wang
2019-05-14 16:10       ` Stefano Garzarella
2019-05-13  9:33 ` [PATCH v2 0/8] vsock/virtio: optimizations to increase the throughput Jason Wang
2019-05-13 16:49   ` Stefano Garzarella
2019-05-20 14:09   ` Stefano Garzarella
