From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
Stefan Hajnoczi <stefanha@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
virtualization@lists.linux-foundation.org,
Jason Wang <jasowang@redhat.com>,
kvm@vger.kernel.org
Subject: Re: [PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
Date: Sun, 1 Sep 2019 04:26:19 -0400 [thread overview]
Message-ID: <20190901025815-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20190801133616.sik5drn6ecesukbb@steredhat>
On Thu, Aug 01, 2019 at 03:36:16PM +0200, Stefano Garzarella wrote:
> On Thu, Aug 01, 2019 at 09:21:15AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Aug 01, 2019 at 12:47:54PM +0200, Stefano Garzarella wrote:
> > > On Tue, Jul 30, 2019 at 04:42:25PM -0400, Michael S. Tsirkin wrote:
> > > > On Tue, Jul 30, 2019 at 11:35:39AM +0200, Stefano Garzarella wrote:
> > >
> > > (...)
> > >
> > > > >
> > > > > The problem here is compatibility. Before this series, the
> > > > > virtio-vsock and vhost-vsock modules had the RX buffer size
> > > > > hard-coded (VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE = 4K). So, if we send
> > > > > a buffer smaller than 4K, there might be issues.
> > > >
> > > > Shouldn't be if they are following the spec. If not let's fix
> > > > the broken parts.
> > > >
> > > > >
> > > > > Maybe it is time to add 'features' to the virtio-vsock device.
> > > > >
> > > > > Thanks,
> > > > > Stefano
> > > >
> > > > Why would a remote care about buffer sizes?
> > > >
> > > > Let's first see what the issues are. If they exist
> > > > we can either fix the bugs, or code the bug as a feature in spec.
> > > >
> > >
> > > The vhost_transport '.stream_enqueue' callback
> > > [virtio_transport_stream_enqueue()] calls virtio_transport_send_pkt_info(),
> > > passing the user message. This function allocates a new packet, copying
> > > the user message, but (before this series) it limits the packet size to
> > > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (4K):
> > >
> > > static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
> > >                                           struct virtio_vsock_pkt_info *info)
> > > {
> > >         ...
> > >         /* we can send less than pkt_len bytes */
> > >         if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
> > >                 pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
> > >
> > >         /* virtio_transport_get_credit might return less than pkt_len credit */
> > >         pkt_len = virtio_transport_get_credit(vvs, pkt_len);
> > >
> > >         /* Do not send zero length OP_RW pkt */
> > >         if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
> > >                 return pkt_len;
> > >         ...
> > > }
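For context, virtio_transport_get_credit() implements vsock's credit-based
flow control: a sender may have at most the peer's advertised buffer size
in flight at any time. A minimal user-space sketch of that accounting
(field names mirror struct virtio_vsock_sock, but this is a simplified
illustration, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified credit state, modeled on struct virtio_vsock_sock. */
struct vsock_credit {
	uint32_t tx_cnt;         /* bytes we have sent so far */
	uint32_t peer_fwd_cnt;   /* bytes the peer reports having consumed */
	uint32_t peer_buf_alloc; /* peer's advertised receive buffer size */
};

/* Return how many of 'wanted' bytes we may send without overrunning
 * the peer's buffer, and account for them as sent. */
static uint32_t get_credit(struct vsock_credit *c, uint32_t wanted)
{
	uint32_t in_flight = c->tx_cnt - c->peer_fwd_cnt;
	uint32_t credit = c->peer_buf_alloc - in_flight;
	uint32_t granted = wanted < credit ? wanted : credit;

	c->tx_cnt += granted;
	return granted;
}
```

So even without the 4K clamp above, a well-behaved sender never exceeds
what the peer advertised via buf_alloc.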
> > >
> > > then it queues the packet for the TX worker, calling .send_pkt()
> > > [vhost_transport_send_pkt() in the vhost_transport case].
> > >
> > > The main function executed by the TX worker is
> > > vhost_transport_do_send_pkt(), which picks up a buffer from the
> > > virtqueue and tries to copy the packet (up to 4K) into it. If the
> > > buffer allocated by the guest is smaller than 4K, I think the packet
> > > will be discarded with an error:
> > >
>
> I'm adding more lines to explain this better.
>
> > > static void
> > > vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
> > >                             struct vhost_virtqueue *vq)
> > > {
> ...
>
>         head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
>                                  &out, &in, NULL, NULL);
>
>         ...
>
>         len = iov_length(&vq->iov[out], in);
>         iov_iter_init(&iov_iter, READ, &vq->iov[out], in, len);
>
>         nbytes = copy_to_iter(&pkt->hdr, sizeof(pkt->hdr), &iov_iter);
>         if (nbytes != sizeof(pkt->hdr)) {
>                 virtio_transport_free_pkt(pkt);
>                 vq_err(vq, "Faulted on copying pkt hdr\n");
>                 break;
>         }
>
> > >         ...
> > >         nbytes = copy_to_iter(pkt->buf, pkt->len, &iov_iter);
> >
> > isn't pkt->len the actual length though?
> >
>
> It is the length of the packet that we are copying into the guest RX
> buffers pointed to by the iov_iter. The guest allocates an iovec with 2
> buffers, one for the header and one for the payload (4KB).
BTW at the moment that forces another kmalloc within the virtio core. Maybe
vsock needs a flag to skip allocation in this case. Worth benchmarking.
See virtqueue_use_indirect(), which just checks total_sg > 1.
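For reference, the check in question is roughly the following (a simplified
paraphrase of drivers/virtio/virtio_ring.c; the in-tree function takes a
struct vring_virtqueue rather than these flattened parameters):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified from virtqueue_use_indirect() in virtio_ring.c: an
 * indirect descriptor table (and hence a kmalloc in the virtio core)
 * is used as soon as a request spans more than one scatterlist entry,
 * provided VIRTIO_RING_F_INDIRECT_DESC was negotiated and free
 * descriptors remain. */
static bool use_indirect(bool indirect, unsigned int total_sg,
			 unsigned int num_free)
{
	return indirect && total_sg > 1 && num_free;
}
```

So a vsock RX buffer posted as two scatterlist entries (header + payload)
always takes the indirect path when the feature is negotiated.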
>
> > >         if (nbytes != pkt->len) {
> > >                 virtio_transport_free_pkt(pkt);
> > >                 vq_err(vq, "Faulted on copying pkt buf\n");
> > >                 break;
> > >         }
> > >         ...
> > > }
> > >
> > >
> > > This series changes this behavior: now we split the packet in
> > > vhost_transport_do_send_pkt() depending on the buffer found in the
> > > virtqueue.
> > >
> > > We didn't change the buffer size in this series, so we are still
> > > backward compatible, but if we were to use buffers smaller than 4K,
> > > we would hit the error described above.
So that's an implementation bug then? It made an assumption
of a 4K-sized buffer? Or even a PAGE_SIZE-sized buffer?
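To illustrate the splitting behavior the series introduces, here is a
user-space sketch of copying one packet payload across receive buffers of
arbitrary sizes instead of assuming each buffer holds a whole packet
(split_packet is a hypothetical helper for illustration, not the kernel
code):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy a packet payload into a sequence of receive buffers, each of
 * arbitrary size, filling each buffer before moving to the next.
 * Returns the number of buffers consumed; the caller checks that the
 * whole payload fit. */
static size_t split_packet(const char *pkt, size_t pkt_len,
			   char *bufs[], const size_t buf_lens[],
			   size_t nbufs)
{
	size_t off = 0, i = 0;

	while (off < pkt_len && i < nbufs) {
		size_t chunk = pkt_len - off;

		if (chunk > buf_lens[i])
			chunk = buf_lens[i]; /* fill this buffer, continue in the next */
		memcpy(bufs[i], pkt + off, chunk);
		off += chunk;
		i++;
	}
	return i;
}
```

With this approach the host no longer cares whether the guest posts 4K
buffers or smaller ones; a packet simply spans as many buffers as needed.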
> > >
> > > How do you suggest we proceed if we want to change the buffer size?
> > > Maybe adding a feature to "support any buffer size"?
> > >
> > > Thanks,
> > > Stefano
> >
> >
>
> --