From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefano Garzarella <sgarzare@redhat.com>
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
Stefan Hajnoczi <stefanha@redhat.com>,
"David S. Miller" <davem@davemloft.net>,
virtualization@lists.linux-foundation.org,
Jason Wang <jasowang@redhat.com>,
kvm@vger.kernel.org
Subject: Re: [PATCH v4 1/5] vsock/virtio: limit the memory used per-socket
Date: Mon, 29 Jul 2019 10:04:29 -0400
Message-ID: <20190729095956-mutt-send-email-mst@kernel.org>
In-Reply-To: <20190717113030.163499-2-sgarzare@redhat.com>
On Wed, Jul 17, 2019 at 01:30:26PM +0200, Stefano Garzarella wrote:
> Since virtio-vsock was introduced, the buffers filled by the host
> and pushed to the guest using the vring have been queued directly
> in a per-socket list. These buffers are preallocated by the guest
> with a fixed size (4 KB).
>
> The maximum amount of memory used by each socket should be
> controlled by the credit mechanism.
> The default credit available per-socket is 256 KB, but if the
> sender uses only 1 byte per packet, the guest can queue up to
> 262144 4 KB buffers, using up to 1 GB of memory per socket. In
> addition, the guest will continue to fill the vring with new 4 KB
> free buffers to avoid starvation of other sockets.
>
> This patch mitigates this issue by copying the payload of small
> packets (< 128 bytes) into the buffer of the last packet queued,
> in order to avoid wasting memory.
>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
This is good enough for net-next, but for net I think we
should figure out how to address the issue completely.
Can we make the accounting precise? What happens to
performance if we do?
> ---
> drivers/vhost/vsock.c | 2 +
> include/linux/virtio_vsock.h | 1 +
> net/vmw_vsock/virtio_transport.c | 1 +
> net/vmw_vsock/virtio_transport_common.c | 60 +++++++++++++++++++++----
> 4 files changed, 55 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 6a50e1d0529c..6c8390a2af52 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -329,6 +329,8 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
> return NULL;
> }
>
> + pkt->buf_len = pkt->len;
> +
> nbytes = copy_from_iter(pkt->buf, pkt->len, &iov_iter);
> if (nbytes != pkt->len) {
> vq_err(vq, "Expected %u byte payload, got %zu bytes\n",
> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
> index e223e2632edd..7d973903f52e 100644
> --- a/include/linux/virtio_vsock.h
> +++ b/include/linux/virtio_vsock.h
> @@ -52,6 +52,7 @@ struct virtio_vsock_pkt {
> /* socket refcnt not held, only use for cancellation */
> struct vsock_sock *vsk;
> void *buf;
> + u32 buf_len;
> u32 len;
> u32 off;
> bool reply;
> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
> index 0815d1357861..082a30936690 100644
> --- a/net/vmw_vsock/virtio_transport.c
> +++ b/net/vmw_vsock/virtio_transport.c
> @@ -307,6 +307,7 @@ static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
> break;
> }
>
> + pkt->buf_len = buf_len;
> pkt->len = buf_len;
>
> sg_init_one(&hdr, &pkt->hdr, sizeof(pkt->hdr));
> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
> index 6f1a8aff65c5..095221f94786 100644
> --- a/net/vmw_vsock/virtio_transport_common.c
> +++ b/net/vmw_vsock/virtio_transport_common.c
> @@ -26,6 +26,9 @@
> /* How long to wait for graceful shutdown of a connection */
> #define VSOCK_CLOSE_TIMEOUT (8 * HZ)
>
> +/* Threshold for detecting small packets to copy */
> +#define GOOD_COPY_LEN 128
> +
> static const struct virtio_transport *virtio_transport_get_ops(void)
> {
> const struct vsock_transport *t = vsock_core_get_transport();
> @@ -64,6 +67,9 @@ virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
> pkt->buf = kmalloc(len, GFP_KERNEL);
> if (!pkt->buf)
> goto out_pkt;
> +
> + pkt->buf_len = len;
> +
> err = memcpy_from_msg(pkt->buf, info->msg, len);
> if (err)
> goto out;
> @@ -841,24 +847,60 @@ virtio_transport_recv_connecting(struct sock *sk,
> return err;
> }
>
> +static void
> +virtio_transport_recv_enqueue(struct vsock_sock *vsk,
> + struct virtio_vsock_pkt *pkt)
> +{
> + struct virtio_vsock_sock *vvs = vsk->trans;
> + bool free_pkt = false;
> +
> + pkt->len = le32_to_cpu(pkt->hdr.len);
> + pkt->off = 0;
> +
> + spin_lock_bh(&vvs->rx_lock);
> +
> + virtio_transport_inc_rx_pkt(vvs, pkt);
> +
> + /* Try to copy small packets into the buffer of last packet queued,
> + * to avoid wasting memory queueing the entire buffer with a small
> + * payload.
> + */
> + if (pkt->len <= GOOD_COPY_LEN && !list_empty(&vvs->rx_queue)) {
> + struct virtio_vsock_pkt *last_pkt;
> +
> + last_pkt = list_last_entry(&vvs->rx_queue,
> + struct virtio_vsock_pkt, list);
> +
> + /* If there is space in the last packet queued, we copy the
> + * new packet in its buffer.
> + */
> + if (pkt->len <= last_pkt->buf_len - last_pkt->len) {
> + memcpy(last_pkt->buf + last_pkt->len, pkt->buf,
> + pkt->len);
> + last_pkt->len += pkt->len;
> + free_pkt = true;
> + goto out;
> + }
> + }
> +
> + list_add_tail(&pkt->list, &vvs->rx_queue);
> +
> +out:
> + spin_unlock_bh(&vvs->rx_lock);
> + if (free_pkt)
> + virtio_transport_free_pkt(pkt);
> +}
> +
> static int
> virtio_transport_recv_connected(struct sock *sk,
> struct virtio_vsock_pkt *pkt)
> {
> struct vsock_sock *vsk = vsock_sk(sk);
> - struct virtio_vsock_sock *vvs = vsk->trans;
> int err = 0;
>
> switch (le16_to_cpu(pkt->hdr.op)) {
> case VIRTIO_VSOCK_OP_RW:
> - pkt->len = le32_to_cpu(pkt->hdr.len);
> - pkt->off = 0;
> -
> - spin_lock_bh(&vvs->rx_lock);
> - virtio_transport_inc_rx_pkt(vvs, pkt);
> - list_add_tail(&pkt->list, &vvs->rx_queue);
> - spin_unlock_bh(&vvs->rx_lock);
> -
> + virtio_transport_recv_enqueue(vsk, pkt);
> sk->sk_data_ready(sk);
> return err;
> case VIRTIO_VSOCK_OP_CREDIT_UPDATE:
> --
> 2.20.1