* [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
@ 2019-07-30 15:43 Stefano Garzarella
  2019-07-30 15:43 ` [PATCH net-next v5 1/5] vsock/virtio: limit the memory used per-socket Stefano Garzarella
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Stefano Garzarella @ 2019-07-30 15:43 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, linux-kernel, Jason Wang, kvm,
	Michael S. Tsirkin, David S. Miller, Stefan Hajnoczi

This series tries to increase the throughput of virtio-vsock with small
changes.
While testing the v2 of this series I discovered a huge memory usage, so
I added patch 1 to mitigate this issue. I put it in this series in order
to better track the performance trends.

v5:
- rebased all patches on net-next
- added Stefan's R-b and Michael's A-b

v4: https://patchwork.kernel.org/cover/11047717
v3: https://patchwork.kernel.org/cover/10970145
v2: https://patchwork.kernel.org/cover/10938743
v1: https://patchwork.kernel.org/cover/10885431

Below are the benchmarks, step by step. I used iperf3 [1] modified with VSOCK
support. As Michael suggested in v1, I booted both host and guest with 'nosmap'.

A brief description of the patches:
- Patch 1:     limit the memory usage with an extra copy for small packets
- Patches 2+3: reduce the number of credit update messages sent to the
               transmitter
- Patches 4+5: allow the host to split packets across multiple buffers and use
               VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the maximum packet size allowed

                    host -> guest [Gbps]
pkt_size before opt   p 1     p 2+3    p 4+5

32         0.032     0.030    0.048    0.051
64         0.061     0.059    0.108    0.117
128        0.122     0.112    0.227    0.234
256        0.244     0.241    0.418    0.415
512        0.459     0.466    0.847    0.865
1K         0.927     0.919    1.657    1.641
2K         1.884     1.813    3.262    3.269
4K         3.378     3.326    6.044    6.195
8K         5.637     5.676   10.141   11.287
16K        8.250     8.402   15.976   16.736
32K       13.327    13.204   19.013   20.515
64K       21.241    21.341   20.973   21.879
128K      21.851    22.354   21.816   23.203
256K      21.408    21.693   21.846   24.088
512K      21.600    21.899   21.921   24.106

                    guest -> host [Gbps]
pkt_size before opt   p 1     p 2+3    p 4+5

32         0.045     0.046    0.057    0.057
64         0.089     0.091    0.103    0.104
128        0.170     0.179    0.192    0.200
256        0.364     0.351    0.361    0.379
512        0.709     0.699    0.731    0.790
1K         1.399     1.407    1.395    1.427
2K         2.670     2.684    2.745    2.835
4K         5.171     5.199    5.305    5.451
8K         8.442     8.500   10.083    9.941
16K       12.305    12.259   13.519   15.385
32K       11.418    11.150   11.988   24.680
64K       10.778    10.659   11.589   35.273
128K      10.421    10.339   10.939   40.338
256K      10.300     9.719   10.508   36.562
512K       9.833     9.808   10.612   35.979

As Stefan suggested in v1, I also measured the efficiency in this way:
    efficiency = Mbps / (%CPU_Host + %CPU_Guest)

The '%CPU_Guest' is measured inside the VM. I know that this is not the most
accurate way, but it comes for free with iperf3 and can serve as an indication.
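
As a purely illustrative example (the CPU figures here are hypothetical,
just to show how the metric is computed): a run at 20 Gbps (20000 Mbps)
with the host at 30% CPU and the guest at 15% CPU would give:

    efficiency = 20000 / (30 + 15) ~= 444 Mbps per percentage point of CPU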

        host -> guest efficiency [Mbps / (%CPU_Host + %CPU_Guest)]
pkt_size before opt   p 1     p 2+3    p 4+5

32         0.35      0.45     0.79     1.02
64         0.56      0.80     1.41     1.54
128        1.11      1.52     3.03     3.12
256        2.20      2.16     5.44     5.58
512        4.17      4.18    10.96    11.46
1K         8.30      8.26    20.99    20.89
2K        16.82     16.31    39.76    39.73
4K        30.89     30.79    74.07    75.73
8K        53.74     54.49   124.24   148.91
16K       80.68     83.63   200.21   232.79
32K      132.27    132.52   260.81   357.07
64K      229.82    230.40   300.19   444.18
128K     332.60    329.78   331.51   492.28
256K     331.06    337.22   339.59   511.59
512K     335.58    328.50   331.56   504.56

        guest -> host efficiency [Mbps / (%CPU_Host + %CPU_Guest)]
pkt_size before opt   p 1     p 2+3    p 4+5

32         0.43      0.43     0.53     0.56
64         0.85      0.86     1.04     1.10
128        1.63      1.71     2.07     2.13
256        3.48      3.35     4.02     4.22
512        6.80      6.67     7.97     8.63
1K        13.32     13.31    15.72    15.94
2K        25.79     25.92    30.84    30.98
4K        50.37     50.48    58.79    59.69
8K        95.90     96.15   107.04   110.33
16K      145.80    145.43   143.97   174.70
32K      147.06    144.74   146.02   282.48
64K      145.25    143.99   141.62   406.40
128K     149.34    146.96   147.49   489.34
256K     156.35    149.81   152.21   536.37
512K     151.65    150.74   151.52   519.93

[1] https://github.com/stefano-garzarella/iperf/

Stefano Garzarella (5):
  vsock/virtio: limit the memory used per-socket
  vsock/virtio: reduce credit update messages
  vsock/virtio: fix locking in virtio_transport_inc_tx_pkt()
  vhost/vsock: split packets to send using multiple buffers
  vsock/virtio: change the maximum packet size allowed

 drivers/vhost/vsock.c                   | 68 ++++++++++++-----
 include/linux/virtio_vsock.h            |  4 +-
 net/vmw_vsock/virtio_transport.c        |  1 +
 net/vmw_vsock/virtio_transport_common.c | 99 ++++++++++++++++++++-----
 4 files changed, 134 insertions(+), 38 deletions(-)

-- 
2.20.1



* [PATCH net-next v5 1/5] vsock/virtio: limit the memory used per-socket
  2019-07-30 15:43 [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput Stefano Garzarella
@ 2019-07-30 15:43 ` Stefano Garzarella
  2019-07-30 15:43 ` [PATCH net-next v5 2/5] vsock/virtio: reduce credit update messages Stefano Garzarella
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Stefano Garzarella @ 2019-07-30 15:43 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, linux-kernel, Jason Wang, kvm,
	Michael S. Tsirkin, David S. Miller, Stefan Hajnoczi

Since virtio-vsock was introduced, the buffers filled by the host
and pushed to the guest using the vring are queued directly in a
per-socket list. These buffers are preallocated by the guest with
a fixed size (4 KB).

The maximum amount of memory used by each socket should be
controlled by the credit mechanism.
The default credit available per socket is 256 KB, but if we use
only 1 byte of payload per packet, the guest can queue up to 262144
4 KB buffers, using up to 1 GB of memory per socket. In addition,
the guest will continue to fill the vring with new free 4 KB buffers
to avoid starving other sockets.
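
In other words, with the numbers above:

    256 KB of credit / 1 byte per packet = 262144 queued packets
    262144 packets * 4 KB buffer each    = 1 GB of memory per socket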

This patch mitigates the issue by copying the payload of small
packets (< 128 bytes) into the buffer of the last packet queued,
in order to avoid wasting memory.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
 drivers/vhost/vsock.c                   |  2 +
 include/linux/virtio_vsock.h            |  1 +
 net/vmw_vsock/virtio_transport.c        |  1 +
 net/vmw_vsock/virtio_transport_common.c | 60 +++++++++++++++++++++----
 4 files changed, 55 insertions(+), 9 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 6a50e1d0529c..6c8390a2af52 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -329,6 +329,8 @@ vhost_vsock_alloc_pkt(struct vhost_virtqueue *vq,
 		return NULL;
 	}
 
+	pkt->buf_len = pkt->len;
+
 	nbytes = copy_from_iter(pkt->buf, pkt->len, &iov_iter);
 	if (nbytes != pkt->len) {
 		vq_err(vq, "Expected %u byte payload, got %zu bytes\n",
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index e223e2632edd..7d973903f52e 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -52,6 +52,7 @@ struct virtio_vsock_pkt {
 	/* socket refcnt not held, only use for cancellation */
 	struct vsock_sock *vsk;
 	void *buf;
+	u32 buf_len;
 	u32 len;
 	u32 off;
 	bool reply;
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index 0815d1357861..082a30936690 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -307,6 +307,7 @@ static void virtio_vsock_rx_fill(struct virtio_vsock *vsock)
 			break;
 		}
 
+		pkt->buf_len = buf_len;
 		pkt->len = buf_len;
 
 		sg_init_one(&hdr, &pkt->hdr, sizeof(pkt->hdr));
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 6f1a8aff65c5..095221f94786 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -26,6 +26,9 @@
 /* How long to wait for graceful shutdown of a connection */
 #define VSOCK_CLOSE_TIMEOUT (8 * HZ)
 
+/* Threshold for detecting small packets to copy */
+#define GOOD_COPY_LEN  128
+
 static const struct virtio_transport *virtio_transport_get_ops(void)
 {
 	const struct vsock_transport *t = vsock_core_get_transport();
@@ -64,6 +67,9 @@ virtio_transport_alloc_pkt(struct virtio_vsock_pkt_info *info,
 		pkt->buf = kmalloc(len, GFP_KERNEL);
 		if (!pkt->buf)
 			goto out_pkt;
+
+		pkt->buf_len = len;
+
 		err = memcpy_from_msg(pkt->buf, info->msg, len);
 		if (err)
 			goto out;
@@ -841,24 +847,60 @@ virtio_transport_recv_connecting(struct sock *sk,
 	return err;
 }
 
+static void
+virtio_transport_recv_enqueue(struct vsock_sock *vsk,
+			      struct virtio_vsock_pkt *pkt)
+{
+	struct virtio_vsock_sock *vvs = vsk->trans;
+	bool free_pkt = false;
+
+	pkt->len = le32_to_cpu(pkt->hdr.len);
+	pkt->off = 0;
+
+	spin_lock_bh(&vvs->rx_lock);
+
+	virtio_transport_inc_rx_pkt(vvs, pkt);
+
+	/* Try to copy small packets into the buffer of last packet queued,
+	 * to avoid wasting memory queueing the entire buffer with a small
+	 * payload.
+	 */
+	if (pkt->len <= GOOD_COPY_LEN && !list_empty(&vvs->rx_queue)) {
+		struct virtio_vsock_pkt *last_pkt;
+
+		last_pkt = list_last_entry(&vvs->rx_queue,
+					   struct virtio_vsock_pkt, list);
+
+		/* If there is space in the last packet queued, we copy the
+		 * new packet in its buffer.
+		 */
+		if (pkt->len <= last_pkt->buf_len - last_pkt->len) {
+			memcpy(last_pkt->buf + last_pkt->len, pkt->buf,
+			       pkt->len);
+			last_pkt->len += pkt->len;
+			free_pkt = true;
+			goto out;
+		}
+	}
+
+	list_add_tail(&pkt->list, &vvs->rx_queue);
+
+out:
+	spin_unlock_bh(&vvs->rx_lock);
+	if (free_pkt)
+		virtio_transport_free_pkt(pkt);
+}
+
 static int
 virtio_transport_recv_connected(struct sock *sk,
 				struct virtio_vsock_pkt *pkt)
 {
 	struct vsock_sock *vsk = vsock_sk(sk);
-	struct virtio_vsock_sock *vvs = vsk->trans;
 	int err = 0;
 
 	switch (le16_to_cpu(pkt->hdr.op)) {
 	case VIRTIO_VSOCK_OP_RW:
-		pkt->len = le32_to_cpu(pkt->hdr.len);
-		pkt->off = 0;
-
-		spin_lock_bh(&vvs->rx_lock);
-		virtio_transport_inc_rx_pkt(vvs, pkt);
-		list_add_tail(&pkt->list, &vvs->rx_queue);
-		spin_unlock_bh(&vvs->rx_lock);
-
+		virtio_transport_recv_enqueue(vsk, pkt);
 		sk->sk_data_ready(sk);
 		return err;
 	case VIRTIO_VSOCK_OP_CREDIT_UPDATE:
-- 
2.20.1



* [PATCH net-next v5 2/5] vsock/virtio: reduce credit update messages
  2019-07-30 15:43 [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput Stefano Garzarella
  2019-07-30 15:43 ` [PATCH net-next v5 1/5] vsock/virtio: limit the memory used per-socket Stefano Garzarella
@ 2019-07-30 15:43 ` Stefano Garzarella
  2019-07-30 15:43 ` [PATCH net-next v5 3/5] vsock/virtio: fix locking in virtio_transport_inc_tx_pkt() Stefano Garzarella
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Stefano Garzarella @ 2019-07-30 15:43 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, linux-kernel, Jason Wang, kvm,
	Michael S. Tsirkin, David S. Miller, Stefan Hajnoczi

In order to reduce the number of credit update messages,
we send them only when the space available, as seen by the
transmitter, is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/virtio_vsock.h            |  1 +
 net/vmw_vsock/virtio_transport_common.c | 16 +++++++++++++---
 2 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 7d973903f52e..49fc9d20bc43 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -41,6 +41,7 @@ struct virtio_vsock_sock {
 
 	/* Protected by rx_lock */
 	u32 fwd_cnt;
+	u32 last_fwd_cnt;
 	u32 rx_bytes;
 	struct list_head rx_queue;
 };
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 095221f94786..a85559d4d974 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -211,6 +211,7 @@ static void virtio_transport_dec_rx_pkt(struct virtio_vsock_sock *vvs,
 void virtio_transport_inc_tx_pkt(struct virtio_vsock_sock *vvs, struct virtio_vsock_pkt *pkt)
 {
 	spin_lock_bh(&vvs->tx_lock);
+	vvs->last_fwd_cnt = vvs->fwd_cnt;
 	pkt->hdr.fwd_cnt = cpu_to_le32(vvs->fwd_cnt);
 	pkt->hdr.buf_alloc = cpu_to_le32(vvs->buf_alloc);
 	spin_unlock_bh(&vvs->tx_lock);
@@ -261,6 +262,7 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
 	struct virtio_vsock_sock *vvs = vsk->trans;
 	struct virtio_vsock_pkt *pkt;
 	size_t bytes, total = 0;
+	u32 free_space;
 	int err = -EFAULT;
 
 	spin_lock_bh(&vvs->rx_lock);
@@ -291,11 +293,19 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
 			virtio_transport_free_pkt(pkt);
 		}
 	}
+
+	free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);
+
 	spin_unlock_bh(&vvs->rx_lock);
 
-	/* Send a credit pkt to peer */
-	virtio_transport_send_credit_update(vsk, VIRTIO_VSOCK_TYPE_STREAM,
-					    NULL);
+	/* We send a credit update only when the space available seen
+	 * by the transmitter is less than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE
+	 */
+	if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE) {
+		virtio_transport_send_credit_update(vsk,
+						    VIRTIO_VSOCK_TYPE_STREAM,
+						    NULL);
+	}
 
 	return total;
 
-- 
2.20.1



* [PATCH net-next v5 3/5] vsock/virtio: fix locking in virtio_transport_inc_tx_pkt()
  2019-07-30 15:43 [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput Stefano Garzarella
  2019-07-30 15:43 ` [PATCH net-next v5 1/5] vsock/virtio: limit the memory used per-socket Stefano Garzarella
  2019-07-30 15:43 ` [PATCH net-next v5 2/5] vsock/virtio: reduce credit update messages Stefano Garzarella
@ 2019-07-30 15:43 ` Stefano Garzarella
  2019-07-30 15:43 ` [PATCH net-next v5 4/5] vhost/vsock: split packets to send using multiple buffers Stefano Garzarella
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Stefano Garzarella @ 2019-07-30 15:43 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, linux-kernel, Jason Wang, kvm,
	Michael S. Tsirkin, David S. Miller, Stefan Hajnoczi

fwd_cnt and last_fwd_cnt are protected by rx_lock, so we should use
the same spinlock also when we are in the TX path.

Also move buf_alloc under the same lock.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
 include/linux/virtio_vsock.h            | 2 +-
 net/vmw_vsock/virtio_transport_common.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index 49fc9d20bc43..4c7781f4b29b 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -35,7 +35,6 @@ struct virtio_vsock_sock {
 
 	/* Protected by tx_lock */
 	u32 tx_cnt;
-	u32 buf_alloc;
 	u32 peer_fwd_cnt;
 	u32 peer_buf_alloc;
 
@@ -43,6 +42,7 @@ struct virtio_vsock_sock {
 	u32 fwd_cnt;
 	u32 last_fwd_cnt;
 	u32 rx_bytes;
+	u32 buf_alloc;
 	struct list_head rx_queue;
 };
 
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index a85559d4d974..34a2b42313b7 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -210,11 +210,11 @@ static void virtio_transport_dec_rx_pkt(struct virtio_vsock_sock *vvs,
 
 void virtio_transport_inc_tx_pkt(struct virtio_vsock_sock *vvs, struct virtio_vsock_pkt *pkt)
 {
-	spin_lock_bh(&vvs->tx_lock);
+	spin_lock_bh(&vvs->rx_lock);
 	vvs->last_fwd_cnt = vvs->fwd_cnt;
 	pkt->hdr.fwd_cnt = cpu_to_le32(vvs->fwd_cnt);
 	pkt->hdr.buf_alloc = cpu_to_le32(vvs->buf_alloc);
-	spin_unlock_bh(&vvs->tx_lock);
+	spin_unlock_bh(&vvs->rx_lock);
 }
 EXPORT_SYMBOL_GPL(virtio_transport_inc_tx_pkt);
 
-- 
2.20.1



* [PATCH net-next v5 4/5] vhost/vsock: split packets to send using multiple buffers
  2019-07-30 15:43 [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput Stefano Garzarella
                   ` (2 preceding siblings ...)
  2019-07-30 15:43 ` [PATCH net-next v5 3/5] vsock/virtio: fix locking in virtio_transport_inc_tx_pkt() Stefano Garzarella
@ 2019-07-30 15:43 ` Stefano Garzarella
  2019-11-18 11:50   ` Stefano Garzarella
  2019-07-30 15:43 ` [PATCH net-next v5 5/5] vsock/virtio: change the maximum packet size allowed Stefano Garzarella
  2019-07-30 15:54 ` [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput Michael S. Tsirkin
  5 siblings, 1 reply; 10+ messages in thread
From: Stefano Garzarella @ 2019-07-30 15:43 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, linux-kernel, Jason Wang, kvm,
	Michael S. Tsirkin, David S. Miller, Stefan Hajnoczi

If the packets to send to the guest are bigger than the available
buffer, we can split them across multiple buffers, fixing the
length in the packet header accordingly.
This is safe since virtio-vsock supports only stream sockets.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
 drivers/vhost/vsock.c                   | 66 ++++++++++++++++++-------
 net/vmw_vsock/virtio_transport_common.c | 15 ++++--
 2 files changed, 60 insertions(+), 21 deletions(-)

diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 6c8390a2af52..9f57736fe15e 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -102,7 +102,7 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 		struct iov_iter iov_iter;
 		unsigned out, in;
 		size_t nbytes;
-		size_t len;
+		size_t iov_len, payload_len;
 		int head;
 
 		spin_lock_bh(&vsock->send_pkt_list_lock);
@@ -147,8 +147,24 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 			break;
 		}
 
-		len = iov_length(&vq->iov[out], in);
-		iov_iter_init(&iov_iter, READ, &vq->iov[out], in, len);
+		iov_len = iov_length(&vq->iov[out], in);
+		if (iov_len < sizeof(pkt->hdr)) {
+			virtio_transport_free_pkt(pkt);
+			vq_err(vq, "Buffer len [%zu] too small\n", iov_len);
+			break;
+		}
+
+		iov_iter_init(&iov_iter, READ, &vq->iov[out], in, iov_len);
+		payload_len = pkt->len - pkt->off;
+
+		/* If the packet is greater than the space available in the
+		 * buffer, we split it using multiple buffers.
+		 */
+		if (payload_len > iov_len - sizeof(pkt->hdr))
+			payload_len = iov_len - sizeof(pkt->hdr);
+
+		/* Set the correct length in the header */
+		pkt->hdr.len = cpu_to_le32(payload_len);
 
 		nbytes = copy_to_iter(&pkt->hdr, sizeof(pkt->hdr), &iov_iter);
 		if (nbytes != sizeof(pkt->hdr)) {
@@ -157,33 +173,47 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
 			break;
 		}
 
-		nbytes = copy_to_iter(pkt->buf, pkt->len, &iov_iter);
-		if (nbytes != pkt->len) {
+		nbytes = copy_to_iter(pkt->buf + pkt->off, payload_len,
+				      &iov_iter);
+		if (nbytes != payload_len) {
 			virtio_transport_free_pkt(pkt);
 			vq_err(vq, "Faulted on copying pkt buf\n");
 			break;
 		}
 
-		vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
+		vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
 		added = true;
 
-		if (pkt->reply) {
-			int val;
-
-			val = atomic_dec_return(&vsock->queued_replies);
-
-			/* Do we have resources to resume tx processing? */
-			if (val + 1 == tx_vq->num)
-				restart_tx = true;
-		}
-
 		/* Deliver to monitoring devices all correctly transmitted
 		 * packets.
 		 */
 		virtio_transport_deliver_tap_pkt(pkt);
 
-		total_len += pkt->len;
-		virtio_transport_free_pkt(pkt);
+		pkt->off += payload_len;
+		total_len += payload_len;
+
+		/* If we didn't send all the payload we can requeue the packet
+		 * to send it with the next available buffer.
+		 */
+		if (pkt->off < pkt->len) {
+			spin_lock_bh(&vsock->send_pkt_list_lock);
+			list_add(&pkt->list, &vsock->send_pkt_list);
+			spin_unlock_bh(&vsock->send_pkt_list_lock);
+		} else {
+			if (pkt->reply) {
+				int val;
+
+				val = atomic_dec_return(&vsock->queued_replies);
+
+				/* Do we have resources to resume tx
+				 * processing?
+				 */
+				if (val + 1 == tx_vq->num)
+					restart_tx = true;
+			}
+
+			virtio_transport_free_pkt(pkt);
+		}
 	} while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
 	if (added)
 		vhost_signal(&vsock->dev, vq);
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 34a2b42313b7..56fab3f03d0e 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -97,8 +97,17 @@ static struct sk_buff *virtio_transport_build_skb(void *opaque)
 	struct virtio_vsock_pkt *pkt = opaque;
 	struct af_vsockmon_hdr *hdr;
 	struct sk_buff *skb;
+	size_t payload_len;
+	void *payload_buf;
 
-	skb = alloc_skb(sizeof(*hdr) + sizeof(pkt->hdr) + pkt->len,
+	/* A packet could be split to fit the RX buffer, so we can retrieve
+	 * the payload length from the header and the buffer pointer taking
+	 * care of the offset in the original packet.
+	 */
+	payload_len = le32_to_cpu(pkt->hdr.len);
+	payload_buf = pkt->buf + pkt->off;
+
+	skb = alloc_skb(sizeof(*hdr) + sizeof(pkt->hdr) + payload_len,
 			GFP_ATOMIC);
 	if (!skb)
 		return NULL;
@@ -138,8 +147,8 @@ static struct sk_buff *virtio_transport_build_skb(void *opaque)
 
 	skb_put_data(skb, &pkt->hdr, sizeof(pkt->hdr));
 
-	if (pkt->len) {
-		skb_put_data(skb, pkt->buf, pkt->len);
+	if (payload_len) {
+		skb_put_data(skb, payload_buf, payload_len);
 	}
 
 	return skb;
-- 
2.20.1



* [PATCH net-next v5 5/5] vsock/virtio: change the maximum packet size allowed
  2019-07-30 15:43 [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput Stefano Garzarella
                   ` (3 preceding siblings ...)
  2019-07-30 15:43 ` [PATCH net-next v5 4/5] vhost/vsock: split packets to send using multiple buffers Stefano Garzarella
@ 2019-07-30 15:43 ` Stefano Garzarella
  2019-07-30 15:54 ` [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput Michael S. Tsirkin
  5 siblings, 0 replies; 10+ messages in thread
From: Stefano Garzarella @ 2019-07-30 15:43 UTC (permalink / raw)
  To: netdev
  Cc: virtualization, linux-kernel, Jason Wang, kvm,
	Michael S. Tsirkin, David S. Miller, Stefan Hajnoczi

Now that we are able to split packets, we can avoid limiting
their size to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE.
Instead, we can use VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the maximum
packet size.

Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
 net/vmw_vsock/virtio_transport_common.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 56fab3f03d0e..94cc0fa3e848 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -181,8 +181,8 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
 	vvs = vsk->trans;
 
 	/* we can send less than pkt_len bytes */
-	if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
-		pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;
+	if (pkt_len > VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
+		pkt_len = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;
 
 	/* virtio_transport_get_credit might return less than pkt_len credit */
 	pkt_len = virtio_transport_get_credit(vvs, pkt_len);
-- 
2.20.1



* Re: [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
  2019-07-30 15:43 [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput Stefano Garzarella
                   ` (4 preceding siblings ...)
  2019-07-30 15:43 ` [PATCH net-next v5 5/5] vsock/virtio: change the maximum packet size allowed Stefano Garzarella
@ 2019-07-30 15:54 ` Michael S. Tsirkin
  2019-07-30 15:55   ` Michael S. Tsirkin
  5 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2019-07-30 15:54 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: netdev, virtualization, linux-kernel, Jason Wang, kvm,
	David S. Miller, Stefan Hajnoczi

On Tue, Jul 30, 2019 at 05:43:29PM +0200, Stefano Garzarella wrote:
> This series tries to increase the throughput of virtio-vsock with slight
> changes.
> While I was testing the v2 of this series I discovered an huge use of memory,
> so I added patch 1 to mitigate this issue. I put it in this series in order
> to better track the performance trends.
> 
> v5:
> - rebased all patches on net-next
> - added Stefan's R-b and Michael's A-b

This doesn't solve all issues around allocation - as I mentioned, I think
we will need to improve accounting for that, and maybe add pre-allocation.
But it's a great series of steps in the right direction!



> v4: https://patchwork.kernel.org/cover/11047717
> v3: https://patchwork.kernel.org/cover/10970145
> v2: https://patchwork.kernel.org/cover/10938743
> v1: https://patchwork.kernel.org/cover/10885431
> 
> Below are the benchmarks step by step. I used iperf3 [1] modified with VSOCK
> support. As Michael suggested in the v1, I booted host and guest with 'nosmap'.
> 
> A brief description of patches:
> - Patches 1:   limit the memory usage with an extra copy for small packets
> - Patches 2+3: reduce the number of credit update messages sent to the
>                transmitter
> - Patches 4+5: allow the host to split packets on multiple buffers and use
>                VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed
> 
>                     host -> guest [Gbps]
> pkt_size before opt   p 1     p 2+3    p 4+5
> 
> 32         0.032     0.030    0.048    0.051
> 64         0.061     0.059    0.108    0.117
> 128        0.122     0.112    0.227    0.234
> 256        0.244     0.241    0.418    0.415
> 512        0.459     0.466    0.847    0.865
> 1K         0.927     0.919    1.657    1.641
> 2K         1.884     1.813    3.262    3.269
> 4K         3.378     3.326    6.044    6.195
> 8K         5.637     5.676   10.141   11.287
> 16K        8.250     8.402   15.976   16.736
> 32K       13.327    13.204   19.013   20.515
> 64K       21.241    21.341   20.973   21.879
> 128K      21.851    22.354   21.816   23.203
> 256K      21.408    21.693   21.846   24.088
> 512K      21.600    21.899   21.921   24.106
> 
>                     guest -> host [Gbps]
> pkt_size before opt   p 1     p 2+3    p 4+5
> 
> 32         0.045     0.046    0.057    0.057
> 64         0.089     0.091    0.103    0.104
> 128        0.170     0.179    0.192    0.200
> 256        0.364     0.351    0.361    0.379
> 512        0.709     0.699    0.731    0.790
> 1K         1.399     1.407    1.395    1.427
> 2K         2.670     2.684    2.745    2.835
> 4K         5.171     5.199    5.305    5.451
> 8K         8.442     8.500   10.083    9.941
> 16K       12.305    12.259   13.519   15.385
> 32K       11.418    11.150   11.988   24.680
> 64K       10.778    10.659   11.589   35.273
> 128K      10.421    10.339   10.939   40.338
> 256K      10.300     9.719   10.508   36.562
> 512K       9.833     9.808   10.612   35.979
> 
> As Stefan suggested in the v1, I measured also the efficiency in this way:
>     efficiency = Mbps / (%CPU_Host + %CPU_Guest)
> 
> The '%CPU_Guest' is taken inside the VM. I know that it is not the best way,
> but it's provided for free from iperf3 and could be an indication.
> 
>         host -> guest efficiency [Mbps / (%CPU_Host + %CPU_Guest)]
> pkt_size before opt   p 1     p 2+3    p 4+5
> 
> 32         0.35      0.45     0.79     1.02
> 64         0.56      0.80     1.41     1.54
> 128        1.11      1.52     3.03     3.12
> 256        2.20      2.16     5.44     5.58
> 512        4.17      4.18    10.96    11.46
> 1K         8.30      8.26    20.99    20.89
> 2K        16.82     16.31    39.76    39.73
> 4K        30.89     30.79    74.07    75.73
> 8K        53.74     54.49   124.24   148.91
> 16K       80.68     83.63   200.21   232.79
> 32K      132.27    132.52   260.81   357.07
> 64K      229.82    230.40   300.19   444.18
> 128K     332.60    329.78   331.51   492.28
> 256K     331.06    337.22   339.59   511.59
> 512K     335.58    328.50   331.56   504.56
> 
>         guest -> host efficiency [Mbps / (%CPU_Host + %CPU_Guest)]
> pkt_size before opt   p 1     p 2+3    p 4+5
> 
> 32         0.43      0.43     0.53     0.56
> 64         0.85      0.86     1.04     1.10
> 128        1.63      1.71     2.07     2.13
> 256        3.48      3.35     4.02     4.22
> 512        6.80      6.67     7.97     8.63
> 1K        13.32     13.31    15.72    15.94
> 2K        25.79     25.92    30.84    30.98
> 4K        50.37     50.48    58.79    59.69
> 8K        95.90     96.15   107.04   110.33
> 16K      145.80    145.43   143.97   174.70
> 32K      147.06    144.74   146.02   282.48
> 64K      145.25    143.99   141.62   406.40
> 128K     149.34    146.96   147.49   489.34
> 256K     156.35    149.81   152.21   536.37
> 512K     151.65    150.74   151.52   519.93
> 
> [1] https://github.com/stefano-garzarella/iperf/
> 
> Stefano Garzarella (5):
>   vsock/virtio: limit the memory used per-socket
>   vsock/virtio: reduce credit update messages
>   vsock/virtio: fix locking in virtio_transport_inc_tx_pkt()
>   vhost/vsock: split packets to send using multiple buffers
>   vsock/virtio: change the maximum packet size allowed
> 
>  drivers/vhost/vsock.c                   | 68 ++++++++++++-----
>  include/linux/virtio_vsock.h            |  4 +-
>  net/vmw_vsock/virtio_transport.c        |  1 +
>  net/vmw_vsock/virtio_transport_common.c | 99 ++++++++++++++++++++-----
>  4 files changed, 134 insertions(+), 38 deletions(-)
> 
> -- 
> 2.20.1


* Re: [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
  2019-07-30 15:54 ` [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput Michael S. Tsirkin
@ 2019-07-30 15:55   ` Michael S. Tsirkin
  2019-07-30 16:02     ` Stefano Garzarella
  0 siblings, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2019-07-30 15:55 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: netdev, virtualization, linux-kernel, Jason Wang, kvm,
	David S. Miller, Stefan Hajnoczi

On Tue, Jul 30, 2019 at 11:54:53AM -0400, Michael S. Tsirkin wrote:
> On Tue, Jul 30, 2019 at 05:43:29PM +0200, Stefano Garzarella wrote:
> > This series tries to increase the throughput of virtio-vsock with slight
> > changes.
> > While I was testing the v2 of this series I discovered an huge use of memory,
> > so I added patch 1 to mitigate this issue. I put it in this series in order
> > to better track the performance trends.
> > 
> > v5:
> > - rebased all patches on net-next
> > - added Stefan's R-b and Michael's A-b
> 
> This doesn't solve all issues around allocation - as I mentioned I think
> we will need to improve accounting for that,
> and maybe add pre-allocation.
> But it's a great series of steps in the right direction!
> 


So

Acked-by: Michael S. Tsirkin <mst@redhat.com>

> 
> > v4: https://patchwork.kernel.org/cover/11047717
> > v3: https://patchwork.kernel.org/cover/10970145
> > v2: https://patchwork.kernel.org/cover/10938743
> > v1: https://patchwork.kernel.org/cover/10885431
> > 
> > Below are the benchmarks step by step. I used iperf3 [1] modified with VSOCK
> > support. As Michael suggested in the v1, I booted host and guest with 'nosmap'.
> > 
> > A brief description of patches:
> > - Patches 1:   limit the memory usage with an extra copy for small packets
> > - Patches 2+3: reduce the number of credit update messages sent to the
> >                transmitter
> > - Patches 4+5: allow the host to split packets on multiple buffers and use
> >                VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed
> > 
> >                     host -> guest [Gbps]
> > pkt_size before opt   p 1     p 2+3    p 4+5
> > 
> > 32         0.032     0.030    0.048    0.051
> > 64         0.061     0.059    0.108    0.117
> > 128        0.122     0.112    0.227    0.234
> > 256        0.244     0.241    0.418    0.415
> > 512        0.459     0.466    0.847    0.865
> > 1K         0.927     0.919    1.657    1.641
> > 2K         1.884     1.813    3.262    3.269
> > 4K         3.378     3.326    6.044    6.195
> > 8K         5.637     5.676   10.141   11.287
> > 16K        8.250     8.402   15.976   16.736
> > 32K       13.327    13.204   19.013   20.515
> > 64K       21.241    21.341   20.973   21.879
> > 128K      21.851    22.354   21.816   23.203
> > 256K      21.408    21.693   21.846   24.088
> > 512K      21.600    21.899   21.921   24.106
> > 
> >                     guest -> host [Gbps]
> > pkt_size before opt   p 1     p 2+3    p 4+5
> > 
> > 32         0.045     0.046    0.057    0.057
> > 64         0.089     0.091    0.103    0.104
> > 128        0.170     0.179    0.192    0.200
> > 256        0.364     0.351    0.361    0.379
> > 512        0.709     0.699    0.731    0.790
> > 1K         1.399     1.407    1.395    1.427
> > 2K         2.670     2.684    2.745    2.835
> > 4K         5.171     5.199    5.305    5.451
> > 8K         8.442     8.500   10.083    9.941
> > 16K       12.305    12.259   13.519   15.385
> > 32K       11.418    11.150   11.988   24.680
> > 64K       10.778    10.659   11.589   35.273
> > 128K      10.421    10.339   10.939   40.338
> > 256K      10.300     9.719   10.508   36.562
> > 512K       9.833     9.808   10.612   35.979
> > 
> > As Stefan suggested in the v1, I measured also the efficiency in this way:
> >     efficiency = Mbps / (%CPU_Host + %CPU_Guest)
> > 
> > The '%CPU_Guest' is taken inside the VM. I know that it is not the best way,
> > but it's provided for free from iperf3 and could be an indication.
> > 
> >         host -> guest efficiency [Mbps / (%CPU_Host + %CPU_Guest)]
> > pkt_size before opt   p 1     p 2+3    p 4+5
> > 
> > 32         0.35      0.45     0.79     1.02
> > 64         0.56      0.80     1.41     1.54
> > 128        1.11      1.52     3.03     3.12
> > 256        2.20      2.16     5.44     5.58
> > 512        4.17      4.18    10.96    11.46
> > 1K         8.30      8.26    20.99    20.89
> > 2K        16.82     16.31    39.76    39.73
> > 4K        30.89     30.79    74.07    75.73
> > 8K        53.74     54.49   124.24   148.91
> > 16K       80.68     83.63   200.21   232.79
> > 32K      132.27    132.52   260.81   357.07
> > 64K      229.82    230.40   300.19   444.18
> > 128K     332.60    329.78   331.51   492.28
> > 256K     331.06    337.22   339.59   511.59
> > 512K     335.58    328.50   331.56   504.56
> > 
> >         guest -> host efficiency [Mbps / (%CPU_Host + %CPU_Guest)]
> > pkt_size before opt   p 1     p 2+3    p 4+5
> > 
> > 32         0.43      0.43     0.53     0.56
> > 64         0.85      0.86     1.04     1.10
> > 128        1.63      1.71     2.07     2.13
> > 256        3.48      3.35     4.02     4.22
> > 512        6.80      6.67     7.97     8.63
> > 1K        13.32     13.31    15.72    15.94
> > 2K        25.79     25.92    30.84    30.98
> > 4K        50.37     50.48    58.79    59.69
> > 8K        95.90     96.15   107.04   110.33
> > 16K      145.80    145.43   143.97   174.70
> > 32K      147.06    144.74   146.02   282.48
> > 64K      145.25    143.99   141.62   406.40
> > 128K     149.34    146.96   147.49   489.34
> > 256K     156.35    149.81   152.21   536.37
> > 512K     151.65    150.74   151.52   519.93
> > 
> > [1] https://github.com/stefano-garzarella/iperf/
> > 
> > Stefano Garzarella (5):
> >   vsock/virtio: limit the memory used per-socket
> >   vsock/virtio: reduce credit update messages
> >   vsock/virtio: fix locking in virtio_transport_inc_tx_pkt()
> >   vhost/vsock: split packets to send using multiple buffers
> >   vsock/virtio: change the maximum packet size allowed
> > 
> >  drivers/vhost/vsock.c                   | 68 ++++++++++++-----
> >  include/linux/virtio_vsock.h            |  4 +-
> >  net/vmw_vsock/virtio_transport.c        |  1 +
> >  net/vmw_vsock/virtio_transport_common.c | 99 ++++++++++++++++++++-----
> >  4 files changed, 134 insertions(+), 38 deletions(-)
> > 
> > -- 
> > 2.20.1


* Re: [PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
  2019-07-30 15:55   ` Michael S. Tsirkin
@ 2019-07-30 16:02     ` Stefano Garzarella
  0 siblings, 0 replies; 10+ messages in thread
From: Stefano Garzarella @ 2019-07-30 16:02 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: netdev, virtualization, linux-kernel, Jason Wang, kvm,
	David S. Miller, Stefan Hajnoczi

On Tue, Jul 30, 2019 at 11:55:09AM -0400, Michael S. Tsirkin wrote:
> On Tue, Jul 30, 2019 at 11:54:53AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Jul 30, 2019 at 05:43:29PM +0200, Stefano Garzarella wrote:
> > > This series tries to increase the throughput of virtio-vsock with slight
> > > changes.
> > > While I was testing the v2 of this series I discovered an huge use of memory,
> > > so I added patch 1 to mitigate this issue. I put it in this series in order
> > > to better track the performance trends.
> > > 
> > > v5:
> > > - rebased all patches on net-next
> > > - added Stefan's R-b and Michael's A-b
> > 
> > This doesn't solve all issues around allocation - as I mentioned I think
> > we will need to improve accounting for that,
> > and maybe add pre-allocation.

Yes, I'll work on it following your suggestions.

> > But it's a great series of steps in the right direction!
> > 

Thank you very much :)
Stefano

> 
> 
> So
> 
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> 
> > 
> > > v4: https://patchwork.kernel.org/cover/11047717
> > > v3: https://patchwork.kernel.org/cover/10970145
> > > v2: https://patchwork.kernel.org/cover/10938743
> > > v1: https://patchwork.kernel.org/cover/10885431
> > > 
> > > Below are the benchmarks step by step. I used iperf3 [1] modified with VSOCK
> > > support. As Michael suggested in the v1, I booted host and guest with 'nosmap'.
> > > 
> > > A brief description of patches:
> > > - Patches 1:   limit the memory usage with an extra copy for small packets
> > > - Patches 2+3: reduce the number of credit update messages sent to the
> > >                transmitter
> > > - Patches 4+5: allow the host to split packets on multiple buffers and use
> > >                VIRTIO_VSOCK_MAX_PKT_BUF_SIZE as the max packet size allowed
> > > 
> > >                     host -> guest [Gbps]
> > > pkt_size before opt   p 1     p 2+3    p 4+5
> > > 
> > > 32         0.032     0.030    0.048    0.051
> > > 64         0.061     0.059    0.108    0.117
> > > 128        0.122     0.112    0.227    0.234
> > > 256        0.244     0.241    0.418    0.415
> > > 512        0.459     0.466    0.847    0.865
> > > 1K         0.927     0.919    1.657    1.641
> > > 2K         1.884     1.813    3.262    3.269
> > > 4K         3.378     3.326    6.044    6.195
> > > 8K         5.637     5.676   10.141   11.287
> > > 16K        8.250     8.402   15.976   16.736
> > > 32K       13.327    13.204   19.013   20.515
> > > 64K       21.241    21.341   20.973   21.879
> > > 128K      21.851    22.354   21.816   23.203
> > > 256K      21.408    21.693   21.846   24.088
> > > 512K      21.600    21.899   21.921   24.106
> > > 
> > >                     guest -> host [Gbps]
> > > pkt_size before opt   p 1     p 2+3    p 4+5
> > > 
> > > 32         0.045     0.046    0.057    0.057
> > > 64         0.089     0.091    0.103    0.104
> > > 128        0.170     0.179    0.192    0.200
> > > 256        0.364     0.351    0.361    0.379
> > > 512        0.709     0.699    0.731    0.790
> > > 1K         1.399     1.407    1.395    1.427
> > > 2K         2.670     2.684    2.745    2.835
> > > 4K         5.171     5.199    5.305    5.451
> > > 8K         8.442     8.500   10.083    9.941
> > > 16K       12.305    12.259   13.519   15.385
> > > 32K       11.418    11.150   11.988   24.680
> > > 64K       10.778    10.659   11.589   35.273
> > > 128K      10.421    10.339   10.939   40.338
> > > 256K      10.300     9.719   10.508   36.562
> > > 512K       9.833     9.808   10.612   35.979
> > > 
> > > As Stefan suggested in the v1, I measured also the efficiency in this way:
> > >     efficiency = Mbps / (%CPU_Host + %CPU_Guest)
> > > 
> > > The '%CPU_Guest' is taken inside the VM. I know that it is not the best way,
> > > but it's provided for free from iperf3 and could be an indication.
> > > 
> > >         host -> guest efficiency [Mbps / (%CPU_Host + %CPU_Guest)]
> > > pkt_size before opt   p 1     p 2+3    p 4+5
> > > 
> > > 32         0.35      0.45     0.79     1.02
> > > 64         0.56      0.80     1.41     1.54
> > > 128        1.11      1.52     3.03     3.12
> > > 256        2.20      2.16     5.44     5.58
> > > 512        4.17      4.18    10.96    11.46
> > > 1K         8.30      8.26    20.99    20.89
> > > 2K        16.82     16.31    39.76    39.73
> > > 4K        30.89     30.79    74.07    75.73
> > > 8K        53.74     54.49   124.24   148.91
> > > 16K       80.68     83.63   200.21   232.79
> > > 32K      132.27    132.52   260.81   357.07
> > > 64K      229.82    230.40   300.19   444.18
> > > 128K     332.60    329.78   331.51   492.28
> > > 256K     331.06    337.22   339.59   511.59
> > > 512K     335.58    328.50   331.56   504.56
> > > 
> > >         guest -> host efficiency [Mbps / (%CPU_Host + %CPU_Guest)]
> > > pkt_size before opt   p 1     p 2+3    p 4+5
> > > 
> > > 32         0.43      0.43     0.53     0.56
> > > 64         0.85      0.86     1.04     1.10
> > > 128        1.63      1.71     2.07     2.13
> > > 256        3.48      3.35     4.02     4.22
> > > 512        6.80      6.67     7.97     8.63
> > > 1K        13.32     13.31    15.72    15.94
> > > 2K        25.79     25.92    30.84    30.98
> > > 4K        50.37     50.48    58.79    59.69
> > > 8K        95.90     96.15   107.04   110.33
> > > 16K      145.80    145.43   143.97   174.70
> > > 32K      147.06    144.74   146.02   282.48
> > > 64K      145.25    143.99   141.62   406.40
> > > 128K     149.34    146.96   147.49   489.34
> > > 256K     156.35    149.81   152.21   536.37
> > > 512K     151.65    150.74   151.52   519.93
> > > 
> > > [1] https://github.com/stefano-garzarella/iperf/
> > > 
> > > Stefano Garzarella (5):
> > >   vsock/virtio: limit the memory used per-socket
> > >   vsock/virtio: reduce credit update messages
> > >   vsock/virtio: fix locking in virtio_transport_inc_tx_pkt()
> > >   vhost/vsock: split packets to send using multiple buffers
> > >   vsock/virtio: change the maximum packet size allowed
> > > 
> > >  drivers/vhost/vsock.c                   | 68 ++++++++++++-----
> > >  include/linux/virtio_vsock.h            |  4 +-
> > >  net/vmw_vsock/virtio_transport.c        |  1 +
> > >  net/vmw_vsock/virtio_transport_common.c | 99 ++++++++++++++++++++-----
> > >  4 files changed, 134 insertions(+), 38 deletions(-)
> > > 
> > > -- 
> > > 2.20.1

-- 


* Re: [PATCH net-next v5 4/5] vhost/vsock: split packets to send using multiple buffers
  2019-07-30 15:43 ` [PATCH net-next v5 4/5] vhost/vsock: split packets to send using multiple buffers Stefano Garzarella
@ 2019-11-18 11:50   ` Stefano Garzarella
  0 siblings, 0 replies; 10+ messages in thread
From: Stefano Garzarella @ 2019-11-18 11:50 UTC (permalink / raw)
  To: David S. Miller
  Cc: netdev, virtualization, linux-kernel, Jason Wang, kvm,
	Michael S. Tsirkin, Stefan Hajnoczi

Hi Dave,
talking to Michael, we realized that this patch, merged upstream (commit
6dbd3e66e7785a2f055bf84d98de9b8fd31ff3f5), solves an issue in the
device emulation of the vhost-vsock module: the emulation did not
meet the specification, since it assumed that the buffers in the RX
virtqueue were always 4 KB, without checking their actual size.

We think it's better to also apply this patch to -stable.

What do you think?

Thanks,
Stefano

On Tue, Jul 30, 2019 at 5:44 PM Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> If the packets to sent to the guest are bigger than the buffer
> available, we can split them, using multiple buffers and fixing
> the length in the packet header.
> This is safe since virtio-vsock supports only stream sockets.
>
> Signed-off-by: Stefano Garzarella <sgarzare@redhat.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> ---
>  drivers/vhost/vsock.c                   | 66 ++++++++++++++++++-------
>  net/vmw_vsock/virtio_transport_common.c | 15 ++++--
>  2 files changed, 60 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 6c8390a2af52..9f57736fe15e 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -102,7 +102,7 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
>                 struct iov_iter iov_iter;
>                 unsigned out, in;
>                 size_t nbytes;
> -               size_t len;
> +               size_t iov_len, payload_len;
>                 int head;
>
>                 spin_lock_bh(&vsock->send_pkt_list_lock);
> @@ -147,8 +147,24 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
>                         break;
>                 }
>
> -               len = iov_length(&vq->iov[out], in);
> -               iov_iter_init(&iov_iter, READ, &vq->iov[out], in, len);
> +               iov_len = iov_length(&vq->iov[out], in);
> +               if (iov_len < sizeof(pkt->hdr)) {
> +                       virtio_transport_free_pkt(pkt);
> +                       vq_err(vq, "Buffer len [%zu] too small\n", iov_len);
> +                       break;
> +               }
> +
> +               iov_iter_init(&iov_iter, READ, &vq->iov[out], in, iov_len);
> +               payload_len = pkt->len - pkt->off;
> +
> +               /* If the packet is greater than the space available in the
> +                * buffer, we split it using multiple buffers.
> +                */
> +               if (payload_len > iov_len - sizeof(pkt->hdr))
> +                       payload_len = iov_len - sizeof(pkt->hdr);
> +
> +               /* Set the correct length in the header */
> +               pkt->hdr.len = cpu_to_le32(payload_len);
>
>                 nbytes = copy_to_iter(&pkt->hdr, sizeof(pkt->hdr), &iov_iter);
>                 if (nbytes != sizeof(pkt->hdr)) {
> @@ -157,33 +173,47 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
>                         break;
>                 }
>
> -               nbytes = copy_to_iter(pkt->buf, pkt->len, &iov_iter);
> -               if (nbytes != pkt->len) {
> +               nbytes = copy_to_iter(pkt->buf + pkt->off, payload_len,
> +                                     &iov_iter);
> +               if (nbytes != payload_len) {
>                         virtio_transport_free_pkt(pkt);
>                         vq_err(vq, "Faulted on copying pkt buf\n");
>                         break;
>                 }
>
> -               vhost_add_used(vq, head, sizeof(pkt->hdr) + pkt->len);
> +               vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
>                 added = true;
>
> -               if (pkt->reply) {
> -                       int val;
> -
> -                       val = atomic_dec_return(&vsock->queued_replies);
> -
> -                       /* Do we have resources to resume tx processing? */
> -                       if (val + 1 == tx_vq->num)
> -                               restart_tx = true;
> -               }
> -
>                 /* Deliver to monitoring devices all correctly transmitted
>                  * packets.
>                  */
>                 virtio_transport_deliver_tap_pkt(pkt);
>
> -               total_len += pkt->len;
> -               virtio_transport_free_pkt(pkt);
> +               pkt->off += payload_len;
> +               total_len += payload_len;
> +
> +               /* If we didn't send all the payload we can requeue the packet
> +                * to send it with the next available buffer.
> +                */
> +               if (pkt->off < pkt->len) {
> +                       spin_lock_bh(&vsock->send_pkt_list_lock);
> +                       list_add(&pkt->list, &vsock->send_pkt_list);
> +                       spin_unlock_bh(&vsock->send_pkt_list_lock);
> +               } else {
> +                       if (pkt->reply) {
> +                               int val;
> +
> +                               val = atomic_dec_return(&vsock->queued_replies);
> +
> +                               /* Do we have resources to resume tx
> +                                * processing?
> +                                */
> +                               if (val + 1 == tx_vq->num)
> +                                       restart_tx = true;
> +                       }
> +
> +                       virtio_transport_free_pkt(pkt);
> +               }
>         } while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
>         if (added)
>                 vhost_signal(&vsock->dev, vq);
> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
> index 34a2b42313b7..56fab3f03d0e 100644
> --- a/net/vmw_vsock/virtio_transport_common.c
> +++ b/net/vmw_vsock/virtio_transport_common.c
> @@ -97,8 +97,17 @@ static struct sk_buff *virtio_transport_build_skb(void *opaque)
>         struct virtio_vsock_pkt *pkt = opaque;
>         struct af_vsockmon_hdr *hdr;
>         struct sk_buff *skb;
> +       size_t payload_len;
> +       void *payload_buf;
>
> -       skb = alloc_skb(sizeof(*hdr) + sizeof(pkt->hdr) + pkt->len,
> +       /* A packet could be split to fit the RX buffer, so we can retrieve
> +        * the payload length from the header and the buffer pointer taking
> +        * care of the offset in the original packet.
> +        */
> +       payload_len = le32_to_cpu(pkt->hdr.len);
> +       payload_buf = pkt->buf + pkt->off;
> +
> +       skb = alloc_skb(sizeof(*hdr) + sizeof(pkt->hdr) + payload_len,
>                         GFP_ATOMIC);
>         if (!skb)
>                 return NULL;
> @@ -138,8 +147,8 @@ static struct sk_buff *virtio_transport_build_skb(void *opaque)
>
>         skb_put_data(skb, &pkt->hdr, sizeof(pkt->hdr));
>
> -       if (pkt->len) {
> -               skb_put_data(skb, pkt->buf, pkt->len);
> +       if (payload_len) {
> +               skb_put_data(skb, payload_buf, payload_len);
>         }
>
>         return skb;
> --
> 2.20.1
>



