linux-kernel.vger.kernel.org archive mirror
* [PATCH V4 0/3] basic busy polling support for vhost_net
@ 2016-03-04 11:24 Jason Wang
  2016-03-04 11:24 ` [PATCH V4 1/3] vhost: introduce vhost_has_work() Jason Wang
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Jason Wang @ 2016-03-04 11:24 UTC (permalink / raw)
  To: mst, kvm, virtualization, netdev, linux-kernel
  Cc: RAPOPORT, yang.zhang.wz, borntraeger, Jason Wang

This series adds basic busy polling support for vhost_net. The idea is
simple: at the end of tx/rx processing, busy poll for newly added tx
descriptors and for incoming data on the rx socket for a while. The
maximum time (in us) that may be spent busy polling is specified via an
ioctl.

Tests were done with:

- 50 us as busy loop timeout
- Netperf 2.6
- Two machines with back to back connected mlx4
- Guest with 1 vcpu and 1 queue

Results:
- Obvious improvements (5% - 20%) in latency (TCP_RR).
- Better performance or only a minor regression on most TX tests, but
  some regression at 4096-byte size.
- Better or equal RX performance, except for 8 sessions at 4096-byte
  size.
- CPU utilization increased as expected.

TCP_RR:
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
    1/     1/   +8%/  -32%/   +8%/   +8%/   +7%
    1/    50/   +7%/  -19%/   +7%/   +7%/   +1%
    1/   100/   +5%/  -21%/   +5%/   +5%/    0%
    1/   200/   +5%/  -21%/   +7%/   +7%/   +1%
   64/     1/  +11%/  -29%/  +11%/  +11%/  +10%
   64/    50/   +7%/  -19%/   +8%/   +8%/   +2%
   64/   100/   +8%/  -18%/   +9%/   +9%/   +2%
   64/   200/   +6%/  -19%/   +6%/   +6%/    0%
  256/     1/   +7%/  -33%/   +7%/   +7%/   +6%
  256/    50/   +7%/  -18%/   +7%/   +7%/    0%
  256/   100/   +9%/  -18%/   +8%/   +8%/   +2%
  256/   200/   +9%/  -18%/  +10%/  +10%/   +3%
 1024/     1/  +20%/  -28%/  +20%/  +20%/  +19%
 1024/    50/   +8%/  -18%/   +9%/   +9%/   +2%
 1024/   100/   +6%/  -19%/   +5%/   +5%/    0%
 1024/   200/   +8%/  -18%/   +9%/   +9%/   +2%
Guest TX:
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
   64/     1/   -5%/  -28%/  +11%/  +12%/  +10%
   64/     4/   -2%/  -26%/  +13%/  +13%/  +13%
   64/     8/   -6%/  -29%/   +9%/  +10%/  +10%
  512/     1/  +15%/   -7%/  +13%/  +11%/   +3%
  512/     4/  +17%/   -6%/  +18%/  +13%/  +11%
  512/     8/  +14%/   -7%/  +13%/   +7%/   +7%
 1024/     1/  +27%/   -2%/  +26%/  +29%/  +12%
 1024/     4/   +8%/   -9%/   +6%/   +1%/   +6%
 1024/     8/  +41%/  +12%/  +34%/  +20%/   -3%
 4096/     1/  -22%/  -21%/  -36%/  +81%/+1360%
 4096/     4/  -57%/  -58%/ +286%/  +15%/+2074%
 4096/     8/  +67%/  +70%/  -45%/   -8%/  +63%
16384/     1/   -2%/   -5%/   +5%/   -3%/  +80%
16384/     4/    0%/    0%/    0%/   +4%/ +138%
16384/     8/    0%/    0%/    0%/   +1%/  +41%
65535/     1/   -3%/   -6%/   +2%/  +11%/ +113%
65535/     4/   -2%/   -1%/   -2%/   -3%/ +484%
65535/     8/    0%/   +1%/    0%/   +2%/  +40%
Guest RX:
size/session/+thu%/+normalize%/+tpkts%/+rpkts%/+ioexits%/
   64/     1/  +31%/   -3%/   +8%/   +8%/   +8%
   64/     4/  +11%/  -17%/  +13%/  +14%/  +15%
   64/     8/   +4%/  -23%/  +11%/  +11%/  +12%
  512/     1/  +24%/    0%/  +18%/  +14%/   -8%
  512/     4/   +4%/  -15%/   +6%/   +5%/   +6%
  512/     8/  +26%/    0%/  +21%/  +10%/   +3%
 1024/     1/  +88%/  +47%/  +69%/  +44%/  -30%
 1024/     4/  +18%/   -5%/  +19%/  +16%/   +2%
 1024/     8/  +15%/   -4%/  +13%/   +8%/   +1%
 4096/     1/   -3%/   -5%/   +2%/   -2%/  +41%
 4096/     4/   +2%/   +3%/  -20%/  -14%/  -24%
 4096/     8/  -43%/  -45%/  +69%/  -24%/  +94%
16384/     1/   -3%/  -11%/  +23%/   +7%/  +42%
16384/     4/   -3%/   -3%/   -4%/   +5%/ +115%
16384/     8/   -1%/    0%/   -1%/   -3%/  +32%
65535/     1/   +1%/    0%/   +2%/    0%/  +66%
65535/     4/   -1%/   -1%/    0%/   +4%/ +492%
65535/     8/    0%/   -1%/   -1%/   +4%/  +38%

Changes from V3:
- drop single_task_running()
- use cpu_relax_lowlatency() instead of cpu_relax()

Changes from V2:
- rename vhost_vq_more_avail() to vhost_vq_avail_empty(), and return
  false when __get_user() fails.
- do not bother with preemption/timers on the good path.
- use vhost_vring_state as the ioctl parameter instead of reinventing a
  new one.
- add the unit of the timeout (us) to the comments of the newly added
  ioctls.
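For reference, userspace would arm the timeout roughly like this (a
sketch with error handling elided; the struct and ioctl definitions
below mirror the uapi header, with VHOST_VIRTIO being 0xAF — in real
code include <linux/vhost.h> instead of redefining them):

```c
#include <linux/ioctl.h>
#include <sys/ioctl.h>

/* Mirrors include/uapi/linux/vhost.h (sketch only). */
struct vhost_vring_state {
	unsigned int index;
	unsigned int num;
};

#define VHOST_VIRTIO 0xAF
#define VHOST_SET_VRING_BUSYLOOP_TIMEOUT \
	_IOW(VHOST_VIRTIO, 0x23, struct vhost_vring_state)

/* Set the busy-loop timeout (in us) on virtqueue 'index' of an
 * already set-up vhost fd. Returns the ioctl() result. */
static int set_busyloop_timeout(int vhost_fd, unsigned int index,
				unsigned int timeout_us)
{
	struct vhost_vring_state s = { .index = index, .num = timeout_us };

	return ioctl(vhost_fd, VHOST_SET_VRING_BUSYLOOP_TIMEOUT, &s);
}
```

Reusing vhost_vring_state keeps the ABI footprint small: .index selects
the vq and .num carries the timeout, exactly like the other per-vring
ioctls.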

Changes from V1:
- remove the buggy vq_error() in vhost_vq_more_avail().
- leave vhost_enable_notify() untouched.

Changes from RFC V3:
- small tweak to the code to avoid multiple duplicate conditions in the
  critical path when the busy loop is not enabled.
- add the test result of multiple VMs

Changes from RFC V2:
- poll also at the end of rx handling
- factor out the polling logic and optimize the code a little bit
- add two ioctls to get and set the busy poll timeout
- test on ixgbe (which gives more stable and reproducible numbers)
  instead of mlx4.

Changes from RFC V1:
- add a comment to vhost_has_work() explaining why it can be lockless
- add a param description for busyloop_timeout
- split out the busy polling logic into a new helper
- check for a pending signal and exit the loop if there is one
- disable preemption during busy looping to make sure local_clock() is
  used correctly.

Jason Wang (3):
  vhost: introduce vhost_has_work()
  vhost: introduce vhost_vq_avail_empty()
  vhost_net: basic polling support

 drivers/vhost/net.c        | 78 +++++++++++++++++++++++++++++++++++++++++++---
 drivers/vhost/vhost.c      | 35 +++++++++++++++++++++
 drivers/vhost/vhost.h      |  3 ++
 include/uapi/linux/vhost.h |  6 ++++
 4 files changed, 117 insertions(+), 5 deletions(-)

-- 
1.8.3.1


* [PATCH V4 1/3] vhost: introduce vhost_has_work()
  2016-03-04 11:24 [PATCH V4 0/3] basic busy polling support for vhost_net Jason Wang
@ 2016-03-04 11:24 ` Jason Wang
  2016-03-04 11:24 ` [PATCH V4 2/3] vhost: introduce vhost_vq_avail_empty() Jason Wang
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Jason Wang @ 2016-03-04 11:24 UTC (permalink / raw)
  To: mst, kvm, virtualization, netdev, linux-kernel
  Cc: RAPOPORT, yang.zhang.wz, borntraeger, Jason Wang

This patch introduces a helper which gives a hint about whether there
is work queued in the work list. This can be used by busy polling code
to exit the busy loop.
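Since the list_empty() check is taken without the lock, the result is
only a heuristic: a stale read can at worst cost one extra spin or one
early exit, both harmless. A toy model of using such a hint purely as a
loop-exit condition (a sketch, not the kernel code):

```c
#include <stdbool.h>

/* Toy work counter standing in for dev->work_list. */
static int pending_work;

/* Lockless-style hint: may be momentarily stale, which is fine because
 * callers only use it to decide whether to keep busy polling. */
static bool toy_has_work(void)
{
	return pending_work != 0;
}

/* Busy loop skeleton: spin at most 'budget' iterations, bailing out as
 * soon as work shows up. Returns the iterations actually spent. */
static int toy_busy_loop(int budget)
{
	int spins = 0;

	while (spins < budget && !toy_has_work())
		spins++;
	return spins;
}
```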

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/vhost.c | 7 +++++++
 drivers/vhost/vhost.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index ad2146a..90ac092 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -245,6 +245,13 @@ void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
 }
 EXPORT_SYMBOL_GPL(vhost_work_queue);
 
+/* A lockless hint for busy polling code to exit the loop */
+bool vhost_has_work(struct vhost_dev *dev)
+{
+	return !list_empty(&dev->work_list);
+}
+EXPORT_SYMBOL_GPL(vhost_has_work);
+
 void vhost_poll_queue(struct vhost_poll *poll)
 {
 	vhost_work_queue(poll->dev, &poll->work);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index d3f7674..43284ad 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -37,6 +37,7 @@ struct vhost_poll {
 
 void vhost_work_init(struct vhost_work *work, vhost_work_fn_t fn);
 void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work);
+bool vhost_has_work(struct vhost_dev *dev);
 
 void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
 		     unsigned long mask, struct vhost_dev *dev);
-- 
1.8.3.1


* [PATCH V4 2/3] vhost: introduce vhost_vq_avail_empty()
  2016-03-04 11:24 [PATCH V4 0/3] basic busy polling support for vhost_net Jason Wang
  2016-03-04 11:24 ` [PATCH V4 1/3] vhost: introduce vhost_has_work() Jason Wang
@ 2016-03-04 11:24 ` Jason Wang
  2016-03-04 11:24 ` [PATCH V4 3/3] vhost_net: basic polling support Jason Wang
  2016-03-09 19:26 ` [PATCH V4 0/3] basic busy polling support for vhost_net Greg Kurz
  3 siblings, 0 replies; 7+ messages in thread
From: Jason Wang @ 2016-03-04 11:24 UTC (permalink / raw)
  To: mst, kvm, virtualization, netdev, linux-kernel
  Cc: RAPOPORT, yang.zhang.wz, borntraeger, Jason Wang

This patch introduces a helper which returns true if we're sure that
the available ring is empty for a specific vq. When we're not sure,
e.g. on vq access failure, it returns false instead. This can be used
by busy polling code to exit the busy loop.
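The emptiness condition can be illustrated with a small model (a sketch,
not the kernel code): the guest publishes descriptors by advancing
avail->idx, while vhost tracks how far it has consumed in vq->avail_idx;
the ring is empty exactly when the two indices match.

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy model of the two indices involved in the check. */
struct toy_vq {
	uint16_t guest_avail_idx; /* what the guest last published */
	uint16_t seen_avail_idx;  /* what vhost has consumed up to */
};

/* Mirrors the shape of vhost_vq_avail_empty(): empty iff the published
 * index equals the last index vhost processed. Equality also works
 * across 16-bit wraparound. */
static bool toy_avail_empty(const struct toy_vq *vq)
{
	return vq->guest_avail_idx == vq->seen_avail_idx;
}
```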

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/vhost.c | 14 ++++++++++++++
 drivers/vhost/vhost.h |  1 +
 2 files changed, 15 insertions(+)

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 90ac092..89301fd 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -1633,6 +1633,20 @@ void vhost_add_used_and_signal_n(struct vhost_dev *dev,
 }
 EXPORT_SYMBOL_GPL(vhost_add_used_and_signal_n);
 
+/* return true if we're sure that the available ring is empty */
+bool vhost_vq_avail_empty(struct vhost_dev *dev, struct vhost_virtqueue *vq)
+{
+	__virtio16 avail_idx;
+	int r;
+
+	r = __get_user(avail_idx, &vq->avail->idx);
+	if (r)
+		return false;
+
+	return vhost16_to_cpu(vq, avail_idx) == vq->avail_idx;
+}
+EXPORT_SYMBOL_GPL(vhost_vq_avail_empty);
+
 /* OK, now we need to know about added descriptors. */
 bool vhost_enable_notify(struct vhost_dev *dev, struct vhost_virtqueue *vq)
 {
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index 43284ad..a7a43f0 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -159,6 +159,7 @@ void vhost_add_used_and_signal_n(struct vhost_dev *, struct vhost_virtqueue *,
 			       struct vring_used_elem *heads, unsigned count);
 void vhost_signal(struct vhost_dev *, struct vhost_virtqueue *);
 void vhost_disable_notify(struct vhost_dev *, struct vhost_virtqueue *);
+bool vhost_vq_avail_empty(struct vhost_dev *, struct vhost_virtqueue *);
 bool vhost_enable_notify(struct vhost_dev *, struct vhost_virtqueue *);
 
 int vhost_log_write(struct vhost_virtqueue *vq, struct vhost_log *log,
-- 
1.8.3.1


* [PATCH V4 3/3] vhost_net: basic polling support
  2016-03-04 11:24 [PATCH V4 0/3] basic busy polling support for vhost_net Jason Wang
  2016-03-04 11:24 ` [PATCH V4 1/3] vhost: introduce vhost_has_work() Jason Wang
  2016-03-04 11:24 ` [PATCH V4 2/3] vhost: introduce vhost_vq_avail_empty() Jason Wang
@ 2016-03-04 11:24 ` Jason Wang
  2016-03-09 19:26 ` [PATCH V4 0/3] basic busy polling support for vhost_net Greg Kurz
  3 siblings, 0 replies; 7+ messages in thread
From: Jason Wang @ 2016-03-04 11:24 UTC (permalink / raw)
  To: mst, kvm, virtualization, netdev, linux-kernel
  Cc: RAPOPORT, yang.zhang.wz, borntraeger, Jason Wang

This patch polls for newly added tx buffers or for data in the socket
receive queue for a while at the end of tx/rx processing. The maximum
time spent on polling is specified through a new kind of vring ioctl.
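One detail worth noting: the patch's busy_clock() converts the
nanosecond-resolution local_clock() to roughly microseconds with a
cheap right shift rather than a division (a sketch of the arithmetic;
the ~2.4% error from dividing by 1024 instead of 1000 is irrelevant
for bounding a busy loop):

```c
#include <stdint.h>

/* ns >> 10 divides by 1024 instead of 1000: cheap, and close enough
 * to microseconds for a coarse busy-loop deadline. */
static uint64_t busy_clock_units(uint64_t ns)
{
	return ns >> 10;
}
```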

Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/vhost/net.c        | 78 +++++++++++++++++++++++++++++++++++++++++++---
 drivers/vhost/vhost.c      | 14 +++++++++
 drivers/vhost/vhost.h      |  1 +
 include/uapi/linux/vhost.h |  6 ++++
 4 files changed, 94 insertions(+), 5 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 9eda69e..eefdbc7 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -287,6 +287,43 @@ static void vhost_zerocopy_callback(struct ubuf_info *ubuf, bool success)
 	rcu_read_unlock_bh();
 }
 
+static inline unsigned long busy_clock(void)
+{
+	return local_clock() >> 10;
+}
+
+static bool vhost_can_busy_poll(struct vhost_dev *dev,
+				unsigned long endtime)
+{
+	return likely(!need_resched()) &&
+	       likely(!time_after(busy_clock(), endtime)) &&
+	       likely(!signal_pending(current)) &&
+	       !vhost_has_work(dev);
+}
+
+static int vhost_net_tx_get_vq_desc(struct vhost_net *net,
+				    struct vhost_virtqueue *vq,
+				    struct iovec iov[], unsigned int iov_size,
+				    unsigned int *out_num, unsigned int *in_num)
+{
+	unsigned long uninitialized_var(endtime);
+	int r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+				    out_num, in_num, NULL, NULL);
+
+	if (r == vq->num && vq->busyloop_timeout) {
+		preempt_disable();
+		endtime = busy_clock() + vq->busyloop_timeout;
+		while (vhost_can_busy_poll(vq->dev, endtime) &&
+		       vhost_vq_avail_empty(vq->dev, vq))
+			cpu_relax_lowlatency();
+		preempt_enable();
+		r = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov),
+					out_num, in_num, NULL, NULL);
+	}
+
+	return r;
+}
+
 /* Expects to be always run from workqueue - which acts as
  * read-size critical section for our kind of RCU. */
 static void handle_tx(struct vhost_net *net)
@@ -331,10 +368,9 @@ static void handle_tx(struct vhost_net *net)
 			      % UIO_MAXIOV == nvq->done_idx))
 			break;
 
-		head = vhost_get_vq_desc(vq, vq->iov,
-					 ARRAY_SIZE(vq->iov),
-					 &out, &in,
-					 NULL, NULL);
+		head = vhost_net_tx_get_vq_desc(net, vq, vq->iov,
+						ARRAY_SIZE(vq->iov),
+						&out, &in);
 		/* On error, stop handling until the next kick. */
 		if (unlikely(head < 0))
 			break;
@@ -435,6 +471,38 @@ static int peek_head_len(struct sock *sk)
 	return len;
 }
 
+static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
+{
+	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
+	struct vhost_virtqueue *vq = &nvq->vq;
+	unsigned long uninitialized_var(endtime);
+	int len = peek_head_len(sk);
+
+	if (!len && vq->busyloop_timeout) {
+		/* Both tx vq and rx socket were polled here */
+		mutex_lock(&vq->mutex);
+		vhost_disable_notify(&net->dev, vq);
+
+		preempt_disable();
+		endtime = busy_clock() + vq->busyloop_timeout;
+
+		while (vhost_can_busy_poll(&net->dev, endtime) &&
+		       skb_queue_empty(&sk->sk_receive_queue) &&
+		       vhost_vq_avail_empty(&net->dev, vq))
+			cpu_relax_lowlatency();
+
+		preempt_enable();
+
+		if (vhost_enable_notify(&net->dev, vq))
+			vhost_poll_queue(&vq->poll);
+		mutex_unlock(&vq->mutex);
+
+		len = peek_head_len(sk);
+	}
+
+	return len;
+}
+
 /* This is a multi-buffer version of vhost_get_desc, that works if
  *	vq has read descriptors only.
  * @vq		- the relevant virtqueue
@@ -553,7 +621,7 @@ static void handle_rx(struct vhost_net *net)
 		vq->log : NULL;
 	mergeable = vhost_has_feature(vq, VIRTIO_NET_F_MRG_RXBUF);
 
-	while ((sock_len = peek_head_len(sock->sk))) {
+	while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk))) {
 		sock_len += sock_hlen;
 		vhost_len = sock_len + vhost_hlen;
 		headcount = get_rx_bufs(vq, vq->heads, vhost_len,
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 89301fd..b4b7eda 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -285,6 +285,7 @@ static void vhost_vq_reset(struct vhost_dev *dev,
 	vq->memory = NULL;
 	vq->is_le = virtio_legacy_is_little_endian();
 	vhost_vq_reset_user_be(vq);
+	vq->busyloop_timeout = 0;
 }
 
 static int vhost_worker(void *data)
@@ -919,6 +920,19 @@ long vhost_vring_ioctl(struct vhost_dev *d, int ioctl, void __user *argp)
 	case VHOST_GET_VRING_ENDIAN:
 		r = vhost_get_vring_endian(vq, idx, argp);
 		break;
+	case VHOST_SET_VRING_BUSYLOOP_TIMEOUT:
+		if (copy_from_user(&s, argp, sizeof(s))) {
+			r = -EFAULT;
+			break;
+		}
+		vq->busyloop_timeout = s.num;
+		break;
+	case VHOST_GET_VRING_BUSYLOOP_TIMEOUT:
+		s.index = idx;
+		s.num = vq->busyloop_timeout;
+		if (copy_to_user(argp, &s, sizeof(s)))
+			r = -EFAULT;
+		break;
 	default:
 		r = -ENOIOCTLCMD;
 	}
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index a7a43f0..9a02158 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -115,6 +115,7 @@ struct vhost_virtqueue {
 	/* Ring endianness requested by userspace for cross-endian support. */
 	bool user_be;
 #endif
+	u32 busyloop_timeout;
 };
 
 struct vhost_dev {
diff --git a/include/uapi/linux/vhost.h b/include/uapi/linux/vhost.h
index ab373191..61a8777 100644
--- a/include/uapi/linux/vhost.h
+++ b/include/uapi/linux/vhost.h
@@ -126,6 +126,12 @@ struct vhost_memory {
 #define VHOST_SET_VRING_CALL _IOW(VHOST_VIRTIO, 0x21, struct vhost_vring_file)
 /* Set eventfd to signal an error */
 #define VHOST_SET_VRING_ERR _IOW(VHOST_VIRTIO, 0x22, struct vhost_vring_file)
+/* Set busy loop timeout (in us) */
+#define VHOST_SET_VRING_BUSYLOOP_TIMEOUT _IOW(VHOST_VIRTIO, 0x23,	\
+					 struct vhost_vring_state)
+/* Get busy loop timeout (in us) */
+#define VHOST_GET_VRING_BUSYLOOP_TIMEOUT _IOW(VHOST_VIRTIO, 0x24,	\
+					 struct vhost_vring_state)
 
 /* VHOST_NET specific defines */
 
-- 
1.8.3.1


* Re: [PATCH V4 0/3] basic busy polling support for vhost_net
  2016-03-04 11:24 [PATCH V4 0/3] basic busy polling support for vhost_net Jason Wang
                   ` (2 preceding siblings ...)
  2016-03-04 11:24 ` [PATCH V4 3/3] vhost_net: basic polling support Jason Wang
@ 2016-03-09 19:26 ` Greg Kurz
  2016-03-10  6:48   ` Michael Rapoport
       [not found]   ` <201603100648.u2A6mSTl020833@d06av07.portsmouth.uk.ibm.com>
  3 siblings, 2 replies; 7+ messages in thread
From: Greg Kurz @ 2016-03-09 19:26 UTC (permalink / raw)
  To: Jason Wang
  Cc: mst, kvm, virtualization, netdev, linux-kernel, yang.zhang.wz,
	borntraeger, RAPOPORT

On Fri,  4 Mar 2016 06:24:50 -0500
Jason Wang <jasowang@redhat.com> wrote:

> This series adds basic busy polling support for vhost_net. The idea is
> simple: at the end of tx/rx processing, busy poll for newly added tx
> descriptors and for incoming data on the rx socket for a while. The
> maximum time (in us) that may be spent busy polling is specified via an
> ioctl.
> 
> Tests were done with:
> 
> - 50 us as busy loop timeout
> - Netperf 2.6
> - Two machines with back to back connected mlx4

Hi Jason,

Could this also improve performance if both VMs are
on the same host system ?


* Re: [PATCH V4 0/3] basic busy polling support for vhost_net
  2016-03-09 19:26 ` [PATCH V4 0/3] basic busy polling support for vhost_net Greg Kurz
@ 2016-03-10  6:48   ` Michael Rapoport
       [not found]   ` <201603100648.u2A6mSTl020833@d06av07.portsmouth.uk.ibm.com>
  1 sibling, 0 replies; 7+ messages in thread
From: Michael Rapoport @ 2016-03-10  6:48 UTC (permalink / raw)
  To: Greg Kurz
  Cc: borntraeger, Jason Wang, kvm, linux-kernel, mst, netdev,
	virtualization, yang.zhang.wz

Hi Greg,

> Greg Kurz <gkurz@linux.vnet.ibm.com> wrote on 03/09/2016 09:26:45 PM:
> > On Fri,  4 Mar 2016 06:24:50 -0500
> > Jason Wang <jasowang@redhat.com> wrote:
> 
> > This series adds basic busy polling support for vhost_net. The idea is
> > simple: at the end of tx/rx processing, busy poll for newly added tx
> > descriptors and for incoming data on the rx socket for a while. The
> > maximum time (in us) that may be spent busy polling is specified via an
> > ioctl.
> > 
> > Tests were done with:
> > 
> > - 50 us as busy loop timeout
> > - Netperf 2.6
> > - Two machines with back to back connected mlx4
> 
> Hi Jason,
> 
> Could this also improve performance if both VMs are
> on the same host system ?

I've experimented a little with Jason's patches and guest-to-guest netperf 
when both guests were on the same host, and I saw improvements for that 
case.
 

* Re: [PATCH V4 0/3] basic busy polling support for vhost_net
       [not found]   ` <201603100648.u2A6mSTl020833@d06av07.portsmouth.uk.ibm.com>
@ 2016-03-15  3:10     ` Jason Wang
  0 siblings, 0 replies; 7+ messages in thread
From: Jason Wang @ 2016-03-15  3:10 UTC (permalink / raw)
  To: Michael Rapoport, Greg Kurz
  Cc: borntraeger, kvm, linux-kernel, mst, netdev, virtualization,
	yang.zhang.wz



On 03/10/2016 02:48 PM, Michael Rapoport wrote:
> Hi Greg,
>
>> > Greg Kurz <gkurz@linux.vnet.ibm.com> wrote on 03/09/2016 09:26:45 PM:
>>> > > On Fri,  4 Mar 2016 06:24:50 -0500
>>> > > Jason Wang <jasowang@redhat.com> wrote:
>> > 
>>> > > This series adds basic busy polling support for vhost_net. The idea is
>>> > > simple: at the end of tx/rx processing, busy poll for newly added tx
>>> > > descriptors and for incoming data on the rx socket for a while. The
>>> > > maximum time (in us) that may be spent busy polling is specified via an
>>> > > ioctl.
>>> > > 
>>> > > Tests were done with:
>>> > > 
>>> > > - 50 us as busy loop timeout
>>> > > - Netperf 2.6
>>> > > - Two machines with back to back connected mlx4
>> > 
>> > Hi Jason,
>> > 
>> > Could this also improve performance if both VMs are
>> > on the same host system ?
> I've experimented a little with Jason's patches and guest-to-guest netperf 
> when both guests were on the same host, and I saw improvements for that 
> case.
>  

Good to know. I haven't tested this before, but from the code it
should work for the VM2VM case too.

Thanks a lot for the testing.
