* [PATCH net-next V2] vhost_net: conditionally enable tx polling
@ 2017-11-13  3:45 Jason Wang
From: Jason Wang @ 2017-11-13  3:45 UTC
  To: mst, jasowang, kvm, virtualization, netdev, linux-kernel
  Cc: Matthew Rosato, Wei Xu

We always poll tx for the socket. This is suboptimal, since it
slightly increases the waitqueue traversal time and, more importantly,
means vhost cannot benefit from commit 9e641bdcfa4e ("net-tun:
restructure tun_do_read for better sleep/wakeup efficiency"): even
though we stop rx polling during handle_rx(), the tx poll entry is
still left in the waitqueue.

Pktgen from a remote host to a VM over mlx4, on two 2.00GHz Xeon
E5-2650 machines, shows an 11.7% improvement in rx PPS (from 1.28Mpps
to 1.44Mpps).

Cc: Wei Xu <wexu@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
Changes from V1:
- don't try to disable tx polling during start
- poll tx on error unconditionally
---
 drivers/vhost/net.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 68677d9..8d626d7 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -471,6 +471,7 @@ static void handle_tx(struct vhost_net *net)
 		goto out;
 
 	vhost_disable_notify(&net->dev, vq);
+	vhost_net_disable_vq(net, vq);
 
 	hdr_size = nvq->vhost_hlen;
 	zcopy = nvq->ubufs;
@@ -556,6 +557,7 @@ static void handle_tx(struct vhost_net *net)
 					% UIO_MAXIOV;
 			}
 			vhost_discard_vq_desc(vq, 1);
+			vhost_net_enable_vq(net, vq);
 			break;
 		}
 		if (err != len)
-- 
2.7.4
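
For readers coming from userspace, the idea is the same as the usual
EPOLLOUT discipline in an event loop: stop polling for writability
while the transmit path can make progress on its own, and re-arm the
poll only when a send would block. Below is a minimal, self-contained
userspace analogue using epoll; it illustrates the pattern only and is
not vhost code (vhost uses its own waitqueue-based poll via
vhost_net_disable_vq()/vhost_net_enable_vq()).

/* Minimal userspace analogue of the tx-polling change, using epoll.
 * Assumes fd is non-blocking and already registered with epfd via
 * EPOLL_CTL_ADD.
 */
#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

static void set_pollout(int epfd, int fd, int enable)
{
	struct epoll_event ev = {
		.events = enable ? EPOLLOUT : 0,
		.data.fd = fd,
	};
	/* Toggle write-readiness polling for fd. */
	epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);
}

/* Called when there is data queued for fd (analogous to handle_tx()). */
static void tx_analogue(int epfd, int fd, const char *buf, size_t len)
{
	/* We are actively transmitting, so stop polling the socket for
	 * writability; wakeups here would only add waitqueue-traversal
	 * overhead, which is what the commit message is describing. */
	set_pollout(epfd, fd, 0);

	while (len > 0) {
		ssize_t n = write(fd, buf, len);
		if (n < 0) {
			if (errno == EAGAIN || errno == EWOULDBLOCK)
				/* Socket is full: re-arm write polling so
				 * the event loop wakes us when it drains,
				 * mirroring vhost_net_enable_vq() in the
				 * patch's error path. */
				set_pollout(epfd, fd, 1);
			return;
		}
		buf += n;
		len -= n;
	}
}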
