* [PATCH net,stable] vhost: fix skb leak in handle_rx()
@ 2017-11-28 17:17 wexu
From: wexu @ 2017-11-28 17:17 UTC (permalink / raw)
  To: virtualization, netdev, linux-kernel; +Cc: jasowang, mst, mjrosato, wexu

From: Wei Xu <wexu@redhat.com>

Matthew found a roughly 40% TCP throughput regression with commit
c67df11f ("vhost_net: try batch dequing from skb array") as discussed
in the following thread:
https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html

Eventually we figured out that it was a skb leak in handle_rx()
when sending packets to the VM. This usually happens when the guest
cannot drain the vq as fast as vhost fills it; once the vq backs up,
skbs that have already been consumed from the batched rx array are
leaked because there is no headcount left on the vq to send them
with.

This can be avoided by making sure we have enough headcount before
actually consuming a skb from the batched rx array. This patch does
so by simply deferring the consume until after the headcount check.
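
To make the ordering concrete, here is a minimal userspace model of
the idea (fake_rxq, get_headcount() and buf_consume() are invented
names for illustration; this only sketches the ordering, it is not
the actual vhost code):

	#include <stdio.h>

	#define BATCH 8

	static int fake_rxq[BATCH];	/* stands in for nvq->rxq */
	static int rxq_head;		/* next entry to consume */
	static int vq_headcount = 3;	/* descriptors the guest posted */

	/* models get_rx_bufs(): returns 0 when no headcount is left */
	static int get_headcount(void)
	{
		return vq_headcount > 0 ? vq_headcount-- : 0;
	}

	/* models vhost_net_buf_consume(): dequeue from the batch */
	static int buf_consume(void)
	{
		return fake_rxq[rxq_head++];
	}

	int main(void)
	{
		int i;

		for (i = 0; i < BATCH; i++)
			fake_rxq[i] = i + 1;

		for (i = 0; i < BATCH; i++) {
			/* fixed order: check headcount first ... */
			if (!get_headcount())
				break;	/* entry stays queued for next kick */
			/* ... and only then consume from the batch */
			printf("delivered entry %d\n", buf_consume());
		}
		printf("still queued, not leaked: %d\n", BATCH - rxq_head);
		return 0;
	}

With the old order (consume before the headcount check) the loop
would dequeue entries it can never deliver once headcount hits zero,
which is exactly the leak fixed below.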

Signed-off-by: Wei Xu <wexu@redhat.com>
---
 drivers/vhost/net.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 8d626d7..e76535e 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -778,8 +778,6 @@ static void handle_rx(struct vhost_net *net)
 		/* On error, stop handling until the next kick. */
 		if (unlikely(headcount < 0))
 			goto out;
-		if (nvq->rx_array)
-			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
 		/* On overrun, truncate and discard */
 		if (unlikely(headcount > UIO_MAXIOV)) {
 			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
@@ -809,6 +807,8 @@ static void handle_rx(struct vhost_net *net)
 			 */
 			iov_iter_advance(&msg.msg_iter, vhost_hlen);
 		}
+		if (nvq->rx_array)
+			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
 		err = sock->ops->recvmsg(sock, &msg,
 					 sock_len, MSG_DONTWAIT | MSG_TRUNC);
 		/* Userspace might have consumed the packet meanwhile:
-- 
1.8.3.1


Thread overview:
2017-11-28 17:17 [PATCH net,stable] vhost: fix skb leak in handle_rx() wexu
2017-11-28 17:50 ` Michael S. Tsirkin
2017-11-29  6:12   ` Wei Xu
2017-11-28 17:53 ` Michael S. Tsirkin
2017-11-29  6:32   ` Wei Xu
2017-11-29  1:53 ` Jason Wang
2017-11-29  5:06   ` Jason Wang
2017-11-29  5:44     ` Wei Xu
