* [PATCH net,stable v2] vhost: fix skb leak in handle_rx()
@ 2017-11-29 14:23 wexu
  2017-11-29 14:43 ` Jason Wang
  2017-11-29 15:31 ` Michael S. Tsirkin
  0 siblings, 2 replies; 7+ messages in thread
From: wexu @ 2017-11-29 14:23 UTC (permalink / raw)
  To: virtualization, netdev, linux-kernel; +Cc: jasowang, mst, mjrosato, wexu

From: Wei Xu <wexu@redhat.com>

Matthew found a roughly 40% TCP throughput regression with commit
c67df11f ("vhost_net: try batch dequing from skb array"), as discussed
in the following thread:
https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html

Eventually we figured out that it was an skb leak in handle_rx()
when sending packets to the VM. It usually happens when the guest
cannot drain the vq as fast as vhost fills it; once the backlog builds
up, vhost is left with no headcount to send on the vq and the skb it
has already consumed is leaked.

This can be avoided by making sure we have enough headcount before
actually consuming an skb from the batched rx array, which is done
simply by moving the zero-headcount check ahead of the consume.

Also close the small leak window in case recvmsg() fails by freeing
the skb.

Signed-off-by: Wei Xu <wexu@redhat.com>
Reported-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
---
 drivers/vhost/net.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

v2:
- add Matthew as the reporter, thanks Matthew.
- move the zero-headcount check ahead instead of deferring the skb
  consume, per Jason's and Michael's comments.
- free the skb in case recvmsg() fails.

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 8d626d7..e302e08 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
 		/* On error, stop handling until the next kick. */
 		if (unlikely(headcount < 0))
 			goto out;
-		if (nvq->rx_array)
-			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
-		/* On overrun, truncate and discard */
-		if (unlikely(headcount > UIO_MAXIOV)) {
-			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
-			err = sock->ops->recvmsg(sock, &msg,
-						 1, MSG_DONTWAIT | MSG_TRUNC);
-			pr_debug("Discarded rx packet: len %zd\n", sock_len);
-			continue;
-		}
 		/* OK, now we need to know about added descriptors. */
 		if (!headcount) {
 			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
@@ -800,6 +790,18 @@ static void handle_rx(struct vhost_net *net)
 			 * they refilled. */
 			goto out;
 		}
+		if (nvq->rx_array)
+			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
+		/* On overrun, truncate and discard */
+		if (unlikely(headcount > UIO_MAXIOV)) {
+			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
+			err = sock->ops->recvmsg(sock, &msg,
+						 1, MSG_DONTWAIT | MSG_TRUNC);
+			if (unlikely(err != 1))
+				kfree_skb((struct sk_buff *)msg.msg_control);
+			pr_debug("Discarded rx packet: len %zd\n", sock_len);
+			continue;
+		}
 		/* We don't need to be notified again. */
 		iov_iter_init(&msg.msg_iter, READ, vq->iov, in, vhost_len);
 		fixup = msg.msg_iter;
@@ -818,6 +820,7 @@ static void handle_rx(struct vhost_net *net)
 			pr_debug("Discarded rx packet: "
 				 " len %d, expected %zd\n", err, sock_len);
 			vhost_discard_vq_desc(vq, headcount);
+			kfree_skb((struct sk_buff *)msg.msg_control);
 			continue;
 		}
 		/* Supply virtio_net_hdr if VHOST_NET_F_VIRTIO_NET_HDR */
-- 
1.8.3.1


* Re: [PATCH net,stable v2] vhost: fix skb leak in handle_rx()
  2017-11-29 14:23 [PATCH net,stable v2] vhost: fix skb leak in handle_rx() wexu
@ 2017-11-29 14:43 ` Jason Wang
  2017-11-30  4:45   ` Wei Xu
  2017-11-29 15:31 ` Michael S. Tsirkin
  1 sibling, 1 reply; 7+ messages in thread
From: Jason Wang @ 2017-11-29 14:43 UTC (permalink / raw)
  To: wexu, virtualization, netdev, linux-kernel; +Cc: mst, mjrosato



On 2017-11-29 22:23, wexu@redhat.com wrote:
> From: Wei Xu <wexu@redhat.com>
>
> Matthew found a roughly 40% tcp throughput regression with commit
> c67df11f(vhost_net: try batch dequing from skb array) as discussed
> in the following thread:
> https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
>
> Eventually we figured out that it was a skb leak in handle_rx()
> when sending packets to the VM. This usually happens when a guest
> can not drain out vq as fast as vhost fills in, afterwards it sets
> off the traffic jam and leaks skb(s) which occurs as no headcount
> to send on the vq from vhost side.
>
> This can be avoided by making sure we have got enough headcount
> before actually consuming a skb from the batched rx array while
> transmitting, which is simply done by moving checking the zero
> headcount a bit ahead.
>
> Also strengthen the small possibility of leak in case of recvmsg()
> fails by freeing the skb.
>
> Signed-off-by: Wei Xu <wexu@redhat.com>
> Reported-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
> ---
>   drivers/vhost/net.c | 23 +++++++++++++----------
>   1 file changed, 13 insertions(+), 10 deletions(-)
>
> v2:
> - add Matthew as the reporter, thanks matthew.
> - moving zero headcount check ahead instead of defer consuming skb
>    due to jason and mst's comment.
> - add freeing skb in favor of recvmsg() fails.
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 8d626d7..e302e08 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
>   		/* On error, stop handling until the next kick. */
>   		if (unlikely(headcount < 0))
>   			goto out;
> -		if (nvq->rx_array)
> -			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> -		/* On overrun, truncate and discard */
> -		if (unlikely(headcount > UIO_MAXIOV)) {
> -			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> -			err = sock->ops->recvmsg(sock, &msg,
> -						 1, MSG_DONTWAIT | MSG_TRUNC);
> -			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> -			continue;
> -		}
>   		/* OK, now we need to know about added descriptors. */
>   		if (!headcount) {
>   			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
> @@ -800,6 +790,18 @@ static void handle_rx(struct vhost_net *net)
>   			 * they refilled. */
>   			goto out;
>   		}
> +		if (nvq->rx_array)
> +			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> +		/* On overrun, truncate and discard */
> +		if (unlikely(headcount > UIO_MAXIOV)) {
> +			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> +			err = sock->ops->recvmsg(sock, &msg,
> +						 1, MSG_DONTWAIT | MSG_TRUNC);
> +			if (unlikely(err != 1))
> +				kfree_skb((struct sk_buff *)msg.msg_control);

I think we'd better fix this in tun/tap (better in another patch),
otherwise it leads to an odd API: in some cases the skb is freed in
recvmsg(), but the caller still needs to handle the remaining cases.
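
To illustrate what every caller would then have to open-code (rough
sketch only; which early path in tun/tap fails decides whether the skb
was consumed there):

	err = sock->ops->recvmsg(sock, &msg, 1, MSG_DONTWAIT | MSG_TRUNC);
	/* normal path: tun_put_user()/consume_skb() already took the skb */
	if (err != 1)
		/* early failure in tun/tap: skb was not consumed there */
		kfree_skb(msg.msg_control);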

Thanks

> +			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> +			continue;
> +		}
>   		/* We don't need to be notified again. */
>   		iov_iter_init(&msg.msg_iter, READ, vq->iov, in, vhost_len);
>   		fixup = msg.msg_iter;
> @@ -818,6 +820,7 @@ static void handle_rx(struct vhost_net *net)
>   			pr_debug("Discarded rx packet: "
>   				 " len %d, expected %zd\n", err, sock_len);
>   			vhost_discard_vq_desc(vq, headcount);
> +			kfree_skb((struct sk_buff *)msg.msg_control);
>   			continue;
>   		}
>   		/* Supply virtio_net_hdr if VHOST_NET_F_VIRTIO_NET_HDR */


* Re: [PATCH net,stable v2] vhost: fix skb leak in handle_rx()
  2017-11-29 14:23 [PATCH net,stable v2] vhost: fix skb leak in handle_rx() wexu
  2017-11-29 14:43 ` Jason Wang
@ 2017-11-29 15:31 ` Michael S. Tsirkin
  2017-11-30  2:46   ` Jason Wang
  1 sibling, 1 reply; 7+ messages in thread
From: Michael S. Tsirkin @ 2017-11-29 15:31 UTC (permalink / raw)
  To: wexu; +Cc: virtualization, netdev, linux-kernel, jasowang, mjrosato

On Wed, Nov 29, 2017 at 09:23:24AM -0500, wexu@redhat.com wrote:
> From: Wei Xu <wexu@redhat.com>
> 
> Matthew found a roughly 40% tcp throughput regression with commit
> c67df11f(vhost_net: try batch dequing from skb array) as discussed
> in the following thread:
> https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
> 
> Eventually we figured out that it was a skb leak in handle_rx()
> when sending packets to the VM. This usually happens when a guest
> can not drain out vq as fast as vhost fills in, afterwards it sets
> off the traffic jam and leaks skb(s) which occurs as no headcount
> to send on the vq from vhost side.
> 
> This can be avoided by making sure we have got enough headcount
> before actually consuming a skb from the batched rx array while
> transmitting, which is simply done by moving checking the zero
> headcount a bit ahead.
> 
> Also strengthen the small possibility of leak in case of recvmsg()
> fails by freeing the skb.
> 
> Signed-off-by: Wei Xu <wexu@redhat.com>
> Reported-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
> ---
>  drivers/vhost/net.c | 23 +++++++++++++----------
>  1 file changed, 13 insertions(+), 10 deletions(-)
> 
> v2:
> - add Matthew as the reporter, thanks matthew.
> - moving zero headcount check ahead instead of defer consuming skb
>   due to jason and mst's comment.
> - add freeing skb in favor of recvmsg() fails.
> 
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 8d626d7..e302e08 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
>  		/* On error, stop handling until the next kick. */
>  		if (unlikely(headcount < 0))
>  			goto out;
> -		if (nvq->rx_array)
> -			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> -		/* On overrun, truncate and discard */
> -		if (unlikely(headcount > UIO_MAXIOV)) {
> -			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> -			err = sock->ops->recvmsg(sock, &msg,
> -						 1, MSG_DONTWAIT | MSG_TRUNC);
> -			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> -			continue;
> -		}
>  		/* OK, now we need to know about added descriptors. */
>  		if (!headcount) {
>  			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
> @@ -800,6 +790,18 @@ static void handle_rx(struct vhost_net *net)
>  			 * they refilled. */
>  			goto out;
>  		}
> +		if (nvq->rx_array)
> +			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> +		/* On overrun, truncate and discard */
> +		if (unlikely(headcount > UIO_MAXIOV)) {
> +			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> +			err = sock->ops->recvmsg(sock, &msg,
> +						 1, MSG_DONTWAIT | MSG_TRUNC);
> +			if (unlikely(err != 1))

Why 1? How is receiving 1 byte special or even possible?
Also, I wouldn't put an unlikely here. It's all error handling code anyway.

> +				kfree_skb((struct sk_buff *)msg.msg_control);

You do not need a cast here.
Also, is it really safe to refer to msg_control here?
I'd rather keep a copy of the skb pointer and use it than assume
caller did not change it. But also see below.
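
Something like this, e.g. (untested sketch, just to show the local-copy
idea; the declaration would of course be hoisted to the top of
handle_rx() in real code):

		struct sk_buff *skb = NULL;

		if (nvq->rx_array)
			skb = vhost_net_buf_consume(&nvq->rxq);
		msg.msg_control = skb;
		/* On overrun, truncate and discard */
		if (unlikely(headcount > UIO_MAXIOV)) {
			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
			err = sock->ops->recvmsg(sock, &msg,
						 1, MSG_DONTWAIT | MSG_TRUNC);
			if (err != 1)
				kfree_skb(skb);	/* use our copy, not msg_control */
			pr_debug("Discarded rx packet: len %zd\n", sock_len);
			continue;
		}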

> +			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> +			continue;
> +		}
>  		/* We don't need to be notified again. */
>  		iov_iter_init(&msg.msg_iter, READ, vq->iov, in, vhost_len);
>  		fixup = msg.msg_iter;
> @@ -818,6 +820,7 @@ static void handle_rx(struct vhost_net *net)
>  			pr_debug("Discarded rx packet: "
>  				 " len %d, expected %zd\n", err, sock_len);
>  			vhost_discard_vq_desc(vq, headcount);
> +			kfree_skb((struct sk_buff *)msg.msg_control);

You do not need a cast here.

Also, we have

        ret = tun_put_user(tun, tfile, skb, to);
        if (unlikely(ret < 0))
                kfree_skb(skb);
        else
                consume_skb(skb);

        return ret;

So it looks like recvmsg actually always consumes the skb.
So I was wrong when I said you need to kfree it after
recv msg, and your original patch was good.

Jason, what do you think?

>  			continue;
>  		}
>  		/* Supply virtio_net_hdr if VHOST_NET_F_VIRTIO_NET_HDR */
> -- 
> 1.8.3.1


* Re: [PATCH net,stable v2] vhost: fix skb leak in handle_rx()
  2017-11-29 15:31 ` Michael S. Tsirkin
@ 2017-11-30  2:46   ` Jason Wang
  2017-11-30  9:39     ` Wei Xu
  2017-11-30 13:25     ` Michael S. Tsirkin
  0 siblings, 2 replies; 7+ messages in thread
From: Jason Wang @ 2017-11-30  2:46 UTC (permalink / raw)
  To: Michael S. Tsirkin, wexu; +Cc: virtualization, netdev, linux-kernel, mjrosato



On 2017-11-29 23:31, Michael S. Tsirkin wrote:
> On Wed, Nov 29, 2017 at 09:23:24AM -0500,wexu@redhat.com  wrote:
>> From: Wei Xu<wexu@redhat.com>
>>
>> Matthew found a roughly 40% tcp throughput regression with commit
>> c67df11f(vhost_net: try batch dequing from skb array) as discussed
>> in the following thread:
>> https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
>>
>> Eventually we figured out that it was a skb leak in handle_rx()
>> when sending packets to the VM. This usually happens when a guest
>> can not drain out vq as fast as vhost fills in, afterwards it sets
>> off the traffic jam and leaks skb(s) which occurs as no headcount
>> to send on the vq from vhost side.
>>
>> This can be avoided by making sure we have got enough headcount
>> before actually consuming a skb from the batched rx array while
>> transmitting, which is simply done by moving checking the zero
>> headcount a bit ahead.
>>
>> Also strengthen the small possibility of leak in case of recvmsg()
>> fails by freeing the skb.
>>
>> Signed-off-by: Wei Xu<wexu@redhat.com>
>> Reported-by: Matthew Rosato<mjrosato@linux.vnet.ibm.com>
>> ---
>>   drivers/vhost/net.c | 23 +++++++++++++----------
>>   1 file changed, 13 insertions(+), 10 deletions(-)
>>
>> v2:
>> - add Matthew as the reporter, thanks matthew.
>> - moving zero headcount check ahead instead of defer consuming skb
>>    due to jason and mst's comment.
>> - add freeing skb in favor of recvmsg() fails.
>>
>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>> index 8d626d7..e302e08 100644
>> --- a/drivers/vhost/net.c
>> +++ b/drivers/vhost/net.c
>> @@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
>>   		/* On error, stop handling until the next kick. */
>>   		if (unlikely(headcount < 0))
>>   			goto out;
>> -		if (nvq->rx_array)
>> -			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
>> -		/* On overrun, truncate and discard */
>> -		if (unlikely(headcount > UIO_MAXIOV)) {
>> -			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
>> -			err = sock->ops->recvmsg(sock, &msg,
>> -						 1, MSG_DONTWAIT | MSG_TRUNC);
>> -			pr_debug("Discarded rx packet: len %zd\n", sock_len);
>> -			continue;
>> -		}
>>   		/* OK, now we need to know about added descriptors. */
>>   		if (!headcount) {
>>   			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
>> @@ -800,6 +790,18 @@ static void handle_rx(struct vhost_net *net)
>>   			 * they refilled. */
>>   			goto out;
>>   		}
>> +		if (nvq->rx_array)
>> +			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
>> +		/* On overrun, truncate and discard */
>> +		if (unlikely(headcount > UIO_MAXIOV)) {
>> +			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
>> +			err = sock->ops->recvmsg(sock, &msg,
>> +						 1, MSG_DONTWAIT | MSG_TRUNC);
>> +			if (unlikely(err != 1))
> Why 1? How is receiving 1 byte special or even possible?
> Also, I wouldn't put an unlikely here. It's all error handling code anyway.
>
>> +				kfree_skb((struct sk_buff *)msg.msg_control);
> You do not need a cast here.
> Also, is it really safe to refer to msg_control here?
> I'd rather keep a copy of the skb pointer and use it than assume
> caller did not change it. But also see below.
>
>> +			pr_debug("Discarded rx packet: len %zd\n", sock_len);
>> +			continue;
>> +		}
>>   		/* We don't need to be notified again. */
>>   		iov_iter_init(&msg.msg_iter, READ, vq->iov, in, vhost_len);
>>   		fixup = msg.msg_iter;
>> @@ -818,6 +820,7 @@ static void handle_rx(struct vhost_net *net)
>>   			pr_debug("Discarded rx packet: "
>>   				 " len %d, expected %zd\n", err, sock_len);
>>   			vhost_discard_vq_desc(vq, headcount);
>> +			kfree_skb((struct sk_buff *)msg.msg_control);
> You do not need a cast here.
>
> Also, we have
>
>          ret = tun_put_user(tun, tfile, skb, to);
>          if (unlikely(ret < 0))
>                  kfree_skb(skb);
>          else
>                  consume_skb(skb);
>
>          return ret;
>
> So it looks like recvmsg actually always consumes the skb.
> So I was wrong when I said you need to kfree it after
> recv msg, and your original patch was good.
>
> Jason, what do you think?
>

tun_recvmsg() has the following check:

static int tun_recvmsg(struct socket *sock, struct msghdr *m, size_t total_len,
                int flags)
{
     struct tun_file *tfile = container_of(sock, struct tun_file, socket);
     struct tun_struct *tun = __tun_get(tfile);
     int ret;

     if (!tun)
         return -EBADFD;

     if (flags & ~(MSG_DONTWAIT|MSG_TRUNC|MSG_ERRQUEUE)) {
         ret = -EINVAL;
         goto out;
     }

And tun_do_read() has:

     if (!iov_iter_count(to))
         return 0;

So I think we need to free the skb in those cases.
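
Something like the following, maybe (just a sketch of the direction, not
even compile-tested):

static int tun_recvmsg(struct socket *sock, struct msghdr *m, size_t total_len,
                int flags)
{
     struct tun_file *tfile = container_of(sock, struct tun_file, socket);
     struct tun_struct *tun = __tun_get(tfile);
     struct sk_buff *skb = m->msg_control;
     int ret;

     if (!tun) {
         ret = -EBADFD;
         goto out_free_skb;
     }

     if (flags & ~(MSG_DONTWAIT|MSG_TRUNC|MSG_ERRQUEUE)) {
         ret = -EINVAL;
         goto out_put_tun;
     }

     /* rest of the function unchanged; tun_put_user() consumes the skb
      * on the normal read path before we ever get here
      */
     ...
     tun_put(tun);
     return ret;

out_put_tun:
     tun_put(tun);
out_free_skb:
     if (skb)
         kfree_skb(skb);
     return ret;
}

plus freeing the skb in the !iov_iter_count(to) case in tun_do_read().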

Thanks


* Re: [PATCH net,stable v2] vhost: fix skb leak in handle_rx()
  2017-11-29 14:43 ` Jason Wang
@ 2017-11-30  4:45   ` Wei Xu
  0 siblings, 0 replies; 7+ messages in thread
From: Wei Xu @ 2017-11-30  4:45 UTC (permalink / raw)
  To: Jason Wang; +Cc: virtualization, netdev, linux-kernel, mst, mjrosato

On Wed, Nov 29, 2017 at 10:43:33PM +0800, Jason Wang wrote:
> 
> 
> On 2017-11-29 22:23, wexu@redhat.com wrote:
> > From: Wei Xu <wexu@redhat.com>
> > 
> > Matthew found a roughly 40% tcp throughput regression with commit
> > c67df11f(vhost_net: try batch dequing from skb array) as discussed
> > in the following thread:
> > https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
> > 
> > Eventually we figured out that it was a skb leak in handle_rx()
> > when sending packets to the VM. This usually happens when a guest
> > can not drain out vq as fast as vhost fills in, afterwards it sets
> > off the traffic jam and leaks skb(s) which occurs as no headcount
> > to send on the vq from vhost side.
> > 
> > This can be avoided by making sure we have got enough headcount
> > before actually consuming a skb from the batched rx array while
> > transmitting, which is simply done by moving checking the zero
> > headcount a bit ahead.
> > 
> > Also strengthen the small possibility of leak in case of recvmsg()
> > fails by freeing the skb.
> > 
> > Signed-off-by: Wei Xu <wexu@redhat.com>
> > Reported-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
> > ---
> >   drivers/vhost/net.c | 23 +++++++++++++----------
> >   1 file changed, 13 insertions(+), 10 deletions(-)
> > 
> > v2:
> > - add Matthew as the reporter, thanks matthew.
> > - moving zero headcount check ahead instead of defer consuming skb
> >    due to jason and mst's comment.
> > - add freeing skb in favor of recvmsg() fails.
> > 
> > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > index 8d626d7..e302e08 100644
> > --- a/drivers/vhost/net.c
> > +++ b/drivers/vhost/net.c
> > @@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
> >   		/* On error, stop handling until the next kick. */
> >   		if (unlikely(headcount < 0))
> >   			goto out;
> > -		if (nvq->rx_array)
> > -			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> > -		/* On overrun, truncate and discard */
> > -		if (unlikely(headcount > UIO_MAXIOV)) {
> > -			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> > -			err = sock->ops->recvmsg(sock, &msg,
> > -						 1, MSG_DONTWAIT | MSG_TRUNC);
> > -			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> > -			continue;
> > -		}
> >   		/* OK, now we need to know about added descriptors. */
> >   		if (!headcount) {
> >   			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
> > @@ -800,6 +790,18 @@ static void handle_rx(struct vhost_net *net)
> >   			 * they refilled. */
> >   			goto out;
> >   		}
> > +		if (nvq->rx_array)
> > +			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> > +		/* On overrun, truncate and discard */
> > +		if (unlikely(headcount > UIO_MAXIOV)) {
> > +			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> > +			err = sock->ops->recvmsg(sock, &msg,
> > +						 1, MSG_DONTWAIT | MSG_TRUNC);
> > +			if (unlikely(err != 1))
> > +				kfree_skb((struct sk_buff *)msg.msg_control);
> 
> I think we'd better fix this in tun/tap (better in another patch) otherwise
> it lead to an odd API: some case skb were freed in recvmsg() but caller
> still need to deal with the rest case.

Right, it is better to handle it in recvmsg().

Wei


* Re: [PATCH net,stable v2] vhost: fix skb leak in handle_rx()
  2017-11-30  2:46   ` Jason Wang
@ 2017-11-30  9:39     ` Wei Xu
  2017-11-30 13:25     ` Michael S. Tsirkin
  1 sibling, 0 replies; 7+ messages in thread
From: Wei Xu @ 2017-11-30  9:39 UTC (permalink / raw)
  To: Jason Wang
  Cc: Michael S. Tsirkin, virtualization, netdev, linux-kernel, mjrosato

On Thu, Nov 30, 2017 at 10:46:17AM +0800, Jason Wang wrote:
> 
> 
> On 2017-11-29 23:31, Michael S. Tsirkin wrote:
> > On Wed, Nov 29, 2017 at 09:23:24AM -0500,wexu@redhat.com  wrote:
> > > From: Wei Xu<wexu@redhat.com>
> > > 
> > > Matthew found a roughly 40% tcp throughput regression with commit
> > > c67df11f(vhost_net: try batch dequing from skb array) as discussed
> > > in the following thread:
> > > https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
> > > 
> > > Eventually we figured out that it was a skb leak in handle_rx()
> > > when sending packets to the VM. This usually happens when a guest
> > > can not drain out vq as fast as vhost fills in, afterwards it sets
> > > off the traffic jam and leaks skb(s) which occurs as no headcount
> > > to send on the vq from vhost side.
> > > 
> > > This can be avoided by making sure we have got enough headcount
> > > before actually consuming a skb from the batched rx array while
> > > transmitting, which is simply done by moving checking the zero
> > > headcount a bit ahead.
> > > 
> > > Also strengthen the small possibility of leak in case of recvmsg()
> > > fails by freeing the skb.
> > > 
> > > Signed-off-by: Wei Xu<wexu@redhat.com>
> > > Reported-by: Matthew Rosato<mjrosato@linux.vnet.ibm.com>
> > > ---
> > >   drivers/vhost/net.c | 23 +++++++++++++----------
> > >   1 file changed, 13 insertions(+), 10 deletions(-)
> > > 
> > > v2:
> > > - add Matthew as the reporter, thanks matthew.
> > > - moving zero headcount check ahead instead of defer consuming skb
> > >    due to jason and mst's comment.
> > > - add freeing skb in favor of recvmsg() fails.
> > > 
> > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > > index 8d626d7..e302e08 100644
> > > --- a/drivers/vhost/net.c
> > > +++ b/drivers/vhost/net.c
> > > @@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
> > >   		/* On error, stop handling until the next kick. */
> > >   		if (unlikely(headcount < 0))
> > >   			goto out;
> > > -		if (nvq->rx_array)
> > > -			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> > > -		/* On overrun, truncate and discard */
> > > -		if (unlikely(headcount > UIO_MAXIOV)) {
> > > -			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> > > -			err = sock->ops->recvmsg(sock, &msg,
> > > -						 1, MSG_DONTWAIT | MSG_TRUNC);
> > > -			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> > > -			continue;
> > > -		}
> > >   		/* OK, now we need to know about added descriptors. */
> > >   		if (!headcount) {
> > >   			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
> > > @@ -800,6 +790,18 @@ static void handle_rx(struct vhost_net *net)
> > >   			 * they refilled. */
> > >   			goto out;
> > >   		}
> > > +		if (nvq->rx_array)
> > > +			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> > > +		/* On overrun, truncate and discard */
> > > +		if (unlikely(headcount > UIO_MAXIOV)) {
> > > +			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> > > +			err = sock->ops->recvmsg(sock, &msg,
> > > +						 1, MSG_DONTWAIT | MSG_TRUNC);
> > > +			if (unlikely(err != 1))
> > Why 1? How is receiving 1 byte special or even possible?
> > Also, I wouldn't put an unlikely here. It's all error handling code anyway.

Vhost is dropping the skb by invoking a 1-byte recvmsg() here, and it
is a bit odd to free the skb afterwards since in most cases it would
already have been freed inside recvmsg(); the return value does not
mean much either.

> > 
> > > +				kfree_skb((struct sk_buff *)msg.msg_control);
> > You do not need a cast here.

Yes, exactly, I missed it.

> > Also, is it really safe to refer to msg_control here?
> > I'd rather keep a copy of the skb pointer and use it than assume
> > caller did not change it. But also see below.

It should be safe since msg is a local variable here and the callee has
no chance to modify it. The exception would be if rx_array were not used
by vhost, in which case it becomes uncertain, but I don't know what that
case would be. Isn't vhost using rx_array for all kinds of devices? Does
anything ring a bell?

> > 
> > > +			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> > > +			continue;
> > > +		}
> > >   		/* We don't need to be notified again. */
> > >   		iov_iter_init(&msg.msg_iter, READ, vq->iov, in, vhost_len);
> > >   		fixup = msg.msg_iter;
> > > @@ -818,6 +820,7 @@ static void handle_rx(struct vhost_net *net)
> > >   			pr_debug("Discarded rx packet: "
> > >   				 " len %d, expected %zd\n", err, sock_len);
> > >   			vhost_discard_vq_desc(vq, headcount);
> > > +			kfree_skb((struct sk_buff *)msg.msg_control);
> > You do not need a cast here.
> > 
> > Also, we have
> > 
> >          ret = tun_put_user(tun, tfile, skb, to);
> >          if (unlikely(ret < 0))
> >                  kfree_skb(skb);
> >          else
> >                  consume_skb(skb);
> > 
> >          return ret;
> > 
> > So it looks like recvmsg actually always consumes the skb.
> > So I was wrong when I said you need to kfree it after
> > recv msg, and your original patch was good.

OK, I will repost it.

BTW, per Jason's comments below, vhost already passes the proper flags,
iov, etc. to both the tun and tap devices, so those paths would not be
an issue.

The only case that might be missed is removing tun dynamically while
there is traffic from vhost, though I am not sure how this can be
reproduced. I remember someone reported a similar issue when repeatedly
creating and destroying 1000+ VMs simultaneously. Have those kinds of
issues been fixed already, or could they still show up?

     if (!tun)
         return -EBADFD;

Anyway, what about doing it in recvmsg() and in another patch later on?

Wei

> > 
> > Jason, what do you think?
> > 
> 
> tun_recvmsg() has the following check:
> 
> static int tun_recvmsg(struct socket *sock, struct msghdr *m, size_t
> total_len,
>                int flags)
> {
>     struct tun_file *tfile = container_of(sock, struct tun_file, socket);
>     struct tun_struct *tun = __tun_get(tfile);
>     int ret;
> 
>     if (!tun)
>         return -EBADFD;
> 
>     if (flags & ~(MSG_DONTWAIT|MSG_TRUNC|MSG_ERRQUEUE)) {
>         ret = -EINVAL;
>         goto out;
>     }
> 
> And tun_do_read() has:
> 
>     if (!iov_iter_count(to))
>         return 0;
> 
> So I think we need free skb in those cases.
> 
> Thanks


* Re: [PATCH net,stable v2] vhost: fix skb leak in handle_rx()
  2017-11-30  2:46   ` Jason Wang
  2017-11-30  9:39     ` Wei Xu
@ 2017-11-30 13:25     ` Michael S. Tsirkin
  1 sibling, 0 replies; 7+ messages in thread
From: Michael S. Tsirkin @ 2017-11-30 13:25 UTC (permalink / raw)
  To: Jason Wang; +Cc: wexu, virtualization, netdev, linux-kernel, mjrosato

On Thu, Nov 30, 2017 at 10:46:17AM +0800, Jason Wang wrote:
> 
> 
> On 2017-11-29 23:31, Michael S. Tsirkin wrote:
> > On Wed, Nov 29, 2017 at 09:23:24AM -0500,wexu@redhat.com  wrote:
> > > From: Wei Xu<wexu@redhat.com>
> > > 
> > > Matthew found a roughly 40% tcp throughput regression with commit
> > > c67df11f(vhost_net: try batch dequing from skb array) as discussed
> > > in the following thread:
> > > https://www.mail-archive.com/netdev@vger.kernel.org/msg187936.html
> > > 
> > > Eventually we figured out that it was a skb leak in handle_rx()
> > > when sending packets to the VM. This usually happens when a guest
> > > can not drain out vq as fast as vhost fills in, afterwards it sets
> > > off the traffic jam and leaks skb(s) which occurs as no headcount
> > > to send on the vq from vhost side.
> > > 
> > > This can be avoided by making sure we have got enough headcount
> > > before actually consuming a skb from the batched rx array while
> > > transmitting, which is simply done by moving checking the zero
> > > headcount a bit ahead.
> > > 
> > > Also strengthen the small possibility of leak in case of recvmsg()
> > > fails by freeing the skb.
> > > 
> > > Signed-off-by: Wei Xu<wexu@redhat.com>
> > > Reported-by: Matthew Rosato<mjrosato@linux.vnet.ibm.com>
> > > ---
> > >   drivers/vhost/net.c | 23 +++++++++++++----------
> > >   1 file changed, 13 insertions(+), 10 deletions(-)
> > > 
> > > v2:
> > > - add Matthew as the reporter, thanks matthew.
> > > - moving zero headcount check ahead instead of defer consuming skb
> > >    due to jason and mst's comment.
> > > - add freeing skb in favor of recvmsg() fails.
> > > 
> > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > > index 8d626d7..e302e08 100644
> > > --- a/drivers/vhost/net.c
> > > +++ b/drivers/vhost/net.c
> > > @@ -778,16 +778,6 @@ static void handle_rx(struct vhost_net *net)
> > >   		/* On error, stop handling until the next kick. */
> > >   		if (unlikely(headcount < 0))
> > >   			goto out;
> > > -		if (nvq->rx_array)
> > > -			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> > > -		/* On overrun, truncate and discard */
> > > -		if (unlikely(headcount > UIO_MAXIOV)) {
> > > -			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> > > -			err = sock->ops->recvmsg(sock, &msg,
> > > -						 1, MSG_DONTWAIT | MSG_TRUNC);
> > > -			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> > > -			continue;
> > > -		}
> > >   		/* OK, now we need to know about added descriptors. */
> > >   		if (!headcount) {
> > >   			if (unlikely(vhost_enable_notify(&net->dev, vq))) {
> > > @@ -800,6 +790,18 @@ static void handle_rx(struct vhost_net *net)
> > >   			 * they refilled. */
> > >   			goto out;
> > >   		}
> > > +		if (nvq->rx_array)
> > > +			msg.msg_control = vhost_net_buf_consume(&nvq->rxq);
> > > +		/* On overrun, truncate and discard */
> > > +		if (unlikely(headcount > UIO_MAXIOV)) {
> > > +			iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1);
> > > +			err = sock->ops->recvmsg(sock, &msg,
> > > +						 1, MSG_DONTWAIT | MSG_TRUNC);
> > > +			if (unlikely(err != 1))
> > Why 1? How is receiving 1 byte special or even possible?
> > Also, I wouldn't put an unlikely here. It's all error handling code anyway.
> > 
> > > +				kfree_skb((struct sk_buff *)msg.msg_control);
> > You do not need a cast here.
> > Also, is it really safe to refer to msg_control here?
> > I'd rather keep a copy of the skb pointer and use it than assume
> > caller did not change it. But also see below.
> > 
> > > +			pr_debug("Discarded rx packet: len %zd\n", sock_len);
> > > +			continue;
> > > +		}
> > >   		/* We don't need to be notified again. */
> > >   		iov_iter_init(&msg.msg_iter, READ, vq->iov, in, vhost_len);
> > >   		fixup = msg.msg_iter;
> > > @@ -818,6 +820,7 @@ static void handle_rx(struct vhost_net *net)
> > >   			pr_debug("Discarded rx packet: "
> > >   				 " len %d, expected %zd\n", err, sock_len);
> > >   			vhost_discard_vq_desc(vq, headcount);
> > > +			kfree_skb((struct sk_buff *)msg.msg_control);
> > You do not need a cast here.
> > 
> > Also, we have
> > 
> >          ret = tun_put_user(tun, tfile, skb, to);
> >          if (unlikely(ret < 0))
> >                  kfree_skb(skb);
> >          else
> >                  consume_skb(skb);
> > 
> >          return ret;
> > 
> > So it looks like recvmsg actually always consumes the skb.
> > So I was wrong when I said you need to kfree it after
> > recv msg, and your original patch was good.
> > 
> > Jason, what do you think?
> > 
> 
> tun_recvmsg() has the following check:
> 
> static int tun_recvmsg(struct socket *sock, struct msghdr *m, size_t
> total_len,
>                int flags)
> {
>     struct tun_file *tfile = container_of(sock, struct tun_file, socket);
>     struct tun_struct *tun = __tun_get(tfile);
>     int ret;
> 
>     if (!tun)
>         return -EBADFD;
> 
>     if (flags & ~(MSG_DONTWAIT|MSG_TRUNC|MSG_ERRQUEUE)) {
>         ret = -EINVAL;
>         goto out;
>     }
>
> And tun_do_read() has:
> 
>     if (!iov_iter_count(to))
>         return 0;
> 
> So I think we need free skb in those cases.
> 
> Thanks

So it's a mess for callers. Let's free within tun then?

-- 
MST


