* [PATCH v2 1/1] vhost: Added pad cleanup if vnet_hdr is not present.
@ 2024-03-27 23:18 Andrew Melnychenko
2024-03-28 4:02 ` Jason Wang
0 siblings, 1 reply; 3+ messages in thread
From: Andrew Melnychenko @ 2024-03-27 23:18 UTC (permalink / raw)
To: mst, jasowang, ast, daniel, davem, kuba, hawk, john.fastabend,
kvm, virtualization, netdev, linux-kernel, bpf
Cc: yuri.benditovich, yan
When QEMU is launched with vhost but without a tap vnet_hdr,
vhost tries to copy a vnet_hdr of size 0 from the socket iterator
to a page that may contain stale data.
That stale data can be interpreted as unpredictable vnet_hdr
values, which leads to some packets being dropped and, in some
cases, to the vhost routine stalling when vhost_net tries to
process packets and fails in a loop.
QEMU options:
-netdev tap,vhost=on,vnet_hdr=off,...
From a security point of view, the bogus field values are used
later in tap's tap_get_user_xdp() and will affect skb GSO and
options. Afterwards, the header (and data in the headroom) should
not be used by the stack. Using a custom socket as a backend to
vhost_net can reveal some data in the vnet_hdr, although
exploiting this would require kernel access.
The issue happens because the value of sock_len in the virtqueue
is 0. That value is set in vhost_net_set_features() with
VHOST_NET_F_VIRTIO_NET_HDR; it is also set to zero in the device
open() and reset() routines. So, currently, to trigger the issue,
QEMU must be set up with vhost=on,vnet_hdr=off, or vhost must be
left unconfigured in a custom program.
Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
---
drivers/vhost/net.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index f2ed7167c848..57411ac2d08b 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -735,6 +735,9 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
hdr = buf;
gso = &hdr->gso;
+ if (!sock_hlen)
+ memset(buf, 0, pad);
+
if ((gso->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) &&
vhost16_to_cpu(vq, gso->csum_start) +
vhost16_to_cpu(vq, gso->csum_offset) + 2 >
--
2.43.0
* Re: [PATCH v2 1/1] vhost: Added pad cleanup if vnet_hdr is not present.
2024-03-27 23:18 [PATCH v2 1/1] vhost: Added pad cleanup if vnet_hdr is not present Andrew Melnychenko
@ 2024-03-28 4:02 ` Jason Wang
2024-03-28 7:46 ` Andrew Melnichenko
0 siblings, 1 reply; 3+ messages in thread
From: Jason Wang @ 2024-03-28 4:02 UTC (permalink / raw)
To: Andrew Melnychenko
Cc: mst, ast, daniel, davem, kuba, hawk, john.fastabend, kvm,
virtualization, netdev, linux-kernel, bpf, yuri.benditovich, yan
On Thu, Mar 28, 2024 at 7:44 AM Andrew Melnychenko <andrew@daynix.com> wrote:
>
> When QEMU is launched with vhost but without a tap vnet_hdr,
> vhost tries to copy a vnet_hdr of size 0 from the socket iterator
> to a page that may contain stale data.
> That stale data can be interpreted as unpredictable vnet_hdr
> values, which leads to some packets being dropped and, in some
> cases, to the vhost routine stalling when vhost_net tries to
> process packets and fails in a loop.
>
> QEMU options:
> -netdev tap,vhost=on,vnet_hdr=off,...
>
> From a security point of view, the bogus field values are used
> later in tap's tap_get_user_xdp() and will affect skb GSO and
> options. Afterwards, the header (and data in the headroom) should
> not be used by the stack. Using a custom socket as a backend to
> vhost_net can reveal some data in the vnet_hdr, although
> exploiting this would require kernel access.
>
> The issue happens because the value of sock_len in the virtqueue
> is 0. That value is set in vhost_net_set_features() with
> VHOST_NET_F_VIRTIO_NET_HDR; it is also set to zero in the device
> open() and reset() routines. So, currently, to trigger the issue,
> QEMU must be set up with vhost=on,vnet_hdr=off, or vhost must be
> left unconfigured in a custom program.
>
> Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
Acked-by: Jason Wang <jasowang@redhat.com>
It seems it has been merged by Michael.
Thanks
> ---
> drivers/vhost/net.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index f2ed7167c848..57411ac2d08b 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -735,6 +735,9 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
> hdr = buf;
> gso = &hdr->gso;
>
> + if (!sock_hlen)
> + memset(buf, 0, pad);
> +
> if ((gso->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) &&
> vhost16_to_cpu(vq, gso->csum_start) +
> vhost16_to_cpu(vq, gso->csum_offset) + 2 >
> --
> 2.43.0
>
* Re: [PATCH v2 1/1] vhost: Added pad cleanup if vnet_hdr is not present.
2024-03-28 4:02 ` Jason Wang
@ 2024-03-28 7:46 ` Andrew Melnichenko
0 siblings, 0 replies; 3+ messages in thread
From: Andrew Melnichenko @ 2024-03-28 7:46 UTC (permalink / raw)
To: Jason Wang
Cc: mst, ast, daniel, davem, kuba, hawk, john.fastabend, kvm,
virtualization, netdev, linux-kernel, bpf, yuri.benditovich, yan
Thanks, I'll look into it.
On Thu, Mar 28, 2024 at 6:03 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Thu, Mar 28, 2024 at 7:44 AM Andrew Melnychenko <andrew@daynix.com> wrote:
> >
> > When QEMU is launched with vhost but without a tap vnet_hdr,
> > vhost tries to copy a vnet_hdr of size 0 from the socket iterator
> > to a page that may contain stale data.
> > That stale data can be interpreted as unpredictable vnet_hdr
> > values, which leads to some packets being dropped and, in some
> > cases, to the vhost routine stalling when vhost_net tries to
> > process packets and fails in a loop.
> >
> > QEMU options:
> > -netdev tap,vhost=on,vnet_hdr=off,...
> >
> > From a security point of view, the bogus field values are used
> > later in tap's tap_get_user_xdp() and will affect skb GSO and
> > options. Afterwards, the header (and data in the headroom) should
> > not be used by the stack. Using a custom socket as a backend to
> > vhost_net can reveal some data in the vnet_hdr, although
> > exploiting this would require kernel access.
> >
> > The issue happens because the value of sock_len in the virtqueue
> > is 0. That value is set in vhost_net_set_features() with
> > VHOST_NET_F_VIRTIO_NET_HDR; it is also set to zero in the device
> > open() and reset() routines. So, currently, to trigger the issue,
> > QEMU must be set up with vhost=on,vnet_hdr=off, or vhost must be
> > left unconfigured in a custom program.
> >
> > Signed-off-by: Andrew Melnychenko <andrew@daynix.com>
>
> Acked-by: Jason Wang <jasowang@redhat.com>
>
> It seems it has been merged by Michael.
>
> Thanks
>
> > ---
> > drivers/vhost/net.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> > index f2ed7167c848..57411ac2d08b 100644
> > --- a/drivers/vhost/net.c
> > +++ b/drivers/vhost/net.c
> > @@ -735,6 +735,9 @@ static int vhost_net_build_xdp(struct vhost_net_virtqueue *nvq,
> > hdr = buf;
> > gso = &hdr->gso;
> >
> > + if (!sock_hlen)
> > + memset(buf, 0, pad);
> > +
> > if ((gso->flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) &&
> > vhost16_to_cpu(vq, gso->csum_start) +
> > vhost16_to_cpu(vq, gso->csum_offset) + 2 >
> > --
> > 2.43.0
> >
>