* [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
From: Stefano Garzarella @ 2019-07-10 15:37 UTC
  To: Michael S. Tsirkin, Jason Wang, Stefan Hajnoczi; +Cc: virtualization, netdev

Hi,
as Jason suggested some months ago, I took a closer look at the virtio-net driver
to understand whether we can reuse some of its parts in the virtio-vsock driver,
since we face similar challenges (mergeable buffers, page allocation, small
packets, etc.).

Initially, I would introduce skbuffs in virtio-vsock in order to reuse the
receive_*() functions.
Then I would move receive_[small, big, mergeable]() and
add_recvbuf_[small, big, mergeable]() out of the virtio-net driver, so that they
can also be called from virtio-vsock. This requires some refactoring (e.g.
leaving the XDP part in the virtio-net driver), but I think it is feasible.

The idea is to create a virtio-skb.[h,c] that holds these functions, plus a new
object that stores the required attributes (e.g. hdr_len) and state (e.g. some
fields of struct receive_queue). This is a sketch of the virtio-skb.h that
I have in mind:
    struct virtskb;

    struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
    struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
    struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);

    int virtskb_add_recvbuf_small(struct virtskb *vs, ...);
    int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
    int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);

The Guest->Host path should be easier, so maybe I can add a
"virtskb_send(struct virtskb *vs, struct sk_buff *skb)" containing part of the
code of xmit_skb().
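
To make this more concrete, here is a rough sketch of what virtskb_send()
could look like (untested; the 'svq' and 'hdr_len' fields of virtskb are
hypothetical, just to illustrate the direction):

    /* Rough sketch, not working code: push the device-specific header
     * in front of the payload, map the skb to a scatterlist and add
     * it to the TX virtqueue, roughly what xmit_skb() does today for
     * virtio-net. */
    int virtskb_send(struct virtskb *vs, struct sk_buff *skb)
    {
        struct scatterlist sg[MAX_SKB_FRAGS + 2];
        int num_sg;

        /* vs->hdr_len would be the virtio_net_hdr size for virtio-net
         * and the virtio_vsock_hdr size for virtio-vsock; the caller
         * fills the header. */
        skb_push(skb, vs->hdr_len);

        sg_init_table(sg, ARRAY_SIZE(sg));
        num_sg = skb_to_sgvec(skb, sg, 0, skb->len);
        if (num_sg < 0)
            return num_sg;

        return virtqueue_add_outbuf(vs->svq, sg, num_sg, skb, GFP_ATOMIC);
    }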

Let me know if you have better names in mind, or if I should put these
functions somewhere else.

I would like to keep the control part completely separate, so, for example,
the two drivers would negotiate their features independently and call
the right virtskb_receive_*() function based on the negotiation.

I have already started working on it, but before going further and sending an
RFC patch, I would like to hear your opinion.
Do you think that makes sense?
Do you see any issue or a better solution?

Thanks in advance,
Stefano

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
From: Jason Wang @ 2019-07-11  7:37 UTC
  To: Stefano Garzarella, Michael S. Tsirkin, Stefan Hajnoczi
  Cc: virtualization, netdev


On 2019/7/10 11:37 PM, Stefano Garzarella wrote:
> Hi,
> as Jason suggested some months ago, I took a closer look at the virtio-net driver
> to understand whether we can reuse some of its parts in the virtio-vsock driver,
> since we face similar challenges (mergeable buffers, page allocation, small
> packets, etc.).
>
> Initially, I would introduce skbuffs in virtio-vsock in order to reuse the
> receive_*() functions.


Yes, that will be a good step.


> Then I would move receive_[small, big, mergeable]() and
> add_recvbuf_[small, big, mergeable]() out of the virtio-net driver, so that they
> can also be called from virtio-vsock. This requires some refactoring (e.g.
> leaving the XDP part in the virtio-net driver), but I think it is feasible.
>
> The idea is to create a virtio-skb.[h,c] that holds these functions, plus a new
> object that stores the required attributes (e.g. hdr_len) and state (e.g. some
> fields of struct receive_queue).


My understanding is that we could be more ambitious here. Do you see any 
blocker to reusing virtio-net directly? It's better to reuse not only 
the functions but also the logic, like NAPI, to avoid re-inventing 
something buggy and duplicated.


> This is a sketch of the virtio-skb.h that
> I have in mind:
>      struct virtskb;


What fields do you want to store in virtskb? It looks like the existing 
sk_buff is flexible enough for us?


>
>      struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
>      struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
>      struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);
>
>      int virtskb_add_recvbuf_small(struct virtskb *vs, ...);
>      int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
>      int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);
>
> The Guest->Host path should be easier, so maybe I can add a
> "virtskb_send(struct virtskb *vs, struct sk_buff *skb)" containing part of the
> code of xmit_skb().


I may be missing something, but I don't see anything that prevents us from 
using xmit_skb() directly.


>
> Let me know if you have better names in mind, or if I should put these
> functions somewhere else.
>
> I would like to keep the control part completely separate, so, for example,
> the two drivers would negotiate their features independently and call
> the right virtskb_receive_*() function based on the negotiation.


If it's only an issue of negotiation, we can simply change 
virtnet_probe() to deal with different devices.
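
For example (hypothetical, untested):

    /* Illustrative only: probe could branch on the device ID and
     * skip the netdev-specific setup when driving a vsock device. */
    static int virtnet_probe(struct virtio_device *vdev)
    {
        bool is_vsock = vdev->id.device == VIRTIO_ID_VSOCK;

        /* ... common virtqueue and feature setup ... */

        if (!is_vsock) {
            /* register_netdev(), ethtool ops, MAC/MTU handling, ... */
        }

        return 0;
    }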


>
> I have already started working on it, but before going further and sending an
> RFC patch, I would like to hear your opinion.
> Do you think that makes sense?
> Do you see any issue or a better solution?


I still think we should seek a way of adding the code to virtio-net.c 
directly if there's no huge difference in the TX/RX processing. That 
would save us a lot of time.

Thanks


>
> Thanks in advance,
> Stefano

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
From: Stefano Garzarella @ 2019-07-11 11:41 UTC
  To: Jason Wang, Michael S. Tsirkin, Stefan Hajnoczi; +Cc: virtualization, netdev

On Thu, Jul 11, 2019 at 03:37:00PM +0800, Jason Wang wrote:
> 
> On 2019/7/10 11:37 PM, Stefano Garzarella wrote:
> > Hi,
> > as Jason suggested some months ago, I took a closer look at the virtio-net driver
> > to understand whether we can reuse some of its parts in the virtio-vsock driver,
> > since we face similar challenges (mergeable buffers, page allocation, small
> > packets, etc.).
> > 
> > Initially, I would introduce skbuffs in virtio-vsock in order to reuse the
> > receive_*() functions.
> 
> 
> Yes, that will be a good step.
> 

Okay, I'll proceed this way.

> 
> > Then I would move receive_[small, big, mergeable]() and
> > add_recvbuf_[small, big, mergeable]() out of the virtio-net driver, so that they
> > can also be called from virtio-vsock. This requires some refactoring (e.g.
> > leaving the XDP part in the virtio-net driver), but I think it is feasible.
> > 
> > The idea is to create a virtio-skb.[h,c] that holds these functions, plus a new
> > object that stores the required attributes (e.g. hdr_len) and state (e.g. some
> > fields of struct receive_queue).
> 
> 
> My understanding is that we could be more ambitious here. Do you see any blocker
> to reusing virtio-net directly? It's better to reuse not only the functions
> but also the logic, like NAPI, to avoid re-inventing something buggy and
> duplicated.
> 

These are my concerns:
- virtio-vsock is not a "net_device", so a lot of code related to
  ethtool and net devices (MAC address, MTU, speed, VLAN, XDP, offloading)
  will not be used by virtio-vsock.

- virtio-vsock has a different header. We can consider it part of the
  virtio_net payload, but that precludes compatibility with old hosts. This
  was one of the major doubts that made me think of reusing only the
  send/recv skbuff functions, which shouldn't break compatibility.

> 
> > This is a sketch of the virtio-skb.h that
> > I have in mind:
> >      struct virtskb;
> 
> 
> What fields do you want to store in virtskb? It looks like the existing sk_buff is
> flexible enough for us?

My idea is to store queue information, like struct receive_queue or
struct send_queue, and some device attributes (e.g. hdr_len).
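
Something like this (purely illustrative, the field names are hypothetical):

    /* Illustrative only: the per-device state that virtio-net keeps
     * today in struct virtnet_info / struct receive_queue and that
     * virtio-vsock would also need. */
    struct virtskb {
        struct virtio_device *vdev;
        struct virtqueue *rvq;               /* receive virtqueue */
        struct virtqueue *svq;               /* send virtqueue */
        unsigned int hdr_len;                /* device-specific header length */
        struct page *pages;                  /* page chain for big packets */
        struct ewma_pkt_len mrg_avg_pkt_len; /* avg length for mergeable bufs */
    };
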

> 
> 
> > 
> >      struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
> >      struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
> >      struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);
> > 
> >      int virtskb_add_recvbuf_small(struct virtskb *vs, ...);
> >      int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
> >      int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);
> > 
> > The Guest->Host path should be easier, so maybe I can add a
> > "virtskb_send(struct virtskb *vs, struct sk_buff *skb)" containing part of the
> > code of xmit_skb().
> 
> 
> I may be missing something, but I don't see anything that prevents us from using
> xmit_skb() directly.
> 

Yes, but my initial idea was to make it more parametric and not tied to the
virtio_net_hdr, so 'hdr_len' could be a parameter and 'num_buffers' would be
handled by the caller.

> 
> > 
> > Let me know if you have better names in mind, or if I should put these
> > functions somewhere else.
> > 
> > I would like to keep the control part completely separate, so, for example,
> > the two drivers would negotiate their features independently and call
> > the right virtskb_receive_*() function based on the negotiation.
> 
> 
> If it's only an issue of negotiation, we can simply change
> virtnet_probe() to deal with different devices.
> 
> 
> > 
> > I have already started working on it, but before going further and sending an
> > RFC patch, I would like to hear your opinion.
> > Do you think that makes sense?
> > Do you see any issue or a better solution?
> 
> 
> I still think we should seek a way of adding the code to virtio-net.c
> directly if there's no huge difference in the TX/RX processing. That would
> save us a lot of time.

After reading the buffers from the virtqueue, I think the process
is slightly different, because virtio-net interfaces with the network
stack, while virtio-vsock interfaces with the vsock core (sockets).
So virtio-vsock implements the following:
- a flow control mechanism to avoid losing packets, informing the peer
  about the amount of memory available in the receive queue using some
  fields in the virtio_vsock_hdr (see the layout below)
- de-multiplexing, parsing the virtio_vsock_hdr and choosing the right
  socket depending on the port
- socket state handling
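
For reference, this is the header layout (from
include/uapi/linux/virtio_vsock.h); buf_alloc and fwd_cnt carry the credit
information, while src_port/dst_port drive the de-multiplexing:

    struct virtio_vsock_hdr {
        __le64  src_cid;
        __le64  dst_cid;
        __le32  src_port;
        __le32  dst_port;
        __le32  len;
        __le16  type;       /* enum virtio_vsock_type */
        __le16  op;         /* enum virtio_vsock_op */
        __le32  flags;
        __le32  buf_alloc;  /* receive buffer space, for flow control */
        __le32  fwd_cnt;    /* bytes consumed, for flow control */
    } __attribute__((packed));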

We could use virtio-net as the transport, but we would have to add a lot of
code to skip the "net device" stuff when it is used by virtio-vsock.
This could break something in virtio-net, so I thought of reusing
only the send/recv functions, starting from the idea of splitting the virtio-net
driver in two parts:
a. one with all the stuff related to the network stack
b. one with the stuff needed to communicate with the host

and using skbuffs to communicate between the parts. In this way, virtio-vsock
would use only part b.

Maybe we can do this split in a better way, but I'm not sure it is
simple.

Thanks,
Stefano

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
From: Michael S. Tsirkin @ 2019-07-11 19:52 UTC
  To: Stefano Garzarella; +Cc: Jason Wang, Stefan Hajnoczi, virtualization, netdev

On Thu, Jul 11, 2019 at 01:41:34PM +0200, Stefano Garzarella wrote:
> On Thu, Jul 11, 2019 at 03:37:00PM +0800, Jason Wang wrote:
> > 
> > On 2019/7/10 11:37 PM, Stefano Garzarella wrote:
> > > Hi,
> > > as Jason suggested some months ago, I took a closer look at the virtio-net driver
> > > to understand whether we can reuse some of its parts in the virtio-vsock driver,
> > > since we face similar challenges (mergeable buffers, page allocation, small
> > > packets, etc.).
> > > 
> > > Initially, I would introduce skbuffs in virtio-vsock in order to reuse the
> > > receive_*() functions.
> > 
> > 
> > Yes, that will be a good step.
> > 
> 
> Okay, I'll proceed this way.
> 
> > 
> > > Then I would move receive_[small, big, mergeable]() and
> > > add_recvbuf_[small, big, mergeable]() out of the virtio-net driver, so that they
> > > can also be called from virtio-vsock. This requires some refactoring (e.g.
> > > leaving the XDP part in the virtio-net driver), but I think it is feasible.
> > > 
> > > The idea is to create a virtio-skb.[h,c] that holds these functions, plus a new
> > > object that stores the required attributes (e.g. hdr_len) and state (e.g. some
> > > fields of struct receive_queue).
> > 
> > 
> > My understanding is that we could be more ambitious here. Do you see any blocker
> > to reusing virtio-net directly? It's better to reuse not only the functions
> > but also the logic, like NAPI, to avoid re-inventing something buggy and
> > duplicated.
> > 
> 
> These are my concerns:
> - virtio-vsock is not a "net_device", so a lot of code related to
>   ethtool and net devices (MAC address, MTU, speed, VLAN, XDP, offloading)
>   will not be used by virtio-vsock.
> 
> - virtio-vsock has a different header. We can consider it part of the
>   virtio_net payload, but that precludes compatibility with old hosts. This
>   was one of the major doubts that made me think of reusing only the
>   send/recv skbuff functions, which shouldn't break compatibility.
> 
> > 
> > > This is a sketch of the virtio-skb.h that
> > > I have in mind:
> > >      struct virtskb;
> > 
> > 
> > What fields do you want to store in virtskb? It looks like the existing sk_buff is
> > flexible enough for us?
> 
> My idea is to store queue information, like struct receive_queue or
> struct send_queue, and some device attributes (e.g. hdr_len).
> 
> > 
> > 
> > > 
> > >      struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
> > >      struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
> > >      struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);
> > > 
> > >      int virtskb_add_recvbuf_small(struct virtskb *vs, ...);
> > >      int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
> > >      int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);
> > > 
> > > The Guest->Host path should be easier, so maybe I can add a
> > > "virtskb_send(struct virtskb *vs, struct sk_buff *skb)" containing part of the
> > > code of xmit_skb().
> > 
> > 
> > I may be missing something, but I don't see anything that prevents us from using
> > xmit_skb() directly.
> > 
> 
> Yes, but my initial idea was to make it more parametric and not tied to the
> virtio_net_hdr, so 'hdr_len' could be a parameter and 'num_buffers' would be
> handled by the caller.
> 
> > 
> > > 
> > > Let me know if you have better names in mind, or if I should put these
> > > functions somewhere else.
> > > 
> > > I would like to keep the control part completely separate, so, for example,
> > > the two drivers would negotiate their features independently and call
> > > the right virtskb_receive_*() function based on the negotiation.
> > 
> > 
> > If it's only an issue of negotiation, we can simply change
> > virtnet_probe() to deal with different devices.
> > 
> > 
> > > 
> > > I have already started working on it, but before going further and sending an
> > > RFC patch, I would like to hear your opinion.
> > > Do you think that makes sense?
> > > Do you see any issue or a better solution?
> > 
> > 
> > I still think we should seek a way of adding the code to virtio-net.c
> > directly if there's no huge difference in the TX/RX processing. That would
> > save us a lot of time.
> 
> After reading the buffers from the virtqueue, I think the process
> is slightly different, because virtio-net interfaces with the network
> stack, while virtio-vsock interfaces with the vsock core (sockets).
> So virtio-vsock implements the following:
> - a flow control mechanism to avoid losing packets, informing the peer
>   about the amount of memory available in the receive queue using some
>   fields in the virtio_vsock_hdr
> - de-multiplexing, parsing the virtio_vsock_hdr and choosing the right
>   socket depending on the port
> - socket state handling
> 
> We could use virtio-net as the transport, but we would have to add a lot of
> code to skip the "net device" stuff when it is used by virtio-vsock.
> This could break something in virtio-net, so I thought of reusing
> only the send/recv functions, starting from the idea of splitting the virtio-net
> driver in two parts:
> a. one with all the stuff related to the network stack
> b. one with the stuff needed to communicate with the host
> 
> and using skbuffs to communicate between the parts. In this way, virtio-vsock
> would use only part b.
> 
> Maybe we can do this split in a better way, but I'm not sure it is
> simple.
> 
> Thanks,
> Stefano

Frankly, skb is a huge structure which adds a lot of
overhead. I am not sure that using it is such a great idea
when building a device that does not have to interface
with the networking stack.

So I agree with Jason in theory. To clarify, he is basically saying the
current implementation is all wrong: it should be a protocol, and we
should teach the networking stack that there are reliable net devices that
handle just this protocol. We could add a flag in virtio-net that
says it's such a device.

Whether it's doable, I don't know, and it's definitely not simple - in
particular you will also have to re-implement existing devices in these
terms, and not just virtio - the VMware vsock device too.

If you want to do a POC, you can add a new address family;
that's easier.

Just reusing random functions won't help: the net stack
is very heavy, and if it manages to outperform vsock it's
because vsock was not written with performance in mind.
But the smarts are in the core, not in the virtio driver.
What makes vsock slow is design decisions like
using a workqueue to process packets,
not batching memory management, etc.
All things that the net core does for virtio-net.

-- 
MST

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
From: Stefano Garzarella @ 2019-07-12 10:00 UTC
  To: Michael S. Tsirkin; +Cc: Jason Wang, Stefan Hajnoczi, virtualization, netdev

On Thu, Jul 11, 2019 at 03:52:21PM -0400, Michael S. Tsirkin wrote:
> On Thu, Jul 11, 2019 at 01:41:34PM +0200, Stefano Garzarella wrote:
> > On Thu, Jul 11, 2019 at 03:37:00PM +0800, Jason Wang wrote:
> > > 
> > > On 2019/7/10 11:37 PM, Stefano Garzarella wrote:
> > > > Hi,
> > > > as Jason suggested some months ago, I took a closer look at the virtio-net driver
> > > > to understand whether we can reuse some of its parts in the virtio-vsock driver,
> > > > since we face similar challenges (mergeable buffers, page allocation, small
> > > > packets, etc.).
> > > > 
> > > > Initially, I would introduce skbuffs in virtio-vsock in order to reuse the
> > > > receive_*() functions.
> > > 
> > > 
> > > Yes, that will be a good step.
> > > 
> > 
> > Okay, I'll proceed this way.
> > 
> > > 
> > > > Then I would move receive_[small, big, mergeable]() and
> > > > add_recvbuf_[small, big, mergeable]() out of the virtio-net driver, so that they
> > > > can also be called from virtio-vsock. This requires some refactoring (e.g.
> > > > leaving the XDP part in the virtio-net driver), but I think it is feasible.
> > > > 
> > > > The idea is to create a virtio-skb.[h,c] that holds these functions, plus a new
> > > > object that stores the required attributes (e.g. hdr_len) and state (e.g. some
> > > > fields of struct receive_queue).
> > > 
> > > 
> > > My understanding is that we could be more ambitious here. Do you see any blocker
> > > to reusing virtio-net directly? It's better to reuse not only the functions
> > > but also the logic, like NAPI, to avoid re-inventing something buggy and
> > > duplicated.
> > > 
> > 
> > These are my concerns:
> > - virtio-vsock is not a "net_device", so a lot of code related to
> >   ethtool and net devices (MAC address, MTU, speed, VLAN, XDP, offloading)
> >   will not be used by virtio-vsock.
> > 
> > - virtio-vsock has a different header. We can consider it part of the
> >   virtio_net payload, but that precludes compatibility with old hosts. This
> >   was one of the major doubts that made me think of reusing only the
> >   send/recv skbuff functions, which shouldn't break compatibility.
> > 
> > > 
> > > > This is a sketch of the virtio-skb.h that
> > > > I have in mind:
> > > >      struct virtskb;
> > > 
> > > 
> > > What fields do you want to store in virtskb? It looks like the existing sk_buff is
> > > flexible enough for us?
> > 
> > My idea is to store queue information, like struct receive_queue or
> > struct send_queue, and some device attributes (e.g. hdr_len).
> > 
> > > 
> > > 
> > > > 
> > > >      struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
> > > >      struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
> > > >      struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);
> > > > 
> > > >      int virtskb_add_recvbuf_small(struct virtskb *vs, ...);
> > > >      int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
> > > >      int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);
> > > > 
> > > > The Guest->Host path should be easier, so maybe I can add a
> > > > "virtskb_send(struct virtskb *vs, struct sk_buff *skb)" containing part of the
> > > > code of xmit_skb().
> > > 
> > > 
> > > I may be missing something, but I don't see anything that prevents us from using
> > > xmit_skb() directly.
> > > 
> > 
> > Yes, but my initial idea was to make it more parametric and not tied to the
> > virtio_net_hdr, so 'hdr_len' could be a parameter and 'num_buffers' would be
> > handled by the caller.
> > 
> > > 
> > > > 
> > > > Let me know if you have better names in mind, or if I should put these
> > > > functions somewhere else.
> > > > 
> > > > I would like to keep the control part completely separate, so, for example,
> > > > the two drivers would negotiate their features independently and call
> > > > the right virtskb_receive_*() function based on the negotiation.
> > > 
> > > 
> > > If it's only an issue of negotiation, we can simply change
> > > virtnet_probe() to deal with different devices.
> > > 
> > > 
> > > > 
> > > > I have already started working on it, but before going further and sending an
> > > > RFC patch, I would like to hear your opinion.
> > > > Do you think that makes sense?
> > > > Do you see any issue or a better solution?
> > > 
> > > 
> > > I still think we should seek a way of adding the code to virtio-net.c
> > > directly if there's no huge difference in the TX/RX processing. That would
> > > save us a lot of time.
> > 
> > After reading the buffers from the virtqueue, I think the process
> > is slightly different, because virtio-net interfaces with the network
> > stack, while virtio-vsock interfaces with the vsock core (sockets).
> > So virtio-vsock implements the following:
> > - a flow control mechanism to avoid losing packets, informing the peer
> >   about the amount of memory available in the receive queue using some
> >   fields in the virtio_vsock_hdr
> > - de-multiplexing, parsing the virtio_vsock_hdr and choosing the right
> >   socket depending on the port
> > - socket state handling
> > 
> > We could use virtio-net as the transport, but we would have to add a lot of
> > code to skip the "net device" stuff when it is used by virtio-vsock.
> > This could break something in virtio-net, so I thought of reusing
> > only the send/recv functions, starting from the idea of splitting the virtio-net
> > driver in two parts:
> > a. one with all the stuff related to the network stack
> > b. one with the stuff needed to communicate with the host
> > 
> > and using skbuffs to communicate between the parts. In this way, virtio-vsock
> > would use only part b.
> > 
> > Maybe we can do this split in a better way, but I'm not sure it is
> > simple.
> > 
> > Thanks,
> > Stefano
> 
> Frankly, skb is a huge structure which adds a lot of
> overhead. I am not sure that using it is such a great idea
> when building a device that does not have to interface
> with the networking stack.

Thanks for the advice!

> 
> So I agree with Jason in theory. To clarify, he is basically saying the
> current implementation is all wrong: it should be a protocol, and we
> should teach the networking stack that there are reliable net devices that
> handle just this protocol. We could add a flag in virtio-net that
> says it's such a device.
> 
> Whether it's doable, I don't know, and it's definitely not simple - in
> particular you will also have to re-implement existing devices in these
> terms, and not just virtio - the VMware vsock device too.
> 
> If you want to do a POC, you can add a new address family;
> that's easier.

Very interesting!
I agree with you. In this way we can completely separate the protocol
logic from the device.

As you said, it will not be simple to do, but it can be an opportunity to
learn the Linux networking stack better!
I'll try to do a PoC with an AF_VSOCK2 that uses virtio-net.
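
As a first skeleton, the registration could look like this (AF_VSOCK2 and
vsock2_create are hypothetical names, just to sketch the idea; a real patch
would need to reserve a new AF_* constant):

    /* Hypothetical PoC skeleton: register a new socket family whose
     * data path would sit on top of virtio-net. */
    static int vsock2_create(struct net *net, struct socket *sock,
                             int protocol, int kern)
    {
        return -EAFNOSUPPORT; /* PoC stub */
    }

    static const struct net_proto_family vsock2_family_ops = {
        .family = AF_VSOCK2,   /* hypothetical new address family */
        .create = vsock2_create,
        .owner  = THIS_MODULE,
    };

    static int __init vsock2_init(void)
    {
        return sock_register(&vsock2_family_ops);
    }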

> 
> Just reusing random functions won't help: the net stack
> is very heavy, and if it manages to outperform vsock it's
> because vsock was not written with performance in mind.
> But the smarts are in the core, not in the virtio driver.
> What makes vsock slow is design decisions like
> using a workqueue to process packets,
> not batching memory management, etc.
> All things that the net core does for virtio-net.

Got it :)

Michael, Jason, thank you very much! Your suggestions are very useful!

Stefano

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
From: Jason Wang @ 2019-07-12 10:14 UTC
  To: Stefano Garzarella, Michael S. Tsirkin
  Cc: Stefan Hajnoczi, virtualization, netdev


On 2019/7/12 6:00 PM, Stefano Garzarella wrote:
> On Thu, Jul 11, 2019 at 03:52:21PM -0400, Michael S. Tsirkin wrote:
>> On Thu, Jul 11, 2019 at 01:41:34PM +0200, Stefano Garzarella wrote:
>>> On Thu, Jul 11, 2019 at 03:37:00PM +0800, Jason Wang wrote:
>>>> On 2019/7/10 11:37 PM, Stefano Garzarella wrote:
>>>>> Hi,
>>>>> as Jason suggested some months ago, I took a closer look at the virtio-net driver
>>>>> to understand whether we can reuse some of its parts in the virtio-vsock driver,
>>>>> since we face similar challenges (mergeable buffers, page allocation, small
>>>>> packets, etc.).
>>>>>
>>>>> Initially, I would introduce skbuffs in virtio-vsock in order to reuse the
>>>>> receive_*() functions.
>>>>
>>>> Yes, that will be a good step.
>>>>
>>> Okay, I'll proceed this way.
>>>
>>>>> Then I would move receive_[small, big, mergeable]() and
>>>>> add_recvbuf_[small, big, mergeable]() out of the virtio-net driver, so that they
>>>>> can also be called from virtio-vsock. This requires some refactoring (e.g.
>>>>> leaving the XDP part in the virtio-net driver), but I think it is feasible.
>>>>>
>>>>> The idea is to create a virtio-skb.[h,c] that holds these functions, plus a new
>>>>> object that stores the required attributes (e.g. hdr_len) and state (e.g. some
>>>>> fields of struct receive_queue).
>>>>
>>>> My understanding is that we could be more ambitious here. Do you see any blocker
>>>> to reusing virtio-net directly? It's better to reuse not only the functions
>>>> but also the logic, like NAPI, to avoid re-inventing something buggy and
>>>> duplicated.
>>>>
>>> These are my concerns:
>>> - virtio-vsock is not a "net_device", so a lot of code related to
>>>    ethtool and net devices (MAC address, MTU, speed, VLAN, XDP, offloading)
>>>    will not be used by virtio-vsock.


Linux supports devices other than Ethernet, so it should not be a problem.


>>>
>>> - virtio-vsock has a different header. We can consider it part of the
>>>    virtio_net payload, but that precludes compatibility with old hosts. This
>>>    was one of the major doubts that made me think of reusing only the
>>>    send/recv skbuff functions, which shouldn't break compatibility.


We can extend the current vnet header helpers to make them work for vsock.


>>>
>>>>> This is a sketch of the virtio-skb.h that
>>>>> I have in mind:
>>>>>       struct virtskb;
>>>>
>>>> What fields do you want to store in virtskb? It looks like the existing sk_buff is
>>>> flexible enough for us?
>>> My idea is to store queue information, like struct receive_queue or
>>> struct send_queue, and some device attributes (e.g. hdr_len).


If you reuse skb or virtnet_info, that is not necessary.


>>>
>>>>
>>>>>       struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
>>>>>       struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
>>>>>       struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);
>>>>>
>>>>>       int virtskb_add_recvbuf_small(struct virtskb *vs, ...);
>>>>>       int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
>>>>>       int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);
>>>>>
>>>>> The Guest->Host path should be easier, so maybe I can add a
>>>>> "virtskb_send(struct virtskb *vs, struct sk_buff *skb)" containing part of the
>>>>> code of xmit_skb().
>>>>
>>>> I may be missing something, but I don't see anything that prevents us from using
>>>> xmit_skb() directly.
>>>>
>>> Yes, but my initial idea was to make it more parametric and not tied to the
>>> virtio_net_hdr, so 'hdr_len' could be a parameter and 'num_buffers' would be
>>> handled by the caller.
>>>
>>>>> Let me know if you have better names in mind, or if I should put these
>>>>> functions somewhere else.
>>>>>
>>>>> I would like to keep the control part completely separate, so, for example,
>>>>> the two drivers would negotiate their features independently and call
>>>>> the right virtskb_receive_*() function based on the negotiation.
>>>>
>>>> If it's only an issue of negotiation, we can simply change
>>>> virtnet_probe() to deal with different devices.
>>>>
>>>>
>>>>> I have already started working on it, but before going further and sending an
>>>>> RFC patch, I would like to hear your opinion.
>>>>> Do you think that makes sense?
>>>>> Do you see any issue or a better solution?
>>>>
>>>> I still think we should seek a way of adding the code to virtio-net.c
>>>> directly if there's no huge difference in the TX/RX processing. That would
>>>> save us a lot of time.
>>> After reading the buffers from the virtqueue, I think the process
>>> is slightly different, because virtio-net interfaces with the network
>>> stack, while virtio-vsock interfaces with the vsock core (sockets).
>>> So virtio-vsock implements the following:
>>> - a flow control mechanism to avoid losing packets, informing the peer
>>>    about the amount of memory available in the receive queue using some
>>>    fields in the virtio_vsock_hdr
>>> - de-multiplexing, parsing the virtio_vsock_hdr and choosing the right
>>>    socket depending on the port
>>> - socket state handling


I think it's just a branch: for ethernet, go to the networking stack;
otherwise go to the vsock core?


>>>
>>> We can use the virtio-net as transport, but we should add a lot of
>>> code to skip "net device" stuff when it is used by the virtio-vsock.


This could be another choice, but considering that it is not transparent to the
admin and requires new features, we may seek a transparent solution here.


>>> This could break something in virtio-net, for this reason, I thought to reuse
>>> only the send/recv functions starting from the idea to split the virtio-net
>>> driver in two parts:
>>> a. one with all stuff related to the network stack
>>> b. one with the stuff needed to communicate with the host
>>>
>>> And use skbuff to communicate between parts. In this way, virtio-vsock
>>> can use only the b part.
>>>
>>> Maybe we can do this split in a better way, but I'm not sure it is
>>> simple.
>>>
>>> Thanks,
>>> Stefano
>> Frankly, skb is a huge structure which adds a lot of
>> overhead. I am not sure that using it is such a great idea
>> if building a device that does not have to interface
>> with the networking stack.


I believe vsock is mainly used for stream performance, not for PPS, so
the impact should be minimal. We can use other metadata; we just need a
branch in recv_xxx().
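
To illustrate, such a branch at the end of the receive path could look
like this (vi->is_vsock and virtio_vsock_rcv() are hypothetical
placeholders, not existing symbols; only napi_gro_receive() reflects
what the driver does today):

    static void virtnet_deliver(struct virtnet_info *vi,
                                struct receive_queue *rq,
                                struct sk_buff *skb)
    {
            if (vi->is_vsock) {
                    /* Reliable transport: hand the buffer to the vsock
                     * core for de-multiplexing and socket handling. */
                    virtio_vsock_rcv(skb);
            } else {
                    /* Ethernet: the usual path into the network stack. */
                    napi_gro_receive(&rq->napi, skb);
            }
    }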


> Thanks for the advice!
>
>> So I agree with Jason in theory. To clarify, he is basically saying
>> current implementation is all wrong, it should be a protocol and we
>> should teach networking stack that there are reliable net devices that
>> handle just this protocol. We could add a flag in virtio net that
>> will say it's such a device.
>>
>> Whether it's doable, I don't know, and it's definitely not simple - in
>> particular you will have to also re-implement existing devices in these
>> terms, and not just virtio - vmware vsock too.


Merging the vsock protocol into the existing networking stack could be a
long-term goal; for the first phase, I believe we can use virtio-net first.


>>
>> If you want to do a POC you can add a new address family,
>> that's easier.
> Very interesting!
> I agree with you. In this way we can completely split the protocol
> logic, from the device.
>
> As you said, it will not simple to do, but can be an opportunity to learn
> better the Linux networking stack!
> I'll try to do a PoC with AF_VSOCK2 that will use the virtio-net.


I suggest doing this step by step:

1) use virtio-net but keep some protocol logic

2) separate the protocol logic and merge it into the existing Linux networking stack

Thanks


>> Just reusing random functions won't help, net stack
>> is very heavy, if it manages to outperform vsock it's
>> because vsock was not written with performance in mind.
>> But the smarts are in the core not virtio driver.
>> What makes vsock slow is design decisions like
>> using a workqueue to process packets,
>> not batching memory management etc etc.
>> All things that net core does for virtio net.
> Got it :)
>
> Michael, Jason, thank you very much! Your suggestions are very useful!
>
> Stefano

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
  2019-07-12 10:14         ` Jason Wang
@ 2019-07-15  7:44           ` Stefano Garzarella
  2019-07-15  9:16             ` Jason Wang
                               ` (3 more replies)
  2019-07-15  7:44           ` Stefano Garzarella
  1 sibling, 4 replies; 26+ messages in thread
From: Stefano Garzarella @ 2019-07-15  7:44 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, Stefan Hajnoczi, virtualization, netdev

On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:
> 
> On 2019/7/12 下午6:00, Stefano Garzarella wrote:
> > On Thu, Jul 11, 2019 at 03:52:21PM -0400, Michael S. Tsirkin wrote:
> > > On Thu, Jul 11, 2019 at 01:41:34PM +0200, Stefano Garzarella wrote:
> > > > On Thu, Jul 11, 2019 at 03:37:00PM +0800, Jason Wang wrote:
> > > > > On 2019/7/10 下午11:37, Stefano Garzarella wrote:
> > > > > > Hi,
> > > > > > as Jason suggested some months ago, I looked better at the virtio-net driver to
> > > > > > understand if we can reuse some parts also in the virtio-vsock driver, since we
> > > > > > have similar challenges (mergeable buffers, page allocation, small
> > > > > > packets, etc.).
> > > > > > 
> > > > > > Initially, I would add the skbuff in the virtio-vsock in order to re-use
> > > > > > receive_*() functions.
> > > > > 
> > > > > Yes, that will be a good step.
> > > > > 
> > > > Okay, I'll go on this way.
> > > > 
> > > > > > Then I would move receive_[small, big, mergeable]() and
> > > > > > add_recvbuf_[small, big, mergeable]() outside of virtio-net driver, in order to
> > > > > > call them also from virtio-vsock. I need to do some refactoring (e.g. leave the
> > > > > > XDP part on the virtio-net driver), but I think it is feasible.
> > > > > > 
> > > > > > The idea is to create a virtio-skb.[h,c] where put these functions and a new
> > > > > > object where stores some attributes needed (e.g. hdr_len ) and status (e.g.
> > > > > > some fields of struct receive_queue).
> > > > > 
> > > > > My understanding is we could be more ambitious here. Do you see any blocker
> > > > > for reusing virtio-net directly? It's better to reuse not only the functions
> > > > > but also the logic like NAPI to avoid re-inventing something buggy and
> > > > > duplicated.
> > > > > 
> > > > These are my concerns:
> > > > - virtio-vsock is not a "net_device", so a lot of code related to
> > > >    ethtool, net devices (MAC address, MTU, speed, VLAN, XDP, offloading) will be
> > > >    not used by virtio-vsock.
> 
> 
> Linux support device other than ethernet, so it should not be a problem.
> 
> 
> > > > 
> > > > - virtio-vsock has a different header. We can consider it as part of
> > > >    virtio_net payload, but it precludes the compatibility with old hosts. This
> > > >    was one of the major doubts that made me think about using only the
> > > >    send/recv skbuff functions, that it shouldn't break the compatibility.
> 
> 
> We can extend the current vnet header helper for it to work for vsock.

Okay, I'll do it.
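
For illustration, one shape such an extension could take (nothing below
exists upstream; it only sketches how the per-device header could be
abstracted away from the shared receive code):

    struct virtio_hdr_ops {
            unsigned int hdr_len;   /* bytes to reserve/strip per buffer */
            /* vsock would parse ports/ops here; net reads gso info */
            int (*parse)(void *hdr, struct sk_buff *skb);
    };

    static const struct virtio_hdr_ops vnet_hdr_ops = {
            .hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf),
            .parse   = vnet_hdr_parse,      /* hypothetical */
    };

    static const struct virtio_hdr_ops vsock_hdr_ops = {
            .hdr_len = sizeof(struct virtio_vsock_hdr),
            .parse   = vsock_hdr_parse,     /* hypothetical */
    };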

> 
> 
> > > > 
> > > > > > This is an idea of virtio-skb.h that
> > > > > > I have in mind:
> > > > > >       struct virtskb;
> > > > > 
> > > > > What fields do you want to store in virtskb? It looks to be exist sk_buff is
> > > > > flexible enough to us?
> > > > My idea is to store queues information, like struct receive_queue or
> > > > struct send_queue, and some device attributes (e.g. hdr_len ).
> 
> 
> If you reuse skb or virtnet_info, there is not necessary.
> 

Okay.

> 
> > > > 
> > > > > 
> > > > > >       struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
> > > > > >       struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
> > > > > >       struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);
> > > > > > 
> > > > > >       int virtskb_add_recvbuf_small(struct virtskb*vs, ...);
> > > > > >       int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
> > > > > >       int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);
> > > > > > 
> > > > > > For the Guest->Host path it should be easier, so maybe I can add a
> > > > > > "virtskb_send(struct virtskb *vs, struct sk_buff *skb)" with a part of the code
> > > > > > of xmit_skb().
> > > > > 
> > > > > I may miss something, but I don't see any thing that prevents us from using
> > > > > xmit_skb() directly.
> > > > > 
> > > > Yes, but my initial idea was to make it more parametric and not related to the
> > > > virtio_net_hdr, so the 'hdr_len' could be a parameter and the
> > > > 'num_buffers' should be handled by the caller.
> > > > 
> > > > > > Let me know if you have in mind better names or if I should put these function
> > > > > > in another place.
> > > > > > 
> > > > > > I would like to leave the control part completely separate, so, for example,
> > > > > > the two drivers will negotiate the features independently and they will call
> > > > > > the right virtskb_receive_*() function based on the negotiation.
> > > > > 
> > > > > If it's one the issue of negotiation, we can simply change the
> > > > > virtnet_probe() to deal with different devices.
> > > > > 
> > > > > 
> > > > > > I already started to work on it, but before to do more steps and send an RFC
> > > > > > patch, I would like to hear your opinion.
> > > > > > Do you think that makes sense?
> > > > > > Do you see any issue or a better solution?
> > > > > 
> > > > > I still think we need to seek a way of adding some codes on virtio-net.c
> > > > > directly if there's no huge different in the processing of TX/RX. That would
> > > > > save us a lot time.
> > > > After the reading of the buffers from the virtqueue I think the process
> > > > is slightly different, because virtio-net will interface with the network
> > > > stack, while virtio-vsock will interface with the vsock-core (socket).
> > > > So the virtio-vsock implements the following:
> > > > - control flow mechanism to avoid to loose packets, informing the peer
> > > >    about the amount of memory available in the receive queue using some
> > > >    fields in the virtio_vsock_hdr
> > > > - de-multiplexing parsing the virtio_vsock_hdr and choosing the right
> > > >    socket depending on the port
> > > > - socket state handling
> 
> 
> I think it's just a branch, for ethernet, go for networking stack. otherwise
> go for vsock core?
> 

Yes, that should work.

So, I should refactor the functions that can also be called from the vsock
core, in order to remove the "struct net_device *dev" parameter,
maybe creating some wrappers for the network stack (rough sketch below).

Otherwise, I should create a fake net_device for vsock_core.

What do you suggest?
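
(To make the wrapper option concrete: the shared code would call ops
like these instead of touching net_device directly, and the vsock core
would provide its own implementation. None of this exists upstream; it
only illustrates the refactoring.)

    struct virtrx_ops {
            /* e.g. backed by dev->needed_headroom for the net flavour */
            unsigned int (*headroom)(void *priv);
            /* deliver a completed sk_buff to the upper layer */
            void (*rx_done)(void *priv, struct sk_buff *skb);
    };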

> 
> > > > 
> > > > We can use the virtio-net as transport, but we should add a lot of
> > > > code to skip "net device" stuff when it is used by the virtio-vsock.
> 
> 
> This could be another choice, but consider it was not transparent to the
> admin and require new features, we may seek a transparent solution here.
> 
> 
> > > > This could break something in virtio-net, for this reason, I thought to reuse
> > > > only the send/recv functions starting from the idea to split the virtio-net
> > > > driver in two parts:
> > > > a. one with all stuff related to the network stack
> > > > b. one with the stuff needed to communicate with the host
> > > > 
> > > > And use skbuff to communicate between parts. In this way, virtio-vsock
> > > > can use only the b part.
> > > > 
> > > > Maybe we can do this split in a better way, but I'm not sure it is
> > > > simple.
> > > > 
> > > > Thanks,
> > > > Stefano
> > > Frankly, skb is a huge structure which adds a lot of
> > > overhead. I am not sure that using it is such a great idea
> > > if building a device that does not have to interface
> > > with the networking stack.
> 
> 
> I believe vsock is mainly used for stream performance not for PPS. So the
> impact should be minimal. We can use other metadata, just need branch in
> recv_xxx().
> 

Yes, I think stream performance is the main use case.

> 
> > Thanks for the advice!
> > 
> > > So I agree with Jason in theory. To clarify, he is basically saying
> > > current implementation is all wrong, it should be a protocol and we
> > > should teach networking stack that there are reliable net devices that
> > > handle just this protocol. We could add a flag in virtio net that
> > > will say it's such a device.
> > > 
> > > Whether it's doable, I don't know, and it's definitely not simple - in
> > > particular you will have to also re-implement existing devices in these
> > > terms, and not just virtio - vmware vsock too.
> 
> 
> Merging vsock protocol to exist networking stack could be a long term goal,
> I believe for the first phase, we can seek to use virtio-net first.
>

Yes, I agree.

> 
> > > 
> > > If you want to do a POC you can add a new address family,
> > > that's easier.
> > Very interesting!
> > I agree with you. In this way we can completely split the protocol
> > logic, from the device.
> > 
> > As you said, it will not simple to do, but can be an opportunity to learn
> > better the Linux networking stack!
> > I'll try to do a PoC with AF_VSOCK2 that will use the virtio-net.
> 
> 
> I suggest to do this step by step:
> 
> 1) use virtio-net but keep some protocol logic
> 
> 2) separate protocol logic and merge it to exist Linux networking stack

Makes sense, thanks for the suggestions, I'll try to follow these steps!

Thanks,
Stefano

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
  2019-07-15  7:44           ` Stefano Garzarella
  2019-07-15  9:16             ` Jason Wang
@ 2019-07-15  9:16             ` Jason Wang
  2019-07-15 10:42                 ` Stefano Garzarella
  2019-07-15 17:50             ` Michael S. Tsirkin
  2019-07-15 17:50             ` Michael S. Tsirkin
  3 siblings, 1 reply; 26+ messages in thread
From: Jason Wang @ 2019-07-15  9:16 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Michael S. Tsirkin, Stefan Hajnoczi, virtualization, netdev


>>>>>>>        struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
>>>>>>>        struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
>>>>>>>        struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);
>>>>>>>
>>>>>>>        int virtskb_add_recvbuf_small(struct virtskb*vs, ...);
>>>>>>>        int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
>>>>>>>        int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);
>>>>>>>
>>>>>>> For the Guest->Host path it should be easier, so maybe I can add a
>>>>>>> "virtskb_send(struct virtskb *vs, struct sk_buff *skb)" with a part of the code
>>>>>>> of xmit_skb().
>>>>>> I may miss something, but I don't see any thing that prevents us from using
>>>>>> xmit_skb() directly.
>>>>>>
>>>>> Yes, but my initial idea was to make it more parametric and not related to the
>>>>> virtio_net_hdr, so the 'hdr_len' could be a parameter and the
>>>>> 'num_buffers' should be handled by the caller.
>>>>>
>>>>>>> Let me know if you have in mind better names or if I should put these function
>>>>>>> in another place.
>>>>>>>
>>>>>>> I would like to leave the control part completely separate, so, for example,
>>>>>>> the two drivers will negotiate the features independently and they will call
>>>>>>> the right virtskb_receive_*() function based on the negotiation.
>>>>>> If it's one the issue of negotiation, we can simply change the
>>>>>> virtnet_probe() to deal with different devices.
>>>>>>
>>>>>>
>>>>>>> I already started to work on it, but before to do more steps and send an RFC
>>>>>>> patch, I would like to hear your opinion.
>>>>>>> Do you think that makes sense?
>>>>>>> Do you see any issue or a better solution?
>>>>>> I still think we need to seek a way of adding some codes on virtio-net.c
>>>>>> directly if there's no huge different in the processing of TX/RX. That would
>>>>>> save us a lot time.
>>>>> After the reading of the buffers from the virtqueue I think the process
>>>>> is slightly different, because virtio-net will interface with the network
>>>>> stack, while virtio-vsock will interface with the vsock-core (socket).
>>>>> So the virtio-vsock implements the following:
>>>>> - control flow mechanism to avoid to loose packets, informing the peer
>>>>>     about the amount of memory available in the receive queue using some
>>>>>     fields in the virtio_vsock_hdr
>>>>> - de-multiplexing parsing the virtio_vsock_hdr and choosing the right
>>>>>     socket depending on the port
>>>>> - socket state handling
>>
>> I think it's just a branch, for ethernet, go for networking stack. otherwise
>> go for vsock core?
>>
> Yes, that should work.
>
> So, I should refactor the functions that can be called also from the vsock
> core, in order to remove "struct net_device *dev" parameter.
> Maybe creating some wrappers for the network stack.
>
> Otherwise I should create a fake net_device for vsock_core.
>
> What do you suggest?


I'm not quite sure I get the question. Can you just use the one that is
created by virtio_net?
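
(For what it's worth, a sketch of that direction: virtnet_probe()
already allocates the net_device, so a vsock-flavoured device could
keep using it internally and just skip the ethernet-only registration.
VIRTIO_NET_F_VSOCK and virtio_vsock_core_attach() are invented names;
only the alloc_etherdev_mq()/register_netdev() calls reflect what the
probe does today.)

    static int virtnet_probe(struct virtio_device *vdev)
    {
            struct net_device *dev;
            u16 max_queue_pairs = 1;   /* really read from device config */

            /* This allocation is what the driver does today. */
            dev = alloc_etherdev_mq(sizeof(struct virtnet_info),
                                    max_queue_pairs);
            if (!dev)
                    return -ENOMEM;

            /* ... common virtqueue/NAPI setup ... */

            if (virtio_has_feature(vdev, VIRTIO_NET_F_VSOCK))
                    return virtio_vsock_core_attach(dev); /* no netdev reg. */

            return register_netdev(dev);
    }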


Thanks

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
  2019-07-15  9:16             ` Jason Wang
@ 2019-07-15 10:42                 ` Stefano Garzarella
  0 siblings, 0 replies; 26+ messages in thread
From: Stefano Garzarella @ 2019-07-15 10:42 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, Stefan Hajnoczi, virtualization, netdev

On Mon, Jul 15, 2019 at 05:16:09PM +0800, Jason Wang wrote:
> 
> > > > > > > >        struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
> > > > > > > >        struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
> > > > > > > >        struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);
> > > > > > > > 
> > > > > > > >        int virtskb_add_recvbuf_small(struct virtskb*vs, ...);
> > > > > > > >        int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
> > > > > > > >        int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);
> > > > > > > > 
> > > > > > > > For the Guest->Host path it should be easier, so maybe I can add a
> > > > > > > > "virtskb_send(struct virtskb *vs, struct sk_buff *skb)" with a part of the code
> > > > > > > > of xmit_skb().
> > > > > > > I may miss something, but I don't see any thing that prevents us from using
> > > > > > > xmit_skb() directly.
> > > > > > > 
> > > > > > Yes, but my initial idea was to make it more parametric and not related to the
> > > > > > virtio_net_hdr, so the 'hdr_len' could be a parameter and the
> > > > > > 'num_buffers' should be handled by the caller.
> > > > > > 
> > > > > > > > Let me know if you have in mind better names or if I should put these function
> > > > > > > > in another place.
> > > > > > > > 
> > > > > > > > I would like to leave the control part completely separate, so, for example,
> > > > > > > > the two drivers will negotiate the features independently and they will call
> > > > > > > > the right virtskb_receive_*() function based on the negotiation.
> > > > > > > If it's one the issue of negotiation, we can simply change the
> > > > > > > virtnet_probe() to deal with different devices.
> > > > > > > 
> > > > > > > 
> > > > > > > > I already started to work on it, but before to do more steps and send an RFC
> > > > > > > > patch, I would like to hear your opinion.
> > > > > > > > Do you think that makes sense?
> > > > > > > > Do you see any issue or a better solution?
> > > > > > > I still think we need to seek a way of adding some codes on virtio-net.c
> > > > > > > directly if there's no huge different in the processing of TX/RX. That would
> > > > > > > save us a lot time.
> > > > > > After the reading of the buffers from the virtqueue I think the process
> > > > > > is slightly different, because virtio-net will interface with the network
> > > > > > stack, while virtio-vsock will interface with the vsock-core (socket).
> > > > > > So the virtio-vsock implements the following:
> > > > > > - control flow mechanism to avoid to loose packets, informing the peer
> > > > > >     about the amount of memory available in the receive queue using some
> > > > > >     fields in the virtio_vsock_hdr
> > > > > > - de-multiplexing parsing the virtio_vsock_hdr and choosing the right
> > > > > >     socket depending on the port
> > > > > > - socket state handling
> > > 
> > > I think it's just a branch, for ethernet, go for networking stack. otherwise
> > > go for vsock core?
> > > 
> > Yes, that should work.
> > 
> > So, I should refactor the functions that can be called also from the vsock
> > core, in order to remove "struct net_device *dev" parameter.
> > Maybe creating some wrappers for the network stack.
> > 
> > Otherwise I should create a fake net_device for vsock_core.
> > 
> > What do you suggest?
> 
> 
> I'm not quite sure I get the question. Can you just use the one that created
> by virtio_net?

Sure, sorry, I had missed that it is allocated in virtnet_probe()!

Thanks,
Stefano

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
  2019-07-15  7:44           ` Stefano Garzarella
                               ` (2 preceding siblings ...)
  2019-07-15 17:50             ` Michael S. Tsirkin
@ 2019-07-15 17:50             ` Michael S. Tsirkin
  2019-07-16  9:40               ` Stefano Garzarella
  2019-07-16  9:40               ` Stefano Garzarella
  3 siblings, 2 replies; 26+ messages in thread
From: Michael S. Tsirkin @ 2019-07-15 17:50 UTC (permalink / raw)
  To: Stefano Garzarella; +Cc: Jason Wang, Stefan Hajnoczi, virtualization, netdev

On Mon, Jul 15, 2019 at 09:44:16AM +0200, Stefano Garzarella wrote:
> On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:
> > 
> > On 2019/7/12 下午6:00, Stefano Garzarella wrote:
> > > On Thu, Jul 11, 2019 at 03:52:21PM -0400, Michael S. Tsirkin wrote:
> > > > On Thu, Jul 11, 2019 at 01:41:34PM +0200, Stefano Garzarella wrote:
> > > > > On Thu, Jul 11, 2019 at 03:37:00PM +0800, Jason Wang wrote:
> > > > > > On 2019/7/10 下午11:37, Stefano Garzarella wrote:
> > > > > > > Hi,
> > > > > > > as Jason suggested some months ago, I looked better at the virtio-net driver to
> > > > > > > understand if we can reuse some parts also in the virtio-vsock driver, since we
> > > > > > > have similar challenges (mergeable buffers, page allocation, small
> > > > > > > packets, etc.).
> > > > > > > 
> > > > > > > Initially, I would add the skbuff in the virtio-vsock in order to re-use
> > > > > > > receive_*() functions.
> > > > > > 
> > > > > > Yes, that will be a good step.
> > > > > > 
> > > > > Okay, I'll go on this way.
> > > > > 
> > > > > > > Then I would move receive_[small, big, mergeable]() and
> > > > > > > add_recvbuf_[small, big, mergeable]() outside of virtio-net driver, in order to
> > > > > > > call them also from virtio-vsock. I need to do some refactoring (e.g. leave the
> > > > > > > XDP part on the virtio-net driver), but I think it is feasible.
> > > > > > > 
> > > > > > > The idea is to create a virtio-skb.[h,c] where put these functions and a new
> > > > > > > object where stores some attributes needed (e.g. hdr_len ) and status (e.g.
> > > > > > > some fields of struct receive_queue).
> > > > > > 
> > > > > > My understanding is we could be more ambitious here. Do you see any blocker
> > > > > > for reusing virtio-net directly? It's better to reuse not only the functions
> > > > > > but also the logic like NAPI to avoid re-inventing something buggy and
> > > > > > duplicated.
> > > > > > 
> > > > > These are my concerns:
> > > > > - virtio-vsock is not a "net_device", so a lot of code related to
> > > > >    ethtool, net devices (MAC address, MTU, speed, VLAN, XDP, offloading) will be
> > > > >    not used by virtio-vsock.
> > 
> > 
> > Linux support device other than ethernet, so it should not be a problem.
> > 
> > 
> > > > > 
> > > > > - virtio-vsock has a different header. We can consider it as part of
> > > > >    virtio_net payload, but it precludes the compatibility with old hosts. This
> > > > >    was one of the major doubts that made me think about using only the
> > > > >    send/recv skbuff functions, that it shouldn't break the compatibility.
> > 
> > 
> > We can extend the current vnet header helper for it to work for vsock.
> 
> Okay, I'll do it.
> 
> > 
> > 
> > > > > 
> > > > > > > This is an idea of virtio-skb.h that
> > > > > > > I have in mind:
> > > > > > >       struct virtskb;
> > > > > > 
> > > > > > What fields do you want to store in virtskb? It looks to be exist sk_buff is
> > > > > > flexible enough to us?
> > > > > My idea is to store queues information, like struct receive_queue or
> > > > > struct send_queue, and some device attributes (e.g. hdr_len ).
> > 
> > 
> > If you reuse skb or virtnet_info, there is not necessary.
> > 
> 
> Okay.
> 
> > 
> > > > > 
> > > > > > 
> > > > > > >       struct sk_buff *virtskb_receive_small(struct virtskb *vs, ...);
> > > > > > >       struct sk_buff *virtskb_receive_big(struct virtskb *vs, ...);
> > > > > > >       struct sk_buff *virtskb_receive_mergeable(struct virtskb *vs, ...);
> > > > > > > 
> > > > > > >       int virtskb_add_recvbuf_small(struct virtskb*vs, ...);
> > > > > > >       int virtskb_add_recvbuf_big(struct virtskb *vs, ...);
> > > > > > >       int virtskb_add_recvbuf_mergeable(struct virtskb *vs, ...);
> > > > > > > 
> > > > > > > For the Guest->Host path it should be easier, so maybe I can add a
> > > > > > > "virtskb_send(struct virtskb *vs, struct sk_buff *skb)" with a part of the code
> > > > > > > of xmit_skb().
> > > > > > 
> > > > > > I may miss something, but I don't see any thing that prevents us from using
> > > > > > xmit_skb() directly.
> > > > > > 
> > > > > Yes, but my initial idea was to make it more parametric and not related to the
> > > > > virtio_net_hdr, so the 'hdr_len' could be a parameter and the
> > > > > 'num_buffers' should be handled by the caller.
> > > > > 
> > > > > > > Let me know if you have in mind better names or if I should put these function
> > > > > > > in another place.
> > > > > > > 
> > > > > > > I would like to leave the control part completely separate, so, for example,
> > > > > > > the two drivers will negotiate the features independently and they will call
> > > > > > > the right virtskb_receive_*() function based on the negotiation.
> > > > > > 
> > > > > > If it's one the issue of negotiation, we can simply change the
> > > > > > virtnet_probe() to deal with different devices.
> > > > > > 
> > > > > > 
> > > > > > > I already started to work on it, but before to do more steps and send an RFC
> > > > > > > patch, I would like to hear your opinion.
> > > > > > > Do you think that makes sense?
> > > > > > > Do you see any issue or a better solution?
> > > > > > 
> > > > > > I still think we need to seek a way of adding some codes on virtio-net.c
> > > > > > directly if there's no huge different in the processing of TX/RX. That would
> > > > > > save us a lot time.
> > > > > After the reading of the buffers from the virtqueue I think the process
> > > > > is slightly different, because virtio-net will interface with the network
> > > > > stack, while virtio-vsock will interface with the vsock-core (socket).
> > > > > So the virtio-vsock implements the following:
> > > > > - control flow mechanism to avoid to loose packets, informing the peer
> > > > >    about the amount of memory available in the receive queue using some
> > > > >    fields in the virtio_vsock_hdr
> > > > > - de-multiplexing parsing the virtio_vsock_hdr and choosing the right
> > > > >    socket depending on the port
> > > > > - socket state handling
> > 
> > 
> > I think it's just a branch, for ethernet, go for networking stack. otherwise
> > go for vsock core?
> > 
> 
> Yes, that should work.
> 
> So, I should refactor the functions that can be called also from the vsock
> core, in order to remove "struct net_device *dev" parameter.
> Maybe creating some wrappers for the network stack.
> 
> Otherwise I should create a fake net_device for vsock_core.
> 
> What do you suggest?

Neither.

I think what Jason was saying all along is this:

virtio net doesn't actually lose packets, at least most of the time,
and most of the time it passes all packets to the host. So it's
possible to use a virtio net device (possibly with a feature flag that
says "does not lose packets, all packets go to host") and build vsock
on top.
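
(As a concrete illustration, that flag could be a new feature bit the
guest checks at probe time; the name and bit number below are invented:)

    #define VIRTIO_NET_F_RELIABLE   56      /* hypothetical */

    static bool virtnet_is_reliable(struct virtio_device *vdev)
    {
            /* Host guarantees delivery: no packet is silently dropped,
             * so vsock can run on top without loss recovery. */
            return virtio_has_feature(vdev, VIRTIO_NET_F_RELIABLE);
    }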

and all of this is nice, but don't expect anything easy,
or any quick results.

Also, in a sense it's a missed opportunity: we could cut out a lot
of fat and see just how fast a protocol that is completely new and
separate from the networking stack can go. Instead, the vsock
implementation carries so much baggage - from the networking stack,
such as softirq processing, and of its own, such as workqueues,
global state and crude locking - to the point where it's actually
slower than TCP.

> > 
> > > > > 
> > > > > We can use the virtio-net as transport, but we should add a lot of
> > > > > code to skip "net device" stuff when it is used by the virtio-vsock.
> > 
> > 
> > This could be another choice, but consider it was not transparent to the
> > admin and require new features, we may seek a transparent solution here.
> > 
> > 
> > > > > This could break something in virtio-net, for this reason, I thought to reuse
> > > > > only the send/recv functions starting from the idea to split the virtio-net
> > > > > driver in two parts:
> > > > > a. one with all stuff related to the network stack
> > > > > b. one with the stuff needed to communicate with the host
> > > > > 
> > > > > And use skbuff to communicate between parts. In this way, virtio-vsock
> > > > > can use only the b part.
> > > > > 
> > > > > Maybe we can do this split in a better way, but I'm not sure it is
> > > > > simple.
> > > > > 
> > > > > Thanks,
> > > > > Stefano
> > > > Frankly, skb is a huge structure which adds a lot of
> > > > overhead. I am not sure that using it is such a great idea
> > > > if building a device that does not have to interface
> > > > with the networking stack.
> > 
> > 
> > I believe vsock is mainly used for stream performance not for PPS. So the
> > impact should be minimal. We can use other metadata, just need branch in
> > recv_xxx().
> > 
> 
> Yes, I think stream performance is the case.
> 
> > 
> > > Thanks for the advice!
> > > 
> > > > So I agree with Jason in theory. To clarify, he is basically saying
> > > > current implementation is all wrong, it should be a protocol and we
> > > > should teach networking stack that there are reliable net devices that
> > > > handle just this protocol. We could add a flag in virtio net that
> > > > will say it's such a device.
> > > > 
> > > > Whether it's doable, I don't know, and it's definitely not simple - in
> > > > particular you will have to also re-implement existing devices in these
> > > > terms, and not just virtio - vmware vsock too.
> > 
> > 
> > Merging vsock protocol to exist networking stack could be a long term goal,
> > I believe for the first phase, we can seek to use virtio-net first.
> >
> 
> Yes, I agree.
> 
> > 
> > > > 
> > > > If you want to do a POC you can add a new address family,
> > > > that's easier.
> > > Very interesting!
> > > I agree with you. In this way we can completely split the protocol
> > > logic, from the device.
> > > 
> > > As you said, it will not simple to do, but can be an opportunity to learn
> > > better the Linux networking stack!
> > > I'll try to do a PoC with AF_VSOCK2 that will use the virtio-net.
> > 
> > 
> > I suggest doing this step by step:
> > 
> > 1) use virtio-net but keep some protocol logic
> > 
> > 2) separate the protocol logic and merge it into the existing Linux networking stack
> 
> Makes sense, thanks for the suggestions; I'll try to follow these steps!
> 
> Thanks,
> Stefano


An alternative is to look at the sources of overhead in vsock and get rid of
them, or rewrite it from scratch focusing on performance.


-- 
MST

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
  2019-07-15 17:50             ` Michael S. Tsirkin
@ 2019-07-16  9:40               ` Stefano Garzarella
  2019-07-16 10:01                 ` Michael S. Tsirkin
  1 sibling, 2 replies; 26+ messages in thread
From: Stefano Garzarella @ 2019-07-16  9:40 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Jason Wang, Stefan Hajnoczi, virtualization, netdev

On Mon, Jul 15, 2019 at 01:50:28PM -0400, Michael S. Tsirkin wrote:
> On Mon, Jul 15, 2019 at 09:44:16AM +0200, Stefano Garzarella wrote:
> > On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:

[...]

> > > 
> > > 
> > > I think it's just a branch: for ethernet, go to the networking stack;
> > > otherwise, go to the vsock core?
> > > 
> > 
> > Yes, that should work.
> > 
> > So, I should refactor the functions that can also be called from the vsock
> > core, in order to remove the "struct net_device *dev" parameter,
> > maybe creating some wrappers for the network stack.
> > 
> > Otherwise I should create a fake net_device for vsock_core.
> > 
> > What do you suggest?
> 
> Neither.
> 
> I think what Jason was saying all along is this:
> 
> virtio net doesn't actually lose packets, at least most
> of the time, and most of the time it actually passes all
> packets to the host. So it's possible to use a virtio net
> device (possibly with a feature flag that says "does not lose
> packets, all packets go to the host") and build vsock on top.

Yes, I got it after Jason's latest reply.

> 
> and all of this is nice, but don't expect anything easy,
> or any quick results.

I expected this... :-(

> 
> Also, in a sense it's a missed opportunity: we could cut out a lot
> of fat and see just how fast a protocol that is completely
> new and separate from the networking stack can go.

In this case, if we try to do a PoC, what do you think is better?
(A sketch of the address-family registration step, common to all three
options, follows this list.)
    1. new AF_VSOCK + network-stack + virtio-net modified
        This may allow us to reuse a lot of stuff already written,
        but we would go through the network stack

    2. new AF_VSOCK + glue + virtio-net modified
        Intermediate approach, similar to Jason's proposal

    3. new AF_VSOCK + new virtio-vsock
        Could be the thinnest, but we would have to rewrite many things, with the
        risk of repeating the mistakes of the current implementation.
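
All three options would start from registering the new address family with
the kernel. A minimal sketch of that step, assuming a new AF_VSOCK2 family;
the family number, vsock2_create(), and vsock2_family_ops are made-up names
for illustration only:

    #include <linux/module.h>
    #include <linux/net.h>
    #include <linux/socket.h>

    /* Hypothetical new address family number; a real one would need to
     * be reserved in include/linux/socket.h. */
    #define AF_VSOCK2 45

    /* Stub: allocate and initialize the new socket here. */
    static int vsock2_create(struct net *net, struct socket *sock,
                             int protocol, int kern)
    {
            return 0;
    }

    static const struct net_proto_family vsock2_family_ops = {
            .family = AF_VSOCK2,
            .create = vsock2_create,
            .owner  = THIS_MODULE,
    };

    /* At module init: sock_register(&vsock2_family_ops); */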


> Instead the vsock implementation carries so much baggage, both from the
> networking stack - such as softirq processing - and of its own - such as
> workqueues, global state and crude locking - to the point where
> it's actually slower than TCP.

I agree, and I'm finding new issues while I'm trying to support nested
VMs, allowing multiple vsock transports (virtio-vsock and vhost-vsock in
the KVM case) at runtime.
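
As a very rough sketch of what per-connection transport selection might look
like (all the names below are invented; the current vsock core has a single
global transport):

    /* Hypothetical: pick a transport based on the destination CID. */
    static const struct vsock_transport *
    vsock_transport_for_cid(u32 remote_cid)
    {
            if (remote_cid <= VMADDR_CID_HOST)
                    return transport_g2h; /* guest-to-host, e.g. virtio-vsock */
            return transport_h2g;         /* host-to-guest, e.g. vhost-vsock */
    }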

> 

[...]

> > > 
> > > I suggest doing this step by step:
> > > 
> > > 1) use virtio-net but keep some protocol logic
> > > 
> > > 2) separate the protocol logic and merge it into the existing Linux networking stack
> > 
> > Makes sense, thanks for the suggestions; I'll try to follow these steps!
> > 
> > Thanks,
> > Stefano
> 
> 
> An alternative is to look at the sources of overhead in vsock and get rid of
> them, or rewrite it from scratch focusing on performance.

I started looking at virtio-vsock and vhost-vsock, trying to make very
simple changes [1] to increase performance. I should send a v4 of that
series in the very short term; then I'd like to take a deeper look to understand
whether it is better to optimize it or to rewrite it from scratch.


Thanks,
Stefano

[1] https://patchwork.kernel.org/cover/10970145/


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
  2019-07-16  9:40               ` Stefano Garzarella
@ 2019-07-16 10:01                 ` Michael S. Tsirkin
  2019-07-16 10:22                     ` Stefano Garzarella
  1 sibling, 1 reply; 26+ messages in thread
From: Michael S. Tsirkin @ 2019-07-16 10:01 UTC (permalink / raw)
  To: Stefano Garzarella; +Cc: Jason Wang, Stefan Hajnoczi, virtualization, netdev

On Tue, Jul 16, 2019 at 11:40:24AM +0200, Stefano Garzarella wrote:
> On Mon, Jul 15, 2019 at 01:50:28PM -0400, Michael S. Tsirkin wrote:
> > On Mon, Jul 15, 2019 at 09:44:16AM +0200, Stefano Garzarella wrote:
> > > On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:
> 
> [...]
> 
> > > > 
> > > > 
> > > > I think it's just a branch: for ethernet, go to the networking stack;
> > > > otherwise, go to the vsock core?
> > > > 
> > > 
> > > Yes, that should work.
> > > 
> > > So, I should refactor the functions that can also be called from the vsock
> > > core, in order to remove the "struct net_device *dev" parameter,
> > > maybe creating some wrappers for the network stack.
> > > 
> > > Otherwise I should create a fake net_device for vsock_core.
> > > 
> > > What do you suggest?
> > 
> > Neither.
> > 
> > I think what Jason was saying all along is this:
> > 
> > virtio net doesn't actually lose packets, at least most
> > of the time, and most of the time it actually passes all
> > packets to the host. So it's possible to use a virtio net
> > device (possibly with a feature flag that says "does not lose
> > packets, all packets go to the host") and build vsock on top.
> 
> Yes, I got it after Jason's latest reply.
> 
> > 
> > and all of this is nice, but don't expect anything easy,
> > or any quick results.
> 
> I expected this... :-(
> 
> > 
> > Also, in a sense it's a missed opportunity: we could cut out a lot
> > of fat and see just how fast a protocol that is completely
> > new and separate from the networking stack can go.
> 
> In this case, if we try to do a PoC, what do you think is better?
>     1. new AF_VSOCK + network-stack + virtio-net modified
>         This may allow us to reuse a lot of stuff already written,
>         but we would go through the network stack
> 
>     2. new AF_VSOCK + glue + virtio-net modified
>         Intermediate approach, similar to Jason's proposal
> 
>     3. new AF_VSOCK + new virtio-vsock
>         Could be the thinnest, but we would have to rewrite many things, with the
>         risk of repeating the mistakes of the current implementation.
> 

1 or 3 imho. I wouldn't expect a lot from 2. I slightly favor 3 and
Jason favors 1. So take your pick :)

> > Instead the vsock implementation carries so much baggage, both from the
> > networking stack - such as softirq processing - and of its own - such as
> > workqueues, global state and crude locking - to the point where
> > it's actually slower than TCP.
> 
> I agree, and I'm finding new issues while I'm trying to support nested
> VMs, allowing multiple vsock transports (virtio-vsock and vhost-vsock in
> the KVM case) at runtime.
> 
> > 
> 
> [...]
> 
> > > > 
> > > > I suggest doing this step by step:
> > > > 
> > > > 1) use virtio-net but keep some protocol logic
> > > > 
> > > > 2) separate the protocol logic and merge it into the existing Linux networking stack
> > > 
> > > Makes sense, thanks for the suggestions; I'll try to follow these steps!
> > > 
> > > Thanks,
> > > Stefano
> > 
> > 
> > An alternative is to look at the sources of overhead in vsock and get rid of
> > them, or rewrite it from scratch focusing on performance.
> 
> I started looking at virtio-vsock and vhost-vsock, trying to make very
> simple changes [1] to increase performance. I should send a v4 of that
> series in the very short term; then I'd like to take a deeper look to understand
> whether it is better to optimize it or to rewrite it from scratch.
> 
> 
> Thanks,
> Stefano
> 
> [1] https://patchwork.kernel.org/cover/10970145/

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock
  2019-07-16 10:01                 ` Michael S. Tsirkin
@ 2019-07-16 10:22                     ` Stefano Garzarella
  0 siblings, 0 replies; 26+ messages in thread
From: Stefano Garzarella @ 2019-07-16 10:22 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: Jason Wang, Stefan Hajnoczi, virtualization, netdev

On Tue, Jul 16, 2019 at 06:01:33AM -0400, Michael S. Tsirkin wrote:
> On Tue, Jul 16, 2019 at 11:40:24AM +0200, Stefano Garzarella wrote:
> > On Mon, Jul 15, 2019 at 01:50:28PM -0400, Michael S. Tsirkin wrote:
> > > On Mon, Jul 15, 2019 at 09:44:16AM +0200, Stefano Garzarella wrote:
> > > > On Fri, Jul 12, 2019 at 06:14:39PM +0800, Jason Wang wrote:
> > 
> > [...]
> > 
> > > > > 
> > > > > 
> > > > > I think it's just a branch: for ethernet, go to the networking stack;
> > > > > otherwise, go to the vsock core?
> > > > > 
> > > > 
> > > > Yes, that should work.
> > > > 
> > > > So, I should refactor the functions that can also be called from the vsock
> > > > core, in order to remove the "struct net_device *dev" parameter,
> > > > maybe creating some wrappers for the network stack.
> > > > 
> > > > Otherwise I should create a fake net_device for vsock_core.
> > > > 
> > > > What do you suggest?
> > > 
> > > Neither.
> > > 
> > > I think what Jason was saying all along is this:
> > > 
> > > virtio net doesn't actually lose packets, at least most
> > > of the time, and most of the time it actually passes all
> > > packets to the host. So it's possible to use a virtio net
> > > device (possibly with a feature flag that says "does not lose
> > > packets, all packets go to the host") and build vsock on top.
> > 
> > Yes, I got it after Jason's latest reply.
> > 
> > > 
> > > and all of this is nice, but don't expect anything easy,
> > > or any quick results.
> > 
> > I expected this... :-(
> > 
> > > 
> > > Also, in a sense it's a missed opportunity: we could cut out a lot
> > > of fat and see just how fast a protocol that is completely
> > > new and separate from the networking stack can go.
> > 
> > In this case, if we try to do a PoC, what do you think is better?
> >     1. new AF_VSOCK + network-stack + virtio-net modified
> >         This may allow us to reuse a lot of stuff already written,
> >         but we would go through the network stack
> > 
> >     2. new AF_VSOCK + glue + virtio-net modified
> >         Intermediate approach, similar to Jason's proposal
> > 
> >     3. new AF_VSOCK + new virtio-vsock
> >         Could be the thinnest, but we would have to rewrite many things, with the
> >         risk of repeating the mistakes of the current implementation.
> > 
> 
> 1 or 3 imho. I wouldn't expect a lot from 2. I slightly favor 3 and
> Jason favors 1. So take your pick :)
> 

Yes, I agree :)

Maybe "Jason 1" could be the short term (and an opportunity to study better the
code and sources of overhead) and "new AF_VSOCK + new virtio-vsock" the long
term goal with the multi-transport support in mind.

Thank you so much for your guidance and useful advice,
Stefano

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2019-07-16 10:22 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-10 15:37 [RFC] virtio-net: share receive_*() and add_recvbuf_*() with virtio-vsock Stefano Garzarella
2019-07-11  7:37 ` Jason Wang
2019-07-11 11:41   ` Stefano Garzarella
2019-07-11 19:52     ` Michael S. Tsirkin
2019-07-12 10:00       ` Stefano Garzarella
2019-07-12 10:14         ` Jason Wang
2019-07-15  7:44           ` Stefano Garzarella
2019-07-15  9:16             ` Jason Wang
2019-07-15 10:42               ` Stefano Garzarella
2019-07-15 17:50             ` Michael S. Tsirkin
2019-07-16  9:40               ` Stefano Garzarella
2019-07-16 10:01                 ` Michael S. Tsirkin
2019-07-16 10:22                   ` Stefano Garzarella
