From: Jason Wang <jasowang@redhat.com>
To: Haotian Wang <haotian.wang@sifive.com>,
	kishon@ti.com, mst@redhat.com, lorenzo.pieralisi@arm.com,
	bhelgaas@google.com
Cc: linux-pci@vger.kernel.org, haotian.wang@duke.edu
Subject: Re: [PATCH] pci: endpoint: functions: Add a virtnet EP function
Date: Thu, 5 Sep 2019 10:56:19 +0800
Message-ID: <59982499-0fc1-2e39-9ff9-993fb4dd7dcc@redhat.com>
In-Reply-To: <20190904215801.2971-1-haotian.wang@sifive.com>


On 2019/9/5 5:58 AM, Haotian Wang wrote:
> Hi Jason,
>
> I have an additional comment regarding using vring.
>
> On Tue, Sep 3, 2019 at 6:42 AM Jason Wang <jasowang@redhat.com> wrote:
>> Kind of. In order to address the above limitation, you probably want to
>> implement a vringh-based netdevice and driver. It will work like this:
>> instead of trying to represent a virtio-net device to the endpoint,
>> represent a new type of network device that uses two vringh rings instead
>> of virtio rings. The vringh ring is usually used to implement the
>> counterpart of a virtio driver. The advantages are obvious:
>>
>> - no need to deal with two sets of features, config space, etc.
>> - network specific: from the point of view of the endpoint Linux, it's
>> not a virtio device, so there's no need to care about transport details
>> or embed internal virtio-net specific data structures
>> - reuse the existing code (vringh) to avoid duplicating bugs;
>> implementing a virtqueue is kind of a challenge
> With vringh.c, there is no easy way to interface with virtio_net.c.
>
> vringh.c is linked with vhost/net.c nicely 


Let me clarify: vhost_net doesn't use vringh at all (though there's a
plan to switch to using it).


> but again it's not easy to
> interface vhost/net.c with the network stack of the endpoint kernel. The
> vhost drivers were not designed for creating another suite of virtual
> devices in the host kernel in the first place. If I try to write code
> for this interfacing manually, it seems I will duplicate work that
> virtio_net.c already does.


Let me explain:

- I'm not suggesting using vhost_net, since it can only deal with
userspace virtio rings.
- I suggest introducing a netdev that has vringh vrings associated with
it. Vringh was designed to deal with virtio rings located in different
types of memory. It currently supports userspace and kernel vrings, but
it should not be too hard to add support for e.g. an endpoint device
that requires DMA or some other method to access the vring. So it was
designed to talk directly with e.g. a kernel virtio device.
- In your case, you can read the vring address from the virtio config
space through the endpoint framework and then create the vringh. It's as
simple as: create a netdev, read the vring address, and initialize the
vringh. Then you can use the vringh helpers to get iovs and build skbs,
etc. (similar to caif_virtio); a sketch follows below.
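
To make that concrete, here is a minimal sketch (my assumptions, not
your patch; the ep_vnet_* names are hypothetical) of an endpoint-side
RX path built on the in-kernel vringh helpers, in the style of
caif_virtio. It assumes the desc/avail/used pointers were already read
from the virtio config space through the endpoint framework and mapped
into the kernel address space:

#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>
#include <linux/vringh.h>

struct ep_vnet {
	struct net_device *ndev;
	struct vringh rx_vrh;	/* the Host's TX ring is our RX ring */
	struct vringh_kiov riov;
};

static int ep_vnet_init_ring(struct ep_vnet *vnet, u64 features,
			     unsigned int num, struct vring_desc *desc,
			     struct vring_avail *avail,
			     struct vring_used *used)
{
	/* Let vringh_getdesc_kern() grow the kiov array on demand;
	 * free it with vringh_kiov_cleanup() at teardown. */
	vringh_kiov_init(&vnet->riov, NULL, 0);
	return vringh_init_kern(&vnet->rx_vrh, features, num,
				false /* weak_barriers */,
				desc, avail, used);
}

/* Pull one buffer the Host made available and feed it to the stack. */
static int ep_vnet_rx_one(struct ep_vnet *vnet)
{
	struct sk_buff *skb;
	ssize_t len;
	u16 head;
	int err;

	err = vringh_getdesc_kern(&vnet->rx_vrh, &vnet->riov, NULL,
				  &head, GFP_ATOMIC);
	if (err <= 0)	/* 0: ring empty, < 0: error */
		return err;

	skb = netdev_alloc_skb(vnet->ndev, ETH_FRAME_LEN);
	if (!skb) {
		vringh_abandon_kern(&vnet->rx_vrh, 1);
		return -ENOMEM;
	}

	len = vringh_iov_pull_kern(&vnet->riov, skb->data, ETH_FRAME_LEN);
	if (len < 0) {
		kfree_skb(skb);
		vringh_abandon_kern(&vnet->rx_vrh, 1);
		return len;
	}
	skb_put(skb, len);
	skb->protocol = eth_type_trans(skb, vnet->ndev);
	netif_rx(skb);

	/* Mark the buffer used so the Host can reclaim it. */
	return vringh_complete_kern(&vnet->rx_vrh, head, len);
}

A TX path would be the mirror image using vringh_iov_push_kern(),
followed by kicking the Host, e.g. by raising an interrupt through the
endpoint framework.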


>
> There will probably be two more main disadvantages.
>
> Firstly, there will be two layers of overhead. vhost/net.c uses
> vringh.c to channel data buffers into some struct sockets. This is the
> first layer of overhead. That the virtual network device will have to
> use these sockets somehow adds another layer of overhead.


As I said, it doesn't work like vhost, and no socket is needed at all.


>
> Secondly, probing, initialization and de-initialization of the virtual
> network_device are already non-trivial. I'll likely end up copying this
> part almost verbatim from virtio_net.c, so there will be more duplicate
> code.


It will be a new type of network device instead of a virtio one; you
won't need to care about any virtio details in your code, only vringh.
So it looks to me like it would be much simpler and more compact (see
the registration sketch below).
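
For instance (a sketch under the same hypothetical ep_vnet names as
above, not a complete driver), registering such a netdev involves no
virtio core at all:

#include <linux/netdevice.h>
#include <linux/etherdevice.h>

static netdev_tx_t ep_vnet_xmit(struct sk_buff *skb,
				struct net_device *ndev)
{
	/* Here you would push the skb data into the TX vringh with
	 * vringh_iov_push_kern() and kick the Host, e.g. by raising
	 * an interrupt through the endpoint framework. */
	dev_kfree_skb(skb);
	return NETDEV_TX_OK;
}

static const struct net_device_ops ep_vnet_netdev_ops = {
	.ndo_start_xmit	= ep_vnet_xmit,
};

static struct net_device *ep_vnet_create(void)
{
	struct net_device *ndev;

	ndev = alloc_etherdev(sizeof(struct ep_vnet));
	if (!ndev)
		return NULL;

	ndev->netdev_ops = &ep_vnet_netdev_ops;
	eth_hw_addr_random(ndev);	/* or a MAC from the config space */

	if (register_netdev(ndev)) {
		free_netdev(ndev);
		return NULL;
	}
	return ndev;
}

No virtio feature bits, no config ops, no transport glue: the
device-facing ring handling is all done through vringh.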

That's not to say your method is no way to go, but you would need to
deal with lots of other issues, as I replied in the previous mail. What
you want to achieve is:

1) Host (virtio-pci) <-> virtio ring <-> virtual eth device <-> virtio
ring <-> Endpoint (virtio with customized config_ops).

What I suggest is:

2) Host (virtio-pci) <-> virtio ring <-> virtual eth device <-> vringh
vring (virtio ring in the Host) <-> network device

The differences are:
- Complexity: In your proposal, there will be two virtio devices and 4
virtqueues. That means you need to prepare two sets of features, config
ops, etc., and dealing with inconsistent features will be a pain. It may
work for a simple case like a virtio-net device with only _F_MAC, but it
would be hard to extend. If we decide to go for vringh, there will be a
single virtio device and 2 virtqueues. In the endpoint part, there will
be 2 vringh vrings (which actually point at the same virtqueues from the
Host side) and a normal network device. There's no need to deal with
inconsistency, since vringh basically serves as the device
implementation; the feature negotiation is just between the device
(network device with vringh) and the driver (virtio-pci) from the view
of the Linux running on the PCI Host.
- Maintainability: A third path for dealing with virtio rings. We
already have vhost and vringh; a third path will add a lot of overhead
when trying to maintain them. My proposal tries to reuse vringh, so
there's no need for a new path.
- Layer violation: We want to hide the transport details from the device
and make the virtio-net device usable without modification, but your
code pokes at internals like virtnet_info. My proposal is to just have a
new networking device that doesn't need to care about virtio at all.
Having a new type of netdev is not as hard as you imagine; I suggest
taking a look at how caif_virtio is done, it would be helpful.

If you still decide to go with the two-virtio-device model, you
probably need to:
- Provide two sets of configs and features, and deal with inconsistency
- Try to reuse the vringh code
- Not refer to internal structures from virtio_net.c

But I recommend first taking a step of trying the vringh method, which
should be much simpler.

Thanks


>
> Thank you for your patience!
>
> Best,
> Haotian


