From: Jason Wang <jasowang@redhat.com>
To: Jason Gunthorpe <jgg@mellanox.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, tiwei.bie@intel.com,
maxime.coquelin@redhat.com, cunming.liang@intel.com,
zhihong.wang@intel.com, rob.miller@broadcom.com,
xiao.w.wang@intel.com, haotian.wang@sifive.com,
lingshan.zhu@intel.com, eperezma@redhat.com, lulu@redhat.com,
parav@mellanox.com, kevin.tian@intel.com, stefanha@redhat.com,
rdunlap@infradead.org, hch@infradead.org, aadam@redhat.com,
jiri@mellanox.com, shahafs@mellanox.com, hanand@xilinx.com,
mhabets@solarflare.com
Subject: Re: [PATCH V2 3/5] vDPA: introduce vDPA bus
Date: Mon, 17 Feb 2020 14:07:35 +0800 [thread overview]
Message-ID: <312c3a04-4cc5-650c-48bc-ffbc7c765c22@redhat.com> (raw)
In-Reply-To: <20200214140446.GD4271@mellanox.com>
On 2020/2/14 10:04 PM, Jason Gunthorpe wrote:
> On Fri, Feb 14, 2020 at 12:05:32PM +0800, Jason Wang wrote:
>
>>> The standard driver model is a 'bus' driver provides the HW access
>>> (think PCI level things), and a 'hw driver' attaches to the bus
>>> device,
>> This is not true; the kernel already has plenty of virtual buses where virtual
>> devices and drivers can be attached. Besides mdev and virtio, you can see
>> vop, rpmsg, visorbus, etc.
> Sure, but those are not connecting HW into the kernel..
Well, the virtual devices are normally implemented via a real HW driver.
E.g. for the virtio bus, its transport driver could be the driver of real
hardware (e.g. PCI).
>
>>> and instantiates a 'subsystem device' (think netdev, rdma,
>>> etc) using some per-subsystem XXX_register().
>>
>> Well, if you go through the virtio spec, we support ~20 different types of
>> devices. Classes like netdev and rdma are correct since they have a clear
>> set of semantics of their own. But grouping network and scsi into a single
>> class looks wrong; that's the work of a virtual bus.
> rdma also has about 20 different types of things it supports on top of
> the generic ib_device.
>
> The central point in RDMA is the 'struct ib_device' which is a device
> class. You can discover all RDMA devices by looking in /sys/class/infiniband/
>
> It has an internal bus like thing (which probably should have been an
> actual bus, but this was done 15 years ago) which allows other
> subsystems to have drivers to match and bind their own drivers to the
> struct ib_device.
Right.
>
> So you'd have a chain like:
>
> struct pci_device -> struct ib_device -> [ib client bus thing] -> struct net_device
So for vDPA we want to have:

kernel datapath:
struct pci_device -> struct vDPA device -> [vDPA bus] -> struct virtio_device -> [virtio bus] -> struct net_device

userspace datapath:
struct pci_device -> struct vDPA device -> [vDPA bus] -> struct vhost_device -> UAPI -> userspace driver
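To make the bottom of both chains concrete, here is a hedged sketch of the
HW driver side; the vdpa_* names follow the spirit of this patchset and the
exact signatures may differ from the final API (not compilable as-is):

```
/*
 * Hedged sketch, not the final API: a vendor PCI driver ("HW driver")
 * creates a vDPA device; then either virtio-vdpa (kernel datapath) or
 * vhost-vdpa (userspace datapath) matches and binds on the vDPA bus.
 */
static int my_hw_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct my_hw *hw;
	int err;

	hw = devm_kzalloc(&pdev->dev, sizeof(*hw), GFP_KERNEL);
	if (!hw)
		return -ENOMEM;

	/* datapath callbacks (set_vq_address, kick_vq, ...) live in ops */
	hw->vdpa = vdpa_alloc_device(&pdev->dev, &my_hw_vdpa_ops);
	if (!hw->vdpa)
		return -ENOMEM;

	/* puts the device on the vDPA bus; a vDPA bus driver binds to it */
	err = vdpa_register_device(hw->vdpa);
	if (err)
		return err;

	pci_set_drvdata(pdev, hw);
	return 0;
}
```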
>
> And the various char devs are created by clients connecting to the
> ib_device and creating char devs on their own classes.
>
> Since ib_devices are multi-queue we can have all 20 devices running
> concurrently and there are various schemes to manage when the various
> things are created.
>
>>> The 'hw driver' pulls in
>>> functions from the 'subsystem' using a combination of callbacks and
>>> library-style calls so there is no code duplication.
>> The point is we want vDPA devices to be used by different subsystems, not
>> only vhost but also netdev, blk, crypto (every subsystem that can use
>> virtio devices). That's why we introduce the vDPA bus with different
>> drivers on top.
> See the other mail, it seems struct virtio_device serves this purpose
> already, confused why a struct vdpa_device and another bus is being
> introduced
>
>> There're several examples that a bus is needed on top.
>>
>> A good example is the Mellanox TmFIFO driver, which is a platform device driver
>> but registers itself as a virtio device in order to be used by the virtio-console
>> driver on the virtio bus.
> How is that another bus? The platform bus is the HW bus, the TmFIFO is
> the HW driver, and virtio_device is the subsystem.
>
> This seems reasonable/normal so far..
Yes, that's reasonable. This example is to answer the question of why a bus
is used instead of a class here.
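For reference, the TmFIFO pattern looks roughly like this (simplified from
drivers/platform/mellanox/mlxbf-tmfifo.c; error handling and the real
per-queue setup are elided, and tmfifo_virtio_config_ops is assumed to
exist):

```
/*
 * A platform ("HW bus") driver that exposes its device as a
 * virtio_device, so a stock virtio driver such as virtio-console
 * can match and bind on the virtio bus.
 */
static int tmfifo_probe(struct platform_device *pdev)
{
	struct virtio_device *vdev;

	vdev = devm_kzalloc(&pdev->dev, sizeof(*vdev), GFP_KERNEL);
	if (!vdev)
		return -ENOMEM;

	vdev->dev.parent = &pdev->dev;
	vdev->id.device = VIRTIO_ID_CONSOLE;      /* device type on the virtio bus */
	vdev->config = &tmfifo_virtio_config_ops; /* transport callbacks */

	/* puts the device on the virtio bus; virtio-console matches it */
	return register_virtio_device(vdev);
}
```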
>
>> But it's a pity that the device cannot be used by a userspace driver due to
>> the limitation of the virtio bus, which is designed for kernel drivers. That's
>> why the vDPA bus is introduced: it abstracts the common requirements of both
>> kernel and userspace drivers, which allows a single HW driver to be used by
>> kernel drivers (and the subsystems on top) and by userspace drivers.
> Ah! Maybe this is the source of all this strangeness - the userspace
> driver is something parallel to the struct virtio_device instead of
> being a consumer of it??
The userspace driver is not parallel to virtio_device. It's actually the
vhost_device that is parallel to virtio_device.
> That certainly would mess up the driver model
> quite a lot.
>
> Then you want to add another bus to switch between vhost and struct
> virtio_device? But only for vdpa?
Still, vhost works on top of the vDPA bus directly (see the reply above).
>
> But as you point out something like TmFIFO is left hanging. Seems like
> the wrong abstraction point..
You know, even refactoring the virtio bus is not free; the TmFIFO driver
would need changes anyhow.
Thanks
>
> Jason
>