LKML Archive on lore.kernel.org
From: Jerome Glisse <jglisse@redhat.com>
To: Alan Cox <gnomes@lxorguk.ukuu.org.uk>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	Kenneth Lee <nek.in.cn@gmail.com>,
	Hao Fang <fanghao11@huawei.com>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Jonathan Corbet <corbet@lwn.net>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"Kumar, Sanjay K" <sanjay.k.kumar@intel.com>,
	"iommu@lists.linux-foundation.org"
	<iommu@lists.linux-foundation.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linuxarm@huawei.com" <linuxarm@huawei.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	"linux-crypto@vger.kernel.org" <linux-crypto@vger.kernel.org>,
	Philippe Ombredanne <pombredanne@nexb.com>,
	Zaibo Xu <xuzaibo@huawei.com>,
	Kenneth Lee <liguozhu@hisilicon.com>,
	"David S . Miller" <davem@davemloft.net>,
	Ross Zwisler <ross.zwisler@linux.intel.com>
Subject: Re: [RFC PATCH 0/7] A General Accelerator Framework, WarpDrive
Date: Fri, 3 Aug 2018 10:55:26 -0400
Message-ID: <20180803145526.GB4079@redhat.com>
In-Reply-To: <20180803152043.40f88947@alans-desktop>

On Fri, Aug 03, 2018 at 03:20:43PM +0100, Alan Cox wrote:
> > > If we are going to have any kind of general purpose accelerator API then
> > > it has to be able to implement things like  
> > 
> > Why is the existing driver model not good enough? If you want
> > a device with function X, you look into /dev/X (for instance,
> > for GPUs you look in /dev/dri)
> 
> Except when my GPU is in an FPGA, in which case it might be somewhere else,
> or it's a general purpose accelerator that happens to be usable as a GPU.
> Unusual today in the big-computer space, but you'll find it in
> microcontrollers.

You do need a specific userspace driver for each of those devices,
correct?

You definitely do for GPUs, and I do not see that going away any time
soon. I doubt Xilinx and Altera will ever compile down to the same
bitstream format either.

So a userspace application is bound to use some kind of library that
implements the userspace side of the driver, and that library can
easily provide helpers to enumerate all the devices it supports.

For instance, that is what OpenCL provides for both GPUs and FPGAs:
one single API and multiple different kinds of hardware you can
target through it.
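
To make that concrete, here is a minimal sketch (not from the original
mail, untested, most error handling omitted) of how a single OpenCL
entry point enumerates whatever GPU or FPGA platforms the installed
vendor libraries expose:

    #include <stdio.h>
    #include <CL/cl.h>

    /* List every device (GPU, FPGA, ...) reachable through the one API. */
    int main(void)
    {
        cl_platform_id plat[8];
        cl_uint nplat = 0;

        if (clGetPlatformIDs(8, plat, &nplat) != CL_SUCCESS)
            return 1;

        for (cl_uint p = 0; p < nplat; p++) {
            char pname[128] = "";
            cl_device_id dev[16];
            cl_uint ndev = 0;

            clGetPlatformInfo(plat[p], CL_PLATFORM_NAME,
                              sizeof(pname), pname, NULL);
            if (clGetDeviceIDs(plat[p], CL_DEVICE_TYPE_ALL,
                               16, dev, &ndev) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < ndev; d++) {
                char dname[128] = "";

                clGetDeviceInfo(dev[d], CL_DEVICE_NAME,
                                sizeof(dname), dname, NULL);
                printf("%s: %s\n", pname, dname);
            }
        }
        return 0;
    }

Built against any vendor's OpenCL ICD (something like
"gcc list.c -lOpenCL"), the same binary sees GPUs and FPGAs alike
without caring which kernel driver sits underneath.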


> > Each of those devices needs a userspace driver, and thus this
> > userspace driver can easily know where to look. I do not
> > expect that every application will reimplement those drivers,
> > but will instead use some kind of library that provides a
> > high-level API for each of those devices.
> 
> Think about it from the user level. You have a pipeline of things you
> wish to execute; you need to get the right accelerator combinations, and
> they need to fit together to meet system constraints like the number of
> IOMMU IDs each accelerator supports and where they are connected.

Creating a pipe of devices, i.e. one device consuming the work of
the previous one, is a problem of its own, and it should be solved
separately, not inside VFIO.

GPUs (on ARM) already have this pipe problem, because the IP block
that does overlays, the IP block that pushes pixels to the screen,
and the IP block that does 3D rendering all come from different
companies.
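
That case is already handled in userspace today with dma-buf: a buffer
produced on one DRM node is handed to another node through a file
descriptor. The sketch below (device paths and the incoming GEM handle
are only placeholder examples, error handling mostly omitted) shows the
idea with libdrm's PRIME helpers:

    #include <fcntl.h>
    #include <stdint.h>
    #include <xf86drm.h>

    /* Pass a buffer from a render-only GPU node to a separate
     * display/KMS node through a dma-buf fd -- no VFIO involved. */
    int pipe_buffer(uint32_t render_handle, uint32_t *display_handle)
    {
        int render_fd  = open("/dev/dri/renderD128", O_RDWR); /* 3D IP block */
        int display_fd = open("/dev/dri/card0", O_RDWR);      /* scanout IP block */
        int dmabuf_fd;

        /* Export the renderer's GEM handle as a dma-buf fd ... */
        if (drmPrimeHandleToFD(render_fd, render_handle,
                               DRM_CLOEXEC, &dmabuf_fd))
            return -1;
        /* ... and import it on the display side. */
        return drmPrimeFDToHandle(display_fd, dmabuf_fd, display_handle);
    }

The same fd-passing model works across completely unrelated drivers,
which is why a generic "pipe" mechanism does not require every
participant to become a VFIO device.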

I do not see the value of having all the devices enumerated through
VFIO to address this problem. I can definitely understand having a
specific kernel mechanism to expose to userspace what is doable,
but I believe this should be its own thing that allows any device
(a VFIO one, a network one, a GPU, an FPGA, ...) to be used in
"pipe" mode.


> > Now you have a hierarchy of memory for the CPU (HBM, local
> > node main memory aka your DDR DIMMs, persistent memory) each
> 
> It's not a hierarchy, it's a graph. There's no fundamental reason two
> accelerators can't be close to two different CPU cores but have shared
> HBM that is far from each processor. There are physical reasons it tends
> to look more like a hierarchy today.

Yes, you are right, I used the wrong word.

> 
> > Anyway, I think finding devices and finding the relations between
> > devices and memory are two separate problems, and as such they
> > should be handled separately.
> 
> At a certain level they are deeply intertwined, because you need a common
> API. It's not good if I want a particular accelerator and then need to
> see which API it's under on this machine and which interface I have to
> use, and maybe have a mix of FPGA, WarpDrive and Google ASIC interfaces,
> all different.
> 
> The job of the kernel is to impose some kind of sanity and unity on this
> lot.
> 
> All of it in the end comes down to
> 
> 'Somehow glue some chunk of memory into my address space and find any
> supporting driver I need'
> 
> plus virtualization of the above.
> 
> That bit's easy - but making it usable is a different story.

My point is that for all those devices you will have userspace drivers,
and thus all the complexity of finding the best memory for a given
combination of hardware can be pushed to userspace. The kernel only has
to expose the topology of the different memories and their relation to
each of the devices, very much like it already does with NUMA nodes for
CPUs.
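
As a rough illustration (not from the original mail; the PCI address is
a placeholder), a good part of that topology is already visible in
sysfs today, and userspace can read it directly:

    #include <stdio.h>

    /* Print one small sysfs attribute, e.g. a NUMA/topology hint. */
    static void dump(const char *path)
    {
        char buf[256];
        FILE *f = fopen(path, "r");

        if (!f)
            return;
        if (fgets(buf, sizeof(buf), f))
            printf("%s: %s", path, buf);
        fclose(f);
    }

    int main(void)
    {
        /* NUMA node a given accelerator is attached to (-1 if none). */
        dump("/sys/bus/pci/devices/0000:03:00.0/numa_node");
        /* Relative distance from node 0 to every other memory node. */
        dump("/sys/devices/system/node/node0/distance");
        return 0;
    }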


I would rather see the kernel have one API to expose topology and one
API to expose device-mixing capabilities (i.e. how you can pipe devices
together, if at all), than have to update every single existing
upstream driver that wants to participate in the above so that it
becomes a VFIO driver.
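
Purely as a strawman to make that concrete (none of these paths exist;
the names are invented for illustration), such an interface could be a
small sysfs hierarchy rather than a new driver model:

    /sys/class/accel-topology/dev0/memory_node    # nearest memory node
    /sys/class/accel-topology/dev0/pipe_sinks     # devices that can consume its output
    /sys/class/accel-topology/dev0/pipe_sources   # devices it can consume from

Existing drivers would only need to register in that class, not be
rewritten on top of VFIO.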


I have nothing against VFIO ;) It just seems to me that this is
reinventing a new device driver infrastructure under it, while the
existing one is good enough and can evolve to support all the cases
discussed here.

Cheers,
Jérôme
