From: Daniel Vetter <daniel@ffwll.ch>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>, Dave Airlie <airlied@gmail.com>,
	Maciej Kwapulinski <maciej.kwapulinski@linux.intel.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Derek Kiernan <derek.kiernan@xilinx.com>,
	Dragan Cvetic <dragan.cvetic@xilinx.com>,
	Andy Shevchenko <andy.shevchenko@gmail.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	"open list:DOCUMENTATION" <linux-doc@vger.kernel.org>,
	DRI Development <dri-devel@lists.freedesktop.org>
Subject: Re: [PATCH v3 00/14] Driver of Intel(R) Gaussian & Neural Accelerator
Date: Mon, 17 May 2021 10:49:09 +0200	[thread overview]
Message-ID: <CAKMK7uEg2khb7wDzHTGEPwfbYe+T_5Av=_BTnt91CBW5U4yWvg@mail.gmail.com> (raw)
In-Reply-To: <YKIigHrwqp8zd036@kroah.com>

On Mon, May 17, 2021 at 10:00 AM Greg Kroah-Hartman
<gregkh@linuxfoundation.org> wrote:
>
> On Mon, May 17, 2021 at 09:40:53AM +0200, Daniel Vetter wrote:
> > On Fri, May 14, 2021 at 11:00:38AM +0200, Arnd Bergmann wrote:
> > > On Fri, May 14, 2021 at 10:34 AM Greg Kroah-Hartman
> > > <gregkh@linuxfoundation.org> wrote:
> > > > On Thu, May 13, 2021 at 01:00:26PM +0200, Maciej Kwapulinski wrote:
> > > > > Dear kernel maintainers,
> > > > >
> > > > > This submission is a kernel driver to support Intel(R) Gaussian & Neural
> > > > > Accelerator (Intel(R) GNA). Intel(R) GNA is a PCI-based neural co-processor
> > > > > available on multiple Intel platforms. AI developers and users can offload
> > > > > continuous inference workloads to an Intel(R) GNA device in order to free
> > > > > processor resources and save power. Noise reduction and speech recognition
> > > > > are examples of the workloads Intel(R) GNA handles, though its usage is
> > > > > not limited to these two.
> > > >
> > > > How does this compare with the "nnpi" driver being proposed here:
> > > >         https://lore.kernel.org/r/20210513085725.45528-1-guy.zadicario@intel.com
> > > >
> > > > Please work with those developers to share code and userspace api and
> > > > tools.  Having the community review two totally different apis and
> > > > drivers for the same type of functionality from the same company is
> > > > totally wasteful of our time and energy.
> > >
> > > Agreed, but I think we should go further than this and work towards a
> > > subsystem across companies for machine learning and neural networks
> > > accelerators for both inferencing and training.
> >
> > We have, it's called drivers/gpu. Feel free to rename it to drivers/xpu or
> > think of the G as General, not Graphics.
> >
> > > We have support for Intel habanalabs hardware in drivers/misc, and there are
> > > countless hardware solutions out of tree that would hopefully go the same
> > > way with an upstream submission and open source user space, including
> > >
> > > - Intel/Mobileye EyeQ
> > > - Intel/Movidius Keembay
> > > - Nvidia NVDLA
> > > - Gyrfalcon Lightspeeur
> > > - Apple Neural Engine
> > > - Google TPU
> > > - Arm Ethos
> > >
> > > plus many more that are somewhat less likely to gain fully open source
> > > driver stacks.
> >
> > We also had this entire discussion 2 years ago with habanalabs. The
> > hang-up is that drivers/gpu folks require fully open source userspace,
> > including compiler and anything else you need to actually use the chip.
> > Greg doesn't, he's happy if all he has is the runtime library with some
> > tests.

I guess we're really going to beat this horse into pulp ... oh well.

> All you need is a library, what you write on top of that is always
> application-specific, so how can I ask for "more"?

This is like accepting a new cpu port where all you require is that
the libc port is open source, but the cpu compiler is totally fine as
a blob (technically doable now that llvm is supported). It makes no
sense at all, at least to people who have worked with accelerators
like this before.

We are not requiring that applications be open. We're only requiring
that at least one of the compilers you need to create any kind of
application is open (no need to open the fully optimized one with all
the magic sauce), because without that you can't use the device, you
can't analyze the stack, and you have no idea what exactly it is you're
merging. With these devices, the uapi visible in include/uapi is the
smallest part of the interface exposed to userspace.
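
To make that concrete, here is a purely hypothetical uapi sketch (the
struct, field and ioctl names are invented for illustration, not taken
from either of the two drivers under discussion):

#include <linux/types.h>
#include <linux/ioctl.h>

/* Hypothetical submit interface, for illustration only. */
struct fake_accel_submit {
	__u64 cmdbuf_ptr;	/* user pointer to the command stream ...  */
	__u64 cmdbuf_size;	/* ... as produced by the vendor compiler  */
	__u64 bufs_ptr;		/* array of buffer handles the stream uses */
	__u32 buf_count;	/* number of entries in that array         */
	__u32 flags;
};

#define FAKE_ACCEL_SUBMIT	_IOWR('F', 0x01, struct fake_accel_submit)

All the kernel ever sees is an opaque command stream plus a few buffer
handles; everything that gives that blob meaning lives in the compiler
that produced it, and that is exactly the part that tends to stay
closed.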

> > These two drivers here look a lot more like classic gpus than habanalabs
> > did; at least from a quick look they operate with an explicit buffer
> > allocation/registration model. So there are even more reasons to just reuse
> > all the stuff we have already. But I also don't expect these drivers to
> > come with open compilers; they never do, at least not initially before
> > you've started talking with the vendor. Hence I expect there'll be more
> > drivers/totally-not-drm acceleration subsystem nonsense.
>
> As these are both from Intel, why aren't they using the same open
> compiler?  Why aren't they using the same userspace api as well?  What's
> preventing them from talking to each other about this and not forcing
> the community (i.e. outsiders) from being the one to force this to
> happen?

I'm unfortunately not the CEO of this company. Also, you're the one who
keeps accepting drivers that the accel folks (aka the dri-devel
community) said shouldn't be merged, so I have zero internal bargaining
power to force something reasonable here. So please don't blame me for
this mess, it is entirely yours.

> > Anyway, this horse has been thoroughly beaten to death and more; the
> > agreement is that accel drivers in drivers/misc must not use any gpu
> > stuff, so that drivers/gpu people don't end up in a prickly situation they
> > never signed up for. E.g. I removed some code sharing from habanalabs.
> > This means interop between gpu and nn/ai drivers will be a no-go until this
> > is resolved, but *shrug*.
>
> I'm all for making this unified, but these are not really devices doing
> graphics, so putting it all into DRM always feels wrong to me.  The fact
> that people abuse GPU devices for non-graphics usage would indicate to
> me that that code should be moving _out_ of the drm subsystem :)

Like I said, if the 'g' really annoys you that much, feel free to send
in a patch to rename drivers/gpu to drivers/xpu.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

Thread overview: 54+ messages
2021-05-13 11:00 [PATCH v3 00/14] Driver of Intel(R) Gaussian & Neural Accelerator Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 01/14] intel_gna: add driver module Maciej Kwapulinski
2021-05-13 11:13   ` Greg Kroah-Hartman
2021-05-13 11:00 ` [PATCH v3 02/14] intel_gna: add component of hardware operation Maciej Kwapulinski
2021-05-13 11:15   ` Greg Kroah-Hartman
2021-05-13 11:00 ` [PATCH v3 03/14] intel_gna: read hardware info in the driver Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 04/14] intel_gna: add memory handling Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 05/14] intel_gna: initialize mmu Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 06/14] intel_gna: add hardware ids Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 07/14] intel_gna: add request component Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 08/14] intel_gna: implement scoring Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 09/14] intel_gna: add a work queue to process scoring requests Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 10/14] intel_gna: add interrupt handler Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 11/14] intel_gna: add ioctl handler Maciej Kwapulinski
2021-05-13 11:24   ` Greg Kroah-Hartman
2021-05-14  8:20     ` Maciej Kwapulinski
2021-05-14  8:32       ` Greg Kroah-Hartman
2021-05-24 10:43         ` Maciej Kwapulinski
2021-05-24 10:49           ` Greg Kroah-Hartman
2021-05-25  7:50             ` Maciej Kwapulinski
2021-05-13 14:16   ` Matthew Wilcox
     [not found]   ` <20210514101253.1037-1-hdanton@sina.com>
2021-05-14 15:06     ` Maciej Kwapulinski
2021-05-13 11:00 ` [PATCH v3 12/14] intel_gna: add a 'misc' device Maciej Kwapulinski
2021-05-13 11:18   ` Greg Kroah-Hartman
2021-05-13 17:06     ` Maciej Kwapulinski
2021-05-13 17:15       ` Greg Kroah-Hartman
2021-05-13 11:00 ` [PATCH v3 13/14] intel_gna: add file operations to " Maciej Kwapulinski
2021-05-13 11:19   ` Greg Kroah-Hartman
2021-05-13 11:00 ` [PATCH v3 14/14] intel_gna: add power management Maciej Kwapulinski
2021-05-14  8:34 ` [PATCH v3 00/14] Driver of Intel(R) Gaussian & Neural Accelerator Greg Kroah-Hartman
2021-05-14  9:00   ` Arnd Bergmann
2021-05-17  7:40     ` Daniel Vetter
2021-05-17  8:00       ` Greg Kroah-Hartman
2021-05-17  8:49         ` Daniel Vetter [this message]
2021-05-17  8:55           ` Greg Kroah-Hartman
2021-05-17  9:12             ` Daniel Vetter
2021-05-17 18:04               ` Dave Airlie
2021-05-17 19:12       ` Thomas Zimmermann
2021-05-17 19:23         ` Alex Deucher
2021-05-17 19:39           ` Daniel Vetter
2021-05-17 19:49           ` Thomas Zimmermann
2021-05-17 20:00             ` Daniel Vetter
2021-05-17 20:15               ` Thomas Zimmermann
2021-05-17 19:32         ` Daniel Stone
2021-05-17 20:10           ` Thomas Zimmermann
2021-05-17 21:24             ` Daniel Vetter
2021-05-17 21:36             ` Dave Airlie
2021-06-16  7:38   ` Maciej Kwapulinski
2022-06-20  9:49     ` maciej.kwapulinski
2022-06-20  9:56       ` Greg KH
2022-06-20 10:08         ` Maciej Kwapulinski
2022-06-20 10:26           ` Greg KH
2022-06-25 17:25       ` Daniel Vetter
2021-05-20 11:58 ` Linus Walleij
