From: Jeffrey Hugo <jhugo@codeaurora.org>
To: Daniel Vetter <daniel@ffwll.ch>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: dri-devel <dri-devel@lists.freedesktop.org>,
	Olof Johansson <olof.johansson@gmail.com>,
	Jason Gunthorpe <jgg@mellanox.com>,
	Dave Airlie <airlied@gmail.com>, Arnd Bergmann <arnd@arndb.de>,
	Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	wufan@codeaurora.org, pratanan@codeaurora.org,
	linux-arm-msm <linux-arm-msm@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [RFC PATCH 0/8] Qualcomm Cloud AI 100 driver
Date: Wed, 20 May 2020 08:48:13 -0600	[thread overview]
Message-ID: <5701b299-7800-1584-4b3a-6147e7ad3fca@codeaurora.org> (raw)
In-Reply-To: <CAKMK7uEbwTK68sxhf452fPHzAreQqRbRc7=RLGX-9SesXnJnLQ@mail.gmail.com>

On 5/20/2020 2:34 AM, Daniel Vetter wrote:
> On Wed, May 20, 2020 at 7:15 AM Greg Kroah-Hartman
> <gregkh@linuxfoundation.org> wrote:
>>
>> On Tue, May 19, 2020 at 10:41:15PM +0200, Daniel Vetter wrote:
>>> On Tue, May 19, 2020 at 07:41:20PM +0200, Greg Kroah-Hartman wrote:
>>>> On Tue, May 19, 2020 at 08:57:38AM -0600, Jeffrey Hugo wrote:
>>>>> On 5/18/2020 11:08 PM, Dave Airlie wrote:
>>>>>> On Fri, 15 May 2020 at 00:12, Jeffrey Hugo <jhugo@codeaurora.org> wrote:
>>>>>>>
>>>>>>> Introduction:
>>>>>>> Qualcomm Cloud AI 100 is a PCIe adapter card which contains a dedicated
>>>>>>> SoC ASIC for the purpose of efficiently running Deep Learning inference
>>>>>>> workloads in a data center environment.
>>>>>>>
>>>>>>> The official press release can be found at -
>>>>>>> https://www.qualcomm.com/news/releases/2019/04/09/qualcomm-brings-power-efficient-artificial-intelligence-inference
>>>>>>>
>>>>>>> The official product website is -
>>>>>>> https://www.qualcomm.com/products/datacenter-artificial-intelligence
>>>>>>>
>>>>>>> At the time of the official press release, numerous technology news sites
>>>>>>> also covered the product.  A search of your favorite site is likely to
>>>>>>> turn up their coverage of it.
>>>>>>>
>>>>>>> It is our goal to have the kernel driver for the product fully upstream.
>>>>>>> The purpose of this RFC is to start that process.  We are still doing
>>>>>>> development (see below), and thus are not quite looking to gain acceptance
>>>>>>> yet, but now that we have a working driver we believe we are at the stage
>>>>>>> where meaningful conversation with the community can occur.
>>>>>>
>>>>>>
>>>>>> Hi Jeffrey,
>>>>>>
>>>>>> Just wondering what the userspace/testing plans are for this driver.
>>>>>>
>>>>>> This introduces a new user-facing API for a device without pointers to
>>>>>> users or tests for that API.
>>>>>
>>>>> We have daily internal testing, although I don't expect you to take my word
>>>>> for that.
>>>>>
>>>>> I would like to get one of these devices into the hands of Linaro, so that
>>>>> it can be put into KernelCI, similar to other Qualcomm products.  I'm trying
>>>>> to convince the powers that be to make this happen.
>>>>>
>>>>> Regarding what the community could do on its own, everything but the Linux
>>>>> driver is considered proprietary - that includes the on-device firmware and
>>>>> the entire userspace stack.  This is a decision above my pay grade.
>>>>
>>>> Ok, that's a decision you are going to have to push upward on, as we
>>>> really can't take this without a working, open, userspace.
>>>
>>> Uh wut.
>>>
>>> So the merge criteria for drivers/accel (atm still drivers/misc but I
>>> thought that was interim until more drivers showed up) isn't actually
>>> "totally-not-a-gpu accel driver without open source userspace".
>>>
>>> Instead it's "totally-not-a-gpu accel driver without open source
>>> userspace" _and_ you have to be best buddies with Greg. Or at least
>>> not be on the naughty company list. Since for habanalabs all you
>>> wanted was a few test cases to exercise the ioctls. Not the entire
>>> userspace.
>>
>> Also, to be fair, I have changed my mind after seeing the mess of
>> complexity that these "ioctls for everyone!" pass-through kinds of
>> drivers are creating.  You were right, we need open userspace code
>> in order to properly evaluate and figure out whether what they are
>> doing is right or not, and to be able to maintain things correctly
>> over time.
>>
>> So I was wrong, and you were right, my apologies for my previous
>> stubbornness.
> 
> Awesome and don't worry, I'm pretty sure we've all been stubborn
> occasionally :-)
> 
> From a drivers/gpu pov I think we're still not quite there, since we also
> want to see the compiler for these programmable accelerator thingies.
> But just having a fairly good consensus that "userspace library with
> all the runtime stuff excluding compiler must be open" is a huge step
> forward. Next step may be that we (kernel overall, drivers/gpu will
> still ask for the full thing) have ISA docs for these programmable
> things, so that we can also evaluate that aspect and gauge how many
> security issues there might be. Plus have a fighting chance to fix up
> the security leaks when (post smeltdown I don't really want to
> consider this an if) someone finds a hole in the hw security wall. At
> least in drivers/gpu we historically have a ton of drivers with
> command checkers to validate what userspace wants to run on the
> accelerator thingie. Both in cases where the hw was accidentally too
> strict, and not strict enough.
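
To make the command-checker idea above concrete, here is a minimal,
purely illustrative sketch.  It is not taken from the qaic driver or
any shipping accelerator driver; the command layout, opcodes, and
function names are all hypothetical.  The point is only the general
shape: the kernel walks the command stream userspace submitted and
rejects disallowed opcodes and out-of-bounds accesses before the
hardware ever sees them.

#include <linux/errno.h>
#include <linux/types.h>

struct accel_cmd {
	u32 opcode;		/* hypothetical command identifier */
	u32 dst_offset;		/* offset into the job's DMA buffer */
	u32 len;		/* payload length in bytes */
};

/* Opcodes the hardware may execute on behalf of unprivileged userspace. */
static bool accel_opcode_allowed(u32 opcode)
{
	switch (opcode) {
	case 0x01:	/* COPY */
	case 0x02:	/* RUN_NETWORK */
	case 0x03:	/* FENCE */
		return true;
	default:
		return false;
	}
}

/*
 * Validate one command against the opcode whitelist and the bounds of
 * the buffer this job is permitted to touch.
 */
static int accel_check_cmd(const struct accel_cmd *cmd, size_t buf_size)
{
	if (!accel_opcode_allowed(cmd->opcode))
		return -EINVAL;

	if (cmd->dst_offset > buf_size ||
	    cmd->len > buf_size - cmd->dst_offset)
		return -ERANGE;

	return 0;
}

A real checker is of course far more involved (relocations, engine
state tracking, per-generation rules), but this is roughly the pattern
the command checkers mentioned above follow.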

I think this provides pretty clear guidance on what you/the community
are looking for, both now and possibly in the future.

Thank you.

From my perspective, it would be really nice if there were something
like Mesa that served as a/the standard for these sorts of accelerators.  It's
somewhat the wild west right now, and we've struggled with it.

I don't work on the compiler end of things, but based on what I've seen 
in my project, I think the vendors are going to be highly resistant to 
opening that up.  There is more going on in the device than just the
raw instruction set, and it's viewed as "secret sauce", even though I
agree with your previous statements on that viewpoint.
-- 
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.

Thread overview: 33+ messages
2020-05-19 20:41 [RFC PATCH 0/8] Qualcomm Cloud AI 100 driver Daniel Vetter
2020-05-19 23:26 ` Jason Gunthorpe
2020-05-20  4:59 ` Greg Kroah-Hartman
2020-05-20  5:11   ` Bjorn Andersson
2020-05-20  5:54     ` Greg Kroah-Hartman
2020-05-20  5:15 ` Greg Kroah-Hartman
2020-05-20  8:34   ` Daniel Vetter
2020-05-20 14:48     ` Jeffrey Hugo [this message]
2020-05-20 15:56       ` Daniel Vetter
2020-05-20 15:59       ` Greg Kroah-Hartman
2020-05-20 16:15         ` Jeffrey Hugo
  -- strict thread matches above, loose matches on Subject: below --
2020-05-14 14:07 Jeffrey Hugo
2020-05-19  5:08 ` Dave Airlie
2020-05-19 14:57   ` Jeffrey Hugo
2020-05-19 17:41     ` Greg Kroah-Hartman
2020-05-19 18:07       ` Jeffrey Hugo
2020-05-19 18:12         ` Greg Kroah-Hartman
2020-05-19 18:26           ` Jeffrey Hugo
2020-05-20  5:32             ` Greg Kroah-Hartman
2020-05-19 17:33   ` Greg Kroah-Hartman
2020-05-19  6:57 ` Manivannan Sadhasivam
2020-05-19 14:16   ` Jeffrey Hugo
