From: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: Krystian Pradzynski <krystian.pradzynski@linux.intel.com>,
	Jeffrey Hugo <quic_jhugo@quicinc.com>,
	dri-devel@lists.freedesktop.org,
	Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>,
	tzimmermann@suse.de, andrzej.kacprowski@linux.intel.com
Subject: Re: [PATCH v4 1/7] accel/ivpu: Introduce a new DRM driver for Intel VPU
Date: Fri, 6 Jan 2023 10:56:34 +0100
Message-ID: <20230106095634.GB1586324@linux.intel.com>
In-Reply-To: <Y7fpr69AXYYo2O25@phenom.ffwll.local>

On Fri, Jan 06, 2023 at 10:28:15AM +0100, Daniel Vetter wrote:
> On Thu, Jan 05, 2023 at 07:38:26PM +0200, Oded Gabbay wrote:
> > On Thu, Jan 5, 2023 at 6:25 PM Jeffrey Hugo <quic_jhugo@quicinc.com> wrote:
> > >
> > > On 1/5/2023 5:57 AM, Daniel Vetter wrote:
> > > > On Thu, Dec 08, 2022 at 12:07:27PM +0100, Jacek Lawrynowicz wrote:
> > > >> +static const struct drm_driver driver = {
> > > >> +    .driver_features = DRIVER_GEM | DRIVER_COMPUTE_ACCEL,
> > > >
> > > > So I was wondering whether this is a bright idea, and whether we shouldn't
> > > > just go ahead and infuse more meaning into accel vs render nodes.
> > > >
> > > > The uapi relevant part of render nodes is that they're multi-user safe, at
> > > > least as much as feasible. Every new open() gives you a new private
> > > > accelerator. This also has implications on how userspace drivers iterate
> > > > them: they just open them all in turn and check whether it's the right
> > > > one - because userspace apis allow applications to enumerate them all.
> > > > Which also means that any device initialization at open() time is a
> > > > really bad idea.
> > > >
> > > > A lot of the compute accelerators otoh (well habanalabs) are single user;
> > > > init can be done at open() time because you only open this when you
> > > > actually know you're going to use it.
> > > >
> > > > So given this, shouldn't multi-user inference engines be more like render
> > > > drivers, and less like accel? So DRIVER_RENDER, but still under
> > > > drivers/accel.
> > > >
> > > > This way that entire separate /dev node would actually become meaningful
> > > > beyond just the basic bikeshed:
> > > > - render nodes are multi user, safe to iterate and open() just for
> > > >    iteration
> > > > - accel nodes are single user, you really should not ever open them unless
> > > >    you want to use them
> > > >
> > > > Of course would need a doc patch :-)
> > > >
> > > > Thoughts?
> > > > -Daniel
> > >
> > > Hmm.
> > >
> > > I admit, I thought DRIVER_ACCEL was the same as DRIVER_RENDER, except
> > > that DRIVER_ACCEL dropped the "legacy" dual node setup and also avoided
> > > "legacy" userspace.
> > >
> > > qaic is multi-user.  I thought habana was the same, at least for
> > > inference.  Oded, am I wrong?
> > Habana's devices support a single user at a time acquiring the device
> > and working on it.
> > Both for training and inference.
> > >
> > > So, if DRIVER_ACCEL is for single-user (training?), and multi-user ends
> > > up in DRIVER_RENDER, that would seem to mean qaic ends up using
> > > DRIVER_RENDER and not DRIVER_ACCEL.  Then qaic ends up over in
> > > /dev/dri with both a card node (never used) and a render node.  That
> > > would seem to mean that the "legacy" userspace would open qaic nodes by
> > > default - something I understood Oded was trying to avoid.
> > >
> > > Is there really a usecase for DRIVER_ACCEL to support single-user?  I
> > > wonder why we can't default to multi-user, and if a particular
> > > user/driver has a single-user usecase, it enforces that in a driver
> > > specific manner?
> > >
> > > -Jeff
> > 
> > Honestly, Daniel, I don't like this suggestion. I don't understand why
> > you make a connection between render/accel and single/multi user.
> > 
> > As Jeff has said, one of the goals was to expose accelerator devices
> > to userspace with new device char nodes so we won't be bothered by
> > legacy userspace graphics software. This is something we all agreed on
> > and I don't see why we should change it now, even if you think it's
> > bike-shedding (which I disagree with).
> > 
> > But in any case, creating new device char nodes had nothing to do
> > with whether the device supports single or multi user. I can
> > definitely see in the future training devices that support multiple
> > users.
> > 
> > The common drm/accel ioctls should of course not be limited to a
> > single user, and I agree with Jeff here: if a specific driver has such
> > a limitation (e.g. Habana), then that driver should handle it on its
> > own.
> > Maybe if there will be multiple drivers with such a limitation, we can
> > make that "handling" to be common code.
> > 
> > Bottom line, I prefer we keep things as we all agreed upon in LPC.
> 
> The problem is going to happen as soon as you have cross-vendor userspace.
> Which I'm kinda hoping is at least still the aspiration. Because with
> cross-vendor userspace you generally iterate & open all devices before you
> select the one you're going to use. And so we do kinda need a distinction,
> or we need the single-user drivers to also guarantee that open() is
> cheap.
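
For reference, the iteration pattern described above looks roughly like
this on the userspace side. This is only a sketch: it assumes the DRM
uapi header is available as <drm/drm.h>, the render minor range, buffer
sizes and error handling are simplified, and it is not taken from any
particular userspace driver.

/*
 * Open every render node in turn, ask the kernel which driver backs it
 * via DRM_IOCTL_VERSION, and keep only the node we are looking for.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <drm/drm.h>

static int open_node_by_driver_name(const char *want)
{
	char path[64];
	int i, fd;

	for (i = 128; i < 192; i++) {	/* renderD128..renderD191 */
		char name[64] = "";
		struct drm_version v = {
			.name = name,
			.name_len = sizeof(name) - 1,
		};

		snprintf(path, sizeof(path), "/dev/dri/renderD%d", i);
		fd = open(path, O_RDWR | O_CLOEXEC);
		if (fd < 0)
			continue;

		if (ioctl(fd, DRM_IOCTL_VERSION, &v) == 0 &&
		    !strcmp(name, want))
			return fd;	/* found the device we want */

		close(fd);		/* not ours, move on */
	}
	return -1;
}

Every device gets an open()/DRM_IOCTL_VERSION/close() cycle even though
only one of them will ever be used, which is why open() has to stay
cheap on all of them.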

FWIW, we had good support in ivpu for probe-only opens in the form of lazy
context allocation. It was removed recently due to review feedback that it
was unnecessary, but we can add it back.
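
The rough shape was: open() only allocates a small file_priv, and the
expensive per-context setup runs on the first ioctl that actually needs
the hardware. A simplified sketch follows; the struct layout and helper
names (ivpu_file_priv_bind, ivpu_user_context_init) are illustrative
here, not necessarily the code that was dropped from the series.

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/slab.h>
#include <drm/drm_device.h>
#include <drm/drm_file.h>

struct ivpu_file_priv {
	struct mutex lock;	/* protects the lazy context setup */
	bool ctx_allocated;
	/* ... MMU context, context ID, command queues, ... */
};

/* Assumed helper doing the real (expensive) MMU/context allocation. */
static int ivpu_user_context_init(struct ivpu_file_priv *file_priv);

static int ivpu_open(struct drm_device *dev, struct drm_file *file)
{
	struct ivpu_file_priv *file_priv;

	file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL);
	if (!file_priv)
		return -ENOMEM;

	mutex_init(&file_priv->lock);
	file->driver_priv = file_priv;
	return 0;	/* cheap: fine for probe-only opens */
}

/* Called from the submit/bo ioctls before the hardware is touched. */
static int ivpu_file_priv_bind(struct ivpu_file_priv *file_priv)
{
	int ret = 0;

	mutex_lock(&file_priv->lock);
	if (!file_priv->ctx_allocated) {
		ret = ivpu_user_context_init(file_priv);
		if (!ret)
			file_priv->ctx_allocated = true;
	}
	mutex_unlock(&file_priv->lock);
	return ret;
}

With something like this in place, a probe-only open()/close() never
touches the MMU or the firmware, so iterating the nodes stays cheap.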

Regards
Stanislaw



Thread overview: 40+ messages
2022-12-08 11:07 [PATCH v4 0/7] New DRM accel driver for Intel VPU Jacek Lawrynowicz
2022-12-08 11:07 ` [PATCH v4 1/7] accel/ivpu: Introduce a new DRM " Jacek Lawrynowicz
2022-12-14 13:57   ` Oded Gabbay
2022-12-14 15:07     ` Jeffrey Hugo
2022-12-14 18:21       ` Oded Gabbay
2022-12-14 15:39     ` Stanislaw Gruszka
2022-12-20 20:17   ` Oded Gabbay
2022-12-21  8:25     ` Jacek Lawrynowicz
2023-01-05 12:57   ` Daniel Vetter
2023-01-05 16:25     ` Jeffrey Hugo
2023-01-05 17:38       ` Oded Gabbay
2023-01-06  9:28         ` Daniel Vetter
2023-01-06  9:56           ` Stanislaw Gruszka [this message]
2023-01-06 10:44             ` Daniel Vetter
2023-01-06 11:43               ` Oded Gabbay
2023-01-11  4:35                 ` Dave Airlie
2023-01-06 12:43               ` Stanislaw Gruszka
2022-12-08 11:07 ` [PATCH v4 2/7] accel/ivpu: Add Intel VPU MMU support Jacek Lawrynowicz
2022-12-12 11:19   ` kernel test robot
2022-12-18  9:13   ` Oded Gabbay
2022-12-19 13:17     ` Jacek Lawrynowicz
2022-12-08 11:07 ` [PATCH v4 3/7] accel/ivpu: Add GEM buffer object management Jacek Lawrynowicz
2022-12-12 15:31   ` kernel test robot
2022-12-18 10:23   ` Oded Gabbay
2022-12-19  8:08     ` Jacek Lawrynowicz
2023-01-05 18:46   ` Andrew Davis
2023-01-06 13:29     ` Stanislaw Gruszka
2023-01-09 11:47       ` Jacek Lawrynowicz
2023-01-09 18:09         ` Andrew Davis
2023-01-06 10:50   ` Daniel Vetter
2023-01-06 13:22     ` Stanislaw Gruszka
2023-01-06 18:25       ` Daniel Vetter
2023-01-09 12:06         ` Jacek Lawrynowicz
2022-12-08 11:07 ` [PATCH v4 4/7] accel/ivpu: Add IPC driver and JSM messages Jacek Lawrynowicz
2022-12-27 15:34   ` Oded Gabbay
2023-01-03 10:54     ` Jacek Lawrynowicz
2022-12-08 11:07 ` [PATCH v4 5/7] accel/ivpu: Implement firmware parsing and booting Jacek Lawrynowicz
2022-12-08 11:07 ` [PATCH v4 6/7] accel/ivpu: Add command buffer submission logic Jacek Lawrynowicz
2022-12-08 11:07 ` [PATCH v4 7/7] accel/ivpu: Add PM support Jacek Lawrynowicz
2022-12-13 11:36 ` [PATCH v4 8/7] accel/ivpu: Add depend on !UML to Kconfig Stanislaw Gruszka
