From: Jordan Crouse <jcrouse@codeaurora.org>
To: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Neil Armstrong <narmstrong@baylibre.com>,
	Emil Velikov <emil.l.velikov@gmail.com>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	Steven Price <steven.price@arm.com>,
	Rob Herring <robh+dt@kernel.org>,
	Mark Janes <mark.a.janes@intel.com>,
	kernel@collabora.com, Alyssa Rosenzweig <alyssa@rosenzweig.io>
Subject: Re: [PATCH 0/3] drm/panfrost: Expose HW counters to userspace
Date: Mon, 13 May 2019 09:00:08 -0600
Message-ID: <20190513150008.GC24137@jcrouse1-lnx.qualcomm.com>
In-Reply-To: <20190512154026.21a31ba0@collabora.com>

On Sun, May 12, 2019 at 03:40:26PM +0200, Boris Brezillon wrote:
> On Tue, 30 Apr 2019 09:49:51 -0600
> Jordan Crouse <jcrouse@codeaurora.org> wrote:
> 
> > On Tue, Apr 30, 2019 at 06:10:53AM -0700, Rob Clark wrote:
> > > On Tue, Apr 30, 2019 at 5:42 AM Boris Brezillon
> > > <boris.brezillon@collabora.com> wrote:  
> > > >
> > > > +Rob, Eric, Mark and more
> > > >
> > > > Hi,
> > > >
> > > > On Fri, 5 Apr 2019 16:20:45 +0100
> > > > Steven Price <steven.price@arm.com> wrote:
> > > >  
> > > > > On 04/04/2019 16:20, Boris Brezillon wrote:  
> > > > > > Hello,
> > > > > >
> > > > > > This patch adds new ioctls to expose GPU counters to userspace.
> > > > > > These will be used by the mesa driver (should be posted soon).
> > > > > >
> > > > > > A few words about the implementation: I followed the VC4/Etnaviv model
> > > > > > where perf counters are retrieved on a per-job basis. This allows one
> > > > > > to get accurate results even when several users are using the GPU
> > > > > > concurrently.
> > > > > > AFAICT, the mali kbase is using a different approach where several
> > > > > > users can register a performance monitor but with no way to have
> > > > > > fine-grained control over what job/GPU-context to track.  
> > > > >
> > > > > mali_kbase submits overlapping jobs. The jobs on slot 0 and slot 1 can
> > > > > be from different contexts (address spaces), and mali_kbase also fully
> > > > > uses the _NEXT registers. So there can be a job from one context
> > > > > executing on slot 0 and a job from a different context waiting in the
> > > > > _NEXT registers. (And the same for slot 1). This means that there's no
> > > > > (visible) gap between the first job finishing and the second job
> > > > > starting. Early versions of the driver even had a throttle to avoid
> > > > > interrupt storms (see JOB_IRQ_THROTTLE) which would further delay the
> > > > > IRQ - but thankfully that's gone.
> > > > >
> > > > > The upshot is that it's basically impossible to measure "per-job"
> > > > > counters when running at full speed, because multiple jobs are running
> > > > > and the driver doesn't actually know when one ends and the next starts.
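Steven's overlap scenario can be sketched as a toy timeline (a made-up model, purely for illustration, not Panfrost or mali_kbase driver code): a single free-running counter shared by two overlapping jobs over-attributes work to both of them.

```python
# Toy model, not driver code: one shared "active cycles" counter, two jobs
# whose execution overlaps (as with a job kicked from the _NEXT registers
# the moment the previous one finishes).
def run(jobs):
    """jobs: list of (name, start_tick, end_tick); returns per-job deltas
    as measured by snapshotting the shared counter at start/end IRQs."""
    last = max(end for _, _, end in jobs)
    counter = 0
    snapshots, measured = {}, {}
    for tick in range(last):
        for name, start, _ in jobs:
            if tick == start:
                snapshots[name] = counter
        for _, start, end in jobs:
            if start <= tick < end:
                counter += 1        # each active job burns one cycle per tick
        for name, _, end in jobs:
            if tick == end - 1:
                measured[name] = counter - snapshots[name]
    return measured

# Each job really runs for 10 ticks, but both measure 15, because the
# shared counter also advanced for the overlapping job:
print(run([("A", 0, 10), ("B", 5, 15)]))   # {'A': 15, 'B': 15}
```

With no overlap the snapshots are exact; the error appears only in the overlap window, which is exactly why per-job attribution fails when the slots are kept full.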
> > > > >
> > > > > Since one of the primary use cases is to draw pretty graphs of the
> > > > > system load [1], this "per-job" information isn't all that relevant (and
> > > > > minimal performance overhead is important). And if you want to monitor
> > > > > just one application it is usually easiest to ensure that it is the only
> > > > > thing running.
> > > > >
> > > > > [1]
> > > > > https://developer.arm.com/tools-and-software/embedded/arm-development-studio/components/streamline-performance-analyzer
> > > > >  
> > > > > > This design choice comes at a cost: every time the perfmon context
> > > > > > changes (the perfmon context is the list of currently active
> > > > > > perfmons), the driver has to add a fence to prevent new jobs from
> > > > > > corrupting counters that will be dumped by previous jobs.
> > > > > >
> > > > > > Let me know if that's an issue and if you think we should approach
> > > > > > things differently.  
> > > > >
> > > > > It depends what you expect to do with the counters. Per-job counters are
> > > > > certainly useful sometimes. But serialising all jobs can mess up the
> > > > > thing you are trying to measure the performance of.  
> > > >
> > > > I finally found some time to work on v2 this morning, and it turns out
> > > > implementing global perf monitors as done in mali_kbase means rewriting
> > > > almost everything (apart from the perfcnt layout stuff). I'm not against
> > > > doing that, but I'd like to be sure this is really what we want.
> > > >
> > > > Eric, Rob, any opinion on that? Is it acceptable to expose counters
> > > > through the pipe_query/AMD_perfmon interface if we don't have this
> > > > job (or at least draw call) granularity? If not, should we keep the
> > > > solution I'm proposing here to make sure counters values are accurate,
> > > > or should we expose perf counters through a non-standard API?  
> > > 
> > > I think if you can't do per-draw level granularity, then you should
> > > not try to implement AMD_perfmon..  instead the use case is more for a
> > > sort of "gpu top" app (as opposed to something like frameretrace which
> > > is taking per-draw-call level measurements from within the app).
> > > Things that use AMD_perfmon are going to, I think, expect to query
> > > values between individual glDraw calls, and you probably don't want to
> > > flush tile passes 500 times per frame.
> > > 
> > > (Although, I suppose if there are multiple GPUs where perfcntrs work
> > > this way, it might be an interesting exercise to think about coming up
> > > w/ a standardized API (debugfs perhaps?) to monitor the counters.. so
> > > you could have a single userspace tool that works across several
> > > different drivers.)  
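Rob's flushing concern is easy to put rough numbers on (all figures below are assumed for illustration, not measured on any GPU):

```python
# Back-of-envelope, with assumed numbers: if each AMD_perfmon query between
# draws forces a tile-pass flush, the per-frame overhead scales linearly
# with the draw count.
draws_per_frame = 500
flush_cost_us = 100              # assumed cost of one tile-pass flush
frame_budget_us = 16_667         # ~60 fps frame budget
overhead_us = draws_per_frame * flush_cost_us
print(overhead_us, round(overhead_us / frame_budget_us, 1))
```

Under these assumptions the flushes alone eat roughly three 60 fps frame budgets per frame, which is why per-draw queries and tiled renderers mix badly.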
> > 
> > I agree. We've been pondering a lot of the same issues for Adreno.
> 
> So, you also have those global perf counters that can't be retrieved
> through the cmdstream? After the discussion I had with Rob I had the
> impression freedreno was supporting the AMD_perfmon interface in a
> proper way.

It is true that we can read most of the counters from the cmdstream but we have
a limited number of global physical counters that have to be shared across
contexts and a metric ton of countables that can be selected. As long as we
have just one process using AMD_perfmon, we are fine, but it breaks
down after that.
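The breakdown can be sketched like this (made-up names and behaviour, not the real Adreno register model): a physical counter can track only one countable at a time, so a second client reprogramming it clobbers the first client's measurement.

```python
# Toy illustration of why a shared physical counter breaks with two
# profiling clients: selecting a new countable resets the accumulated
# count, losing the first client's state.
class PhysCounter:
    def __init__(self):
        self.countable = None
        self.count = 0

    def select(self, countable):
        if countable != self.countable:
            self.countable = countable
            self.count = 0          # reprogramming loses accumulated state

    def tick(self, events):
        self.count += events.get(self.countable, 0)

ctr = PhysCounter()
ctr.select("busy_cycles")                           # client 1's choice
ctr.tick({"busy_cycles": 100, "l2_misses": 7})
ctr.select("l2_misses")                             # client 2 takes over
ctr.tick({"busy_cycles": 100, "l2_misses": 7})
print(ctr.countable, ctr.count)   # client 1's 100 busy cycles are gone
```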

I am also interested in the 'gpu top' use case, which can be somewhat emulated
through the cmdstream assuming that we stick to a finite set of counters and
countables that won't conflict with extensions. This is fine for basic work but
anybody doing any serious performance work would run into some limitations
quickly.
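The 'gpu top' style of use amounts to periodic sampling of free-running counters; a minimal sketch (an assumed interface, not a real driver API) looks like this, and it only works if nobody else reprograms the sampled counters between reads:

```python
# Toy "gpu top" loop: sample a free-running busy-cycle counter and a clock
# periodically, report utilisation as the delta ratio over each interval.
def gpu_top(read_counter, read_time, samples):
    prev_c, prev_t = read_counter(), read_time()
    utilisation = []
    for _ in range(samples):
        c, t = read_counter(), read_time()
        utilisation.append((c - prev_c) / (t - prev_t))
        prev_c, prev_t = c, t
    return utilisation

# Fake hardware for demonstration: 50 busy cycles per 100 time units.
busy = iter([0, 50, 100, 150])
clock = iter([0, 100, 200, 300])
util = gpu_top(lambda: next(busy), lambda: next(clock), samples=3)
print(util)   # [0.5, 0.5, 0.5], i.e. 50% busy in every interval
```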

And finally the kernel uses a few counters for its own purposes and I hate the
"gentlemen's agreement" between the user and the kernel as to which physical
counters each get ownership of. It would be a lot better to turn it into an
explicit dynamic reservation, which I feel would be a natural offshoot of a
generic interface.
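The explicit reservation idea might look something like this (an entirely hypothetical interface, no such allocator exists today): the kernel claims its counters up front through the same allocator userspace uses, instead of an out-of-band agreement about who owns what.

```python
# Hypothetical sketch of explicit dynamic counter reservation: kernel-owned
# counters are visible in the allocator rather than agreed upon informally.
class CounterAllocator:
    def __init__(self, n_physical, kernel_reserved):
        self.owner = {i: None for i in range(n_physical)}
        for i in kernel_reserved:
            self.owner[i] = "kernel"    # explicit, visible reservation

    def reserve(self, client):
        for i, who in self.owner.items():
            if who is None:
                self.owner[i] = client
                return i
        return None     # pool exhausted: caller must handle contention

    def release(self, i, client):
        if self.owner.get(i) == client:
            self.owner[i] = None

alloc = CounterAllocator(n_physical=6, kernel_reserved=[0, 1])
a = alloc.reserve("mesa")       # gets counter 2, skipping kernel-owned 0-1
b = alloc.reserve("gpu-top")    # gets counter 3
```

The point of the sketch is only that ownership becomes queryable and enforceable; the real policy questions (priorities, preemption of reservations) are untouched here.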

Jordan
-- 
The Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
a Linux Foundation Collaborative Project
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

