From: Chris Wilson <chris@chris-wilson.co.uk>
To: Intel-gfx@lists.freedesktop.org,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Subject: Re: [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
Date: Tue, 10 Mar 2020 00:13:52 +0000
Message-ID: <158379923268.3232.8572720070601085988@build.alporthouse.com>
In-Reply-To: <ee5b6168-4e0a-6bbc-731e-a7391cc96397@linux.intel.com>

Quoting Tvrtko Ursulin (2020-03-09 23:26:34)
> 
> On 09/03/2020 21:34, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
> >> +struct i915_drm_client *
> >> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
> >> +{
> >> +       struct i915_drm_client *client;
> >> +       int ret;
> >> +
> >> +       client = kzalloc(sizeof(*client), GFP_KERNEL);
> >> +       if (!client)
> >> +               return ERR_PTR(-ENOMEM);
> >> +
> >> +       kref_init(&client->kref);
> >> +       client->clients = clients;
> >> +
> >> +       ret = mutex_lock_interruptible(&clients->lock);
> >> +       if (ret)
> >> +               goto err_id;
> >> +       ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
> >> +                             xa_limit_32b, &clients->next_id, GFP_KERNEL);
> > 
> > So what's next_id used for that explains having the over-arching mutex?
> 
> It's to give out client ids "cyclically" - I had apparently 
> misunderstood what xa_alloc_cyclic is supposed to do - I thought that 
> after giving out id 1 it would give out 2 next, even if 1 had been 
> returned to the pool in the meantime. But it doesn't, so I need to 
> track the start point for the next search with "next".

Ok. So the API requires the caller to maintain the external counter.
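
To spell out the semantics we're relying on - a minimal sketch based on
the xa_alloc_cyclic() kernel-doc; the demo() function and the entry
pointers are made up for illustration, and return values are ignored
for brevity:

#include <linux/xarray.h>

static DEFINE_XARRAY_ALLOC(xa); /* XA_FLAGS_ALLOC, ids start at 0 */

static void demo(void *a, void *b, void *c)
{
	u32 id, next = 0;

	/* The first two allocations hand out ids 0 and 1. */
	xa_alloc_cyclic(&xa, &id, a, xa_limit_32b, &next, GFP_KERNEL);
	xa_alloc_cyclic(&xa, &id, b, xa_limit_32b, &next, GFP_KERNEL);

	/* Return id 0 to the pool... */
	xa_erase(&xa, 0);

	/*
	 * ...yet the next allocation hands out 2, not 0, because the
	 * search starts from "next" rather than from the lowest free
	 * id. Plain xa_alloc() would have reused 0 here.
	 */
	xa_alloc_cyclic(&xa, &id, c, xa_limit_32b, &next, GFP_KERNEL);
}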
 
> I want this to make intel_gpu_top's life easier, so that for all 
> practical purposes it doesn't have to deal with id recycling.

Fair enough. I only worry about the radix nodes and sparse ids :)
 
> And a peek into the xa implementation told me the internal lock is not 
> protecting "next".

See xa_alloc_cyclic(): it seems to cover __xa_alloc_cyclic() (where
*next is manipulated) under the xa_lock.
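
For reference, the wrapper in include/linux/xarray.h is roughly the
following (paraphrased from the kernel headers of that era, so check
your tree):

static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id,
		void *entry, struct xa_limit limit, u32 *next,
		gfp_t gfp)
{
	int err;

	/* xa_lock is the xarray's internal spinlock... */
	xa_lock(xa);
	/* ...so *next is only read and advanced while holding it. */
	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
	xa_unlock(xa);

	return err;
}

So as long as every allocation goes through this wrapper, "next" needs
no extra serialisation on our side.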
-Chris

