From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, Intel-gfx@lists.freedesktop.org
Subject: Re: [Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
Date: Mon, 9 Mar 2020 23:26:34 +0000 [thread overview]
Message-ID: <ee5b6168-4e0a-6bbc-731e-a7391cc96397@linux.intel.com> (raw)
In-Reply-To: <158378968022.16414.13552854522311222381@build.alporthouse.com>
On 09/03/2020 21:34, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
>> +struct i915_drm_client *
>> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
>> +{
>> +        struct i915_drm_client *client;
>> +        int ret;
>> +
>> +        client = kzalloc(sizeof(*client), GFP_KERNEL);
>> +        if (!client)
>> +                return ERR_PTR(-ENOMEM);
>> +
>> +        kref_init(&client->kref);
>> +        client->clients = clients;
>> +
>> +        ret = mutex_lock_interruptible(&clients->lock);
>> +        if (ret)
>> +                goto err_id;
>> +        ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
>> +                              xa_limit_32b, &clients->next_id, GFP_KERNEL);
>
> So what's next_id used for that explains having the over-arching mutex?
It's to give out client ids "cyclically". Apparently I had misunderstood
what xa_alloc_cyclic is supposed to do on its own - I thought that after
giving out id 1 it would give out 2 next, even if 1 was returned to the
pool in the meantime. But it doesn't do that by itself - I need to track
the start point for the next search with "next".
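For illustration only, a minimal sketch of the difference as I now read
the xarray API (not tested, error handling omitted, "some_ptr" standing
in for any non-NULL entry):

        static DEFINE_XARRAY_ALLOC(xa); /* allocating xarray, ids from 0 */
        u32 id, next = 0;

        xa_alloc(&xa, &id, some_ptr, xa_limit_32b, GFP_KERNEL); /* id == 0 */
        xa_alloc(&xa, &id, some_ptr, xa_limit_32b, GFP_KERNEL); /* id == 1 */
        xa_erase(&xa, 0);
        xa_alloc(&xa, &id, some_ptr, xa_limit_32b, GFP_KERNEL); /* id == 0 again */

        /* The cyclic variant instead resumes from the caller's cursor: */
        xa_alloc_cyclic(&xa, &id, some_ptr, xa_limit_32b, &next, GFP_KERNEL);
        /* search starts at "next", so a just-released low id is not
           immediately handed back out */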
I want this to make intel_gpu_top's life easier, so that for all
practical purposes it doesn't have to deal with id recycling.
And a peek into the xarray implementation told me the internal lock is
not protecting "next".
I could stick with one lock, and not rely on the internal one, if I took
it on the release path as well.
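Something along these lines on the release side (a hypothetical sketch
only - the free helper name is made up and the rest of the teardown is
omitted):

        static void __i915_drm_client_free(struct kref *kref)
        {
                struct i915_drm_client *client =
                        container_of(kref, typeof(*client), kref);
                struct i915_drm_clients *clients = client->clients;

                /* Same mutex as on the allocation path, so it also
                 * serialises access to clients->next_id. */
                mutex_lock(&clients->lock);
                xa_erase(&clients->xarray, client->id);
                mutex_unlock(&clients->lock);

                kfree(client);
        }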
Regards,
Tvrtko