From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: "jhubbard@nvidia.com" <jhubbard@nvidia.com>,
	Intel Graphics Development <Intel-gfx@lists.freedesktop.org>,
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>,
	"nouveau@lists.freedesktop.org" <nouveau@lists.freedesktop.org>,
	"Koenig, Christian" <Christian.Koenig@amd.com>,
	"aritger@nvidia.com" <aritger@nvidia.com>,
	"Nieto, David M" <David.Nieto@amd.com>
Subject: Re: [Intel-gfx] [PATCH 0/7] Per client engine busyness
Date: Thu, 20 May 2021 09:35:13 +0100	[thread overview]
Message-ID: <b1d508ee-6809-f5dc-6539-70cb89ef5e3b@linux.intel.com> (raw)
In-Reply-To: <CAKMK7uF0fHBoYfiTS+-80RtUeuKFUcYDBpGHtNY6Ma+aJmmkxA@mail.gmail.com>


On 19/05/2021 19:23, Daniel Vetter wrote:
> On Wed, May 19, 2021 at 6:16 PM Tvrtko Ursulin
> <tvrtko.ursulin@linux.intel.com> wrote:
>>
>>
>> On 18/05/2021 10:40, Tvrtko Ursulin wrote:
>>>
>>> On 18/05/2021 10:16, Daniel Stone wrote:
>>>> Hi,
>>>>
>>>> On Tue, 18 May 2021 at 10:09, Tvrtko Ursulin
>>>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>>> I was just wondering if stat(2) and a chrdev major check would be a
>>>>> solid criterion to more efficiently (compared to parsing the text
>>>>> content) detect drm files while walking procfs.
>>>>
>>>> Maybe I'm missing something, but is the per-PID walk actually a
>>>> measurable performance issue rather than just a bit unpleasant?
>>>
>>> Per pid and per each open fd.
>>>
>>> As I said in the other thread, what bothers me a bit about this
>>> scheme is that the cost of obtaining GPU usage scales with non-GPU
>>> criteria.
>>>
>>> For the use case of a top-like tool which shows all processes this is
>>> a smaller additional cost, but for a gpu-top-like tool it is somewhat
>>> higher.
>>
>> To further expand, not only would the cost scale with the number of
>> pids times the number of open fds, but to detect which of the fds are
>> DRM I see these three options:
>>
>> 1) Open and parse fdinfo.
>> 2) Name-based matching, i.e. /dev/dri/.. something.
>> 3) Stat the symlink target and check for DRM major.
> 
> stat with symlink following should be plenty fast.

Maybe. I don't think my point about keeping the dentry cache needlessly 
hot is getting through at all. On my lightly loaded desktop:

  $ sudo lsof | wc -l
  599551

  $ sudo lsof | grep "/dev/dri/" | wc -l
  1965

It's going to look up ~600k pointless dentries in every iteration just
to find a handful of DRM ones. It is hard to say whether that is better
or worse than just parsing the fdinfo text for all files. We will see.
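
For reference, here is a minimal sketch of option 3, untested and with
error handling trimmed. It relies on the DRM character device major
being 226 on Linux; a real tool would call this for every pid it finds
under /proc:

#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/sysmacros.h>
#include <sys/types.h>

#define DRM_MAJOR 226

static void scan_pid_fds(pid_t pid)
{
	char path[64];
	struct dirent *de;
	DIR *dir;

	snprintf(path, sizeof(path), "/proc/%d/fd", (int)pid);
	dir = opendir(path);
	if (!dir)
		return;

	while ((de = readdir(dir))) {
		struct stat st;

		/* fstatat() follows the /proc/<pid>/fd/N symlink. */
		if (fstatat(dirfd(dir), de->d_name, &st, 0))
			continue;

		if (S_ISCHR(st.st_mode) && major(st.st_rdev) == DRM_MAJOR)
			printf("pid %d fd %s is DRM\n", (int)pid, de->d_name);
	}

	closedir(dir);
}

Every one of those fstatat() calls is a dentry lookup, which is exactly
the cost being discussed above.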

>> All sound quite sub-optimal to me.
>>
>> Name-based matching is probably the least evil on system resource
>> usage (keeping the dentry cache too hot? too many syscalls?), even
>> though fundamentally I don't think it is the right approach.
>>
>> What happens with dup(2) is another question.
> 
> We need benchmark numbers showing that on anything remotely realistic
> it's an actual problem. Until we've demonstrated it's a real problem
> we don't need to solve it.

The point about dup(2) is whether it is possible to distinguish
duplicated fds in fdinfo. If a DRM client dupes its fd and we find two
fdinfos, each saying the client is using 20% of the GPU, we don't want
to add them up to 40%.
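
Purely as an illustration, if the driver exported a stable per-client
id in fdinfo (the "drm-client-id:" key below is an assumption for the
sake of the sketch, not an existing ABI), a tool could fold dup(2)ed
fds together by id before summing:

#include <stdio.h>

/* Parse a hypothetical client id out of /proc/<pid>/fdinfo/<fd>. */
static unsigned long fdinfo_client_id(const char *fdinfo_path)
{
	unsigned long id = 0;
	char line[256];
	FILE *f = fopen(fdinfo_path, "r");

	if (!f)
		return 0;

	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "drm-client-id: %lu", &id) == 1)
			break;
	}

	fclose(f);
	return id;
}

The tool would then keep a set of ids it has already seen for each pid
and count each client's busyness only once, however many fds point at
the same file description.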

> E.g. top with any sorting enabled also parses way more than it
> displays on every update. It seems to be doing Just Fine (tm).

Ha, perceptions differ. I see it using 4-5% while building the kernel
on a Xeon server, which I find quite a lot. :)

>> Does anyone have any feedback on the /proc/<pid>/gpu idea at all?
> 
> When we know we have a problem to solve we can take a look at solutions.

Yes, I don't think adding a better solution later would be a problem,
so I am happy to try fdinfo first. I am simply pointing out a
fundamental design inefficiency. Even if machines are getting faster
and faster, I don't think that should be an excuse to waste more and
more under the hood when a more efficient solution can be designed from
the start.

Regards,

Tvrtko

Thread overview: 42+ messages
2021-05-13 10:59 [PATCH 0/7] Per client engine busyness Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 1/7] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 2/7] drm/i915: Update client name on context create Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 3/7] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 4/7] drm/i915: Track runtime spent in closed and unreachable GEM contexts Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 5/7] drm/i915: Track all user contexts per client Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 6/7] drm/i915: Track context current active time Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 7/7] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2021-05-13 15:48 ` [PATCH 0/7] Per client engine busyness Alex Deucher
2021-05-13 16:40   ` Tvrtko Ursulin
2021-05-14  5:58     ` Alex Deucher
2021-05-14  7:22       ` Nieto, David M
2021-05-14  8:04         ` Christian König
2021-05-14 13:42           ` Tvrtko Ursulin
2021-05-14 13:53             ` Christian König
2021-05-14 14:47               ` Tvrtko Ursulin
2021-05-14 14:56                 ` Christian König
2021-05-14 15:03                   ` Tvrtko Ursulin
2021-05-14 15:10                     ` Christian König
2021-05-17 14:30                       ` Daniel Vetter
2021-05-17 14:39                         ` Nieto, David M
2021-05-17 16:00                           ` Tvrtko Ursulin
2021-05-17 18:02                             ` Nieto, David M
2021-05-17 18:16                               ` Nieto, David M
2021-05-17 19:03                                 ` Simon Ser
2021-05-18  9:08                                   ` Tvrtko Ursulin
2021-05-18  9:16                                     ` Daniel Stone
2021-05-18  9:40                                       ` Tvrtko Ursulin
2021-05-19 16:16                                         ` Tvrtko Ursulin
2021-05-19 18:23                                           ` [Intel-gfx] " Daniel Vetter
2021-05-19 23:17                                             ` Nieto, David M
2021-05-20 14:11                                               ` Daniel Vetter
2021-05-20 14:12                                                 ` Christian König
2021-05-20 14:17                                                   ` [Nouveau] " arabek
2021-05-20  8:35                                             ` Tvrtko Ursulin [this message]
2021-05-24 10:48                                               ` Tvrtko Ursulin
2021-05-18  9:35                               ` Tvrtko Ursulin
2021-05-18 12:06                                 ` Christian König
2021-05-17 19:16                         ` Christian König
2021-06-28 10:16                       ` Tvrtko Ursulin
2021-06-28 14:37                         ` Daniel Vetter
2021-05-17 14:20   ` Daniel Vetter
