From: "Christian König" <christian.koenig@amd.com>
To: Daniel Vetter <daniel@ffwll.ch>, "Nieto, David M" <David.Nieto@amd.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	"nouveau@lists.freedesktop.org" <nouveau@lists.freedesktop.org>,
	Intel Graphics Development <Intel-gfx@lists.freedesktop.org>,
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>,
	"jhubbard@nvidia.com" <jhubbard@nvidia.com>,
	"aritger@nvidia.com" <aritger@nvidia.com>
Subject: Re: [Intel-gfx] [PATCH 0/7] Per client engine busyness
Date: Thu, 20 May 2021 16:12:47 +0200	[thread overview]
Message-ID: <e086fbd7-5d37-c8e2-0a49-c6c646faf309@amd.com> (raw)
In-Reply-To: <YKZt+x6as7ix6TPy@phenom.ffwll.local>



Am 20.05.21 um 16:11 schrieb Daniel Vetter:
> On Wed, May 19, 2021 at 11:17:24PM +0000, Nieto, David M wrote:
>> [AMD Official Use Only]
>>
>> Parsing over 550 processes for fdinfo takes between 40-100ms single
>> threaded on a 2GHz Skylake (IBRS) inside a VM, using simple string
>> comparisons and dirent parsing. And that is pretty much the worst case
>> scenario; more optimized implementations should do better.
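
A minimal sketch of the scan described above, for reference. The
"drm-engine-" key prefix follows the fdinfo convention discussed in this
series; error handling is trimmed and the code is purely illustrative:

  #include <dirent.h>
  #include <stdio.h>
  #include <string.h>

  /* Walk /proc/<pid>/fdinfo and print any DRM engine busyness lines. */
  static void scan_fdinfo(const char *pid)
  {
          char path[256];

          snprintf(path, sizeof(path), "/proc/%s/fdinfo", pid);

          DIR *dir = opendir(path);
          if (!dir)
                  return;

          struct dirent *de;
          while ((de = readdir(dir))) {
                  char fpath[512], line[256];

                  if (de->d_name[0] == '.')
                          continue;
                  snprintf(fpath, sizeof(fpath), "%s/%s", path, de->d_name);

                  FILE *f = fopen(fpath, "r");
                  if (!f)
                          continue;
                  /* Simple string comparison against the drm-* keys. */
                  while (fgets(line, sizeof(line), f))
                          if (!strncmp(line, "drm-engine-", 11))
                                  printf("%s", line);
                  fclose(f);
          }
          closedir(dir);
  }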
> I think this is plenty ok, and if it's not you could probably make this
> massively faster with io_uring for all the fs operations and whack a
> parser-generator on top for real parsing speed.
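
What the io_uring variant could look like is sketched below, using
liburing to batch the reads (the fdinfo fds are assumed to be already
open; buffer sizes and the missing error handling are illustrative
assumptions, not a worked implementation):

  #include <liburing.h>

  /* Batch-read n already-open fdinfo fds in one submit/reap cycle. */
  static void batch_read_fdinfo(int *fds, char (*bufs)[4096], int n)
  {
          struct io_uring ring;
          struct io_uring_cqe *cqe;
          int i;

          io_uring_queue_init(n, &ring, 0);

          for (i = 0; i < n; i++) {
                  struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

                  io_uring_prep_read(sqe, fds[i], bufs[i], sizeof(bufs[i]), 0);
                  io_uring_sqe_set_data(sqe, (void *)(long)i);
          }
          io_uring_submit(&ring);

          for (i = 0; i < n; i++) {
                  io_uring_wait_cqe(&ring, &cqe);
                  /* cqe->res is the byte count for the buffer whose index
                   * was stashed in the sqe user data. */
                  io_uring_cqe_seen(&ring, cqe);
          }
          io_uring_queue_exit(&ring);
  }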

Well, if it becomes a problem, fixing the debugfs "clients" file and 
moving it to sysfs shouldn't be much of a problem later on.

Christian.

>
> So imo we shouldn't worry about algorithmic inefficiency of the fdinfo
> approach at all, and focus more on trying to reasonably (but not too
> much, this is still drm render stuff after all) standardize how it works
> and how we'll extend it all. I think there's tons of good suggestions in
> this thread on this topic already.
>
> /me out
> -Daniel
>
>> David
>> ________________________________
>> From: Daniel Vetter <daniel@ffwll.ch>
>> Sent: Wednesday, May 19, 2021 11:23 AM
>> To: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
>> Cc: Daniel Stone <daniel@fooishbar.org>; jhubbard@nvidia.com <jhubbard@nvidia.com>; nouveau@lists.freedesktop.org <nouveau@lists.freedesktop.org>; Intel Graphics Development <Intel-gfx@lists.freedesktop.org>; Maling list - DRI developers <dri-devel@lists.freedesktop.org>; Simon Ser <contact@emersion.fr>; Koenig, Christian <Christian.Koenig@amd.com>; aritger@nvidia.com <aritger@nvidia.com>; Nieto, David M <David.Nieto@amd.com>
>> Subject: Re: [Intel-gfx] [PATCH 0/7] Per client engine busyness
>>
>> On Wed, May 19, 2021 at 6:16 PM Tvrtko Ursulin
>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>
>>> On 18/05/2021 10:40, Tvrtko Ursulin wrote:
>>>> On 18/05/2021 10:16, Daniel Stone wrote:
>>>>> Hi,
>>>>>
>>>>> On Tue, 18 May 2021 at 10:09, Tvrtko Ursulin
>>>>> <tvrtko.ursulin@linux.intel.com> wrote:
>>>>>> I was just wondering if stat(2) and a chrdev major check would be a
>>>>>> solid criterion to more efficiently (compared to parsing the text
>>>>>> content) detect drm files while walking procfs.
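
For illustration, the stat(2) variant could look roughly like this (226
is the conventional DRM character device major on Linux, hardcoded here
as an assumption; the helper name is made up):

  #include <stdbool.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <sys/sysmacros.h>

  #define DRM_MAJOR 226	/* conventional DRM char device major */

  /* Does /proc/<pid>/fd/<fd> resolve to a DRM character device? */
  static bool fd_is_drm(int pid, int fd)
  {
          char path[64];
          struct stat st;

          snprintf(path, sizeof(path), "/proc/%d/fd/%d", pid, fd);
          /* stat() follows the /proc symlink to the fd's target. */
          if (stat(path, &st))
                  return false;

          return S_ISCHR(st.st_mode) && major(st.st_rdev) == DRM_MAJOR;
  }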
>>>>> Maybe I'm missing something, but is the per-PID walk actually a
>>>>> measurable performance issue rather than just a bit unpleasant?
>>>> Per pid and per each open fd.
>>>>
>>>> As said in the other thread, what bothers me a bit in this scheme is that
>>>> the cost of obtaining GPU usage scales with non-GPU criteria.
>>>>
>>>> For the use case of a top-like tool which shows all processes this is a
>>>> smaller additional cost, but for a gpu-top like tool it is somewhat
>>>> higher.
>>> To further expand, not only would the cost scale per pid multiplied by
>>> open fds, but to detect which of the fds are DRM I see these three
>>> options (a sketch of option 2 follows the list):
>>>
>>> 1) Open and parse fdinfo.
>>> 2) Name based matching ie /dev/dri/.. something.
>>> 3) Stat the symlink target and check for DRM major.
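
A sketch of option 2 for comparison (the /dev/dri/ prefix covers both
card and render nodes; purely illustrative):

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Does the /proc/<pid>/fd/<fd> symlink point under /dev/dri/ ? */
  static bool fd_name_is_drm(int pid, int fd)
  {
          char path[64], target[256];
          ssize_t len;

          snprintf(path, sizeof(path), "/proc/%d/fd/%d", pid, fd);
          len = readlink(path, target, sizeof(target) - 1);
          if (len < 0)
                  return false;
          target[len] = '\0';

          return !strncmp(target, "/dev/dri/", 9);
  }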
>> stat with symlink following should be plenty fast.
>>
>>> All sound quite sub-optimal to me.
>>>
>>> Name based matching is probably the least evil on system resource usage
>>> (Keeping the dentry cache too hot? Too many syscalls?), even though
>>> fundamentally I don't think it is the right approach.
>>>
>>> What happens with dup(2) is another question.
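
(One way to answer the dup(2) question from userspace is kcmp(2): fds
duplicated with dup(2) share one open file description and compare
equal, so they could be deduplicated before counting. This needs kernel
kcmp support and ptrace-level permissions, so it is a sketch only:)

  #define _GNU_SOURCE
  #include <linux/kcmp.h>
  #include <stdbool.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* True if fd1 and fd2 of <pid> share one open file description. */
  static bool fds_share_file(pid_t pid, int fd1, int fd2)
  {
          return syscall(SYS_kcmp, pid, pid, KCMP_FILE, fd1, fd2) == 0;
  }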
>> We need benchmark numbers showing that on anything remotely realistic
>> it's an actual problem. Until we've demonstrated it's a real problem
>> we don't need to solve it.
>>
>> E.g. top with any sorting enabled also parses way more than it
>> displays on every update. It seems to be doing Just Fine (tm).
>>
>>> Does anyone have any feedback on the /proc/<pid>/gpu idea at all?
>> When we know we have a problem to solve we can take a look at solutions.
>> -Daniel
>> --
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch


Thread overview: 42+ messages
2021-05-13 10:59 [PATCH 0/7] Per client engine busyness Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 1/7] drm/i915: Expose list of clients in sysfs Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 2/7] drm/i915: Update client name on context create Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 3/7] drm/i915: Make GEM contexts track DRM clients Tvrtko Ursulin
2021-05-13 10:59 ` [PATCH 4/7] drm/i915: Track runtime spent in closed and unreachable GEM contexts Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 5/7] drm/i915: Track all user contexts per client Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 6/7] drm/i915: Track context current active time Tvrtko Ursulin
2021-05-13 11:00 ` [PATCH 7/7] drm/i915: Expose per-engine client busyness Tvrtko Ursulin
2021-05-13 15:48 ` [PATCH 0/7] Per client engine busyness Alex Deucher
2021-05-13 16:40   ` Tvrtko Ursulin
2021-05-14  5:58     ` Alex Deucher
2021-05-14  7:22       ` Nieto, David M
2021-05-14  8:04         ` Christian König
2021-05-14 13:42           ` Tvrtko Ursulin
2021-05-14 13:53             ` Christian König
2021-05-14 14:47               ` Tvrtko Ursulin
2021-05-14 14:56                 ` Christian König
2021-05-14 15:03                   ` Tvrtko Ursulin
2021-05-14 15:10                     ` Christian König
2021-05-17 14:30                       ` Daniel Vetter
2021-05-17 14:39                         ` Nieto, David M
2021-05-17 16:00                           ` Tvrtko Ursulin
2021-05-17 18:02                             ` Nieto, David M
2021-05-17 18:16                               ` Nieto, David M
2021-05-17 19:03                                 ` Simon Ser
2021-05-18  9:08                                   ` Tvrtko Ursulin
2021-05-18  9:16                                     ` Daniel Stone
2021-05-18  9:40                                       ` Tvrtko Ursulin
2021-05-19 16:16                                         ` Tvrtko Ursulin
2021-05-19 18:23                                           ` [Intel-gfx] " Daniel Vetter
2021-05-19 23:17                                             ` Nieto, David M
2021-05-20 14:11                                               ` Daniel Vetter
2021-05-20 14:12                                                 ` Christian König [this message]
2021-05-20 14:17                                                   ` [Nouveau] " arabek
2021-05-20  8:35                                             ` Tvrtko Ursulin
2021-05-24 10:48                                               ` Tvrtko Ursulin
2021-05-18  9:35                               ` Tvrtko Ursulin
2021-05-18 12:06                                 ` Christian König
2021-05-17 19:16                         ` Christian König
2021-06-28 10:16                       ` Tvrtko Ursulin
2021-06-28 14:37                         ` Daniel Vetter
2021-05-17 14:20   ` Daniel Vetter
