From: Daniel Vetter <daniel@ffwll.ch>
To: "Christian König" <christian.koenig@amd.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Intel Graphics Development <Intel-gfx@lists.freedesktop.org>,
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>,
	"Nieto, David M" <David.Nieto@amd.com>
Subject: Re: [PATCH 0/7] Per client engine busyness
Date: Mon, 17 May 2021 16:30:47 +0200
Message-ID: <YKJ+F4KqEiQQYkRz@phenom.ffwll.local>
In-Reply-To: <a2b03603-eb3e-7bef-a799-c15cfb1a8e0b@amd.com>

On Fri, May 14, 2021 at 05:10:29PM +0200, Christian König wrote:
> Am 14.05.21 um 17:03 schrieb Tvrtko Ursulin:
> > On 14/05/2021 15:56, Christian König wrote:
> > > Am 14.05.21 um 16:47 schrieb Tvrtko Ursulin:
> > > > On 14/05/2021 14:53, Christian König wrote:
> > > > > > David also said that you considered sysfs but were wary
> > > > > > of exposing process info in there. To clarify, my patch
> > > > > > is not exposing a sysfs entry per process, but one per
> > > > > > open drm fd.
> > > > >
> > > > > Yes, we discussed this as well, but then rejected the approach.
> > > > >
> > > > > To have useful information related to the open drm fd you
> > > > > need to relate that to the process(es) which have that file
> > > > > descriptor open. Just tracking who opened it first like DRM
> > > > > does is pretty useless on modern systems.
> > > >
> > > > We do update the pid/name for fds passed over unix sockets.
> > >
> > > Well I just double checked and that is not correct.
> > >
> > > Could be that i915 has some special code for that, but on my laptop
> > > I only see the X server under the "clients" debugfs file.
> >
> > Yes, we have special code in i915 for this. It is part of the series
> > we are discussing here.
>
> Ah, yeah, you should mention that. Could we please separate that into
> common code instead? I really see that as a bug in the current
> handling, independent of the discussion here.
>
> As far as I know all IOCTLs go through some common place in DRM anyway.

Yeah, might be good to fix that confusion in debugfs. But since that's
non-uapi, I guess no one ever cared (enough).

> > > > > But an "lsof /dev/dri/renderD128" for example does exactly
> > > > > what top does as well, it iterates over /proc and sees which
> > > > > process has that file open.
> > > >
> > > > Lsof is quite inefficient for this use case. It has to open
> > > > _all_ open files for _all_ processes on the system to find the
> > > > handful which may have the DRM device open.
> > >
> > > Completely agree.
> > >
> > > The key point is you either need to have all references to an open
> > > fd, or at least track whoever last used that fd.
> > >
> > > At least the last time I looked even the fs layer didn't know which
> > > fd is open by which process. So there wasn't really any alternative
> > > to the lsof approach.
> >
> > I asked you about the use case you have in mind, which you did not
> > answer. Otherwise I don't understand when you would need to walk all
> > files, or what information you want to get.
>
> Per-fd debugging information, e.g. instead of the top use case you know
> which process you want to look at.
>
> > For the use case of knowing which DRM file is using how much GPU time
> > on engine X, we do not need to walk all open files with either my
> > sysfs approach or the proc approach from Chris. (In the former case we
> > optionally aggregate by PID at presentation time, and in the latter
> > case aggregation is implicit.)
>
> I'm unsure if we should go with the sysfs, proc or some completely
> different approach.
>
> In general it would be nice to have a way to find all the fd references
> for an open inode.

Yeah, but that maybe needs to be an ioctl or syscall or something on the
inode, that gives you a list of (procfd, fd_nr) pairs pointing back at
all open files? If this really is a real world problem, that is - given
that top/lsof and everyone else hasn't asked for it yet, maybe it's not.

Also, as I replied in some other thread, I really like the fdinfo stuff,
and I think trying to somewhat standardize this across drivers would be
neat. Especially since i915 is going to adopt drm/scheduler for
front-end scheduling too, so at least some of this should be fairly easy
to share.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
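[Editorial aside: the /proc walk that lsof and top perform, and the fdinfo interface the reply endorses, can be sketched as a small script. This is a minimal illustration of the mechanism discussed above, not part of the patches in this thread; the function names are made up for the example, and the drm-* fdinfo keys mentioned in the thread are driver-specific and not shown here.]

```python
import os

def fd_holders(path):
    """Return (pid, fd) pairs for every process holding `path` open,
    by walking /proc/<pid>/fd the way lsof does.  This is exactly the
    "open _all_ files for _all_ processes" cost criticized above."""
    target = os.path.realpath(path)
    found = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = "/proc/%s/fd" % pid
        try:
            fds = os.listdir(fd_dir)
        except (PermissionError, FileNotFoundError):
            continue  # process exited, or we lack privileges
        for fd in fds:
            try:
                if os.path.realpath("%s/%s" % (fd_dir, fd)) == target:
                    found.append((int(pid), int(fd)))
            except FileNotFoundError:
                continue  # fd closed while we were iterating
    return found

def read_fdinfo(pid, fd):
    """Parse /proc/<pid>/fdinfo/<fd> into a dict.  This is the per-fd
    interface that drivers can extend with their own key: value lines,
    which is what makes fdinfo attractive for per-client busyness."""
    info = {}
    with open("/proc/%d/fdinfo/%d" % (pid, fd)) as f:
        for line in f:
            key, _, val = line.partition(":")
            info[key.strip()] = val.strip()
    return info
```

For a DRM client the walk would be invoked as `fd_holders("/dev/dri/renderD128")`; the point of the sysfs/fdinfo proposals in the thread is precisely to avoid this full-system scan for the monitoring use case.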