From: Yiwei Zhang <zzyiwei@google.com>
To: Pekka Paalanen <ppaalanen@gmail.com>
Cc: Alistair Delva <adelva@google.com>,
Prahlad Kilambi <prahladk@google.com>,
dri-devel@lists.freedesktop.org,
Jerome Glisse <jglisse@redhat.com>,
Sean Paul <seanpaul@chromium.org>,
kraxel@redhat.com, Chris Forbes <chrisforbes@google.com>,
kernel-team@android.com
Subject: Re: Proposal to report GPU private memory allocations with sysfs nodes [plain text version]
Date: Wed, 30 Oct 2019 14:03:43 -0700 [thread overview]
Message-ID: <CAKT=dDkrPUOM2qf8RrZsS9cxmiEntZT4K6rAEf2xMyyZe=Usog@mail.gmail.com> (raw)
In-Reply-To: <20191029103304.29142c34@eldfell.localdomain>
Hi folks,
Didn't realize Gmail has a plain-text mode ; )
> In my opinion tracking per process is good, but you cannot sidestep the
> question of tracking performance by saying that there is only few
> processes using the GPU.
Agreed, I shouldn't make that statement. Thanks for the info as well!
> What is an "active" GPU private allocation? This implies that there are
> also inactive allocations, what are those?
"active" means we don't track allocation history; we only want to report the
memory that is currently allocated.
> What about getting a coherent view of the total GPU private memory
> consumption of a single process? I think the same caveat and solution
> would apply.
Realistically, I assume drivers won't change the values in the middle of a
snapshot call? That said, adding one more per-process node reporting the total
GPU private memory allocated would also let tests enforce coherency. I'd
suggest an additional "/sys/devices/<some TBD root>/<pid>/gpu_mem/total"
node.
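To illustrate what that coherency enforcement could look like, here is a
minimal sketch against a mock tree. Everything here is hypothetical: the real
sysfs root is still TBD, and the per-type node names (texture, buffer) are
invented for the example; the only point is that the per-type values must sum
to the reported total.

```shell
# Build a mock of the proposed per-process layout under a temp dir
# (illustrative paths only; the real sysfs root is still TBD).
root=$(mktemp -d)
pid=1234
mkdir -p "$root/$pid/gpu_mem"
echo 4096  > "$root/$pid/gpu_mem/texture"
echo 8192  > "$root/$pid/gpu_mem/buffer"
echo 12288 > "$root/$pid/gpu_mem/total"

# Coherency check: sum every per-type node and compare against total.
sum=0
for f in "$root/$pid/gpu_mem/"*; do
  case "$f" in */total) continue ;; esac
  sum=$((sum + $(cat "$f")))
done
total=$(cat "$root/$pid/gpu_mem/total")
[ "$sum" -eq "$total" ] && echo "coherent" || echo "mismatch"  # prints "coherent"
```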
Best,
Yiwei
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
Thread overview: 27+ messages
2019-10-25 18:35 Proposal to report GPU private memory allocations with sysfs nodes [plain text version] Yiwei Zhang
2019-10-28 15:26 ` Jerome Glisse
2019-10-28 18:33 ` Yiwei Zhang
2019-10-29 1:19 ` Yiwei Zhang
2019-10-31 5:23 ` Kenny Ho
2019-10-31 16:59 ` Yiwei Zhang
2019-10-31 17:57 ` Kenny Ho
2019-11-01 8:36 ` Pekka Paalanen
2019-11-04 19:34 ` Yiwei Zhang
2019-11-05 9:47 ` Daniel Vetter
2019-11-05 19:45 ` Yiwei Zhang
2019-11-06 9:56 ` Daniel Vetter
2019-11-06 16:55 ` Rob Clark
2019-11-06 19:21 ` Yiwei Zhang
2019-11-12 18:17 ` Yiwei Zhang
2019-11-12 20:18 ` Jerome Glisse
2019-11-15 1:02 ` Yiwei Zhang
2019-12-13 22:09 ` Yiwei Zhang
2019-12-19 16:18 ` Rohan Garg
2019-12-19 18:52 ` Yiwei Zhang
2020-01-06 10:46 ` Rohan Garg
2020-01-06 20:47 ` Yiwei Zhang
2020-03-20 12:07 ` Rohan Garg
2020-03-20 21:35 ` Yiwei Zhang
2019-10-29 8:33 ` Pekka Paalanen
2019-10-30 21:03 ` Yiwei Zhang [this message]
2019-10-30 22:06 ` Yiwei Zhang