From: Hridya Valsaraju <hridya@google.com>
To: "David Airlie" <airlied@linux.ie>,
"Daniel Vetter" <daniel@ffwll.ch>,
"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
"Maxime Ripard" <mripard@kernel.org>,
"Thomas Zimmermann" <tzimmermann@suse.de>,
"Jonathan Corbet" <corbet@lwn.net>,
"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
"Arve Hjønnevåg" <arve@android.com>,
"Todd Kjos" <tkjos@android.com>,
"Martijn Coenen" <maco@android.com>,
"Joel Fernandes" <joel@joelfernandes.org>,
"Christian Brauner" <christian@brauner.io>,
"Hridya Valsaraju" <hridya@google.com>,
"Suren Baghdasaryan" <surenb@google.com>,
"Sumit Semwal" <sumit.semwal@linaro.org>,
"Benjamin Gaignard" <benjamin.gaignard@linaro.org>,
"Liam Mark" <lmark@codeaurora.org>,
"Laura Abbott" <labbott@redhat.com>,
"Brian Starkey" <Brian.Starkey@arm.com>,
"John Stultz" <john.stultz@linaro.org>,
"Christian König" <christian.koenig@amd.com>,
"Tejun Heo" <tj@kernel.org>, "Zefan Li" <lizefan.x@bytedance.com>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Dave Airlie" <airlied@redhat.com>,
"Kenneth Graunke" <kenneth@whitecape.org>,
"Simon Ser" <contact@emersion.fr>,
"Jason Ekstrand" <jason@jlekstrand.net>,
"Matthew Auld" <matthew.auld@intel.com>,
"Matthew Brost" <matthew.brost@intel.com>,
"Li Li" <dualli@google.com>, "Marco Ballesio" <balejs@google.com>,
"Finn Behrens" <me@kloenk.de>, "Hang Lu" <hangl@codeaurora.org>,
"Wedson Almeida Filho" <wedsonaf@google.com>,
"Masahiro Yamada" <masahiroy@kernel.org>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Nathan Chancellor" <nathan@kernel.org>,
"Kees Cook" <keescook@chromium.org>,
"Nick Desaulniers" <ndesaulniers@google.com>,
"Miguel Ojeda" <ojeda@kernel.org>,
"Vipin Sharma" <vipinsh@google.com>,
"Chris Down" <chris@chrisdown.name>,
"Daniel Borkmann" <daniel@iogearbox.net>,
"Vlastimil Babka" <vbabka@suse.cz>,
"Arnd Bergmann" <arnd@arndb.de>,
dri-devel@lists.freedesktop.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
linaro-mm-sig@lists.linaro.org, cgroups@vger.kernel.org
Cc: Kenny.Ho@amd.com, daniels@collabora.com, kaleshsingh@google.com,
tjmercier@google.com
Subject: [RFC 1/6] gpu: rfc: Proposal for a GPU cgroup controller
Date: Fri, 14 Jan 2022 17:05:59 -0800
Message-ID: <20220115010622.3185921-2-hridya@google.com>
In-Reply-To: <20220115010622.3185921-1-hridya@google.com>
This patch adds a proposal for a new GPU cgroup controller for
accounting/limiting GPU and GPU-related memory allocations.
The proposed controller is based on the DRM cgroup controller[1] and
follows the design of the RDMA cgroup controller.
The new cgroup controller would:
* Allow setting per-cgroup limits on the total size of buffers charged
to it.
* Allow setting per-device limits on the total size of buffers
allocated by a device/allocator within a cgroup.
* Expose a per-device/allocator breakdown of the buffers charged to a
cgroup.
The prototype in the following patches covers only memory accounting
using the GPU cgroup controller and does not implement limit setting.
[1]: https://lore.kernel.org/amd-gfx/20210126214626.16260-1-brian.welty@intel.com/
Signed-off-by: Hridya Valsaraju <hridya@google.com>
---
Hi all,
Here is the RFC documentation for the GPU cgroup controller that we
talked about at LPC 2021 along with a prototype. I reached out to Tejun
with the idea recently and he mentioned that cgroup-aware BPF (by Kenny
Ho) or the new misc cgroup controller can also be considered as
alternatives to track GPU resources. I am sending the RFC to the list to
give everyone else a chance to chime in with their thoughts as well so
that we can reach an agreement on how to proceed. Thanks in advance!
Regards,
Hridya
Documentation/gpu/rfc/gpu-cgroup.rst | 192 +++++++++++++++++++++++++++
Documentation/gpu/rfc/index.rst | 4 +
2 files changed, 196 insertions(+)
create mode 100644 Documentation/gpu/rfc/gpu-cgroup.rst
diff --git a/Documentation/gpu/rfc/gpu-cgroup.rst b/Documentation/gpu/rfc/gpu-cgroup.rst
new file mode 100644
index 000000000000..9bff23007b22
--- /dev/null
+++ b/Documentation/gpu/rfc/gpu-cgroup.rst
@@ -0,0 +1,192 @@
+===================================
+GPU cgroup controller
+===================================
+
+Goals
+=====
+This document intends to outline a plan to create a cgroup v2 controller subsystem
+for the per-cgroup accounting of device and system memory allocated by the GPU
+and related subsystems.
+
+The new cgroup controller would:
+
+* Allow setting per-cgroup limits on the total size of buffers charged to it.
+
+* Allow setting per-device limits on the total size of buffers allocated by a
+ device/allocator within a cgroup.
+
+* Expose a per-device/allocator breakdown of the buffers charged to a cgroup.
+
+Alternatives Considered
+=======================
+
+The following alternatives were considered:
+
+The memory cgroup controller
+____________________________
+
+1. As was noted in [1], the memory accounting provided by the GPU cgroup
+controller is not a good fit for integration into memcg due to the
+differences in how accounting is performed. The GPU cgroup controller
+implements allocator attribution of GPU and GPU-related memory by
+charging each buffer to the cgroup of the process on whose behalf the
+memory was allocated. The buffer stays charged to that cgroup until it
+is freed, regardless of whether the process retains any references to
+it. The memory cgroup controller, on the other hand, offers more
+fine-grained charging and uncharging behavior depending on the kind of
+page being accounted.
+
+2. Memcg performs accounting in units of pages. In the DMA-BUF buffer sharing model,
+a process takes a reference to the entire buffer (hence keeping it alive) even if
+it is only accessing parts of it. Therefore, per-page memory tracking for DMA-BUF
+memory accounting would only introduce additional overhead without any benefits.
+
+[1]: https://patchwork.kernel.org/project/dri-devel/cover/20190501140438.9506-1-brian.welty@intel.com/#22624705
+
+Userspace service to keep track of buffer allocations and releases
+__________________________________________________________________
+
+1. There is no way for a userspace service to intercept all allocations and releases.
+2. If the service gets killed or restarted, all accounting done so far is lost.
+
+UAPI
+====
+When enabled, the new cgroup controller would create the following files in every cgroup.
+
+::
+
+    gpu.memory.current (R)
+    gpu.memory.max (R/W)
+
+gpu.memory.current is a read-only file that would contain per-device memory allocations
+in a key-value format, where the key is a string representing the device name
+and the value is the total size, in bytes, of memory charged to that device in the cgroup.
+
+For example:
+
+::
+
+    cat /sys/fs/cgroup/cgroup1/gpu.memory.current
+    dev1 4194304
+    dev2 4194304
+
+The string key for each device is set by the device driver when the device registers
+with the GPU cgroup controller to participate in resource accounting (see the
+'Design and Implementation' section for more details).
+
+gpu.memory.max is a read/write file. It would show the current limit on the
+total memory usage of the cgroup as well as the limits on total memory usage
+for each allocator/device.
+
+Setting a total limit for a cgroup can be done as follows:
+
+::
+
+    echo "total 41943040" > /sys/fs/cgroup/cgroup1/gpu.memory.max
+
+Setting a limit on the total size of buffers allocated by a particular
+device/allocator can be done as follows:
+
+::
+
+    echo "dev1 4194304" > /sys/fs/cgroup/cgroup1/gpu.memory.max
+
+In this example, 'dev1' is the string key set by the device driver during
+registration.
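+
+Reading gpu.memory.max back might then produce output along the following
+lines (an illustrative sketch; the exact output format is not finalized in
+this proposal):
+
+::
+
+    cat /sys/fs/cgroup/cgroup1/gpu.memory.max
+    total 41943040
+    dev1 4194304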
+
+Design and Implementation
+=========================
+
+The cgroup controller would closely follow the design of the RDMA cgroup controller
+subsystem, where each cgroup maintains a list of resource pools.
+Each resource pool points to the device whose usage it tracks and holds a counter
+for the current total usage along with the maximum limit set for that device.
+
+The code block below is a preliminary sketch of how the core kernel data structures
+and APIs might look.
+
+.. code-block:: c
+
+    /**
+     * The GPU cgroup controller data structure.
+     */
+    struct gpucg {
+            struct cgroup_subsys_state css;
+
+            /* list of all resource pools that belong to this cgroup */
+            struct list_head rpools;
+    };
+
+    struct gpucg_device {
+            /*
+             * list of various resource pools in various cgroups that the
+             * device is part of.
+             */
+            struct list_head rpools;
+
+            /* list of all devices registered for GPU cgroup accounting */
+            struct list_head dev_node;
+
+            /* name to be used as identifier for accounting and limit setting */
+            const char *name;
+    };
+
+    struct gpucg_resource_pool {
+            /* The device whose resource usage is tracked by this resource pool */
+            struct gpucg_device *device;
+
+            /* list of all resource pools for the cgroup */
+            struct list_head cg_node;
+
+            /*
+             * list maintained by the gpucg_device to keep track of its
+             * resource pools
+             */
+            struct list_head dev_node;
+
+            /* tracks memory usage of the resource pool */
+            struct page_counter total;
+    };
+
+    /**
+     * gpucg_register_device - Registers a device for memory accounting using
+     * the GPU cgroup controller.
+     *
+     * @gpucg_dev: The device to register for memory accounting. Must remain
+     * valid after registration.
+     * @name: Pointer to a string literal to denote the name of the device.
+     */
+    void gpucg_register_device(struct gpucg_device *gpucg_dev, const char *name);
+
+    /**
+     * gpucg_try_charge - charge memory to the specified gpucg and gpucg_device.
+     *
+     * @gpucg: The gpu cgroup to charge the memory to.
+     * @device: The device to charge the memory to.
+     * @usage: size of memory to charge in bytes.
+     *
+     * Return: returns 0 if the charging is successful and otherwise returns an
+     * error code.
+     */
+    int gpucg_try_charge(struct gpucg *gpucg, struct gpucg_device *device, u64 usage);
+
+    /**
+     * gpucg_uncharge - uncharge memory from the specified gpucg and gpucg_device.
+     *
+     * @gpucg: The gpu cgroup to uncharge the memory from.
+     * @device: The device to uncharge the memory from.
+     * @usage: size of memory to uncharge in bytes.
+     */
+    void gpucg_uncharge(struct gpucg *gpucg, struct gpucg_device *device, u64 usage);
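+
+As a rough illustration, the following is a minimal, hypothetical sketch of how a
+buffer exporter or heap might use these APIs. The names my_gpucg_dev, "my-heap"
+and my_heap_* are placeholders and not part of this proposal, and how the gpucg
+reference for the allocating process is obtained is left out of the sketch.
+
+.. code-block:: c
+
+    /* Hypothetical exporter-side usage; all names are placeholders. */
+    static struct gpucg_device my_gpucg_dev;
+
+    static int my_heap_init(void)
+    {
+            /* Register once so that charges appear under the "my-heap" key. */
+            gpucg_register_device(&my_gpucg_dev, "my-heap");
+            return 0;
+    }
+
+    /* @gpucg is assumed to reference the allocating process's cgroup. */
+    static int my_heap_alloc(struct gpucg *gpucg, u64 size)
+    {
+            int ret;
+
+            /* Charge the buffer to the cgroup before allocating its memory. */
+            ret = gpucg_try_charge(gpucg, &my_gpucg_dev, size);
+            if (ret)
+                    return ret;
+
+            /* ... allocate the backing memory; uncharge again on failure ... */
+
+            return 0;
+    }
+
+    /* Called when the buffer is freed. */
+    static void my_heap_free(struct gpucg *gpucg, u64 size)
+    {
+            gpucg_uncharge(gpucg, &my_gpucg_dev, size);
+    }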
+
+Future Work
+===========
+Additional GPU resources can be supported by adding new controller files.
+
+Upstreaming Plan
+================
+* Decide on a UAPI that accommodates all use-cases for the upstream GPU ecosystem
+ as well as for Android.
+
+* Prototype the GPU cgroup controller and integrate its usage into the DMA-BUF
+ system heap.
+
+* Demonstrate its usage from userspace in the Android Open Source Project.
+
+* Send out RFCs to LKML for the GPU cgroup controller and iterate.
diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
index 91e93a705230..0a9bcd94e95d 100644
--- a/Documentation/gpu/rfc/index.rst
+++ b/Documentation/gpu/rfc/index.rst
@@ -23,3 +23,7 @@ host such documentation:
 .. toctree::

     i915_scheduler.rst
+
+.. toctree::
+
+    gpu-cgroup.rst
--
2.34.1.703.g22d0c6ccf7-goog