From: Tejun Heo <tj@kernel.org>
To: "T.J. Mercier" <tjmercier@google.com>
Cc: daniel@ffwll.ch,
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>,
	Thomas Zimmermann <tzimmermann@suse.de>,
	David Airlie <airlied@linux.ie>, Jonathan Corbet <corbet@lwn.net>,
	hridya@google.com, christian.koenig@amd.com, jstultz@google.com,
	tkjos@android.com, cmllamas@google.com, surenb@google.com,
	kaleshsingh@google.com, Kenny.Ho@amd.com, mkoutny@suse.com,
	skhan@linuxfoundation.org, kernel-team@android.com,
	dri-devel@lists.freedesktop.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC v5 1/6] gpu: rfc: Proposal for a GPU cgroup controller
Date: Thu, 21 Apr 2022 10:34:41 -1000	[thread overview]
Message-ID: <YmG/4Q0Cz0yUMbu+@slm.duckdns.org> (raw)
In-Reply-To: <20220420235228.2767816-2-tjmercier@google.com>

Hello,

On Wed, Apr 20, 2022 at 11:52:19PM +0000, T.J. Mercier wrote:
> From: Hridya Valsaraju <hridya@google.com>
> 
> This patch adds a proposal for a new GPU cgroup controller for
> accounting/limiting GPU and GPU-related memory allocations.
> The proposed controller is based on the DRM cgroup controller[1] and
> follows the design of the RDMA cgroup controller.
> 
> The new cgroup controller would:
> * Allow setting per-device limits on the total size of buffers
>   allocated by a device within a cgroup.
> * Expose a per-device/allocator breakdown of the buffers charged to a
>   cgroup.
> 
> The prototype in the following patches is only for memory accounting
> using the GPU cgroup controller and does not implement limit setting.
> 
> [1]: https://lore.kernel.org/amd-gfx/20210126214626.16260-1-brian.welty@intel.com/
> 
> Signed-off-by: Hridya Valsaraju <hridya@google.com>
> Signed-off-by: T.J. Mercier <tjmercier@google.com>

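(For illustration only: the per-device/allocator breakdown described above could be consumed roughly as follows. The file name and key/value format here are hypothetical, modeled on nested keyed files of other cgroup v2 controllers such as rdma; the RFC itself defines neither.)

```shell
# Hypothetical sample of a per-device/allocator breakdown file, in the
# keyed style used by other cgroup v2 controllers (e.g. rdma.current).
# Neither the file name nor the format is defined by this RFC.
cat > /tmp/gpu.memory.current <<'EOF'
system-heap total=4194304
dmabuf-qcom total=1048576
EOF

# Sum the charged totals across all devices/allocators.
awk -F'total=' '{ sum += $2 } END { print sum }' /tmp/gpu.memory.current
# prints 5242880 (4 MiB + 1 MiB)
```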
Looks straightforward enough from the cgroup side. Are the gpu folks generally
happy? David, Daniel, Kenny, what are your thoughts?

>  Documentation/gpu/rfc/gpu-cgroup.rst | 190 +++++++++++++++++++++++++++

Can you fold the important part into cgroup-v2.rst and maybe make the rest
code comments if necessary?

Thanks.

-- 
tejun

Thread overview: 29+ messages
2022-04-20 23:52 [RFC v5 0/6] Proposal for a GPU cgroup controller T.J. Mercier
2022-04-20 23:52 ` [RFC v5 1/6] gpu: rfc: " T.J. Mercier
2022-04-21 20:34   ` Tejun Heo [this message]
2022-04-21 22:25     ` T.J. Mercier
2022-04-20 23:52 ` [RFC v5 2/6] cgroup: gpu: Add a cgroup controller for allocator attribution of GPU memory T.J. Mercier
2022-04-20 23:52 ` [RFC v5 3/6] dmabuf: heaps: export system_heap buffers with GPU cgroup charging T.J. Mercier
2022-04-20 23:52 ` [RFC v5 4/6] dmabuf: Add gpu cgroup charge transfer function T.J. Mercier
2022-04-20 23:52 ` [RFC v5 5/6] binder: Add flags to relinquish ownership of fds T.J. Mercier
2022-04-21 18:28   ` Carlos Llamas
2022-04-21 22:09     ` T.J. Mercier
2022-04-20 23:52 ` [RFC v5 6/6] selftests: Add binder cgroup gpu memory transfer tests T.J. Mercier
2022-04-22 14:53 ` [RFC v5 0/6] Proposal for a GPU cgroup controller Greg Kroah-Hartman
2022-04-22 16:40   ` T.J. Mercier
