From: Kenny Ho <y2kenny@gmail.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: Parav Pandit <parav@mellanox.com>, David Airlie <airlied@linux.ie>,
	kenny.ho@amd.com, intel-gfx@lists.freedesktop.org,
	"Welty, Brian" <brian.welty@intel.com>, Harish.Kasiviswanathan@amd.com,
	cgroups@vger.kernel.org, dri-devel@lists.freedesktop.org,
	Michal Hocko <mhocko@kernel.org>, Johannes Weiner <hannes@cmpxchg.org>,
	linux-mm@kvack.org, Jérôme Glisse <jglisse@redhat.com>,
	Li Zefan <lizefan@huawei.com>, Vladimir Davydov <vdavydov.dev@gmail.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>, Alex Deucher <alexander.deucher@amd.com>,
	Tejun Heo <tj@kernel.org>, Christian König <christian.koenig@amd.com>,
	RDMA mailing list <linux-rdma@vger.kernel.org>
Subject: Re: [RFC PATCH 0/5] cgroup support for GPU devices
Date: Sun, 5 May 2019 12:34:16 -0400
Message-ID: <CAOWid-cCq+yB9m-u8YpHFuhUZ+C7EpbT2OD27iszJVrruAtqKg@mail.gmail.com>
In-Reply-To: <20190505160506.GF6938@mtr-leonro.mtl.com>

(Sent again; not sure why my previous email was just a reply instead of a
reply-all.)

On Sun, May 5, 2019 at 12:05 PM Leon Romanovsky <leon@kernel.org> wrote:
> We are talking about two different access patterns for this device
> memory (DM): one is to use the device memory, and the second is to
> configure/limit it. Usually those actions will be performed by
> different groups.
>
> The first group (programmers) uses a special API [1] through
> libibverbs [2] without any notion of cgroups or any limitations. The
> second group (sysadmins) is less interested in application specifics;
> for them, "device memory" means "memory", not "rdma-specific,
> nic-specific, internal memory".

Um... I am not sure that answered it, especially in the context of
cgroups (this is just for my own curiosity, by the way; I don't know
much about rdma). You said sysadmins are less interested in application
specifics, but then how would they make the judgement call on how much
"device memory" is provisioned to one application/container over
another (say you have 5 cgroups sharing an rdma device)? What are the
consequences of under-provisioning "device memory" to an application?
And if it is all just memory, can a sysadmin provision more system
memory in place of device memory (i.e., are they interchangeable)?

I guess I am confused: if device memory is just memory (not rdma- or
nic-specific) to sysadmins, how would they know the right amount to set?

Regards,
Kenny

> [1] ibv_alloc_dm()
> http://man7.org/linux/man-pages/man3/ibv_alloc_dm.3.html
> https://www.openfabrics.org/images/2018workshop/presentations/304_LLiss_OnDeviceMemory.pdf
> [2] https://github.com/linux-rdma/rdma-core/blob/master/libibverbs/
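
For reference, allocation through this API looks roughly like the
following. This is a minimal, untested sketch based only on the
ibv_alloc_dm() man page linked above (device selection is simplified to
"take the first device" and most error handling is omitted; build with
-libverbs):

    #include <stdio.h>
    #include <string.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0)
            return 1;

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx)
            return 1;

        /* Request 4 KiB of the device's on-chip memory. */
        struct ibv_alloc_dm_attr attr = {
            .length = 4096,
            .log_align_req = 0,
            .comp_mask = 0,
        };
        struct ibv_dm *dm = ibv_alloc_dm(ctx, &attr);
        if (dm) {
            /* Host <-> device-memory copies go through dedicated
             * helpers; this memory is not mapped like system RAM. */
            char buf[16] = "hello";
            ibv_memcpy_to_dm(dm, 0, buf, sizeof(buf));
            ibv_memcpy_from_dm(buf, dm, 0, sizeof(buf));
            ibv_free_dm(dm);
        } else {
            fprintf(stderr, "ibv_alloc_dm failed\n");
        }

        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

Nothing in this flow touches cgroups, which seems to be Leon's point:
the programmers calling ibv_alloc_dm() and the admins who would set
limits on it are different audiences.
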
> > I think we need to be careful about drawing the line between
> > duplication and over-coupling between subsystems. I have other
> > thoughts and concerns, and I will try to organize them into a
> > response in the next few days.
> >
> > Regards,
> > Kenny
> >
> > > > Is AMD interested in collaborating to help shape this framework?
> > > > It is intended to be device-neutral, so it could be leveraged by
> > > > various types of devices.
> > > > If you have an alternative solution well underway, then maybe
> > > > we can work together to merge our efforts into one.
> > > > In the end, the DRM community is best served with a common solution.
> > > > >
> > > > >>> and with future work, we could extend to:
> > > > >>>   * track and control share of GPU time (reuse of cpu/cpuacct)
> > > > >>>   * apply mask of allowed execution engines (reuse of cpusets)
> > > > >>>
> > > > >>> Instead of introducing a new cgroup subsystem for GPU devices, a new
> > > > >>> framework is proposed to allow devices to register with existing
> > > > >>> cgroup controllers, which creates a per-device cgroup_subsys_state
> > > > >>> within the cgroup. This gives device drivers their own private cgroup
> > > > >>> controls (such as memory limits or other parameters) to be applied to
> > > > >>> device resources instead of host system resources.
> > > > >>> Device drivers (GPU or other) are then able to reuse the existing
> > > > >>> cgroup controls, instead of inventing similar ones.
> > > > >>>
> > > > >>> Per-device controls would be exposed in the cgroup filesystem as:
> > > > >>>   mount/<cgroup_name>/<subsys_name>.devices/<dev_name>/<subsys_files>
> > > > >>> such as (for example):
> > > > >>>   mount/<cgroup_name>/memory.devices/<dev_name>/memory.max
> > > > >>>   mount/<cgroup_name>/memory.devices/<dev_name>/memory.current
> > > > >>>   mount/<cgroup_name>/cpu.devices/<dev_name>/cpu.stat
> > > > >>>   mount/<cgroup_name>/cpu.devices/<dev_name>/cpu.weight
> > > > >>>
> > > > >>> The drm/i915 patch in this series is based on top of other RFC work [1]
> > > > >>> for i915 device memory support.
> > > > >>>
> > > > >>> AMD [2] and Intel [3] have proposed related work in this area within
> > > > >>> the last few years, listed below as reference. This new RFC reuses
> > > > >>> existing cgroup controllers and takes a different approach than prior
> > > > >>> work.
> > > > >>>
> > > > >>> Finally, some potential discussion points for this series:
> > > > >>>   * merge proposed <subsys_name>.devices into a single devices directory?
> > > > >>>   * allow devices to have multiple registrations for subsets of resources?
> > > > >>>   * document a 'common charging policy' for device drivers to follow?
> > > > >>>
> > > > >>> [1] https://patchwork.freedesktop.org/series/56683/
> > > > >>> [2] https://lists.freedesktop.org/archives/dri-devel/2018-November/197106.html
> > > > >>> [3] https://lists.freedesktop.org/archives/intel-gfx/2018-January/153156.html
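
To make the memory.devices layout quoted above concrete: under the
proposed hierarchy, a sysadmin capping one cgroup's use of a device's
memory might do something like the following. This is a hypothetical
sketch only; the series is an RFC, so none of these paths exist
upstream, and the cgroup name (container1), device name (card0), and
mount point are made up for illustration:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical path, following the cover letter's
     * mount/<cgroup_name>/memory.devices/<dev_name>/memory.max example. */
    #define DEV_LIMIT "/sys/fs/cgroup/container1/memory.devices/card0/memory.max"

    int main(void)
    {
        const char *limit = "268435456\n";  /* 256 MiB of device memory */
        int fd = open(DEV_LIMIT, O_WRONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, limit, strlen(limit)) < 0)
            perror("write");
        close(fd);
        return 0;
    }

The attraction of the proposal is that this is the same
write-a-value-to-a-file workflow admins already use for the host-level
memory.max, just scoped to a single device.
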
> > > > >>>
> > > > >>> Brian Welty (5):
> > > > >>>   cgroup: Add cgroup_subsys per-device registration framework
> > > > >>>   cgroup: Change kernfs_node for directories to store
> > > > >>>     cgroup_subsys_state
> > > > >>>   memcg: Add per-device support to memory cgroup subsystem
> > > > >>>   drm: Add memory cgroup registration and DRIVER_CGROUPS feature bit
> > > > >>>   drm/i915: Use memory cgroup for enforcing device memory limit
> > > > >>>
> > > > >>>  drivers/gpu/drm/drm_drv.c                  |  12 +
> > > > >>>  drivers/gpu/drm/drm_gem.c                  |   7 +
> > > > >>>  drivers/gpu/drm/i915/i915_drv.c            |   2 +-
> > > > >>>  drivers/gpu/drm/i915/intel_memory_region.c |  24 +-
> > > > >>>  include/drm/drm_device.h                   |   3 +
> > > > >>>  include/drm/drm_drv.h                      |   8 +
> > > > >>>  include/drm/drm_gem.h                      |  11 +
> > > > >>>  include/linux/cgroup-defs.h                |  28 ++
> > > > >>>  include/linux/cgroup.h                     |   3 +
> > > > >>>  include/linux/memcontrol.h                 |  10 +
> > > > >>>  kernel/cgroup/cgroup-v1.c                  |  10 +-
> > > > >>>  kernel/cgroup/cgroup.c                     | 310 ++++++++++++++++++---
> > > > >>>  mm/memcontrol.c                            | 183 +++++++++++-
> > > > >>>  13 files changed, 552 insertions(+), 59 deletions(-)
> > > > >>>
> > > > >>> --
> > > > >>> 2.21.0