All of lore.kernel.org
* [PATCH RFC 0/3] mm/memcontrol drm/ttm: charge ttm buffer backed by system memory
@ 2020-01-13 15:35 ` Qiang Yu
  0 siblings, 0 replies; 19+ messages in thread
From: Qiang Yu @ 2020-01-13 15:35 UTC (permalink / raw)
  To: linux-mm, cgroups, dri-devel
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Tejun Heo,
	Christian Koenig, Huang Rui, David Airlie, Daniel Vetter,
	Kenny Ho, Qiang Yu

Buffers created by GPU drivers can be huge (often several MB, sometimes
hundreds or even thousands of MB). Some GPU drivers call drm_gem_get_pages(),
which allocates these buffers through shmem and therefore already charges
memcg, while other drivers such as amdgpu use TTM, which allocates the
system-memory-backed buffers directly with alloc_pages() and so does not
charge memcg today.

Unlike pure kernel memory, a GPU buffer needs to be mapped to user space so
the application can fill it with commands and data (such as textures and
vertices) before the GPU hardware consumes it. So it is not appropriate to
account it as memcg kmem by adding __GFP_ACCOUNT to the alloc_pages() gfp
flags.

Another reason is that the backing memory of a GPU buffer may be allocated
later, after the buffer object is created, and possibly from another process.
So we need to record the memcg at buffer object creation, then charge it
later when needed.

TTM also uses a page pool that acts as a cache for write-combined/uncached
pages, so adding new GFP flags for alloc_pages() does not work either.
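The record-then-charge flow above can be sketched as a small userspace model
(plain C with illustrative names only — this is not the kernel API proposed
here): the memcg is captured when the buffer object is created, and the
charge happens only when the backing pages are actually allocated, failing
when the group would exceed its limit.

```c
#include <assert.h>
#include <stddef.h>

/* toy stand-in for a memory cgroup: a page counter with a hard limit */
struct toy_memcg {
	long nr_pages;
	long limit;
};

/* toy buffer object: remembers its creator's memcg, like the RFC's bo->memcg */
struct toy_bo {
	struct toy_memcg *memcg;
	long num_pages;
	int populated;
};

/* creation: only record the memcg, no charge yet */
static void toy_bo_create(struct toy_bo *bo, struct toy_memcg *creator,
			  long pages)
{
	bo->memcg = creator;
	bo->num_pages = pages;
	bo->populated = 0;
}

/* populate: charge the recorded memcg; fail over the limit, like try_charge() */
static int toy_bo_populate(struct toy_bo *bo)
{
	if (bo->memcg->nr_pages + bo->num_pages > bo->memcg->limit)
		return -1;
	bo->memcg->nr_pages += bo->num_pages;
	bo->populated = 1;
	return 0;
}

/* unpopulate: return the charge */
static void toy_bo_unpopulate(struct toy_bo *bo)
{
	if (bo->populated) {
		bo->memcg->nr_pages -= bo->num_pages;
		bo->populated = 0;
	}
}
```

The point of the split is visible in the model: creation is cheap and never
fails on accounting, while population is the step that can hit the limit.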

Qiang Yu (3):
  mm: memcontrol: add mem_cgroup_(un)charge_drvmem
  mm: memcontrol: record driver memory statistics
  drm/ttm: support memcg for ttm_tt

 drivers/gpu/drm/ttm/ttm_bo.c         | 10 +++++
 drivers/gpu/drm/ttm/ttm_page_alloc.c | 18 ++++++++-
 drivers/gpu/drm/ttm/ttm_tt.c         |  3 ++
 include/drm/ttm/ttm_bo_api.h         |  5 +++
 include/drm/ttm/ttm_tt.h             |  4 ++
 include/linux/memcontrol.h           | 22 +++++++++++
 mm/memcontrol.c                      | 58 ++++++++++++++++++++++++++++
 7 files changed, 119 insertions(+), 1 deletion(-)

-- 
2.17.1



^ permalink raw reply	[flat|nested] 19+ messages in thread


* [PATCH RFC 1/3] mm: memcontrol: add mem_cgroup_(un)charge_drvmem
  2020-01-13 15:35 ` Qiang Yu
  (?)
@ 2020-01-13 15:35   ` Qiang Yu
  -1 siblings, 0 replies; 19+ messages in thread
From: Qiang Yu @ 2020-01-13 15:35 UTC (permalink / raw)
  To: linux-mm, cgroups, dri-devel
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Tejun Heo,
	Christian Koenig, Huang Rui, David Airlie, Daniel Vetter,
	Kenny Ho, Qiang Yu

This is for drivers that allocate memory for both user application and
kernel device driver usage. For example, a GPU driver allocates GFP_USER
pages that are mapped to user space so the application can fill them with
commands and data such as textures and vertices, then lets the GPU command
processor consume this memory. These buffers can be huge (often several MB
and may reach hundreds or even thousands of MB).
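The intended pairing of the new calls — take a reference to the creator's
memcg, charge when pages are allocated, uncharge and drop the reference on
free — can be modeled in plain userspace C. All names below are illustrative
stand-ins, not the mem_cgroup_driver_get_from_current()/
mem_cgroup_(un)charge_drvmem() API added by this patch: the getter skips the
root memcg and takes a reference, and charging is a no-op when no memcg was
recorded.

```c
#include <assert.h>
#include <stddef.h>

struct toy_memcg {
	long refcount;   /* stands in for css reference counting */
	long charged;    /* pages currently charged */
	int is_root;     /* the root memcg is never charged */
};

/* like the getter: reference the task's memcg, or NULL for root/disabled */
static struct toy_memcg *toy_get(struct toy_memcg *current_memcg)
{
	if (!current_memcg || current_memcg->is_root)
		return NULL;
	current_memcg->refcount++;
	return current_memcg;
}

/* charge a batch of pages; nothing recorded means no accounting */
static int toy_charge(struct toy_memcg *memcg, long nr_pages)
{
	if (!memcg)
		return 0;
	memcg->charged += nr_pages;
	return 0;
}

/* uncharge a batch of pages */
static void toy_uncharge(struct toy_memcg *memcg, long nr_pages)
{
	if (memcg)
		memcg->charged -= nr_pages;
}

/* drop the reference taken by toy_get() */
static void toy_put(struct toy_memcg *memcg)
{
	if (memcg)
		memcg->refcount--;
}
```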

Signed-off-by: Qiang Yu <qiang.yu@amd.com>
---
 include/linux/memcontrol.h | 21 ++++++++++++++++
 mm/memcontrol.c            | 49 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index ae703ea3ef48..d76977943265 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1363,6 +1363,27 @@ static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
 }
 #endif
 
+#ifdef CONFIG_MEMCG
+struct mem_cgroup *mem_cgroup_driver_get_from_current(void);
+int mem_cgroup_charge_drvmem(struct mem_cgroup *memcg, gfp_t gfp,
+			     unsigned long nr_pages);
+void mem_cgroup_uncharge_drvmem(struct mem_cgroup *memcg, unsigned long nr_pages);
+#else
+static inline struct mem_cgroup *mem_cgroup_driver_get_from_current(void)
+{
+	return NULL;
+}
+
+static inline int mem_cgroup_charge_drvmem(struct mem_cgroup *memcg, gfp_t gfp,
+					   unsigned long nr_pages)
+{
+	return 0;
+}
+
+static inline void mem_cgroup_uncharge_drvmem(struct mem_cgroup *memcg,
+					      unsigned long nr_pages) { }
+#endif
+
 struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
 void memcg_kmem_put_cache(struct kmem_cache *cachep);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 37592dd7ae32..28595c276e6b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6913,6 +6913,55 @@ void mem_cgroup_uncharge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
 	refill_stock(memcg, nr_pages);
 }
 
+/**
+ * mem_cgroup_driver_get_from_current - get memcg from current task for driver
+ *
+ * Return memcg from current task, NULL otherwise.
+ */
+struct mem_cgroup *mem_cgroup_driver_get_from_current(void)
+{
+	struct mem_cgroup *memcg, *ret = NULL;
+
+	if (mem_cgroup_disabled())
+		return NULL;
+
+	rcu_read_lock();
+	memcg = mem_cgroup_from_task(current);
+	if (memcg && memcg != root_mem_cgroup &&
+	    css_tryget_online(&memcg->css))
+		ret = memcg;
+	rcu_read_unlock();
+
+	return ret;
+}
+EXPORT_SYMBOL(mem_cgroup_driver_get_from_current);
+
+/**
+ * mem_cgroup_charge_drvmem - charge a batch of pages for driver
+ * @memcg: memcg to charge
+ * @gfp: gfp flags for charge
+ * @nr_pages: number of pages to charge
+ *
+ * Return 0 on success, an error code otherwise.
+ */
+int mem_cgroup_charge_drvmem(struct mem_cgroup *memcg, gfp_t gfp,
+			     unsigned long nr_pages)
+{
+	return try_charge(memcg, gfp, nr_pages);
+}
+EXPORT_SYMBOL(mem_cgroup_charge_drvmem);
+
+/**
+ * mem_cgroup_uncharge_drvmem - uncharge a batch of pages for driver
+ * @memcg: memcg to uncharge
+ * @nr_pages: number of pages to uncharge
+ */
+void mem_cgroup_uncharge_drvmem(struct mem_cgroup *memcg, unsigned long nr_pages)
+{
+	refill_stock(memcg, nr_pages);
+}
+EXPORT_SYMBOL(mem_cgroup_uncharge_drvmem);
+
 static int __init cgroup_memory(char *s)
 {
 	char *token;
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread


* [PATCH RFC 2/3] mm: memcontrol: record driver memory statistics
  2020-01-13 15:35 ` Qiang Yu
@ 2020-01-13 15:35   ` Qiang Yu
  -1 siblings, 0 replies; 19+ messages in thread
From: Qiang Yu @ 2020-01-13 15:35 UTC (permalink / raw)
  To: linux-mm, cgroups, dri-devel
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Tejun Heo,
	Christian Koenig, Huang Rui, David Airlie, Daniel Vetter,
	Kenny Ho, Qiang Yu

Signed-off-by: Qiang Yu <qiang.yu@amd.com>
---
 include/linux/memcontrol.h | 1 +
 mm/memcontrol.c            | 9 +++++++++
 2 files changed, 10 insertions(+)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d76977943265..6518b4b5ee07 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -36,6 +36,7 @@ enum memcg_stat_item {
 	MEMCG_SOCK,
 	/* XXX: why are these zone and not node counters? */
 	MEMCG_KERNEL_STACK_KB,
+	MEMCG_DRV,
 	MEMCG_NR_STAT,
 };
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 28595c276e6b..cdd3f3401598 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1413,6 +1413,9 @@ static char *memory_stat_format(struct mem_cgroup *memcg)
 	seq_buf_printf(&s, "sock %llu\n",
 		       (u64)memcg_page_state(memcg, MEMCG_SOCK) *
 		       PAGE_SIZE);
+	seq_buf_printf(&s, "driver %llu\n",
+		       (u64)memcg_page_state(memcg, MEMCG_DRV) *
+		       PAGE_SIZE);
 
 	seq_buf_printf(&s, "shmem %llu\n",
 		       (u64)memcg_page_state(memcg, NR_SHMEM) *
@@ -6947,6 +6950,9 @@ EXPORT_SYMBOL(mem_cgroup_driver_get_from_current);
 int mem_cgroup_charge_drvmem(struct mem_cgroup *memcg, gfp_t gfp,
 			     unsigned long nr_pages)
 {
+	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		mod_memcg_state(memcg, MEMCG_DRV, nr_pages);
+
 	return try_charge(memcg, gfp, nr_pages);
 }
 EXPORT_SYMBOL(mem_cgroup_charge_drvmem);
@@ -6958,6 +6964,9 @@ EXPORT_SYMBOL(mem_cgroup_charge_drvmem);
  */
 void mem_cgroup_uncharge_drvmem(struct mem_cgroup *memcg, unsigned long nr_pages)
 {
+	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		mod_memcg_state(memcg, MEMCG_DRV, -nr_pages);
+
 	refill_stock(memcg, nr_pages);
 }
 EXPORT_SYMBOL(mem_cgroup_uncharge_drvmem);
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread


* [PATCH RFC 3/3] drm/ttm: support memcg for ttm_tt
  2020-01-13 15:35 ` Qiang Yu
@ 2020-01-13 15:35   ` Qiang Yu
  -1 siblings, 0 replies; 19+ messages in thread
From: Qiang Yu @ 2020-01-13 15:35 UTC (permalink / raw)
  To: linux-mm, cgroups, dri-devel
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Tejun Heo,
	Christian Koenig, Huang Rui, David Airlie, Daniel Vetter,
	Kenny Ho, Qiang Yu

Charge TTM-allocated system memory to the memory cgroup, which can
limit the memory usage of a group of processes.

The memory is always charged, at creation time, to the control group
of the task that creates the buffer object. For example, when a buffer
is created by process A and exported to process B, and process B then
populates the buffer, the memory is still charged to process A's
memcg; if a buffer is created by process A while in memcg B, and A is
then moved to memcg C before populating the buffer, memcg B is
charged.
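The charging semantics described above — the charge follows the memcg
recorded at creation, not the task that happens to populate the buffer —
can be modeled in a short userspace C sketch (illustrative names, not the
TTM API touched by this patch):

```c
#include <assert.h>
#include <stddef.h>

struct toy_memcg { long charged; };

struct toy_bo {
	struct toy_memcg *memcg;   /* set once, at creation, from the creator */
	long num_pages;
};

/* creation: snapshot the creating task's memcg (as ttm_bo_init_reserved does) */
static void toy_bo_create(struct toy_bo *bo, struct toy_memcg *creator_memcg,
			  long pages)
{
	bo->memcg = creator_memcg;
	bo->num_pages = pages;
}

/* populate: charge the recorded memcg, whoever is populating now */
static void toy_bo_populate(struct toy_bo *bo, struct toy_memcg *populator_memcg)
{
	(void)populator_memcg;             /* deliberately ignored */
	bo->memcg->charged += bo->num_pages;
}
```

Running the exported-buffer scenario through the model shows the creator's
group absorbing the charge even when another group does the populating.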

Signed-off-by: Qiang Yu <qiang.yu@amd.com>
---
 drivers/gpu/drm/ttm/ttm_bo.c         | 10 ++++++++++
 drivers/gpu/drm/ttm/ttm_page_alloc.c | 18 +++++++++++++++++-
 drivers/gpu/drm/ttm/ttm_tt.c         |  3 +++
 include/drm/ttm/ttm_bo_api.h         |  5 +++++
 include/drm/ttm/ttm_tt.h             |  4 ++++
 5 files changed, 39 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 8d91b0428af1..4e64846ee523 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -42,6 +42,7 @@
 #include <linux/module.h>
 #include <linux/atomic.h>
 #include <linux/dma-resv.h>
+#include <linux/memcontrol.h>
 
 static void ttm_bo_global_kobj_release(struct kobject *kobj);
 
@@ -162,6 +163,10 @@ static void ttm_bo_release_list(struct kref *list_kref)
 	if (!ttm_bo_uses_embedded_gem_object(bo))
 		dma_resv_fini(&bo->base._resv);
 	mutex_destroy(&bo->wu_mutex);
+#ifdef CONFIG_MEMCG
+	if (bo->memcg)
+		css_put(&bo->memcg->css);
+#endif
 	bo->destroy(bo);
 	ttm_mem_global_free(&ttm_mem_glob, acc_size);
 }
@@ -1330,6 +1335,11 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
 	}
 	atomic_inc(&ttm_bo_glob.bo_count);
 
+#ifdef CONFIG_MEMCG
+	if (bo->type == ttm_bo_type_device)
+		bo->memcg = mem_cgroup_driver_get_from_current();
+#endif
+
 	/*
 	 * For ttm_bo_type_device buffers, allocate
 	 * address space from the device.
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index b40a4678c296..ecd1831a1d38 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -42,7 +42,7 @@
 #include <linux/seq_file.h> /* for seq_printf */
 #include <linux/slab.h>
 #include <linux/dma-mapping.h>
-
+#include <linux/memcontrol.h>
 #include <linux/atomic.h>
 
 #include <drm/ttm/ttm_bo_driver.h>
@@ -1045,6 +1045,11 @@ ttm_pool_unpopulate_helper(struct ttm_tt *ttm, unsigned mem_count_update)
 	ttm_put_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
 		      ttm->caching_state);
 	ttm->state = tt_unpopulated;
+
+#ifdef CONFIG_MEMCG
+	if (ttm->memcg)
+		mem_cgroup_uncharge_drvmem(ttm->memcg, ttm->num_pages);
+#endif
 }
 
 int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
@@ -1059,6 +1064,17 @@ int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
 	if (ttm_check_under_lowerlimit(mem_glob, ttm->num_pages, ctx))
 		return -ENOMEM;
 
+#ifdef CONFIG_MEMCG
+	if (ttm->memcg) {
+		gfp_t gfp_flags = GFP_USER;
+		if (ttm->page_flags & TTM_PAGE_FLAG_NO_RETRY)
+			gfp_flags |= __GFP_RETRY_MAYFAIL;
+		ret = mem_cgroup_charge_drvmem(ttm->memcg, gfp_flags, ttm->num_pages);
+		if (ret)
+			return ret;
+	}
+#endif
+
 	ret = ttm_get_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
 			    ttm->caching_state);
 	if (unlikely(ret != 0)) {
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index e0e9b4f69db6..1acb153084e1 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -233,6 +233,9 @@ void ttm_tt_init_fields(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
 	ttm->state = tt_unpopulated;
 	ttm->swap_storage = NULL;
 	ttm->sg = bo->sg;
+#ifdef CONFIG_MEMCG
+	ttm->memcg = bo->memcg;
+#endif
 }
 
 int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 65e399d280f7..95a08e81a73e 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -54,6 +54,8 @@ struct ttm_place;
 
 struct ttm_lru_bulk_move;
 
+struct mem_cgroup;
+
 /**
  * struct ttm_bus_placement
  *
@@ -180,6 +182,9 @@ struct ttm_buffer_object {
 	void (*destroy) (struct ttm_buffer_object *);
 	unsigned long num_pages;
 	size_t acc_size;
+#ifdef CONFIG_MEMCG
+	struct mem_cgroup *memcg;
+#endif
 
 	/**
 	* Members not needing protection.
diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
index c0e928abf592..10fb5a557b95 100644
--- a/include/drm/ttm/ttm_tt.h
+++ b/include/drm/ttm/ttm_tt.h
@@ -33,6 +33,7 @@ struct ttm_tt;
 struct ttm_mem_reg;
 struct ttm_buffer_object;
 struct ttm_operation_ctx;
+struct mem_cgroup;
 
 #define TTM_PAGE_FLAG_WRITE           (1 << 3)
 #define TTM_PAGE_FLAG_SWAPPED         (1 << 4)
@@ -116,6 +117,9 @@ struct ttm_tt {
 		tt_unbound,
 		tt_unpopulated,
 	} state;
+#ifdef CONFIG_MEMCG
+	struct mem_cgroup *memcg;
+#endif
 };
 
 /**
-- 
2.17.1



^ permalink raw reply related	[flat|nested] 19+ messages in thread


* Re: [PATCH RFC 3/3] drm/ttm: support memcg for ttm_tt
  2020-01-13 15:35   ` Qiang Yu
@ 2020-01-13 15:55     ` Christian König
  -1 siblings, 0 replies; 19+ messages in thread
From: Christian König @ 2020-01-13 15:55 UTC (permalink / raw)
  To: Qiang Yu, linux-mm, cgroups, dri-devel
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Tejun Heo,
	Huang Rui, David Airlie, Daniel Vetter, Kenny Ho

Am 13.01.20 um 16:35 schrieb Qiang Yu:
> Charge TTM-allocated system memory to the memory cgroup, which
> limits the memory usage of a group of processes.

NAK to the whole approach. This belongs into the GEM or driver layer, 
but not into TTM.

> The memory is always charged, at creation time, to the control
> group of the task that creates the buffer object. For example,
> when a buffer is created by process A and exported to process B,
> and process B then populates the buffer, the memory is still
> charged to process A's memcg; if a buffer is created by process A
> while in memcg B, and A is then moved to memcg C and populates
> the buffer, memcg B is charged.

This is actually the most common use case for graphics applications,
where the X server allocates most of the backing store.

So we need better handling than just accounting the memory to whoever
allocated it first.

Regards,
Christian.

>
> Signed-off-by: Qiang Yu <qiang.yu@amd.com>
> ---
>   drivers/gpu/drm/ttm/ttm_bo.c         | 10 ++++++++++
>   drivers/gpu/drm/ttm/ttm_page_alloc.c | 18 +++++++++++++++++-
>   drivers/gpu/drm/ttm/ttm_tt.c         |  3 +++
>   include/drm/ttm/ttm_bo_api.h         |  5 +++++
>   include/drm/ttm/ttm_tt.h             |  4 ++++
>   5 files changed, 39 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> index 8d91b0428af1..4e64846ee523 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> @@ -42,6 +42,7 @@
>   #include <linux/module.h>
>   #include <linux/atomic.h>
>   #include <linux/dma-resv.h>
> +#include <linux/memcontrol.h>
>   
>   static void ttm_bo_global_kobj_release(struct kobject *kobj);
>   
> @@ -162,6 +163,10 @@ static void ttm_bo_release_list(struct kref *list_kref)
>   	if (!ttm_bo_uses_embedded_gem_object(bo))
>   		dma_resv_fini(&bo->base._resv);
>   	mutex_destroy(&bo->wu_mutex);
> +#ifdef CONFIG_MEMCG
> +	if (bo->memcg)
> +		css_put(&bo->memcg->css);
> +#endif
>   	bo->destroy(bo);
>   	ttm_mem_global_free(&ttm_mem_glob, acc_size);
>   }
> @@ -1330,6 +1335,11 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
>   	}
>   	atomic_inc(&ttm_bo_glob.bo_count);
>   
> +#ifdef CONFIG_MEMCG
> +	if (bo->type == ttm_bo_type_device)
> +		bo->memcg = mem_cgroup_driver_get_from_current();
> +#endif
> +
>   	/*
>   	 * For ttm_bo_type_device buffers, allocate
>   	 * address space from the device.
> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
> index b40a4678c296..ecd1831a1d38 100644
> --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
> @@ -42,7 +42,7 @@
>   #include <linux/seq_file.h> /* for seq_printf */
>   #include <linux/slab.h>
>   #include <linux/dma-mapping.h>
> -
> +#include <linux/memcontrol.h>
>   #include <linux/atomic.h>
>   
>   #include <drm/ttm/ttm_bo_driver.h>
> @@ -1045,6 +1045,11 @@ ttm_pool_unpopulate_helper(struct ttm_tt *ttm, unsigned mem_count_update)
>   	ttm_put_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
>   		      ttm->caching_state);
>   	ttm->state = tt_unpopulated;
> +
> +#ifdef CONFIG_MEMCG
> +	if (ttm->memcg)
> +		mem_cgroup_uncharge_drvmem(ttm->memcg, ttm->num_pages);
> +#endif
>   }
>   
>   int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
> @@ -1059,6 +1064,17 @@ int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
>   	if (ttm_check_under_lowerlimit(mem_glob, ttm->num_pages, ctx))
>   		return -ENOMEM;
>   
> +#ifdef CONFIG_MEMCG
> +	if (ttm->memcg) {
> +		gfp_t gfp_flags = GFP_USER;
> +		if (ttm->page_flags & TTM_PAGE_FLAG_NO_RETRY)
> +			gfp_flags |= __GFP_RETRY_MAYFAIL;
> +		ret = mem_cgroup_charge_drvmem(ttm->memcg, gfp_flags, ttm->num_pages);
> +		if (ret)
> +			return ret;
> +	}
> +#endif
> +
>   	ret = ttm_get_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
>   			    ttm->caching_state);
>   	if (unlikely(ret != 0)) {
> diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
> index e0e9b4f69db6..1acb153084e1 100644
> --- a/drivers/gpu/drm/ttm/ttm_tt.c
> +++ b/drivers/gpu/drm/ttm/ttm_tt.c
> @@ -233,6 +233,9 @@ void ttm_tt_init_fields(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
>   	ttm->state = tt_unpopulated;
>   	ttm->swap_storage = NULL;
>   	ttm->sg = bo->sg;
> +#ifdef CONFIG_MEMCG
> +	ttm->memcg = bo->memcg;
> +#endif
>   }
>   
>   int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index 65e399d280f7..95a08e81a73e 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -54,6 +54,8 @@ struct ttm_place;
>   
>   struct ttm_lru_bulk_move;
>   
> +struct mem_cgroup;
> +
>   /**
>    * struct ttm_bus_placement
>    *
> @@ -180,6 +182,9 @@ struct ttm_buffer_object {
>   	void (*destroy) (struct ttm_buffer_object *);
>   	unsigned long num_pages;
>   	size_t acc_size;
> +#ifdef CONFIG_MEMCG
> +	struct mem_cgroup *memcg;
> +#endif
>   
>   	/**
>   	* Members not needing protection.
> diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
> index c0e928abf592..10fb5a557b95 100644
> --- a/include/drm/ttm/ttm_tt.h
> +++ b/include/drm/ttm/ttm_tt.h
> @@ -33,6 +33,7 @@ struct ttm_tt;
>   struct ttm_mem_reg;
>   struct ttm_buffer_object;
>   struct ttm_operation_ctx;
> +struct mem_cgroup;
>   
>   #define TTM_PAGE_FLAG_WRITE           (1 << 3)
>   #define TTM_PAGE_FLAG_SWAPPED         (1 << 4)
> @@ -116,6 +117,9 @@ struct ttm_tt {
>   		tt_unbound,
>   		tt_unpopulated,
>   	} state;
> +#ifdef CONFIG_MEMCG
> +	struct mem_cgroup *memcg;
> +#endif
>   };
>   
>   /**



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RFC 3/3] drm/ttm: support memcg for ttm_tt
  2020-01-13 15:55     ` Christian König
@ 2020-01-19  2:47       ` Qiang Yu
  -1 siblings, 0 replies; 19+ messages in thread
From: Qiang Yu @ 2020-01-19  2:47 UTC (permalink / raw)
  To: Christian König
  Cc: Qiang Yu, Linux Memory Management List, cgroups, dri-devel,
	David Airlie, Kenny Ho, Michal Hocko, Huang Rui, Johannes Weiner,
	Tejun Heo, Andrew Morton

On Mon, Jan 13, 2020 at 11:56 PM Christian König
<christian.koenig@amd.com> wrote:
>
> Am 13.01.20 um 16:35 schrieb Qiang Yu:
> > Charge TTM allocated system memory to memory cgroup which will
> > limit the memory usage of a group of processes.
>
> NAK to the whole approach. This belongs into the GEM or driver layer,
> but not into TTM.
>
Sorry for responding late.

The GEM layer does not seem a proper place to handle this because:
1. it is not aware of the backing storage (system memory or device
memory) unless we push this information up into GEM, which I think is
not appropriate;
2. system memory allocated by GEM with drm_gem_get_pages() is already
charged to the memcg; it is only the TTM system memory that is not.

Implementing this in a driver like amdgpu is an option. But since the
problem is inside TTM, which does not charge the pages it allocates to
any memcg, wouldn't it be better to solve it in TTM so that all drivers
using it can benefit? Or do you think we should not rely on memcg for
GPU system memory limitation at all?

> > The memory is always charged, at creation time, to the control
> > group of the task that creates the buffer object. For example,
> > when a buffer is created by process A and exported to process B,
> > and process B then populates the buffer, the memory is still
> > charged to process A's memcg; if a buffer is created by process A
> > while in memcg B, and A is then moved to memcg C and populates
> > the buffer, memcg B is charged.
>
> This is actually the most common use case for graphics applications,
> where the X server allocates most of the backing store.
>
> So we need better handling than just accounting the memory to whoever
> allocated it first.
>
Do you mean applications that draw through the DRI2 and X11 protocols?
I think it is still reasonable to charge the X server for that memory,
because the X server allocates the buffer and shares it with the
application; that is the nature of its design and implementation. With
DRI3, the buffer is allocated by the application itself, which also
suits this approach.

Regards,
Qiang

> Regards,
> Christian.
>
> >
> > Signed-off-by: Qiang Yu <qiang.yu@amd.com>
> > ---
> >   drivers/gpu/drm/ttm/ttm_bo.c         | 10 ++++++++++
> >   drivers/gpu/drm/ttm/ttm_page_alloc.c | 18 +++++++++++++++++-
> >   drivers/gpu/drm/ttm/ttm_tt.c         |  3 +++
> >   include/drm/ttm/ttm_bo_api.h         |  5 +++++
> >   include/drm/ttm/ttm_tt.h             |  4 ++++
> >   5 files changed, 39 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
> > index 8d91b0428af1..4e64846ee523 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo.c
> > @@ -42,6 +42,7 @@
> >   #include <linux/module.h>
> >   #include <linux/atomic.h>
> >   #include <linux/dma-resv.h>
> > +#include <linux/memcontrol.h>
> >
> >   static void ttm_bo_global_kobj_release(struct kobject *kobj);
> >
> > @@ -162,6 +163,10 @@ static void ttm_bo_release_list(struct kref *list_kref)
> >       if (!ttm_bo_uses_embedded_gem_object(bo))
> >               dma_resv_fini(&bo->base._resv);
> >       mutex_destroy(&bo->wu_mutex);
> > +#ifdef CONFIG_MEMCG
> > +     if (bo->memcg)
> > +             css_put(&bo->memcg->css);
> > +#endif
> >       bo->destroy(bo);
> >       ttm_mem_global_free(&ttm_mem_glob, acc_size);
> >   }
> > @@ -1330,6 +1335,11 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
> >       }
> >       atomic_inc(&ttm_bo_glob.bo_count);
> >
> > +#ifdef CONFIG_MEMCG
> > +     if (bo->type == ttm_bo_type_device)
> > +             bo->memcg = mem_cgroup_driver_get_from_current();
> > +#endif
> > +
> >       /*
> >        * For ttm_bo_type_device buffers, allocate
> >        * address space from the device.
> > diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
> > index b40a4678c296..ecd1831a1d38 100644
> > --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
> > +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
> > @@ -42,7 +42,7 @@
> >   #include <linux/seq_file.h> /* for seq_printf */
> >   #include <linux/slab.h>
> >   #include <linux/dma-mapping.h>
> > -
> > +#include <linux/memcontrol.h>
> >   #include <linux/atomic.h>
> >
> >   #include <drm/ttm/ttm_bo_driver.h>
> > @@ -1045,6 +1045,11 @@ ttm_pool_unpopulate_helper(struct ttm_tt *ttm, unsigned mem_count_update)
> >       ttm_put_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
> >                     ttm->caching_state);
> >       ttm->state = tt_unpopulated;
> > +
> > +#ifdef CONFIG_MEMCG
> > +     if (ttm->memcg)
> > +             mem_cgroup_uncharge_drvmem(ttm->memcg, ttm->num_pages);
> > +#endif
> >   }
> >
> >   int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
> > @@ -1059,6 +1064,17 @@ int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
> >       if (ttm_check_under_lowerlimit(mem_glob, ttm->num_pages, ctx))
> >               return -ENOMEM;
> >
> > +#ifdef CONFIG_MEMCG
> > +     if (ttm->memcg) {
> > +             gfp_t gfp_flags = GFP_USER;
> > +             if (ttm->page_flags & TTM_PAGE_FLAG_NO_RETRY)
> > +                     gfp_flags |= __GFP_RETRY_MAYFAIL;
> > +             ret = mem_cgroup_charge_drvmem(ttm->memcg, gfp_flags, ttm->num_pages);
> > +             if (ret)
> > +                     return ret;
> > +     }
> > +#endif
> > +
> >       ret = ttm_get_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
> >                           ttm->caching_state);
> >       if (unlikely(ret != 0)) {
> > diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
> > index e0e9b4f69db6..1acb153084e1 100644
> > --- a/drivers/gpu/drm/ttm/ttm_tt.c
> > +++ b/drivers/gpu/drm/ttm/ttm_tt.c
> > @@ -233,6 +233,9 @@ void ttm_tt_init_fields(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
> >       ttm->state = tt_unpopulated;
> >       ttm->swap_storage = NULL;
> >       ttm->sg = bo->sg;
> > +#ifdef CONFIG_MEMCG
> > +     ttm->memcg = bo->memcg;
> > +#endif
> >   }
> >
> >   int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 65e399d280f7..95a08e81a73e 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -54,6 +54,8 @@ struct ttm_place;
> >
> >   struct ttm_lru_bulk_move;
> >
> > +struct mem_cgroup;
> > +
> >   /**
> >    * struct ttm_bus_placement
> >    *
> > @@ -180,6 +182,9 @@ struct ttm_buffer_object {
> >       void (*destroy) (struct ttm_buffer_object *);
> >       unsigned long num_pages;
> >       size_t acc_size;
> > +#ifdef CONFIG_MEMCG
> > +     struct mem_cgroup *memcg;
> > +#endif
> >
> >       /**
> >       * Members not needing protection.
> > diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
> > index c0e928abf592..10fb5a557b95 100644
> > --- a/include/drm/ttm/ttm_tt.h
> > +++ b/include/drm/ttm/ttm_tt.h
> > @@ -33,6 +33,7 @@ struct ttm_tt;
> >   struct ttm_mem_reg;
> >   struct ttm_buffer_object;
> >   struct ttm_operation_ctx;
> > +struct mem_cgroup;
> >
> >   #define TTM_PAGE_FLAG_WRITE           (1 << 3)
> >   #define TTM_PAGE_FLAG_SWAPPED         (1 << 4)
> > @@ -116,6 +117,9 @@ struct ttm_tt {
> >               tt_unbound,
> >               tt_unpopulated,
> >       } state;
> > +#ifdef CONFIG_MEMCG
> > +     struct mem_cgroup *memcg;
> > +#endif
> >   };
> >
> >   /**
>


^ permalink raw reply	[flat|nested] 19+ messages in thread

> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 65e399d280f7..95a08e81a73e 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -54,6 +54,8 @@ struct ttm_place;
> >
> >   struct ttm_lru_bulk_move;
> >
> > +struct mem_cgroup;
> > +
> >   /**
> >    * struct ttm_bus_placement
> >    *
> > @@ -180,6 +182,9 @@ struct ttm_buffer_object {
> >       void (*destroy) (struct ttm_buffer_object *);
> >       unsigned long num_pages;
> >       size_t acc_size;
> > +#ifdef CONFIG_MEMCG
> > +     struct mem_cgroup *memcg;
> > +#endif
> >
> >       /**
> >       * Members not needing protection.
> > diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
> > index c0e928abf592..10fb5a557b95 100644
> > --- a/include/drm/ttm/ttm_tt.h
> > +++ b/include/drm/ttm/ttm_tt.h
> > @@ -33,6 +33,7 @@ struct ttm_tt;
> >   struct ttm_mem_reg;
> >   struct ttm_buffer_object;
> >   struct ttm_operation_ctx;
> > +struct mem_cgroup;
> >
> >   #define TTM_PAGE_FLAG_WRITE           (1 << 3)
> >   #define TTM_PAGE_FLAG_SWAPPED         (1 << 4)
> > @@ -116,6 +117,9 @@ struct ttm_tt {
> >               tt_unbound,
> >               tt_unpopulated,
> >       } state;
> > +#ifdef CONFIG_MEMCG
> > +     struct mem_cgroup *memcg;
> > +#endif
> >   };
> >
> >   /**
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RFC 3/3] drm/ttm: support memcg for ttm_tt
  2020-01-19  2:47       ` Qiang Yu
@ 2020-01-19 13:03         ` Christian König
  0 siblings, 0 replies; 19+ messages in thread
From: Christian König @ 2020-01-19 13:03 UTC (permalink / raw)
  To: Qiang Yu
  Cc: Qiang Yu, Linux Memory Management List, cgroups, dri-devel,
	David Airlie, Kenny Ho, Michal Hocko, Huang Rui, Johannes Weiner,
	Tejun Heo, Andrew Morton

Am 19.01.20 um 03:47 schrieb Qiang Yu:
> On Mon, Jan 13, 2020 at 11:56 PM Christian König
> <christian.koenig@amd.com> wrote:
>> Am 13.01.20 um 16:35 schrieb Qiang Yu:
>>> Charge TTM allocated system memory to memory cgroup which will
>>> limit the memory usage of a group of processes.
>> NAK to the whole approach. This belongs into the GEM or driver layer,
>> but not into TTM.
>>
> Sorry for responding late.
>
> The GEM layer does not seem to be the proper place to handle this because:
> 1. it is not aware of the backing storage (system memory or device
> memory) unless we push that information up to GEM, which I think is
> not appropriate
> 2. system memory allocated by GEM with drm_gem_get_pages() is already
> charged to memcg; it is only the TTM system memory that goes uncharged

The key point is that we already discussed this on the mailing list and 
GEM was agreed on to be the right place for this.

That's the reason why the Intel developers already proposed a way to 
expose the buffer location in GEM.

Please sync up with Kenny who is leading the development efforts and 
with the Intel developers before warming up an old discussion again.

Adding that to TTM is an absolute no-go from my maintainer's perspective.

>
> Implementing this in a driver like amdgpu is an option. But since the
> problem is inside TTM, which does not charge the pages it allocates to
> memcg, wouldn't it be better to solve it in TTM so that all drivers
> using it can benefit? Or do you think we should not rely on memcg to
> limit GPU system memory at all?
>
>>> The memory is always charged to the control group of the task that
>>> created the buffer object, at creation time. For example, when a
>>> buffer is created by process A and exported to process B, and
>>> process B then populates the buffer, the memory is still charged
>>> to process A's memcg; if a buffer is created by process A while in
>>> memcg B, and A then moves to memcg C before populating the buffer,
>>> the charge still goes to memcg B.
>> This is actually the most common use case for graphics application where
>> the X server allocates most of the backing store.
>>
>> So we need a better handling than just accounting the memory to whoever
>> allocated it first.
>>
> You mean applications that draw through the DRI2 and X11 protocols? I
> think it is still reasonable to charge the X server for that memory,
> because the X server allocates the buffers and shares them with
> applications by design. With DRI3, the buffers are allocated by the
> application, which also fits this approach.

That is way too simplistic.

Again, we already discussed this and the agreed compromise is to charge
the application that is using the memory, not whoever allocated it.

So you need to add the charge on importing a buffer and not just when it 
is created.

Regards,
Christian.

>
> Regards,
> Qiang
>
>> Regards,
>> Christian.
>>
>>> Signed-off-by: Qiang Yu <qiang.yu@amd.com>
>>> ---
>>>    drivers/gpu/drm/ttm/ttm_bo.c         | 10 ++++++++++
>>>    drivers/gpu/drm/ttm/ttm_page_alloc.c | 18 +++++++++++++++++-
>>>    drivers/gpu/drm/ttm/ttm_tt.c         |  3 +++
>>>    include/drm/ttm/ttm_bo_api.h         |  5 +++++
>>>    include/drm/ttm/ttm_tt.h             |  4 ++++
>>>    5 files changed, 39 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
>>> index 8d91b0428af1..4e64846ee523 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>> @@ -42,6 +42,7 @@
>>>    #include <linux/module.h>
>>>    #include <linux/atomic.h>
>>>    #include <linux/dma-resv.h>
>>> +#include <linux/memcontrol.h>
>>>
>>>    static void ttm_bo_global_kobj_release(struct kobject *kobj);
>>>
>>> @@ -162,6 +163,10 @@ static void ttm_bo_release_list(struct kref *list_kref)
>>>        if (!ttm_bo_uses_embedded_gem_object(bo))
>>>                dma_resv_fini(&bo->base._resv);
>>>        mutex_destroy(&bo->wu_mutex);
>>> +#ifdef CONFIG_MEMCG
>>> +     if (bo->memcg)
>>> +             css_put(&bo->memcg->css);
>>> +#endif
>>>        bo->destroy(bo);
>>>        ttm_mem_global_free(&ttm_mem_glob, acc_size);
>>>    }
>>> @@ -1330,6 +1335,11 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
>>>        }
>>>        atomic_inc(&ttm_bo_glob.bo_count);
>>>
>>> +#ifdef CONFIG_MEMCG
>>> +     if (bo->type == ttm_bo_type_device)
>>> +             bo->memcg = mem_cgroup_driver_get_from_current();
>>> +#endif
>>> +
>>>        /*
>>>         * For ttm_bo_type_device buffers, allocate
>>>         * address space from the device.
>>> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
>>> index b40a4678c296..ecd1831a1d38 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
>>> @@ -42,7 +42,7 @@
>>>    #include <linux/seq_file.h> /* for seq_printf */
>>>    #include <linux/slab.h>
>>>    #include <linux/dma-mapping.h>
>>> -
>>> +#include <linux/memcontrol.h>
>>>    #include <linux/atomic.h>
>>>
>>>    #include <drm/ttm/ttm_bo_driver.h>
>>> @@ -1045,6 +1045,11 @@ ttm_pool_unpopulate_helper(struct ttm_tt *ttm, unsigned mem_count_update)
>>>        ttm_put_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
>>>                      ttm->caching_state);
>>>        ttm->state = tt_unpopulated;
>>> +
>>> +#ifdef CONFIG_MEMCG
>>> +     if (ttm->memcg)
>>> +             mem_cgroup_uncharge_drvmem(ttm->memcg, ttm->num_pages);
>>> +#endif
>>>    }
>>>
>>>    int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
>>> @@ -1059,6 +1064,17 @@ int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
>>>        if (ttm_check_under_lowerlimit(mem_glob, ttm->num_pages, ctx))
>>>                return -ENOMEM;
>>>
>>> +#ifdef CONFIG_MEMCG
>>> +     if (ttm->memcg) {
>>> +             gfp_t gfp_flags = GFP_USER;
>>> +             if (ttm->page_flags & TTM_PAGE_FLAG_NO_RETRY)
>>> +                     gfp_flags |= __GFP_RETRY_MAYFAIL;
>>> +             ret = mem_cgroup_charge_drvmem(ttm->memcg, gfp_flags, ttm->num_pages);
>>> +             if (ret)
>>> +                     return ret;
>>> +     }
>>> +#endif
>>> +
>>>        ret = ttm_get_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
>>>                            ttm->caching_state);
>>>        if (unlikely(ret != 0)) {
>>> diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
>>> index e0e9b4f69db6..1acb153084e1 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_tt.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_tt.c
>>> @@ -233,6 +233,9 @@ void ttm_tt_init_fields(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
>>>        ttm->state = tt_unpopulated;
>>>        ttm->swap_storage = NULL;
>>>        ttm->sg = bo->sg;
>>> +#ifdef CONFIG_MEMCG
>>> +     ttm->memcg = bo->memcg;
>>> +#endif
>>>    }
>>>
>>>    int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>> index 65e399d280f7..95a08e81a73e 100644
>>> --- a/include/drm/ttm/ttm_bo_api.h
>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>> @@ -54,6 +54,8 @@ struct ttm_place;
>>>
>>>    struct ttm_lru_bulk_move;
>>>
>>> +struct mem_cgroup;
>>> +
>>>    /**
>>>     * struct ttm_bus_placement
>>>     *
>>> @@ -180,6 +182,9 @@ struct ttm_buffer_object {
>>>        void (*destroy) (struct ttm_buffer_object *);
>>>        unsigned long num_pages;
>>>        size_t acc_size;
>>> +#ifdef CONFIG_MEMCG
>>> +     struct mem_cgroup *memcg;
>>> +#endif
>>>
>>>        /**
>>>        * Members not needing protection.
>>> diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
>>> index c0e928abf592..10fb5a557b95 100644
>>> --- a/include/drm/ttm/ttm_tt.h
>>> +++ b/include/drm/ttm/ttm_tt.h
>>> @@ -33,6 +33,7 @@ struct ttm_tt;
>>>    struct ttm_mem_reg;
>>>    struct ttm_buffer_object;
>>>    struct ttm_operation_ctx;
>>> +struct mem_cgroup;
>>>
>>>    #define TTM_PAGE_FLAG_WRITE           (1 << 3)
>>>    #define TTM_PAGE_FLAG_SWAPPED         (1 << 4)
>>> @@ -116,6 +117,9 @@ struct ttm_tt {
>>>                tt_unbound,
>>>                tt_unpopulated,
>>>        } state;
>>> +#ifdef CONFIG_MEMCG
>>> +     struct mem_cgroup *memcg;
>>> +#endif
>>>    };
>>>
>>>    /**
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel



^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH RFC 3/3] drm/ttm: support memcg for ttm_tt
@ 2020-01-19 13:03         ` Christian König
  0 siblings, 0 replies; 19+ messages in thread
From: Christian König @ 2020-01-19 13:03 UTC (permalink / raw)
  To: Qiang Yu
  Cc: Qiang Yu, Linux Memory Management List,
	cgroups-u79uwXL29TY76Z2rM5mHXA, dri-devel, David Airlie,
	Kenny Ho, Michal Hocko, Huang Rui, Johannes Weiner, Tejun Heo,
	Andrew Morton

Am 19.01.20 um 03:47 schrieb Qiang Yu:
> On Mon, Jan 13, 2020 at 11:56 PM Christian König
> <christian.koenig-5C7GfCeVMHo@public.gmane.org> wrote:
>> Am 13.01.20 um 16:35 schrieb Qiang Yu:
>>> Charge TTM allocated system memory to memory cgroup which will
>>> limit the memory usage of a group of processes.
>> NAK to the whole approach. This belongs into the GEM or driver layer,
>> but not into TTM.
>>
> Sorry for responding late.
>
> GEM layer seems not a proper place to handle this as:
> 1. it is not aware of the back storage (system mem or device mem) unless
> we add this information up to GEM which I think is not appropriate
> 2. system memory allocated by GEM with drm_gem_get_pages() is already
> charged to memcg, it's only the ttm system memory not charged to memcg

The key point is that we already discussed this on the mailing list and 
GEM was agreed on to be the right place for this.

That's the reason why the Intel developers already proposed a way to 
expose the buffer location in GEM.

Please sync up with Kenny who is leading the development efforts and 
with the Intel developers before warming up an old discussion again.

Adding that to TTM is an absolute no-go from my maintainers perspective.

>
> Implementing this in a driver like amdgpu is an option. But since the
> problem is that TTM does not charge the pages it allocates to memcg,
> wouldn't it be better to solve it in TTM so that all drivers using it can
> benefit? Or do you think we should not rely on memcg for GPU system
> memory limitation at all?
>
>>> The memory is always charged to the memory control group of the task
>>> which created the buffer object, at creation time. For example, when
>>> a buffer is created by process A and exported to process B, and then
>>> process B populates this buffer, the memory is still charged to
>>> process A's memcg; if a buffer is created by process A while A is in
>>> memcg B, and A is then moved to memcg C before populating this
>>> buffer, the charge still goes to memcg B.
>> This is actually the most common use case for graphics applications,
>> where the X server allocates most of the backing store.
>>
>> So we need better handling than just accounting the memory to whoever
>> allocated it first.
>>
> You mean applications that draw via DRI2 and the X11 protocol? I think it
> is still reasonable to charge the X server for the memory, because the X
> server allocating the buffer and sharing it with the application is its
> design and implementation nature. With DRI3, the buffer is allocated by
> the application, which also suits this approach.

That is way too simplistic.

Again, we already discussed this, and the agreed compromise is to charge 
the application which is using the memory, not the one who allocated it.

So you need to add the charge on importing a buffer and not just when it 
is created.
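
The difference between the two policies can be sketched with a small
user-space model (all names here are illustrative stand-ins, not the
actual kernel memcg or TTM API):

```c
#include <assert.h>

/*
 * Toy model of the two accounting policies under discussion.
 * "cgroup" and "buffer" are stand-ins, not the kernel's
 * struct mem_cgroup or struct ttm_buffer_object.
 */

struct cgroup { long charged_pages; };

struct buffer {
    struct cgroup *memcg;   /* group currently holding the charge */
    long num_pages;
};

/* Policy in the patch: charge whoever creates the buffer. */
static void buffer_create(struct buffer *bo, struct cgroup *creator,
                          long pages)
{
    bo->memcg = creator;
    bo->num_pages = pages;
    creator->charged_pages += pages;
}

/* Agreed compromise: importing transfers the charge to the process
 * actually using the memory, so an X server that allocates backing
 * store on behalf of a client does not keep paying for it. */
static void buffer_import(struct buffer *bo, struct cgroup *importer)
{
    bo->memcg->charged_pages -= bo->num_pages;
    importer->charged_pages += bo->num_pages;
    bo->memcg = importer;
}
```

With only buffer_create(), a buffer allocated by the X server and
exported to a client stays charged to the X server forever; the transfer
in buffer_import() moves the charge to the importer at import time.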

Regards,
Christian.

>
> Regards,
> Qiang
>
>> Regards,
>> Christian.
>>
>>> Signed-off-by: Qiang Yu <qiang.yu-5C7GfCeVMHo@public.gmane.org>
>>> ---
>>>    drivers/gpu/drm/ttm/ttm_bo.c         | 10 ++++++++++
>>>    drivers/gpu/drm/ttm/ttm_page_alloc.c | 18 +++++++++++++++++-
>>>    drivers/gpu/drm/ttm/ttm_tt.c         |  3 +++
>>>    include/drm/ttm/ttm_bo_api.h         |  5 +++++
>>>    include/drm/ttm/ttm_tt.h             |  4 ++++
>>>    5 files changed, 39 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
>>> index 8d91b0428af1..4e64846ee523 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>> @@ -42,6 +42,7 @@
>>>    #include <linux/module.h>
>>>    #include <linux/atomic.h>
>>>    #include <linux/dma-resv.h>
>>> +#include <linux/memcontrol.h>
>>>
>>>    static void ttm_bo_global_kobj_release(struct kobject *kobj);
>>>
>>> @@ -162,6 +163,10 @@ static void ttm_bo_release_list(struct kref *list_kref)
>>>        if (!ttm_bo_uses_embedded_gem_object(bo))
>>>                dma_resv_fini(&bo->base._resv);
>>>        mutex_destroy(&bo->wu_mutex);
>>> +#ifdef CONFIG_MEMCG
>>> +     if (bo->memcg)
>>> +             css_put(&bo->memcg->css);
>>> +#endif
>>>        bo->destroy(bo);
>>>        ttm_mem_global_free(&ttm_mem_glob, acc_size);
>>>    }
>>> @@ -1330,6 +1335,11 @@ int ttm_bo_init_reserved(struct ttm_bo_device *bdev,
>>>        }
>>>        atomic_inc(&ttm_bo_glob.bo_count);
>>>
>>> +#ifdef CONFIG_MEMCG
>>> +     if (bo->type == ttm_bo_type_device)
>>> +             bo->memcg = mem_cgroup_driver_get_from_current();
>>> +#endif
>>> +
>>>        /*
>>>         * For ttm_bo_type_device buffers, allocate
>>>         * address space from the device.
>>> diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
>>> index b40a4678c296..ecd1831a1d38 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
>>> @@ -42,7 +42,7 @@
>>>    #include <linux/seq_file.h> /* for seq_printf */
>>>    #include <linux/slab.h>
>>>    #include <linux/dma-mapping.h>
>>> -
>>> +#include <linux/memcontrol.h>
>>>    #include <linux/atomic.h>
>>>
>>>    #include <drm/ttm/ttm_bo_driver.h>
>>> @@ -1045,6 +1045,11 @@ ttm_pool_unpopulate_helper(struct ttm_tt *ttm, unsigned mem_count_update)
>>>        ttm_put_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
>>>                      ttm->caching_state);
>>>        ttm->state = tt_unpopulated;
>>> +
>>> +#ifdef CONFIG_MEMCG
>>> +     if (ttm->memcg)
>>> +             mem_cgroup_uncharge_drvmem(ttm->memcg, ttm->num_pages);
>>> +#endif
>>>    }
>>>
>>>    int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
>>> @@ -1059,6 +1064,17 @@ int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
>>>        if (ttm_check_under_lowerlimit(mem_glob, ttm->num_pages, ctx))
>>>                return -ENOMEM;
>>>
>>> +#ifdef CONFIG_MEMCG
>>> +     if (ttm->memcg) {
>>> +             gfp_t gfp_flags = GFP_USER;
>>> +             if (ttm->page_flags & TTM_PAGE_FLAG_NO_RETRY)
>>> +                     gfp_flags |= __GFP_RETRY_MAYFAIL;
>>> +             ret = mem_cgroup_charge_drvmem(ttm->memcg, gfp_flags, ttm->num_pages);
>>> +             if (ret)
>>> +                     return ret;
>>> +     }
>>> +#endif
>>> +
>>>        ret = ttm_get_pages(ttm->pages, ttm->num_pages, ttm->page_flags,
>>>                            ttm->caching_state);
>>>        if (unlikely(ret != 0)) {
>>> diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
>>> index e0e9b4f69db6..1acb153084e1 100644
>>> --- a/drivers/gpu/drm/ttm/ttm_tt.c
>>> +++ b/drivers/gpu/drm/ttm/ttm_tt.c
>>> @@ -233,6 +233,9 @@ void ttm_tt_init_fields(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
>>>        ttm->state = tt_unpopulated;
>>>        ttm->swap_storage = NULL;
>>>        ttm->sg = bo->sg;
>>> +#ifdef CONFIG_MEMCG
>>> +     ttm->memcg = bo->memcg;
>>> +#endif
>>>    }
>>>
>>>    int ttm_tt_init(struct ttm_tt *ttm, struct ttm_buffer_object *bo,
>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>> index 65e399d280f7..95a08e81a73e 100644
>>> --- a/include/drm/ttm/ttm_bo_api.h
>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>> @@ -54,6 +54,8 @@ struct ttm_place;
>>>
>>>    struct ttm_lru_bulk_move;
>>>
>>> +struct mem_cgroup;
>>> +
>>>    /**
>>>     * struct ttm_bus_placement
>>>     *
>>> @@ -180,6 +182,9 @@ struct ttm_buffer_object {
>>>        void (*destroy) (struct ttm_buffer_object *);
>>>        unsigned long num_pages;
>>>        size_t acc_size;
>>> +#ifdef CONFIG_MEMCG
>>> +     struct mem_cgroup *memcg;
>>> +#endif
>>>
>>>        /**
>>>        * Members not needing protection.
>>> diff --git a/include/drm/ttm/ttm_tt.h b/include/drm/ttm/ttm_tt.h
>>> index c0e928abf592..10fb5a557b95 100644
>>> --- a/include/drm/ttm/ttm_tt.h
>>> +++ b/include/drm/ttm/ttm_tt.h
>>> @@ -33,6 +33,7 @@ struct ttm_tt;
>>>    struct ttm_mem_reg;
>>>    struct ttm_buffer_object;
>>>    struct ttm_operation_ctx;
>>> +struct mem_cgroup;
>>>
>>>    #define TTM_PAGE_FLAG_WRITE           (1 << 3)
>>>    #define TTM_PAGE_FLAG_SWAPPED         (1 << 4)
>>> @@ -116,6 +117,9 @@ struct ttm_tt {
>>>                tt_unbound,
>>>                tt_unpopulated,
>>>        } state;
>>> +#ifdef CONFIG_MEMCG
>>> +     struct mem_cgroup *memcg;
>>> +#endif
>>>    };
>>>
>>>    /**

end of thread, other threads:[~2020-01-19 13:04 UTC | newest]

Thread overview: 19+ messages
2020-01-13 15:35 [PATCH RFC 0/3] mm/memcontrol drm/ttm: charge ttm buffer backed by system memory Qiang Yu
2020-01-13 15:35 ` [PATCH RFC 1/3] mm: memcontrol: add mem_cgroup_(un)charge_drvmem Qiang Yu
2020-01-13 15:35 ` [PATCH RFC 2/3] mm: memcontrol: record driver memory statistics Qiang Yu
2020-01-13 15:35 ` [PATCH RFC 3/3] drm/ttm: support memcg for ttm_tt Qiang Yu
2020-01-13 15:55   ` Christian König
2020-01-19  2:47     ` Qiang Yu
2020-01-19 13:03       ` Christian König