linux-mm.kvack.org archive mirror
* [PATCH v2 0/6] mm: memcg: kmem API cleanup
@ 2020-01-09 20:26 Roman Gushchin
  2020-01-09 20:26 ` [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments Roman Gushchin
                   ` (5 more replies)
  0 siblings, 6 replies; 21+ messages in thread
From: Roman Gushchin @ 2020-01-09 20:26 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Johannes Weiner, Shakeel Butt, Vladimir Davydov,
	linux-kernel, kernel-team, Roman Gushchin

This patchset aims to clean up the kernel memory charging API.
It doesn't bring any functional changes: it just removes unused
arguments, renames some functions, and fixes some comments.

Currently it's not obvious which functions are most basic
(memcg_kmem_(un)charge_memcg()) and which are based on them
(memcg_kmem_(un)charge()). The patchset renames these functions
and removes unused arguments:

TL;DR:
was:
  memcg_kmem_charge_memcg(page, gfp, order, memcg)
  memcg_kmem_uncharge_memcg(memcg, nr_pages)
  memcg_kmem_charge(page, gfp, order)
  memcg_kmem_uncharge(page, order)

now:
  memcg_kmem_charge(memcg, gfp, nr_pages)
  memcg_kmem_uncharge(memcg, nr_pages)
  memcg_kmem_charge_page(page, gfp, order)
  memcg_kmem_uncharge_page(page, order)


v2:
1) Dropped the first patch, which was incorrect. Thanks, Andrew!


Roman Gushchin (6):
  mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments
  mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments
  mm: kmem: rename memcg_kmem_(un)charge() into
    memcg_kmem_(un)charge_page()
  mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg()
  mm: memcg/slab: cache page number in memcg_(un)charge_slab()
  mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to
    __memcg_kmem_(un)charge()

 fs/pipe.c                  |  2 +-
 include/linux/memcontrol.h | 42 +++++++++++++++--------------
 kernel/fork.c              |  9 ++++---
 mm/memcontrol.c            | 54 ++++++++++++++++++--------------------
 mm/page_alloc.c            |  4 +--
 mm/slab.h                  | 22 +++++++++-------
 6 files changed, 68 insertions(+), 65 deletions(-)

-- 
2.21.1



^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments
  2020-01-09 20:26 [PATCH v2 0/6] mm: memcg: kmem API cleanup Roman Gushchin
@ 2020-01-09 20:26 ` Roman Gushchin
  2020-01-11  0:23   ` Shakeel Butt
  2020-01-16 16:42   ` Johannes Weiner
  2020-01-09 20:26 ` [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments Roman Gushchin
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 21+ messages in thread
From: Roman Gushchin @ 2020-01-09 20:26 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Johannes Weiner, Shakeel Butt, Vladimir Davydov,
	linux-kernel, kernel-team, Roman Gushchin

The first argument of memcg_kmem_charge_memcg() and
__memcg_kmem_charge_memcg() is the page pointer and it's not used.
Let's drop it.

The memcg pointer is passed as the last argument. Move it to
the first position for consistency with other memcg functions,
e.g. __memcg_kmem_uncharge_memcg() or try_charge().

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/memcontrol.h | 9 ++++-----
 mm/memcontrol.c            | 8 +++-----
 mm/slab.h                  | 2 +-
 3 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a7a0a1a5c8d5..c954209fd685 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1364,8 +1364,7 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep);
 #ifdef CONFIG_MEMCG_KMEM
 int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
 void __memcg_kmem_uncharge(struct page *page, int order);
-int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
-			      struct mem_cgroup *memcg);
+int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order);
 void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
 				 unsigned int nr_pages);
 
@@ -1402,11 +1401,11 @@ static inline void memcg_kmem_uncharge(struct page *page, int order)
 		__memcg_kmem_uncharge(page, order);
 }
 
-static inline int memcg_kmem_charge_memcg(struct page *page, gfp_t gfp,
-					  int order, struct mem_cgroup *memcg)
+static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
+					  int order)
 {
 	if (memcg_kmem_enabled())
-		return __memcg_kmem_charge_memcg(page, gfp, order, memcg);
+		return __memcg_kmem_charge_memcg(memcg, gfp, order);
 	return 0;
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index c5b5f74cfd4d..6ef38f669923 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2820,15 +2820,13 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep)
 
 /**
  * __memcg_kmem_charge_memcg: charge a kmem page
- * @page: page to charge
+ * @memcg: memory cgroup to charge
  * @gfp: reclaim mode
  * @order: allocation order
- * @memcg: memory cgroup to charge
  *
  * Returns 0 on success, an error code on failure.
  */
-int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
-			    struct mem_cgroup *memcg)
+int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order)
 {
 	unsigned int nr_pages = 1 << order;
 	struct page_counter *counter;
@@ -2874,7 +2872,7 @@ int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
 
 	memcg = get_mem_cgroup_from_current();
 	if (!mem_cgroup_is_root(memcg)) {
-		ret = __memcg_kmem_charge_memcg(page, gfp, order, memcg);
+		ret = __memcg_kmem_charge_memcg(memcg, gfp, order);
 		if (!ret) {
 			page->mem_cgroup = memcg;
 			__SetPageKmemcg(page);
diff --git a/mm/slab.h b/mm/slab.h
index 7e94700aa78c..c4c93e991250 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -365,7 +365,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
 		return 0;
 	}
 
-	ret = memcg_kmem_charge_memcg(page, gfp, order, memcg);
+	ret = memcg_kmem_charge_memcg(memcg, gfp, order);
 	if (ret)
 		goto out;
 
-- 
2.21.1




* [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments
  2020-01-09 20:26 [PATCH v2 0/6] mm: memcg: kmem API cleanup Roman Gushchin
  2020-01-09 20:26 ` [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments Roman Gushchin
@ 2020-01-09 20:26 ` Roman Gushchin
  2020-01-11  0:23   ` Shakeel Butt
  2020-01-16 16:44   ` Johannes Weiner
  2020-01-09 20:26 ` [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page() Roman Gushchin
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 21+ messages in thread
From: Roman Gushchin @ 2020-01-09 20:26 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Johannes Weiner, Shakeel Butt, Vladimir Davydov,
	linux-kernel, kernel-team, Roman Gushchin

Drop the unused page argument and put the memcg pointer first.
This makes the function consistent with its peers:
__memcg_kmem_uncharge_memcg(), memcg_kmem_charge_memcg(), etc.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/memcontrol.h | 4 ++--
 mm/slab.h                  | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index c954209fd685..900a9f884260 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1409,8 +1409,8 @@ static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
 	return 0;
 }
 
-static inline void memcg_kmem_uncharge_memcg(struct page *page, int order,
-					     struct mem_cgroup *memcg)
+static inline void memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
+					     int order)
 {
 	if (memcg_kmem_enabled())
 		__memcg_kmem_uncharge_memcg(memcg, 1 << order);
diff --git a/mm/slab.h b/mm/slab.h
index c4c93e991250..e7da63fb8211 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -395,7 +395,7 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
 	if (likely(!mem_cgroup_is_root(memcg))) {
 		lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
 		mod_lruvec_state(lruvec, cache_vmstat_idx(s), -(1 << order));
-		memcg_kmem_uncharge_memcg(page, order, memcg);
+		memcg_kmem_uncharge_memcg(memcg, order);
 	} else {
 		mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
 				    -(1 << order));
-- 
2.21.1




* [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page()
  2020-01-09 20:26 [PATCH v2 0/6] mm: memcg: kmem API cleanup Roman Gushchin
  2020-01-09 20:26 ` [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments Roman Gushchin
  2020-01-09 20:26 ` [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments Roman Gushchin
@ 2020-01-09 20:26 ` Roman Gushchin
  2020-01-11  0:25   ` Shakeel Butt
  2020-01-16 16:47   ` Johannes Weiner
  2020-01-09 20:26 ` [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg() Roman Gushchin
                   ` (2 subsequent siblings)
  5 siblings, 2 replies; 21+ messages in thread
From: Roman Gushchin @ 2020-01-09 20:26 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Johannes Weiner, Shakeel Butt, Vladimir Davydov,
	linux-kernel, kernel-team, Roman Gushchin

Rename (__)memcg_kmem_(un)charge() into (__)memcg_kmem_(un)charge_page()
to better reflect what they are actually doing:
1) call __memcg_kmem_(un)charge_memcg() to actually charge or
uncharge the current memcg
2) set or clear the PageKmemcg flag

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 fs/pipe.c                  |  2 +-
 include/linux/memcontrol.h | 23 +++++++++++++----------
 kernel/fork.c              |  9 +++++----
 mm/memcontrol.c            |  8 ++++----
 mm/page_alloc.c            |  4 ++--
 5 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/fs/pipe.c b/fs/pipe.c
index 57502c3c0fba..f1851f7ecbd6 100644
--- a/fs/pipe.c
+++ b/fs/pipe.c
@@ -143,7 +143,7 @@ static int anon_pipe_buf_steal(struct pipe_inode_info *pipe,
 	struct page *page = buf->page;
 
 	if (page_count(page) == 1) {
-		memcg_kmem_uncharge(page, 0);
+		memcg_kmem_uncharge_page(page, 0);
 		__SetPageLocked(page);
 		return 0;
 	}
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 900a9f884260..4ee0c345e905 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1362,8 +1362,8 @@ struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
 void memcg_kmem_put_cache(struct kmem_cache *cachep);
 
 #ifdef CONFIG_MEMCG_KMEM
-int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
-void __memcg_kmem_uncharge(struct page *page, int order);
+int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
+void __memcg_kmem_uncharge_page(struct page *page, int order);
 int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order);
 void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
 				 unsigned int nr_pages);
@@ -1388,17 +1388,18 @@ static inline bool memcg_kmem_enabled(void)
 	return static_branch_unlikely(&memcg_kmem_enabled_key);
 }
 
-static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+static inline int memcg_kmem_charge_page(struct page *page, gfp_t gfp,
+					 int order)
 {
 	if (memcg_kmem_enabled())
-		return __memcg_kmem_charge(page, gfp, order);
+		return __memcg_kmem_charge_page(page, gfp, order);
 	return 0;
 }
 
-static inline void memcg_kmem_uncharge(struct page *page, int order)
+static inline void memcg_kmem_uncharge_page(struct page *page, int order)
 {
 	if (memcg_kmem_enabled())
-		__memcg_kmem_uncharge(page, order);
+		__memcg_kmem_uncharge_page(page, order);
 }
 
 static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
@@ -1428,21 +1429,23 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
 
 #else
 
-static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+static inline int memcg_kmem_charge_page(struct page *page, gfp_t gfp,
+					 int order)
 {
 	return 0;
 }
 
-static inline void memcg_kmem_uncharge(struct page *page, int order)
+static inline void memcg_kmem_uncharge_page(struct page *page, int order)
 {
 }
 
-static inline int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+static inline int __memcg_kmem_charge_page(struct page *page, gfp_t gfp,
+					   int order)
 {
 	return 0;
 }
 
-static inline void __memcg_kmem_uncharge(struct page *page, int order)
+static inline void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
 }
 
diff --git a/kernel/fork.c b/kernel/fork.c
index 2508a4f238a3..6e410a189683 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -281,7 +281,7 @@ static inline void free_thread_stack(struct task_struct *tsk)
 					     MEMCG_KERNEL_STACK_KB,
 					     -(int)(PAGE_SIZE / 1024));
 
-			memcg_kmem_uncharge(vm->pages[i], 0);
+			memcg_kmem_uncharge_page(vm->pages[i], 0);
 		}
 
 		for (i = 0; i < NR_CACHED_STACKS; i++) {
@@ -413,12 +413,13 @@ static int memcg_charge_kernel_stack(struct task_struct *tsk)
 
 		for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++) {
 			/*
-			 * If memcg_kmem_charge() fails, page->mem_cgroup
-			 * pointer is NULL, and both memcg_kmem_uncharge()
+			 * If memcg_kmem_charge_page() fails, page->mem_cgroup
+			 * pointer is NULL, and both memcg_kmem_uncharge_page()
 			 * and mod_memcg_page_state() in free_thread_stack()
 			 * will ignore this page. So it's safe.
 			 */
-			ret = memcg_kmem_charge(vm->pages[i], GFP_KERNEL, 0);
+			ret = memcg_kmem_charge_page(vm->pages[i], GFP_KERNEL,
+						     0);
 			if (ret)
 				return ret;
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6ef38f669923..6bdf040e4615 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2855,14 +2855,14 @@ int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order)
 }
 
 /**
- * __memcg_kmem_charge: charge a kmem page to the current memory cgroup
+ * __memcg_kmem_charge_page: charge a kmem page to the current memory cgroup
  * @page: page to charge
  * @gfp: reclaim mode
  * @order: allocation order
  *
  * Returns 0 on success, an error code on failure.
  */
-int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 {
 	struct mem_cgroup *memcg;
 	int ret = 0;
@@ -2898,11 +2898,11 @@ void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
 		page_counter_uncharge(&memcg->memsw, nr_pages);
 }
 /**
- * __memcg_kmem_uncharge: uncharge a kmem page
+ * __memcg_kmem_uncharge_page: uncharge a kmem page
  * @page: page to uncharge
  * @order: allocation order
  */
-void __memcg_kmem_uncharge(struct page *page, int order)
+void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
 	struct mem_cgroup *memcg = page->mem_cgroup;
 	unsigned int nr_pages = 1 << order;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4785a8a2040e..8fca5b806139 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1159,7 +1159,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	if (PageMappingFlags(page))
 		page->mapping = NULL;
 	if (memcg_kmem_enabled() && PageKmemcg(page))
-		__memcg_kmem_uncharge(page, order);
+		__memcg_kmem_uncharge_page(page, order);
 	if (check_free)
 		bad += free_pages_check(page);
 	if (bad)
@@ -4777,7 +4777,7 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid,
 
 out:
 	if (memcg_kmem_enabled() && (gfp_mask & __GFP_ACCOUNT) && page &&
-	    unlikely(__memcg_kmem_charge(page, gfp_mask, order) != 0)) {
+	    unlikely(__memcg_kmem_charge_page(page, gfp_mask, order) != 0)) {
 		__free_pages(page, order);
 		page = NULL;
 	}
-- 
2.21.1




* [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg()
  2020-01-09 20:26 [PATCH v2 0/6] mm: memcg: kmem API cleanup Roman Gushchin
                   ` (2 preceding siblings ...)
  2020-01-09 20:26 ` [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page() Roman Gushchin
@ 2020-01-09 20:26 ` Roman Gushchin
  2020-01-11  0:27   ` Shakeel Butt
  2020-01-16 17:24   ` Johannes Weiner
  2020-01-09 20:26 ` [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab() Roman Gushchin
  2020-01-09 20:26 ` [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge() Roman Gushchin
  5 siblings, 2 replies; 21+ messages in thread
From: Roman Gushchin @ 2020-01-09 20:26 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Johannes Weiner, Shakeel Butt, Vladimir Davydov,
	linux-kernel, kernel-team, Roman Gushchin

These functions charge the given number of kernel pages to the
given memory cgroup. The number doesn't have to be a power of two.
Let's make them take an unsigned int nr_pages argument
instead of the page order.

It makes them consistent with the corresponding uncharge
functions and with functions like mem_cgroup_charge_skmem(memcg, nr_pages).

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/memcontrol.h | 11 ++++++-----
 mm/memcontrol.c            |  8 ++++----
 mm/slab.h                  |  2 +-
 3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4ee0c345e905..851c373edb74 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1364,7 +1364,8 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep);
 #ifdef CONFIG_MEMCG_KMEM
 int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
 void __memcg_kmem_uncharge_page(struct page *page, int order);
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order);
+int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
+			      unsigned int nr_pages);
 void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
 				 unsigned int nr_pages);
 
@@ -1403,18 +1404,18 @@ static inline void memcg_kmem_uncharge_page(struct page *page, int order)
 }
 
 static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
-					  int order)
+					  unsigned int nr_pages)
 {
 	if (memcg_kmem_enabled())
-		return __memcg_kmem_charge_memcg(memcg, gfp, order);
+		return __memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
 	return 0;
 }
 
 static inline void memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
-					     int order)
+					     unsigned int nr_pages)
 {
 	if (memcg_kmem_enabled())
-		__memcg_kmem_uncharge_memcg(memcg, 1 << order);
+		__memcg_kmem_uncharge_memcg(memcg, nr_pages);
 }
 
 /*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6bdf040e4615..8dbfb9fed9d8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2822,13 +2822,13 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep)
  * __memcg_kmem_charge_memcg: charge a kmem page
  * @memcg: memory cgroup to charge
  * @gfp: reclaim mode
- * @order: allocation order
+ * @nr_pages: number of pages to charge
  *
  * Returns 0 on success, an error code on failure.
  */
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp, int order)
+int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
+			      unsigned int nr_pages)
 {
-	unsigned int nr_pages = 1 << order;
 	struct page_counter *counter;
 	int ret;
 
@@ -2872,7 +2872,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 
 	memcg = get_mem_cgroup_from_current();
 	if (!mem_cgroup_is_root(memcg)) {
-		ret = __memcg_kmem_charge_memcg(memcg, gfp, order);
+		ret = __memcg_kmem_charge_memcg(memcg, gfp, 1 << order);
 		if (!ret) {
 			page->mem_cgroup = memcg;
 			__SetPageKmemcg(page);
diff --git a/mm/slab.h b/mm/slab.h
index e7da63fb8211..d96c87a30a9b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -365,7 +365,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
 		return 0;
 	}
 
-	ret = memcg_kmem_charge_memcg(memcg, gfp, order);
+	ret = memcg_kmem_charge_memcg(memcg, gfp, 1 << order);
 	if (ret)
 		goto out;
 
-- 
2.21.1




* [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab()
  2020-01-09 20:26 [PATCH v2 0/6] mm: memcg: kmem API cleanup Roman Gushchin
                   ` (3 preceding siblings ...)
  2020-01-09 20:26 ` [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg() Roman Gushchin
@ 2020-01-09 20:26 ` Roman Gushchin
  2020-01-11  0:28   ` Shakeel Butt
  2020-01-16 17:26   ` Johannes Weiner
  2020-01-09 20:26 ` [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge() Roman Gushchin
  5 siblings, 2 replies; 21+ messages in thread
From: Roman Gushchin @ 2020-01-09 20:26 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Johannes Weiner, Shakeel Butt, Vladimir Davydov,
	linux-kernel, kernel-team, Roman Gushchin

There are many places in memcg_charge_slab() and memcg_uncharge_slab()
which calculate the number of pages to charge, css references to
grab, etc., depending on the order of the slab page.

Let's simplify the code by calculating it once and caching the result
in a local variable.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 mm/slab.h | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index d96c87a30a9b..43f8ce4aa325 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -348,6 +348,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
 					     gfp_t gfp, int order,
 					     struct kmem_cache *s)
 {
+	unsigned int nr_pages = 1 << order;
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 	int ret;
@@ -360,21 +361,21 @@ static __always_inline int memcg_charge_slab(struct page *page,
 
 	if (unlikely(!memcg || mem_cgroup_is_root(memcg))) {
 		mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
-				    (1 << order));
-		percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
+				    nr_pages);
+		percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
 		return 0;
 	}
 
-	ret = memcg_kmem_charge_memcg(memcg, gfp, 1 << order);
+	ret = memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
 	if (ret)
 		goto out;
 
 	lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
-	mod_lruvec_state(lruvec, cache_vmstat_idx(s), 1 << order);
+	mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages);
 
 	/* transer try_charge() page references to kmem_cache */
-	percpu_ref_get_many(&s->memcg_params.refcnt, 1 << order);
-	css_put_many(&memcg->css, 1 << order);
+	percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
+	css_put_many(&memcg->css, nr_pages);
 out:
 	css_put(&memcg->css);
 	return ret;
@@ -387,6 +388,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
 static __always_inline void memcg_uncharge_slab(struct page *page, int order,
 						struct kmem_cache *s)
 {
+	unsigned int nr_pages = 1 << order;
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
@@ -394,15 +396,15 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
 	memcg = READ_ONCE(s->memcg_params.memcg);
 	if (likely(!mem_cgroup_is_root(memcg))) {
 		lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
-		mod_lruvec_state(lruvec, cache_vmstat_idx(s), -(1 << order));
-		memcg_kmem_uncharge_memcg(memcg, order);
+		mod_lruvec_state(lruvec, cache_vmstat_idx(s), -nr_pages);
+		memcg_kmem_uncharge_memcg(memcg, nr_pages);
 	} else {
 		mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
-				    -(1 << order));
+				    -nr_pages);
 	}
 	rcu_read_unlock();
 
-	percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
+	percpu_ref_put_many(&s->memcg_params.refcnt, nr_pages);
 }
 
 extern void slab_init_memcg_params(struct kmem_cache *);
-- 
2.21.1




* [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge()
  2020-01-09 20:26 [PATCH v2 0/6] mm: memcg: kmem API cleanup Roman Gushchin
                   ` (4 preceding siblings ...)
  2020-01-09 20:26 ` [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab() Roman Gushchin
@ 2020-01-09 20:26 ` Roman Gushchin
  2020-01-11  0:32   ` Shakeel Butt
  2020-01-16 17:30   ` Johannes Weiner
  5 siblings, 2 replies; 21+ messages in thread
From: Roman Gushchin @ 2020-01-09 20:26 UTC (permalink / raw)
  To: linux-mm, Andrew Morton
  Cc: Michal Hocko, Johannes Weiner, Shakeel Butt, Vladimir Davydov,
	linux-kernel, kernel-team, Roman Gushchin

Drop the _memcg suffix from the (__)memcg_kmem_(un)charge functions:
the resulting names are shorter and more obvious.

These are the most basic functions which are just (un)charging the
given cgroup with the given amount of pages.

Also fix up the corresponding comments.

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/memcontrol.h | 19 +++++++++---------
 mm/memcontrol.c            | 40 +++++++++++++++++++-------------------
 mm/slab.h                  |  4 ++--
 3 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 851c373edb74..c372bed6be80 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1362,12 +1362,11 @@ struct kmem_cache *memcg_kmem_get_cache(struct kmem_cache *cachep);
 void memcg_kmem_put_cache(struct kmem_cache *cachep);
 
 #ifdef CONFIG_MEMCG_KMEM
+int __memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
+			unsigned int nr_pages);
+void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages);
 int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
 void __memcg_kmem_uncharge_page(struct page *page, int order);
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
-			      unsigned int nr_pages);
-void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
-				 unsigned int nr_pages);
 
 extern struct static_key_false memcg_kmem_enabled_key;
 extern struct workqueue_struct *memcg_kmem_cache_wq;
@@ -1403,19 +1402,19 @@ static inline void memcg_kmem_uncharge_page(struct page *page, int order)
 		__memcg_kmem_uncharge_page(page, order);
 }
 
-static inline int memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
-					  unsigned int nr_pages)
+static inline int memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
+				    unsigned int nr_pages)
 {
 	if (memcg_kmem_enabled())
-		return __memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
+		return __memcg_kmem_charge(memcg, gfp, nr_pages);
 	return 0;
 }
 
-static inline void memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
-					     unsigned int nr_pages)
+static inline void memcg_kmem_uncharge(struct mem_cgroup *memcg,
+				       unsigned int nr_pages)
 {
 	if (memcg_kmem_enabled())
-		__memcg_kmem_uncharge_memcg(memcg, nr_pages);
+		__memcg_kmem_uncharge(memcg, nr_pages);
 }
 
 /*
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8dbfb9fed9d8..8bcd231e6e7b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2819,15 +2819,15 @@ void memcg_kmem_put_cache(struct kmem_cache *cachep)
 }
 
 /**
- * __memcg_kmem_charge_memcg: charge a kmem page
+ * __memcg_kmem_charge: charge a number of kernel pages to a memcg
  * @memcg: memory cgroup to charge
  * @gfp: reclaim mode
  * @nr_pages: number of pages to charge
  *
  * Returns 0 on success, an error code on failure.
  */
-int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
-			      unsigned int nr_pages)
+int __memcg_kmem_charge(struct mem_cgroup *memcg, gfp_t gfp,
+			unsigned int nr_pages)
 {
 	struct page_counter *counter;
 	int ret;
@@ -2854,6 +2854,21 @@ int __memcg_kmem_charge_memcg(struct mem_cgroup *memcg, gfp_t gfp,
 	return 0;
 }
 
+/**
+ * __memcg_kmem_uncharge: uncharge a number of kernel pages from a memcg
+ * @memcg: memcg to uncharge
+ * @nr_pages: number of pages to uncharge
+ */
+void __memcg_kmem_uncharge(struct mem_cgroup *memcg, unsigned int nr_pages)
+{
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		page_counter_uncharge(&memcg->kmem, nr_pages);
+
+	page_counter_uncharge(&memcg->memory, nr_pages);
+	if (do_memsw_account())
+		page_counter_uncharge(&memcg->memsw, nr_pages);
+}
+
 /**
  * __memcg_kmem_charge_page: charge a kmem page to the current memory cgroup
  * @page: page to charge
@@ -2872,7 +2887,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 
 	memcg = get_mem_cgroup_from_current();
 	if (!mem_cgroup_is_root(memcg)) {
-		ret = __memcg_kmem_charge_memcg(memcg, gfp, 1 << order);
+		ret = __memcg_kmem_charge(memcg, gfp, 1 << order);
 		if (!ret) {
 			page->mem_cgroup = memcg;
 			__SetPageKmemcg(page);
@@ -2882,21 +2897,6 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 	return ret;
 }
 
-/**
- * __memcg_kmem_uncharge_memcg: uncharge a kmem page
- * @memcg: memcg to uncharge
- * @nr_pages: number of pages to uncharge
- */
-void __memcg_kmem_uncharge_memcg(struct mem_cgroup *memcg,
-				 unsigned int nr_pages)
-{
-	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
-		page_counter_uncharge(&memcg->kmem, nr_pages);
-
-	page_counter_uncharge(&memcg->memory, nr_pages);
-	if (do_memsw_account())
-		page_counter_uncharge(&memcg->memsw, nr_pages);
-}
 /**
  * __memcg_kmem_uncharge_page: uncharge a kmem page
  * @page: page to uncharge
@@ -2911,7 +2911,7 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
 		return;
 
 	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
-	__memcg_kmem_uncharge_memcg(memcg, nr_pages);
+	__memcg_kmem_uncharge(memcg, nr_pages);
 	page->mem_cgroup = NULL;
 
 	/* slab pages do not have PageKmemcg flag set */
diff --git a/mm/slab.h b/mm/slab.h
index 43f8ce4aa325..207c83ef6e06 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -366,7 +366,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
 		return 0;
 	}
 
-	ret = memcg_kmem_charge_memcg(memcg, gfp, nr_pages);
+	ret = memcg_kmem_charge(memcg, gfp, nr_pages);
 	if (ret)
 		goto out;
 
@@ -397,7 +397,7 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
 	if (likely(!mem_cgroup_is_root(memcg))) {
 		lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
 		mod_lruvec_state(lruvec, cache_vmstat_idx(s), -nr_pages);
-		memcg_kmem_uncharge_memcg(memcg, nr_pages);
+		memcg_kmem_uncharge(memcg, nr_pages);
 	} else {
 		mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
 				    -nr_pages);
-- 
2.21.1




* Re: [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments
  2020-01-09 20:26 ` [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments Roman Gushchin
@ 2020-01-11  0:23   ` Shakeel Butt
  2020-01-16 16:42   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Shakeel Butt @ 2020-01-11  0:23 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Linux MM, Andrew Morton, Michal Hocko, Johannes Weiner,
	Vladimir Davydov, LKML, Kernel Team

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <guro@fb.com> wrote:
>
> The first argument of memcg_kmem_charge_memcg() and
> __memcg_kmem_charge_memcg() is the page pointer and it's not used.
> Let's drop it.
>
> Memcg pointer is passed as the last argument. Move it to
> the first place for consistency with other memcg functions,
> e.g. __memcg_kmem_uncharge_memcg() or try_charge().
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>



* Re: [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments
  2020-01-09 20:26 ` [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments Roman Gushchin
@ 2020-01-11  0:23   ` Shakeel Butt
  2020-01-13 15:11     ` Roman Gushchin
  2020-01-16 16:44   ` Johannes Weiner
  1 sibling, 1 reply; 21+ messages in thread
From: Shakeel Butt @ 2020-01-11  0:23 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Linux MM, Andrew Morton, Michal Hocko, Johannes Weiner,
	Vladimir Davydov, LKML, Kernel Team

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <guro@fb.com> wrote:
>
> Drop the unused page argument and put the memcg pointer in the
> first position. This makes the function consistent with its peers:
> __memcg_kmem_uncharge_memcg(), memcg_kmem_charge_memcg(), etc.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Why not squash this patch in the previous one?

Reviewed-by: Shakeel Butt <shakeelb@google.com>



* Re: [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page()
  2020-01-09 20:26 ` [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page() Roman Gushchin
@ 2020-01-11  0:25   ` Shakeel Butt
  2020-01-16 16:47   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Shakeel Butt @ 2020-01-11  0:25 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Linux MM, Andrew Morton, Michal Hocko, Johannes Weiner,
	Vladimir Davydov, LKML, Kernel Team

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <guro@fb.com> wrote:
>
> Rename (__)memcg_kmem_(un)charge() into (__)memcg_kmem_(un)charge_page()
> to better reflect what they are actually doing:
> 1) call __memcg_kmem_(un)charge_memcg() to actually charge or
> uncharge the current memcg
> 2) set or clear the PageKmemcg flag
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>



* Re: [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg()
  2020-01-09 20:26 ` [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg() Roman Gushchin
@ 2020-01-11  0:27   ` Shakeel Butt
  2020-01-16 17:24   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Shakeel Butt @ 2020-01-11  0:27 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Linux MM, Andrew Morton, Michal Hocko, Johannes Weiner,
	Vladimir Davydov, LKML, Kernel Team

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <guro@fb.com> wrote:
>
> These functions are charging the given number of kernel pages to the
> given memory cgroup. The number doesn't have to be a power of two.
> Let's make them take an unsigned int nr_pages as an argument
> instead of the page order.
>
> It makes them look consistent with the corresponding uncharge
> functions and functions like: mem_cgroup_charge_skmem(memcg, nr_pages).
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>



* Re: [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab()
  2020-01-09 20:26 ` [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab() Roman Gushchin
@ 2020-01-11  0:28   ` Shakeel Butt
  2020-01-16 17:26   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Shakeel Butt @ 2020-01-11  0:28 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Linux MM, Andrew Morton, Michal Hocko, Johannes Weiner,
	Vladimir Davydov, LKML, Kernel Team

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <guro@fb.com> wrote:
>
> There are many places in memcg_charge_slab() and memcg_uncharge_slab()
> which calculate the number of pages to charge, css references to
> grab, etc., depending on the order of the slab page.
> 
> Let's simplify the code by calculating it once and caching it in a
> local variable.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>



* Re: [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge()
  2020-01-09 20:26 ` [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge() Roman Gushchin
@ 2020-01-11  0:32   ` Shakeel Butt
  2020-01-16 17:30   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Shakeel Butt @ 2020-01-11  0:32 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Linux MM, Andrew Morton, Michal Hocko, Johannes Weiner,
	Vladimir Davydov, LKML, Kernel Team

On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <guro@fb.com> wrote:
>
> Drop the _memcg suffix from (__)memcg_kmem_(un)charge functions.
> It's shorter and more obvious.
>
> These are the most basic functions, which just (un)charge the
> given cgroup with the given number of pages.
>
> Also fix up the corresponding comments.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>



* Re: [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments
  2020-01-11  0:23   ` Shakeel Butt
@ 2020-01-13 15:11     ` Roman Gushchin
  0 siblings, 0 replies; 21+ messages in thread
From: Roman Gushchin @ 2020-01-13 15:11 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Linux MM, Andrew Morton, Michal Hocko, Johannes Weiner,
	Vladimir Davydov, LKML, Kernel Team

On Fri, Jan 10, 2020 at 04:23:52PM -0800, Shakeel Butt wrote:
> On Thu, Jan 9, 2020 at 12:27 PM Roman Gushchin <guro@fb.com> wrote:
> >
> > Drop the unused page argument and put the memcg pointer in the
> > first position. This makes the function consistent with its peers:
> > __memcg_kmem_uncharge_memcg(), memcg_kmem_charge_memcg(), etc.
> >
> > Signed-off-by: Roman Gushchin <guro@fb.com>
> 
> Why not squash this patch in the previous one?

No particular reason, could be squashed too. Just made these changes one by one.
Can be perfectly squashed, but probably it's not worth another version.

> 
> Reviewed-by: Shakeel Butt <shakeelb@google.com>

Thank you for the review!



* Re: [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments
  2020-01-09 20:26 ` [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments Roman Gushchin
  2020-01-11  0:23   ` Shakeel Butt
@ 2020-01-16 16:42   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Johannes Weiner @ 2020-01-16 16:42 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: linux-mm, Andrew Morton, Michal Hocko, Shakeel Butt,
	Vladimir Davydov, linux-kernel, kernel-team

On Thu, Jan 09, 2020 at 12:26:54PM -0800, Roman Gushchin wrote:
> The first argument of memcg_kmem_charge_memcg() and
> __memcg_kmem_charge_memcg() is the page pointer and it's not used.
> Let's drop it.
> 
> The memcg pointer is passed as the last argument. Move it to
> the first position for consistency with other memcg functions,
> e.g. __memcg_kmem_uncharge_memcg() or try_charge().
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>

Looks good to me.

Acked-by: Johannes Weiner <hannes@cmpxchg.org>



* Re: [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments
  2020-01-09 20:26 ` [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments Roman Gushchin
  2020-01-11  0:23   ` Shakeel Butt
@ 2020-01-16 16:44   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Johannes Weiner @ 2020-01-16 16:44 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: linux-mm, Andrew Morton, Michal Hocko, Shakeel Butt,
	Vladimir Davydov, linux-kernel, kernel-team

On Thu, Jan 09, 2020 at 12:26:55PM -0800, Roman Gushchin wrote:
> Drop the unused page argument and put the memcg pointer in the
> first position. This makes the function consistent with its peers:
> __memcg_kmem_uncharge_memcg(), memcg_kmem_charge_memcg(), etc.
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>



* Re: [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page()
  2020-01-09 20:26 ` [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page() Roman Gushchin
  2020-01-11  0:25   ` Shakeel Butt
@ 2020-01-16 16:47   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Johannes Weiner @ 2020-01-16 16:47 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: linux-mm, Andrew Morton, Michal Hocko, Shakeel Butt,
	Vladimir Davydov, linux-kernel, kernel-team

On Thu, Jan 09, 2020 at 12:26:56PM -0800, Roman Gushchin wrote:
> Rename (__)memcg_kmem_(un)charge() into (__)memcg_kmem_(un)charge_page()

I almost bluescreened trying to parse this!

> to better reflect what they are actually doing:
> 1) call __memcg_kmem_(un)charge_memcg() to actually charge or
> uncharge the current memcg
> 2) set or clear the PageKmemcg flag
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>

Agreed, this is better.

Acked-by: Johannes Weiner <hannes@cmpxchg.org>



* Re: [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg()
  2020-01-09 20:26 ` [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg() Roman Gushchin
  2020-01-11  0:27   ` Shakeel Butt
@ 2020-01-16 17:24   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Johannes Weiner @ 2020-01-16 17:24 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: linux-mm, Andrew Morton, Michal Hocko, Shakeel Butt,
	Vladimir Davydov, linux-kernel, kernel-team

On Thu, Jan 09, 2020 at 12:26:57PM -0800, Roman Gushchin wrote:
> These functions are charging the given number of kernel pages to the
> given memory cgroup. The number doesn't have to be a power of two.
> Let's make them take an unsigned int nr_pages as an argument
> instead of the page order.
> 
> It makes them look consistent with the corresponding uncharge
> functions and functions like: mem_cgroup_charge_skmem(memcg, nr_pages).
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>



* Re: [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab()
  2020-01-09 20:26 ` [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab() Roman Gushchin
  2020-01-11  0:28   ` Shakeel Butt
@ 2020-01-16 17:26   ` Johannes Weiner
  1 sibling, 0 replies; 21+ messages in thread
From: Johannes Weiner @ 2020-01-16 17:26 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: linux-mm, Andrew Morton, Michal Hocko, Shakeel Butt,
	Vladimir Davydov, linux-kernel, kernel-team

On Thu, Jan 09, 2020 at 12:26:58PM -0800, Roman Gushchin wrote:
> There are many places in memcg_charge_slab() and memcg_uncharge_slab()
> which calculate the number of pages to charge, css references to
> grab, etc., depending on the order of the slab page.
> 
> Let's simplify the code by calculating it once and caching it in a
> local variable.
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>



* Re: [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge()
  2020-01-09 20:26 ` [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge() Roman Gushchin
  2020-01-11  0:32   ` Shakeel Butt
@ 2020-01-16 17:30   ` Johannes Weiner
  2020-01-16 17:40     ` Roman Gushchin
  1 sibling, 1 reply; 21+ messages in thread
From: Johannes Weiner @ 2020-01-16 17:30 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: linux-mm, Andrew Morton, Michal Hocko, Shakeel Butt,
	Vladimir Davydov, linux-kernel, kernel-team

On Thu, Jan 09, 2020 at 12:26:59PM -0800, Roman Gushchin wrote:
> Drop the _memcg suffix from (__)memcg_kmem_(un)charge functions.
> It's shorter and more obvious.
> 
> These are the most basic functions, which just (un)charge the
> given cgroup with the given number of pages.
> 
> Also fix up the corresponding comments.
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>

This looks good to me. I also appreciate the grouping of the functions
by layer (first charge/uncharge, then charge_page/uncharge_page).

Acked-by: Johannes Weiner <hannes@cmpxchg.org>



* Re: [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge()
  2020-01-16 17:30   ` Johannes Weiner
@ 2020-01-16 17:40     ` Roman Gushchin
  0 siblings, 0 replies; 21+ messages in thread
From: Roman Gushchin @ 2020-01-16 17:40 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: linux-mm, Andrew Morton, Michal Hocko, Shakeel Butt,
	Vladimir Davydov, linux-kernel, Kernel Team

On Thu, Jan 16, 2020 at 12:30:02PM -0500, Johannes Weiner wrote:
> On Thu, Jan 09, 2020 at 12:26:59PM -0800, Roman Gushchin wrote:
> > Drop the _memcg suffix from (__)memcg_kmem_(un)charge functions.
> > It's shorter and more obvious.
> > 
> > These are the most basic functions, which just (un)charge the
> > given cgroup with the given number of pages.
> > 
> > Also fix up the corresponding comments.
> > 
> > Signed-off-by: Roman Gushchin <guro@fb.com>
> 
> This looks good to me. I also appreciate the grouping of the functions
> by layer (first charge/uncharge, then charge_page/uncharge_page).
> 
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Thank you for the review!



end of thread, other threads:[~2020-01-16 17:40 UTC | newest]

Thread overview: 21+ messages
2020-01-09 20:26 [PATCH v2 0/6] mm: memcg: kmem API cleanup Roman Gushchin
2020-01-09 20:26 ` [PATCH v2 1/6] mm: kmem: cleanup (__)memcg_kmem_charge_memcg() arguments Roman Gushchin
2020-01-11  0:23   ` Shakeel Butt
2020-01-16 16:42   ` Johannes Weiner
2020-01-09 20:26 ` [PATCH v2 2/6] mm: kmem: cleanup memcg_kmem_uncharge_memcg() arguments Roman Gushchin
2020-01-11  0:23   ` Shakeel Butt
2020-01-13 15:11     ` Roman Gushchin
2020-01-16 16:44   ` Johannes Weiner
2020-01-09 20:26 ` [PATCH v2 3/6] mm: kmem: rename memcg_kmem_(un)charge() into memcg_kmem_(un)charge_page() Roman Gushchin
2020-01-11  0:25   ` Shakeel Butt
2020-01-16 16:47   ` Johannes Weiner
2020-01-09 20:26 ` [PATCH v2 4/6] mm: kmem: switch to nr_pages in (__)memcg_kmem_charge_memcg() Roman Gushchin
2020-01-11  0:27   ` Shakeel Butt
2020-01-16 17:24   ` Johannes Weiner
2020-01-09 20:26 ` [PATCH v2 5/6] mm: memcg/slab: cache page number in memcg_(un)charge_slab() Roman Gushchin
2020-01-11  0:28   ` Shakeel Butt
2020-01-16 17:26   ` Johannes Weiner
2020-01-09 20:26 ` [PATCH v2 6/6] mm: kmem: rename (__)memcg_kmem_(un)charge_memcg() to __memcg_kmem_(un)charge() Roman Gushchin
2020-01-11  0:32   ` Shakeel Butt
2020-01-16 17:30   ` Johannes Weiner
2020-01-16 17:40     ` Roman Gushchin
