linux-mm.kvack.org archive mirror
* [PATCH 1/3] memcg: simplify charging kmem pages
@ 2015-10-04 22:21 Vladimir Davydov
  2015-10-04 22:21 ` [PATCH 2/3] memcg: unify slab and other kmem pages charging Vladimir Davydov
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Vladimir Davydov @ 2015-10-04 22:21 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Johannes Weiner, Michal Hocko, linux-mm, linux-kernel

Charging kmem pages proceeds in two steps. First, we try to charge the
allocation size to the memcg the current task belongs to; then we
allocate a page and "commit" the charge, storing the pointer to the
memcg in the page struct.

Such a design looks overcomplicated, because there is not much sense in
trying to charge the allocation before actually allocating a page: we
won't be able to consume much memory over the limit even if we charge
after doing the actual allocation. Besides, we already charge user pages
post factum, so being pedantic with kmem pages just looks pointless.

So this patch simplifies the design by merging the "charge" and the
"commit" steps into the same function, which takes the allocated page.

Also, rename the charge and uncharge methods to memcg_kmem_charge and
memcg_kmem_uncharge and make the charge method return error code instead
of bool to conform to mem_cgroup_try_charge.

Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
---
 include/linux/memcontrol.h | 69 +++++++++++++---------------------------------
 mm/memcontrol.c            | 39 +++-----------------------
 mm/page_alloc.c            | 18 ++++++------
 3 files changed, 32 insertions(+), 94 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a3e0296eb063..9e1f4d5efc56 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -752,11 +752,8 @@ static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg)
  * conditions, but because they are pretty simple, they are expected to be
  * fast.
  */
-bool __memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg,
-					int order);
-void __memcg_kmem_commit_charge(struct page *page,
-				       struct mem_cgroup *memcg, int order);
-void __memcg_kmem_uncharge_pages(struct page *page, int order);
+int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
+void __memcg_kmem_uncharge(struct page *page, int order);
 
 /*
  * helper for acessing a memcg's index. It will be used as an index in the
@@ -789,52 +786,30 @@ static inline bool __memcg_kmem_bypass(gfp_t gfp)
 }
 
 /**
- * memcg_kmem_newpage_charge: verify if a new kmem allocation is allowed.
- * @gfp: the gfp allocation flags.
- * @memcg: a pointer to the memcg this was charged against.
- * @order: allocation order.
+ * memcg_kmem_charge: charge a kmem page
+ * @page: page to charge
+ * @gfp: reclaim mode
+ * @order: allocation order
  *
- * returns true if the memcg where the current task belongs can hold this
- * allocation.
- *
- * We return true automatically if this allocation is not to be accounted to
- * any memcg.
+ * Returns 0 on success, an error code on failure.
  */
-static inline bool
-memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg, int order)
+static __always_inline int memcg_kmem_charge(struct page *page,
+					     gfp_t gfp, int order)
 {
 	if (__memcg_kmem_bypass(gfp))
-		return true;
-	return __memcg_kmem_newpage_charge(gfp, memcg, order);
+		return 0;
+	return __memcg_kmem_charge(page, gfp, order);
 }
 
 /**
- * memcg_kmem_uncharge_pages: uncharge pages from memcg
- * @page: pointer to struct page being freed
- * @order: allocation order.
+ * memcg_kmem_uncharge: uncharge a kmem page
+ * @page: page to uncharge
+ * @order: allocation order
  */
-static inline void
-memcg_kmem_uncharge_pages(struct page *page, int order)
+static __always_inline void memcg_kmem_uncharge(struct page *page, int order)
 {
 	if (memcg_kmem_enabled())
-		__memcg_kmem_uncharge_pages(page, order);
-}
-
-/**
- * memcg_kmem_commit_charge: embeds correct memcg in a page
- * @page: pointer to struct page recently allocated
- * @memcg: the memcg structure we charged against
- * @order: allocation order.
- *
- * Needs to be called after memcg_kmem_newpage_charge, regardless of success or
- * failure of the allocation. if @page is NULL, this function will revert the
- * charges. Otherwise, it will commit @page to @memcg.
- */
-static inline void
-memcg_kmem_commit_charge(struct page *page, struct mem_cgroup *memcg, int order)
-{
-	if (memcg_kmem_enabled() && memcg)
-		__memcg_kmem_commit_charge(page, memcg, order);
+		__memcg_kmem_uncharge(page, order);
 }
 
 /**
@@ -878,18 +853,12 @@ static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg)
 	return false;
 }
 
-static inline bool
-memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg, int order)
-{
-	return true;
-}
-
-static inline void memcg_kmem_uncharge_pages(struct page *page, int order)
+static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
 {
+	return 0;
 }
 
-static inline void
-memcg_kmem_commit_charge(struct page *page, struct mem_cgroup *memcg, int order)
+static inline void memcg_kmem_uncharge(struct page *page, int order)
 {
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 44706a17cddc..7c0af36fc8d0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2404,57 +2404,26 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep)
 		css_put(&cachep->memcg_params.memcg->css);
 }
 
-/*
- * We need to verify if the allocation against current->mm->owner's memcg is
- * possible for the given order. But the page is not allocated yet, so we'll
- * need a further commit step to do the final arrangements.
- *
- * It is possible for the task to switch cgroups in this mean time, so at
- * commit time, we can't rely on task conversion any longer.  We'll then use
- * the handle argument to return to the caller which cgroup we should commit
- * against. We could also return the memcg directly and avoid the pointer
- * passing, but a boolean return value gives better semantics considering
- * the compiled-out case as well.
- *
- * Returning true means the allocation is possible.
- */
-bool
-__memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **_memcg, int order)
+int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
 {
 	struct mem_cgroup *memcg;
 	int ret;
 
-	*_memcg = NULL;
-
 	memcg = get_mem_cgroup_from_mm(current->mm);
 
 	if (!memcg_kmem_is_active(memcg)) {
 		css_put(&memcg->css);
-		return true;
+		return 0;
 	}
 
 	ret = memcg_charge_kmem(memcg, gfp, 1 << order);
-	if (!ret)
-		*_memcg = memcg;
 
 	css_put(&memcg->css);
-	return (ret == 0);
-}
-
-void __memcg_kmem_commit_charge(struct page *page, struct mem_cgroup *memcg,
-			      int order)
-{
-	VM_BUG_ON(mem_cgroup_is_root(memcg));
-
-	/* The page allocation failed. Revert */
-	if (!page) {
-		memcg_uncharge_kmem(memcg, 1 << order);
-		return;
-	}
 	page->mem_cgroup = memcg;
+	return ret;
 }
 
-void __memcg_kmem_uncharge_pages(struct page *page, int order)
+void __memcg_kmem_uncharge(struct page *page, int order)
 {
 	struct mem_cgroup *memcg = page->mem_cgroup;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e19132074404..91d1a1923eb8 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3414,24 +3414,24 @@ EXPORT_SYMBOL(__free_page_frag);
 struct page *alloc_kmem_pages(gfp_t gfp_mask, unsigned int order)
 {
 	struct page *page;
-	struct mem_cgroup *memcg = NULL;
 
-	if (!memcg_kmem_newpage_charge(gfp_mask, &memcg, order))
-		return NULL;
 	page = alloc_pages(gfp_mask, order);
-	memcg_kmem_commit_charge(page, memcg, order);
+	if (page && memcg_kmem_charge(page, gfp_mask, order) != 0) {
+		__free_pages(page, order);
+		page = NULL;
+	}
 	return page;
 }
 
 struct page *alloc_kmem_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
 {
 	struct page *page;
-	struct mem_cgroup *memcg = NULL;
 
-	if (!memcg_kmem_newpage_charge(gfp_mask, &memcg, order))
-		return NULL;
 	page = alloc_pages_node(nid, gfp_mask, order);
-	memcg_kmem_commit_charge(page, memcg, order);
+	if (page && memcg_kmem_charge(page, gfp_mask, order) != 0) {
+		__free_pages(page, order);
+		page = NULL;
+	}
 	return page;
 }
 
@@ -3441,7 +3441,7 @@ struct page *alloc_kmem_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
  */
 void __free_kmem_pages(struct page *page, unsigned int order)
 {
-	memcg_kmem_uncharge_pages(page, order);
+	memcg_kmem_uncharge(page, order);
 	__free_pages(page, order);
 }
 
-- 
2.1.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 2/3] memcg: unify slab and other kmem pages charging
  2015-10-04 22:21 [PATCH 1/3] memcg: simplify charging kmem pages Vladimir Davydov
@ 2015-10-04 22:21 ` Vladimir Davydov
  2015-10-08 15:10   ` Michal Hocko
  2015-10-17  0:19   ` Johannes Weiner
  2015-10-04 22:21 ` [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem Vladimir Davydov
  2015-10-08 14:40 ` [PATCH 1/3] memcg: simplify charging kmem pages Michal Hocko
  2 siblings, 2 replies; 17+ messages in thread
From: Vladimir Davydov @ 2015-10-04 22:21 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Johannes Weiner, Michal Hocko, linux-mm, linux-kernel

We have memcg_kmem_charge and memcg_kmem_uncharge methods for charging
and uncharging kmem pages to memcg, but currently they are not used for
charging slab pages (i.e. they are only used for charging pages
allocated with alloc_kmem_pages). The only reason why the slab subsystem
uses special helpers, memcg_charge_slab and memcg_uncharge_slab, is that
it needs to charge to the memcg of the kmem cache, while memcg_charge_kmem
charges to the memcg that the current task belongs to.

To remove this diversity, this patch adds an extra argument to
__memcg_kmem_charge that can be a pointer to a memcg or NULL. If it is
not NULL, the function tries to charge to the memcg it points to,
otherwise it charges to the memcg of the current task. Next, it makes
the slab subsystem use this function to charge slab pages.

Since memcg_charge_kmem and memcg_uncharge_kmem helpers are now used
only in __memcg_kmem_charge and __memcg_kmem_uncharge, they are inlined.
Since __memcg_kmem_charge stores a pointer to the memcg in the page
struct, we don't need memcg_uncharge_slab anymore and can use
free_kmem_pages. Besides, one can now detect which memcg a slab page
belongs to by reading /proc/kpagecgroup.

Note that this patch switches slab to a charge-after-alloc design. Since
this design is already used for all other memcg charges, it should not
make any difference.

Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
---
 include/linux/memcontrol.h |  9 ++----
 mm/memcontrol.c            | 73 +++++++++++++++++++++-------------------------
 mm/slab.c                  | 12 ++++----
 mm/slab.h                  | 24 +++++----------
 mm/slub.c                  | 12 ++++----
 5 files changed, 55 insertions(+), 75 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 9e1f4d5efc56..8a9b7a798f14 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -752,7 +752,8 @@ static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg)
  * conditions, but because they are pretty simple, they are expected to be
  * fast.
  */
-int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
+int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
+			struct mem_cgroup *memcg);
 void __memcg_kmem_uncharge(struct page *page, int order);
 
 /*
@@ -770,10 +771,6 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep);
 
 struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr);
 
-int memcg_charge_kmem(struct mem_cgroup *memcg, gfp_t gfp,
-		      unsigned long nr_pages);
-void memcg_uncharge_kmem(struct mem_cgroup *memcg, unsigned long nr_pages);
-
 static inline bool __memcg_kmem_bypass(gfp_t gfp)
 {
 	if (!memcg_kmem_enabled())
@@ -798,7 +795,7 @@ static __always_inline int memcg_kmem_charge(struct page *page,
 {
 	if (__memcg_kmem_bypass(gfp))
 		return 0;
-	return __memcg_kmem_charge(page, gfp, order);
+	return __memcg_kmem_charge(page, gfp, order, NULL);
 }
 
 /**
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7c0af36fc8d0..1d6413e0dd29 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2215,34 +2215,6 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 }
 
 #ifdef CONFIG_MEMCG_KMEM
-int memcg_charge_kmem(struct mem_cgroup *memcg, gfp_t gfp,
-		      unsigned long nr_pages)
-{
-	struct page_counter *counter;
-	int ret = 0;
-
-	ret = page_counter_try_charge(&memcg->kmem, nr_pages, &counter);
-	if (ret < 0)
-		return ret;
-
-	ret = try_charge(memcg, gfp, nr_pages);
-	if (ret)
-		page_counter_uncharge(&memcg->kmem, nr_pages);
-
-	return ret;
-}
-
-void memcg_uncharge_kmem(struct mem_cgroup *memcg, unsigned long nr_pages)
-{
-	page_counter_uncharge(&memcg->memory, nr_pages);
-	if (do_swap_account)
-		page_counter_uncharge(&memcg->memsw, nr_pages);
-
-	page_counter_uncharge(&memcg->kmem, nr_pages);
-
-	css_put_many(&memcg->css, nr_pages);
-}
-
 static int memcg_alloc_cache_id(void)
 {
 	int id, size;
@@ -2404,36 +2376,59 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep)
 		css_put(&cachep->memcg_params.memcg->css);
 }
 
-int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+/*
+ * If @memcg != NULL, charge to @memcg, otherwise charge to the memcg the
+ * current task belongs to.
+ */
+int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
+			struct mem_cgroup *memcg)
 {
-	struct mem_cgroup *memcg;
-	int ret;
-
-	memcg = get_mem_cgroup_from_mm(current->mm);
+	struct page_counter *counter;
+	unsigned int nr_pages = 1 << order;
+	bool put = false;
+	int ret = 0;
 
-	if (!memcg_kmem_is_active(memcg)) {
-		css_put(&memcg->css);
-		return 0;
+	if (!memcg) {
+		memcg = get_mem_cgroup_from_mm(current->mm);
+		put = true;
 	}
+	if (!memcg_kmem_is_active(memcg))
+		goto out;
 
-	ret = memcg_charge_kmem(memcg, gfp, 1 << order);
+	ret = page_counter_try_charge(&memcg->kmem, nr_pages, &counter);
+	if (ret)
+		goto out;
+
+	ret = try_charge(memcg, gfp, nr_pages);
+	if (ret) {
+		page_counter_uncharge(&memcg->kmem, nr_pages);
+		goto out;
+	}
 
-	css_put(&memcg->css);
 	page->mem_cgroup = memcg;
+out:
+	if (put)
+		css_put(&memcg->css);
 	return ret;
 }
 
 void __memcg_kmem_uncharge(struct page *page, int order)
 {
 	struct mem_cgroup *memcg = page->mem_cgroup;
+	unsigned int nr_pages = 1 << order;
 
 	if (!memcg)
 		return;
 
 	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
 
-	memcg_uncharge_kmem(memcg, 1 << order);
+	page_counter_uncharge(&memcg->kmem, nr_pages);
+	page_counter_uncharge(&memcg->memory, nr_pages);
+	if (do_swap_account)
+		page_counter_uncharge(&memcg->memsw, nr_pages);
+
 	page->mem_cgroup = NULL;
+	css_put_many(&memcg->css, nr_pages);
 }
 
 struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr)
diff --git a/mm/slab.c b/mm/slab.c
index ad6c6f8385d9..037d9d71633a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1592,16 +1592,17 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 	if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
 		flags |= __GFP_RECLAIMABLE;
 
-	if (memcg_charge_slab(cachep, flags, cachep->gfporder))
-		return NULL;
-
 	page = __alloc_pages_node(nodeid, flags | __GFP_NOTRACK, cachep->gfporder);
 	if (!page) {
-		memcg_uncharge_slab(cachep, cachep->gfporder);
 		slab_out_of_memory(cachep, flags, nodeid);
 		return NULL;
 	}
 
+	if (memcg_charge_slab(page, flags, cachep->gfporder, cachep)) {
+		__free_pages(page, cachep->gfporder);
+		return NULL;
+	}
+
 	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
 	if (page_is_pfmemalloc(page))
 		pfmemalloc_active = true;
@@ -1653,8 +1654,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct page *page)
 
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += nr_freed;
-	__free_pages(page, cachep->gfporder);
-	memcg_uncharge_slab(cachep, cachep->gfporder);
+	__free_kmem_pages(page, cachep->gfporder);
 }
 
 static void kmem_rcu_free(struct rcu_head *head)
diff --git a/mm/slab.h b/mm/slab.h
index a3a967d7d7c2..16cc5b0de1d8 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -240,23 +240,16 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
 	return s->memcg_params.root_cache;
 }
 
-static __always_inline int memcg_charge_slab(struct kmem_cache *s,
-					     gfp_t gfp, int order)
+static __always_inline int memcg_charge_slab(struct page *page,
+					     gfp_t gfp, int order,
+					     struct kmem_cache *s)
 {
 	if (!memcg_kmem_enabled())
 		return 0;
 	if (is_root_cache(s))
 		return 0;
-	return memcg_charge_kmem(s->memcg_params.memcg, gfp, 1 << order);
-}
-
-static __always_inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
-{
-	if (!memcg_kmem_enabled())
-		return;
-	if (is_root_cache(s))
-		return;
-	memcg_uncharge_kmem(s->memcg_params.memcg, 1 << order);
+	return __memcg_kmem_charge(page, gfp, order,
+				   s->memcg_params.memcg);
 }
 
 extern void slab_init_memcg_params(struct kmem_cache *);
@@ -295,15 +288,12 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
 	return s;
 }
 
-static inline int memcg_charge_slab(struct kmem_cache *s, gfp_t gfp, int order)
+static inline int memcg_charge_slab(struct page *page, gfp_t gfp, int order,
+				    struct kmem_cache *s)
 {
 	return 0;
 }
 
-static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
-{
-}
-
 static inline void slab_init_memcg_params(struct kmem_cache *s)
 {
 }
diff --git a/mm/slub.c b/mm/slub.c
index 2b40c186e941..a05388e8a80f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1330,16 +1330,15 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 
 	flags |= __GFP_NOTRACK;
 
-	if (memcg_charge_slab(s, flags, order))
-		return NULL;
-
 	if (node == NUMA_NO_NODE)
 		page = alloc_pages(flags, order);
 	else
 		page = __alloc_pages_node(node, flags, order);
 
-	if (!page)
-		memcg_uncharge_slab(s, order);
+	if (page && memcg_charge_slab(page, flags, order, s)) {
+		__free_pages(page, order);
+		page = NULL;
+	}
 
 	return page;
 }
@@ -1478,8 +1477,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 	page_mapcount_reset(page);
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
-	__free_pages(page, order);
-	memcg_uncharge_slab(s, order);
+	__free_kmem_pages(page, order);
 }
 
 #define need_reserve_slab_rcu						\
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-04 22:21 [PATCH 1/3] memcg: simplify charging kmem pages Vladimir Davydov
  2015-10-04 22:21 ` [PATCH 2/3] memcg: unify slab and other kmem pages charging Vladimir Davydov
@ 2015-10-04 22:21 ` Vladimir Davydov
  2015-10-08 15:13   ` Michal Hocko
  2015-10-16 13:17   ` Kirill A. Shutemov
  2015-10-08 14:40 ` [PATCH 1/3] memcg: simplify charging kmem pages Michal Hocko
  2 siblings, 2 replies; 17+ messages in thread
From: Vladimir Davydov @ 2015-10-04 22:21 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Johannes Weiner, Michal Hocko, linux-mm, linux-kernel

Before the previous patch, __mem_cgroup_from_kmem had to handle two
types of kmem - slab pages and pages allocated with alloc_kmem_pages -
differently, because slab pages did not store information about the
owner memcg in the page struct. Now the two cases can be handled
uniformly, and since the function becomes tiny as a result, we can fold
it into mem_cgroup_from_kmem.

Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
---
 include/linux/memcontrol.h |  7 ++++---
 mm/memcontrol.c            | 18 ------------------
 2 files changed, 4 insertions(+), 21 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 8a9b7a798f14..0e2e039609d1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -769,8 +769,6 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
 struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep);
 void __memcg_kmem_put_cache(struct kmem_cache *cachep);
 
-struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr);
-
 static inline bool __memcg_kmem_bypass(gfp_t gfp)
 {
 	if (!memcg_kmem_enabled())
@@ -832,9 +830,12 @@ static __always_inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
 
 static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
 {
+	struct page *page;
+
 	if (!memcg_kmem_enabled())
 		return NULL;
-	return __mem_cgroup_from_kmem(ptr);
+	page = virt_to_head_page(ptr);
+	return page->mem_cgroup;
 }
 #else
 #define for_each_memcg_cache_index(_idx)	\
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1d6413e0dd29..6329e6182d89 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2430,24 +2430,6 @@ void __memcg_kmem_uncharge(struct page *page, int order)
 	page->mem_cgroup = NULL;
 	css_put_many(&memcg->css, nr_pages);
 }
-
-struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr)
-{
-	struct mem_cgroup *memcg = NULL;
-	struct kmem_cache *cachep;
-	struct page *page;
-
-	page = virt_to_head_page(ptr);
-	if (PageSlab(page)) {
-		cachep = page->slab_cache;
-		if (!is_root_cache(cachep))
-			memcg = cachep->memcg_params.memcg;
-	} else
-		/* page allocated by alloc_kmem_pages */
-		memcg = page->mem_cgroup;
-
-	return memcg;
-}
 #endif /* CONFIG_MEMCG_KMEM */
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-- 
2.1.4


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH 1/3] memcg: simplify charging kmem pages
  2015-10-04 22:21 [PATCH 1/3] memcg: simplify charging kmem pages Vladimir Davydov
  2015-10-04 22:21 ` [PATCH 2/3] memcg: unify slab and other kmem pages charging Vladimir Davydov
  2015-10-04 22:21 ` [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem Vladimir Davydov
@ 2015-10-08 14:40 ` Michal Hocko
  2 siblings, 0 replies; 17+ messages in thread
From: Michal Hocko @ 2015-10-08 14:40 UTC (permalink / raw)
  To: Vladimir Davydov; +Cc: Andrew Morton, Johannes Weiner, linux-mm, linux-kernel

On Mon 05-10-15 01:21:41, Vladimir Davydov wrote:
> Charging kmem pages proceeds in two steps. First, we try to charge the
> allocation size to the memcg the current task belongs to, then we
> allocate a page and "commit" the charge storing the pointer to the memcg
> in the page struct.
> 
> Such a design looks overcomplicated, because there is not much sense in
> trying to charge the allocation before actually allocating a page: we
> won't be able to consume much memory over the limit even if we charge
> after doing the actual allocation. Besides, we already charge user pages
> post factum, so being pedantic with kmem pages just looks pointless.
> 
> So this patch simplifies the design by merging the "charge" and the
> "commit" steps into the same function, which takes the allocated page.

Yes this makes sense.

> Also, rename the charge and uncharge methods to memcg_kmem_charge and
> memcg_kmem_uncharge and make the charge method return error code instead
> of bool to conform to mem_cgroup_try_charge.

OK

> Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>

Nice cleanup!
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  include/linux/memcontrol.h | 69 +++++++++++++---------------------------------
>  mm/memcontrol.c            | 39 +++-----------------------
>  mm/page_alloc.c            | 18 ++++++------
>  3 files changed, 32 insertions(+), 94 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index a3e0296eb063..9e1f4d5efc56 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -752,11 +752,8 @@ static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg)
>   * conditions, but because they are pretty simple, they are expected to be
>   * fast.
>   */
> -bool __memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg,
> -					int order);
> -void __memcg_kmem_commit_charge(struct page *page,
> -				       struct mem_cgroup *memcg, int order);
> -void __memcg_kmem_uncharge_pages(struct page *page, int order);
> +int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
> +void __memcg_kmem_uncharge(struct page *page, int order);
>  
>  /*
>   * helper for acessing a memcg's index. It will be used as an index in the
> @@ -789,52 +786,30 @@ static inline bool __memcg_kmem_bypass(gfp_t gfp)
>  }
>  
>  /**
> - * memcg_kmem_newpage_charge: verify if a new kmem allocation is allowed.
> - * @gfp: the gfp allocation flags.
> - * @memcg: a pointer to the memcg this was charged against.
> - * @order: allocation order.
> + * memcg_kmem_charge: charge a kmem page
> + * @page: page to charge
> + * @gfp: reclaim mode
> + * @order: allocation order
>   *
> - * returns true if the memcg where the current task belongs can hold this
> - * allocation.
> - *
> - * We return true automatically if this allocation is not to be accounted to
> - * any memcg.
> + * Returns 0 on success, an error code on failure.
>   */
> -static inline bool
> -memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg, int order)
> +static __always_inline int memcg_kmem_charge(struct page *page,
> +					     gfp_t gfp, int order)
>  {
>  	if (__memcg_kmem_bypass(gfp))
> -		return true;
> -	return __memcg_kmem_newpage_charge(gfp, memcg, order);
> +		return 0;
> +	return __memcg_kmem_charge(page, gfp, order);
>  }
>  
>  /**
> - * memcg_kmem_uncharge_pages: uncharge pages from memcg
> - * @page: pointer to struct page being freed
> - * @order: allocation order.
> + * memcg_kmem_uncharge: uncharge a kmem page
> + * @page: page to uncharge
> + * @order: allocation order
>   */
> -static inline void
> -memcg_kmem_uncharge_pages(struct page *page, int order)
> +static __always_inline void memcg_kmem_uncharge(struct page *page, int order)
>  {
>  	if (memcg_kmem_enabled())
> -		__memcg_kmem_uncharge_pages(page, order);
> -}
> -
> -/**
> - * memcg_kmem_commit_charge: embeds correct memcg in a page
> - * @page: pointer to struct page recently allocated
> - * @memcg: the memcg structure we charged against
> - * @order: allocation order.
> - *
> - * Needs to be called after memcg_kmem_newpage_charge, regardless of success or
> - * failure of the allocation. if @page is NULL, this function will revert the
> - * charges. Otherwise, it will commit @page to @memcg.
> - */
> -static inline void
> -memcg_kmem_commit_charge(struct page *page, struct mem_cgroup *memcg, int order)
> -{
> -	if (memcg_kmem_enabled() && memcg)
> -		__memcg_kmem_commit_charge(page, memcg, order);
> +		__memcg_kmem_uncharge(page, order);
>  }
>  
>  /**
> @@ -878,18 +853,12 @@ static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg)
>  	return false;
>  }
>  
> -static inline bool
> -memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg, int order)
> -{
> -	return true;
> -}
> -
> -static inline void memcg_kmem_uncharge_pages(struct page *page, int order)
> +static inline int memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
>  {
> +	return 0;
>  }
>  
> -static inline void
> -memcg_kmem_commit_charge(struct page *page, struct mem_cgroup *memcg, int order)
> +static inline void memcg_kmem_uncharge(struct page *page, int order)
>  {
>  }
>  
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 44706a17cddc..7c0af36fc8d0 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2404,57 +2404,26 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep)
>  		css_put(&cachep->memcg_params.memcg->css);
>  }
>  
> -/*
> - * We need to verify if the allocation against current->mm->owner's memcg is
> - * possible for the given order. But the page is not allocated yet, so we'll
> - * need a further commit step to do the final arrangements.
> - *
> - * It is possible for the task to switch cgroups in this mean time, so at
> - * commit time, we can't rely on task conversion any longer.  We'll then use
> - * the handle argument to return to the caller which cgroup we should commit
> - * against. We could also return the memcg directly and avoid the pointer
> - * passing, but a boolean return value gives better semantics considering
> - * the compiled-out case as well.
> - *
> - * Returning true means the allocation is possible.
> - */
> -bool
> -__memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **_memcg, int order)
> +int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
>  {
>  	struct mem_cgroup *memcg;
>  	int ret;
>  
> -	*_memcg = NULL;
> -
>  	memcg = get_mem_cgroup_from_mm(current->mm);
>  
>  	if (!memcg_kmem_is_active(memcg)) {
>  		css_put(&memcg->css);
> -		return true;
> +		return 0;
>  	}
>  
>  	ret = memcg_charge_kmem(memcg, gfp, 1 << order);
> -	if (!ret)
> -		*_memcg = memcg;
>  
>  	css_put(&memcg->css);
> -	return (ret == 0);
> -}
> -
> -void __memcg_kmem_commit_charge(struct page *page, struct mem_cgroup *memcg,
> -			      int order)
> -{
> -	VM_BUG_ON(mem_cgroup_is_root(memcg));
> -
> -	/* The page allocation failed. Revert */
> -	if (!page) {
> -		memcg_uncharge_kmem(memcg, 1 << order);
> -		return;
> -	}
>  	page->mem_cgroup = memcg;
> +	return ret;
>  }
>  
> -void __memcg_kmem_uncharge_pages(struct page *page, int order)
> +void __memcg_kmem_uncharge(struct page *page, int order)
>  {
>  	struct mem_cgroup *memcg = page->mem_cgroup;
>  
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e19132074404..91d1a1923eb8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3414,24 +3414,24 @@ EXPORT_SYMBOL(__free_page_frag);
>  struct page *alloc_kmem_pages(gfp_t gfp_mask, unsigned int order)
>  {
>  	struct page *page;
> -	struct mem_cgroup *memcg = NULL;
>  
> -	if (!memcg_kmem_newpage_charge(gfp_mask, &memcg, order))
> -		return NULL;
>  	page = alloc_pages(gfp_mask, order);
> -	memcg_kmem_commit_charge(page, memcg, order);
> +	if (page && memcg_kmem_charge(page, gfp_mask, order) != 0) {
> +		__free_pages(page, order);
> +		page = NULL;
> +	}
>  	return page;
>  }
>  
>  struct page *alloc_kmem_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
>  {
>  	struct page *page;
> -	struct mem_cgroup *memcg = NULL;
>  
> -	if (!memcg_kmem_newpage_charge(gfp_mask, &memcg, order))
> -		return NULL;
>  	page = alloc_pages_node(nid, gfp_mask, order);
> -	memcg_kmem_commit_charge(page, memcg, order);
> +	if (page && memcg_kmem_charge(page, gfp_mask, order) != 0) {
> +		__free_pages(page, order);
> +		page = NULL;
> +	}
>  	return page;
>  }
>  
> @@ -3441,7 +3441,7 @@ struct page *alloc_kmem_pages_node(int nid, gfp_t gfp_mask, unsigned int order)
>   */
>  void __free_kmem_pages(struct page *page, unsigned int order)
>  {
> -	memcg_kmem_uncharge_pages(page, order);
> +	memcg_kmem_uncharge(page, order);
>  	__free_pages(page, order);
>  }
>  
> -- 
> 2.1.4

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 2/3] memcg: unify slab and other kmem pages charging
  2015-10-04 22:21 ` [PATCH 2/3] memcg: unify slab and other kmem pages charging Vladimir Davydov
@ 2015-10-08 15:10   ` Michal Hocko
  2015-10-17  0:19   ` Johannes Weiner
  1 sibling, 0 replies; 17+ messages in thread
From: Michal Hocko @ 2015-10-08 15:10 UTC (permalink / raw)
  To: Vladimir Davydov; +Cc: Andrew Morton, Johannes Weiner, linux-mm, linux-kernel

On Mon 05-10-15 01:21:42, Vladimir Davydov wrote:
> We have memcg_kmem_charge and memcg_kmem_uncharge methods for charging
> and uncharging kmem pages to memcg, but currently they are not used for
> charging slab pages (i.e. they are only used for charging pages
> allocated with alloc_kmem_pages). The only reason why the slab subsystem
> uses special helpers, memcg_charge_slab and memcg_uncharge_slab, is that
> it needs to charge to the memcg of kmem cache while memcg_charge_kmem
> charges to the memcg that the current task belongs to.
> 
> To remove this diversity, this patch adds an extra argument to
> __memcg_kmem_charge that can be a pointer to a memcg or NULL. If it is
> not NULL, the function tries to charge to the memcg it points to,
> otherwise it charges to the current context. Next, it makes the slab
> subsystem use this function to charge slab pages.
> 
> Since memcg_charge_kmem and memcg_uncharge_kmem helpers are now used
> only in __memcg_kmem_charge and __memcg_kmem_uncharge, they are inlined.
> Since __memcg_kmem_charge stores a pointer to the memcg in the page
> struct, we don't need memcg_uncharge_slab anymore and can use
> free_kmem_pages. Besides, one can now detect which memcg a slab page
> belongs to by reading /proc/kpagecgroup.

I like it! All the charged pages are marked the same way finally.

> Note, this patch switches slab to the charge-after-alloc design. Since this
> design is already used for all other memcg charges, it should not make
> any difference.
> 
> Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  include/linux/memcontrol.h |  9 ++----
>  mm/memcontrol.c            | 73 +++++++++++++++++++++-------------------------
>  mm/slab.c                  | 12 ++++----
>  mm/slab.h                  | 24 +++++----------
>  mm/slub.c                  | 12 ++++----
>  5 files changed, 55 insertions(+), 75 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 9e1f4d5efc56..8a9b7a798f14 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -752,7 +752,8 @@ static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg)
>   * conditions, but because they are pretty simple, they are expected to be
>   * fast.
>   */
> -int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
> +int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
> +			struct mem_cgroup *memcg);
>  void __memcg_kmem_uncharge(struct page *page, int order);
>  
>  /*
> @@ -770,10 +771,6 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep);
>  
>  struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr);
>  
> -int memcg_charge_kmem(struct mem_cgroup *memcg, gfp_t gfp,
> -		      unsigned long nr_pages);
> -void memcg_uncharge_kmem(struct mem_cgroup *memcg, unsigned long nr_pages);
> -
>  static inline bool __memcg_kmem_bypass(gfp_t gfp)
>  {
>  	if (!memcg_kmem_enabled())
> @@ -798,7 +795,7 @@ static __always_inline int memcg_kmem_charge(struct page *page,
>  {
>  	if (__memcg_kmem_bypass(gfp))
>  		return 0;
> -	return __memcg_kmem_charge(page, gfp, order);
> +	return __memcg_kmem_charge(page, gfp, order, NULL);
>  }
>  
>  /**
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 7c0af36fc8d0..1d6413e0dd29 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2215,34 +2215,6 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
>  }
>  
>  #ifdef CONFIG_MEMCG_KMEM
> -int memcg_charge_kmem(struct mem_cgroup *memcg, gfp_t gfp,
> -		      unsigned long nr_pages)
> -{
> -	struct page_counter *counter;
> -	int ret = 0;
> -
> -	ret = page_counter_try_charge(&memcg->kmem, nr_pages, &counter);
> -	if (ret < 0)
> -		return ret;
> -
> -	ret = try_charge(memcg, gfp, nr_pages);
> -	if (ret)
> -		page_counter_uncharge(&memcg->kmem, nr_pages);
> -
> -	return ret;
> -}
> -
> -void memcg_uncharge_kmem(struct mem_cgroup *memcg, unsigned long nr_pages)
> -{
> -	page_counter_uncharge(&memcg->memory, nr_pages);
> -	if (do_swap_account)
> -		page_counter_uncharge(&memcg->memsw, nr_pages);
> -
> -	page_counter_uncharge(&memcg->kmem, nr_pages);
> -
> -	css_put_many(&memcg->css, nr_pages);
> -}
> -
>  static int memcg_alloc_cache_id(void)
>  {
>  	int id, size;
> @@ -2404,36 +2376,59 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep)
>  		css_put(&cachep->memcg_params.memcg->css);
>  }
>  
> -int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
> +/*
> + * If @memcg != NULL, charge to @memcg, otherwise charge to the memcg the
> + * current task belongs to.
> + */
> +int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
> +			struct mem_cgroup *memcg)
>  {
> -	struct mem_cgroup *memcg;
> -	int ret;
> -
> -	memcg = get_mem_cgroup_from_mm(current->mm);
> +	struct page_counter *counter;
> +	unsigned int nr_pages = 1 << order;
> +	bool put = false;
> +	int ret = 0;
>  
> -	if (!memcg_kmem_is_active(memcg)) {
> -		css_put(&memcg->css);
> -		return 0;
> +	if (!memcg) {
> +		memcg = get_mem_cgroup_from_mm(current->mm);
> +		put = true;
>  	}
> +	if (!memcg_kmem_is_active(memcg))
> +		goto out;
>  
> -	ret = memcg_charge_kmem(memcg, gfp, 1 << order);
> +	ret = page_counter_try_charge(&memcg->kmem, nr_pages, &counter);
> +	if (ret)
> +		goto out;
> +
> +	ret = try_charge(memcg, gfp, nr_pages);
> +	if (ret) {
> +		page_counter_uncharge(&memcg->kmem, nr_pages);
> +		goto out;
> +	}
>  
> -	css_put(&memcg->css);
>  	page->mem_cgroup = memcg;
> +out:
> +	if (put)
> +		css_put(&memcg->css);
>  	return ret;
>  }
>  
>  void __memcg_kmem_uncharge(struct page *page, int order)
>  {
>  	struct mem_cgroup *memcg = page->mem_cgroup;
> +	unsigned int nr_pages = 1 << order;
>  
>  	if (!memcg)
>  		return;
>  
>  	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
>  
> -	memcg_uncharge_kmem(memcg, 1 << order);
> +	page_counter_uncharge(&memcg->kmem, nr_pages);
> +	page_counter_uncharge(&memcg->memory, nr_pages);
> +	if (do_swap_account)
> +		page_counter_uncharge(&memcg->memsw, nr_pages);
> +
>  	page->mem_cgroup = NULL;
> +	css_put_many(&memcg->css, nr_pages);
>  }
>  
>  struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr)
> diff --git a/mm/slab.c b/mm/slab.c
> index ad6c6f8385d9..037d9d71633a 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1592,16 +1592,17 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
>  	if (cachep->flags & SLAB_RECLAIM_ACCOUNT)
>  		flags |= __GFP_RECLAIMABLE;
>  
> -	if (memcg_charge_slab(cachep, flags, cachep->gfporder))
> -		return NULL;
> -
>  	page = __alloc_pages_node(nodeid, flags | __GFP_NOTRACK, cachep->gfporder);
>  	if (!page) {
> -		memcg_uncharge_slab(cachep, cachep->gfporder);
>  		slab_out_of_memory(cachep, flags, nodeid);
>  		return NULL;
>  	}
>  
> +	if (memcg_charge_slab(page, flags, cachep->gfporder, cachep)) {
> +		__free_pages(page, cachep->gfporder);
> +		return NULL;
> +	}
> +
>  	/* Record if ALLOC_NO_WATERMARKS was set when allocating the slab */
>  	if (page_is_pfmemalloc(page))
>  		pfmemalloc_active = true;
> @@ -1653,8 +1654,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct page *page)
>  
>  	if (current->reclaim_state)
>  		current->reclaim_state->reclaimed_slab += nr_freed;
> -	__free_pages(page, cachep->gfporder);
> -	memcg_uncharge_slab(cachep, cachep->gfporder);
> +	__free_kmem_pages(page, cachep->gfporder);
>  }
>  
>  static void kmem_rcu_free(struct rcu_head *head)
> diff --git a/mm/slab.h b/mm/slab.h
> index a3a967d7d7c2..16cc5b0de1d8 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -240,23 +240,16 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
>  	return s->memcg_params.root_cache;
>  }
>  
> -static __always_inline int memcg_charge_slab(struct kmem_cache *s,
> -					     gfp_t gfp, int order)
> +static __always_inline int memcg_charge_slab(struct page *page,
> +					     gfp_t gfp, int order,
> +					     struct kmem_cache *s)
>  {
>  	if (!memcg_kmem_enabled())
>  		return 0;
>  	if (is_root_cache(s))
>  		return 0;
> -	return memcg_charge_kmem(s->memcg_params.memcg, gfp, 1 << order);
> -}
> -
> -static __always_inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
> -{
> -	if (!memcg_kmem_enabled())
> -		return;
> -	if (is_root_cache(s))
> -		return;
> -	memcg_uncharge_kmem(s->memcg_params.memcg, 1 << order);
> +	return __memcg_kmem_charge(page, gfp, order,
> +				   s->memcg_params.memcg);
>  }
>  
>  extern void slab_init_memcg_params(struct kmem_cache *);
> @@ -295,15 +288,12 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
>  	return s;
>  }
>  
> -static inline int memcg_charge_slab(struct kmem_cache *s, gfp_t gfp, int order)
> +static inline int memcg_charge_slab(struct page *page, gfp_t gfp, int order,
> +				    struct kmem_cache *s)
>  {
>  	return 0;
>  }
>  
> -static inline void memcg_uncharge_slab(struct kmem_cache *s, int order)
> -{
> -}
> -
>  static inline void slab_init_memcg_params(struct kmem_cache *s)
>  {
>  }
> diff --git a/mm/slub.c b/mm/slub.c
> index 2b40c186e941..a05388e8a80f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1330,16 +1330,15 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
>  
>  	flags |= __GFP_NOTRACK;
>  
> -	if (memcg_charge_slab(s, flags, order))
> -		return NULL;
> -
>  	if (node == NUMA_NO_NODE)
>  		page = alloc_pages(flags, order);
>  	else
>  		page = __alloc_pages_node(node, flags, order);
>  
> -	if (!page)
> -		memcg_uncharge_slab(s, order);
> +	if (page && memcg_charge_slab(page, flags, order, s)) {
> +		__free_pages(page, order);
> +		page = NULL;
> +	}
>  
>  	return page;
>  }
> @@ -1478,8 +1477,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  	page_mapcount_reset(page);
>  	if (current->reclaim_state)
>  		current->reclaim_state->reclaimed_slab += pages;
> -	__free_pages(page, order);
> -	memcg_uncharge_slab(s, order);
> +	__free_kmem_pages(page, order);
>  }
>  
>  #define need_reserve_slab_rcu						\
> -- 
> 2.1.4
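
[Editor's sketch] The NULL-means-current-task convention that __memcg_kmem_charge() gains here can be modelled with the same kind of toy counters. All names below are hypothetical; "current's" memcg is just a global standing in for get_mem_cgroup_from_mm(current->mm).

```c
#include <stddef.h>

struct memcg {
	long usage;
	long limit;
};

/* Stand-in for the memcg of the current task. */
static struct memcg current_memcg = { 0, 8 };

static int try_charge(struct memcg *memcg, long nr_pages)
{
	if (memcg->usage + nr_pages > memcg->limit)
		return -1;
	memcg->usage += nr_pages;
	return 0;
}

/* Mirrors the new __memcg_kmem_charge() contract: a NULL @memcg means
 * "charge the current task's memcg" (the page allocator path), while a
 * non-NULL @memcg is charged directly (the slab path, which passes
 * s->memcg_params.memcg). */
static int kmem_charge(struct memcg *memcg, int order)
{
	if (!memcg)
		memcg = &current_memcg;
	return try_charge(memcg, 1L << order);
}
```

One entry point thus serves both callers, which is exactly the unification the patch is after.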

-- 
Michal Hocko
SUSE Labs

* Re: [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-04 22:21 ` [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem Vladimir Davydov
@ 2015-10-08 15:13   ` Michal Hocko
  2015-10-16 13:17   ` Kirill A. Shutemov
  1 sibling, 0 replies; 17+ messages in thread
From: Michal Hocko @ 2015-10-08 15:13 UTC (permalink / raw)
  To: Vladimir Davydov; +Cc: Andrew Morton, Johannes Weiner, linux-mm, linux-kernel

On Mon 05-10-15 01:21:43, Vladimir Davydov wrote:
> Before the previous patch, __mem_cgroup_from_kmem had to handle two
> types of kmem - slab pages and pages allocated with alloc_kmem_pages -
> differently, because slab pages did not store information about owner
> memcg in the page struct. Now we can unify it. Since this makes the
> function tiny, we can fold it into mem_cgroup_from_kmem.
> 
> Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  include/linux/memcontrol.h |  7 ++++---
>  mm/memcontrol.c            | 18 ------------------
>  2 files changed, 4 insertions(+), 21 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 8a9b7a798f14..0e2e039609d1 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -769,8 +769,6 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
>  struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep);
>  void __memcg_kmem_put_cache(struct kmem_cache *cachep);
>  
> -struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr);
> -
>  static inline bool __memcg_kmem_bypass(gfp_t gfp)
>  {
>  	if (!memcg_kmem_enabled())
> @@ -832,9 +830,12 @@ static __always_inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
>  
>  static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
>  {
> +	struct page *page;
> +
>  	if (!memcg_kmem_enabled())
>  		return NULL;
> -	return __mem_cgroup_from_kmem(ptr);
> +	page = virt_to_head_page(ptr);
> +	return page->mem_cgroup;
>  }
>  #else
>  #define for_each_memcg_cache_index(_idx)	\
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 1d6413e0dd29..6329e6182d89 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2430,24 +2430,6 @@ void __memcg_kmem_uncharge(struct page *page, int order)
>  	page->mem_cgroup = NULL;
>  	css_put_many(&memcg->css, nr_pages);
>  }
> -
> -struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr)
> -{
> -	struct mem_cgroup *memcg = NULL;
> -	struct kmem_cache *cachep;
> -	struct page *page;
> -
> -	page = virt_to_head_page(ptr);
> -	if (PageSlab(page)) {
> -		cachep = page->slab_cache;
> -		if (!is_root_cache(cachep))
> -			memcg = cachep->memcg_params.memcg;
> -	} else
> -		/* page allocated by alloc_kmem_pages */
> -		memcg = page->mem_cgroup;
> -
> -	return memcg;
> -}
>  #endif /* CONFIG_MEMCG_KMEM */
>  
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -- 
> 2.1.4
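
[Editor's sketch] The folded-in lookup works because, after patch 2, every charged kmem page records its owner in page->mem_cgroup, so finding it is pure pointer arithmetic. A user-space analogue (hypothetical: here the owner pointer is stored at the start of a page-aligned block, whereas the kernel keeps it in the separate struct page reached via virt_to_head_page()):

```c
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

struct memcg {
	const char *name;
};

/* Toy "struct page": the first bytes of a page-aligned block hold the
 * owning memcg, standing in for page->mem_cgroup. */
struct page_hdr {
	struct memcg *mem_cgroup;
};

static void *page_alloc(struct memcg *memcg)
{
	void *page = aligned_alloc(PAGE_SIZE, PAGE_SIZE);

	if (page)
		((struct page_hdr *)page)->mem_cgroup = memcg;
	return page;
}

/* Round an interior pointer down to its page boundary and read the
 * owner: the analogue of virt_to_head_page(ptr)->mem_cgroup. */
static struct memcg *memcg_from_kmem(void *ptr)
{
	uintptr_t base = (uintptr_t)ptr & ~(uintptr_t)(PAGE_SIZE - 1);

	return ((struct page_hdr *)base)->mem_cgroup;
}
```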

-- 
Michal Hocko
SUSE Labs

* Re: [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-04 22:21 ` [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem Vladimir Davydov
  2015-10-08 15:13   ` Michal Hocko
@ 2015-10-16 13:17   ` Kirill A. Shutemov
  2015-10-16 13:51     ` Vladimir Davydov
  2015-10-16 14:36     ` Michal Hocko
  1 sibling, 2 replies; 17+ messages in thread
From: Kirill A. Shutemov @ 2015-10-16 13:17 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Andrew Morton, Johannes Weiner, Michal Hocko, linux-mm, linux-kernel

On Mon, Oct 05, 2015 at 01:21:43AM +0300, Vladimir Davydov wrote:
> Before the previous patch, __mem_cgroup_from_kmem had to handle two
> types of kmem - slab pages and pages allocated with alloc_kmem_pages -
> differently, because slab pages did not store information about owner
> memcg in the page struct. Now we can unify it. Since this makes the
> function tiny, we can fold it into mem_cgroup_from_kmem.
> 
> Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
> ---
>  include/linux/memcontrol.h |  7 ++++---
>  mm/memcontrol.c            | 18 ------------------
>  2 files changed, 4 insertions(+), 21 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 8a9b7a798f14..0e2e039609d1 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -769,8 +769,6 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
>  struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep);
>  void __memcg_kmem_put_cache(struct kmem_cache *cachep);
>  
> -struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr);
> -
>  static inline bool __memcg_kmem_bypass(gfp_t gfp)
>  {
>  	if (!memcg_kmem_enabled())
> @@ -832,9 +830,12 @@ static __always_inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
>  
>  static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
>  {
> +	struct page *page;
> +
>  	if (!memcg_kmem_enabled())
>  		return NULL;
> -	return __mem_cgroup_from_kmem(ptr);
> +	page = virt_to_head_page(ptr);
> +	return page->mem_cgroup;
>  }

virt_to_head_page() is defined in <linux/mm.h> but you don't include it,
and the commit breaks the build for me (on v4.3-rc5-mmotm-2015-10-15-15-20).

  CC      arch/x86/kernel/asm-offsets.s
In file included from /home/kas/linux/mm/include/linux/swap.h:8:0,
                 from /home/kas/linux/mm/include/linux/suspend.h:4,
                 from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
/home/kas/linux/mm/include/linux/memcontrol.h: In function ‘mem_cgroup_from_kmem’:
/home/kas/linux/mm/include/linux/memcontrol.h:841:9: error: implicit declaration of function ‘virt_to_head_page’ [-Werror=implicit-function-declaration]
  page = virt_to_head_page(ptr);
         ^
/home/kas/linux/mm/include/linux/memcontrol.h:841:7: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
  page = virt_to_head_page(ptr);
       ^
In file included from /home/kas/linux/mm/include/linux/suspend.h:8:0,
                 from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
/home/kas/linux/mm/include/linux/mm.h: At top level:
/home/kas/linux/mm/include/linux/mm.h:452:28: error: conflicting types for ‘virt_to_head_page’
 static inline struct page *virt_to_head_page(const void *x)
                            ^
In file included from /home/kas/linux/mm/include/linux/swap.h:8:0,
                 from /home/kas/linux/mm/include/linux/suspend.h:4,
                 from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
/home/kas/linux/mm/include/linux/memcontrol.h:841:9: note: previous implicit declaration of ‘virt_to_head_page’ was here
  page = virt_to_head_page(ptr);
         ^
cc1: some warnings being treated as errors

The patch below fixes it for me (and for allmodconfig on x86-64), but I'm not
sure if it has any side effects on other configurations.

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 47677acb4516..e8e52e502c20 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -26,6 +26,7 @@
 #include <linux/page_counter.h>
 #include <linux/vmpressure.h>
 #include <linux/eventfd.h>
+#include <linux/mm.h>
 #include <linux/mmzone.h>
 #include <linux/writeback.h>
 
-- 
 Kirill A. Shutemov

* Re: [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-16 13:17   ` Kirill A. Shutemov
@ 2015-10-16 13:51     ` Vladimir Davydov
  2015-10-16 22:12       ` Hugh Dickins
  2015-10-16 14:36     ` Michal Hocko
  1 sibling, 1 reply; 17+ messages in thread
From: Vladimir Davydov @ 2015-10-16 13:51 UTC (permalink / raw)
  To: Kirill A. Shutemov, Andrew Morton
  Cc: Johannes Weiner, Michal Hocko, linux-mm, linux-kernel

On Fri, Oct 16, 2015 at 04:17:26PM +0300, Kirill A. Shutemov wrote:
> On Mon, Oct 05, 2015 at 01:21:43AM +0300, Vladimir Davydov wrote:
> > Before the previous patch, __mem_cgroup_from_kmem had to handle two
> > types of kmem - slab pages and pages allocated with alloc_kmem_pages -
> > differently, because slab pages did not store information about owner
> > memcg in the page struct. Now we can unify it. Since this makes the
> > function tiny, we can fold it into mem_cgroup_from_kmem.
> > 
> > Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
> > ---
> >  include/linux/memcontrol.h |  7 ++++---
> >  mm/memcontrol.c            | 18 ------------------
> >  2 files changed, 4 insertions(+), 21 deletions(-)
> > 
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 8a9b7a798f14..0e2e039609d1 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -769,8 +769,6 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
> >  struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep);
> >  void __memcg_kmem_put_cache(struct kmem_cache *cachep);
> >  
> > -struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr);
> > -
> >  static inline bool __memcg_kmem_bypass(gfp_t gfp)
> >  {
> >  	if (!memcg_kmem_enabled())
> > @@ -832,9 +830,12 @@ static __always_inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
> >  
> >  static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
> >  {
> > +	struct page *page;
> > +
> >  	if (!memcg_kmem_enabled())
> >  		return NULL;
> > -	return __mem_cgroup_from_kmem(ptr);
> > +	page = virt_to_head_page(ptr);
> > +	return page->mem_cgroup;
> >  }
> 
> virt_to_head_page() is defined in <linux/mm.h> but you don't include it,
> and the commit breaks the build for me (on v4.3-rc5-mmotm-2015-10-15-15-20).
> 
>   CC      arch/x86/kernel/asm-offsets.s
> In file included from /home/kas/linux/mm/include/linux/swap.h:8:0,
>                  from /home/kas/linux/mm/include/linux/suspend.h:4,
>                  from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
> /home/kas/linux/mm/include/linux/memcontrol.h: In function ‘mem_cgroup_from_kmem’:
> /home/kas/linux/mm/include/linux/memcontrol.h:841:9: error: implicit declaration of function ‘virt_to_head_page’ [-Werror=implicit-function-declaration]
>   page = virt_to_head_page(ptr);
>          ^
> /home/kas/linux/mm/include/linux/memcontrol.h:841:7: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
>   page = virt_to_head_page(ptr);
>        ^
> In file included from /home/kas/linux/mm/include/linux/suspend.h:8:0,
>                  from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
> /home/kas/linux/mm/include/linux/mm.h: At top level:
> /home/kas/linux/mm/include/linux/mm.h:452:28: error: conflicting types for ‘virt_to_head_page’
>  static inline struct page *virt_to_head_page(const void *x)
>                             ^
> In file included from /home/kas/linux/mm/include/linux/swap.h:8:0,
>                  from /home/kas/linux/mm/include/linux/suspend.h:4,
>                  from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
> /home/kas/linux/mm/include/linux/memcontrol.h:841:9: note: previous implicit declaration of ‘virt_to_head_page’ was here
>   page = virt_to_head_page(ptr);
>          ^
> cc1: some warnings being treated as errors

Oops, in my config I have CONFIG_CGROUP_WRITEBACK enabled, which results
in mm.h being included into memcontrol.h indirectly:

 linux/memcontrol.h
  linux/writeback.h
   linux/bio.h
    linux/highmem.h
     linux/mm.h

That's why I didn't notice this. Sorry about that.

> 
> The patch below fixes it for me (and for allmodconfig on x86-64), but I'm not
> sure if it has any side effects on other configurations.

It should work OK with any config, otherwise CONFIG_CGROUP_WRITEBACK
would be broken too.

Andrew, could you please merge the fix by Kirill into
memcg-simplify-and-inline-__mem_cgroup_from_kmem.patch

Thanks,
Vladimir

> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 47677acb4516..e8e52e502c20 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -26,6 +26,7 @@
>  #include <linux/page_counter.h>
>  #include <linux/vmpressure.h>
>  #include <linux/eventfd.h>
> +#include <linux/mm.h>
>  #include <linux/mmzone.h>
>  #include <linux/writeback.h>
>  

* Re: [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-16 13:17   ` Kirill A. Shutemov
  2015-10-16 13:51     ` Vladimir Davydov
@ 2015-10-16 14:36     ` Michal Hocko
  1 sibling, 0 replies; 17+ messages in thread
From: Michal Hocko @ 2015-10-16 14:36 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Vladimir Davydov, Andrew Morton, Johannes Weiner, linux-mm, linux-kernel

On Fri 16-10-15 16:17:26, Kirill A. Shutemov wrote:
[...]

I've just encountered the same while updating the mmotm git tree. Thanks for
the fix. All the other configs I am testing are good as well.

> virt_to_head_page() is defined in <linux/mm.h> but you don't include it,
> and the commit breaks the build for me (on v4.3-rc5-mmotm-2015-10-15-15-20).
> 
>   CC      arch/x86/kernel/asm-offsets.s
> In file included from /home/kas/linux/mm/include/linux/swap.h:8:0,
>                  from /home/kas/linux/mm/include/linux/suspend.h:4,
>                  from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
> /home/kas/linux/mm/include/linux/memcontrol.h: In function ‘mem_cgroup_from_kmem’:
> /home/kas/linux/mm/include/linux/memcontrol.h:841:9: error: implicit declaration of function ‘virt_to_head_page’ [-Werror=implicit-function-declaration]
>   page = virt_to_head_page(ptr);
>          ^
> /home/kas/linux/mm/include/linux/memcontrol.h:841:7: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
>   page = virt_to_head_page(ptr);
>        ^
> In file included from /home/kas/linux/mm/include/linux/suspend.h:8:0,
>                  from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
> /home/kas/linux/mm/include/linux/mm.h: At top level:
> /home/kas/linux/mm/include/linux/mm.h:452:28: error: conflicting types for ‘virt_to_head_page’
>  static inline struct page *virt_to_head_page(const void *x)
>                             ^
> In file included from /home/kas/linux/mm/include/linux/swap.h:8:0,
>                  from /home/kas/linux/mm/include/linux/suspend.h:4,
>                  from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
> /home/kas/linux/mm/include/linux/memcontrol.h:841:9: note: previous implicit declaration of ‘virt_to_head_page’ was here
>   page = virt_to_head_page(ptr);
>          ^
> cc1: some warnings being treated as errors
> 
> The patch below fixes it for me (and for allmodconfig on x86-64), but I'm not
> sure if it has any side effects on other configurations.
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 47677acb4516..e8e52e502c20 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -26,6 +26,7 @@
>  #include <linux/page_counter.h>
>  #include <linux/vmpressure.h>
>  #include <linux/eventfd.h>
> +#include <linux/mm.h>
>  #include <linux/mmzone.h>
>  #include <linux/writeback.h>
>  
> -- 
>  Kirill A. Shutemov
> 

-- 
Michal Hocko
SUSE Labs

* Re: [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-16 13:51     ` Vladimir Davydov
@ 2015-10-16 22:12       ` Hugh Dickins
  2015-10-16 22:19         ` Johannes Weiner
                           ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Hugh Dickins @ 2015-10-16 22:12 UTC (permalink / raw)
  To: Vladimir Davydov
  Cc: Kirill A. Shutemov, Andrew Morton, Johannes Weiner, Michal Hocko,
	Arnd Bergmann, linux-mm, linux-kernel

On Fri, 16 Oct 2015, Vladimir Davydov wrote:
> On Fri, Oct 16, 2015 at 04:17:26PM +0300, Kirill A. Shutemov wrote:
> > On Mon, Oct 05, 2015 at 01:21:43AM +0300, Vladimir Davydov wrote:
> > > Before the previous patch, __mem_cgroup_from_kmem had to handle two
> > > types of kmem - slab pages and pages allocated with alloc_kmem_pages -
> > > differently, because slab pages did not store information about owner
> > > memcg in the page struct. Now we can unify it. Since this makes the
> > > function tiny, we can fold it into mem_cgroup_from_kmem.
> > > 
> > > Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
> > > ---
> > >  include/linux/memcontrol.h |  7 ++++---
> > >  mm/memcontrol.c            | 18 ------------------
> > >  2 files changed, 4 insertions(+), 21 deletions(-)
> > > 
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index 8a9b7a798f14..0e2e039609d1 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -769,8 +769,6 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
> > >  struct kmem_cache *__memcg_kmem_get_cache(struct kmem_cache *cachep);
> > >  void __memcg_kmem_put_cache(struct kmem_cache *cachep);
> > >  
> > > -struct mem_cgroup *__mem_cgroup_from_kmem(void *ptr);
> > > -
> > >  static inline bool __memcg_kmem_bypass(gfp_t gfp)
> > >  {
> > >  	if (!memcg_kmem_enabled())
> > > @@ -832,9 +830,12 @@ static __always_inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
> > >  
> > >  static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
> > >  {
> > > +	struct page *page;
> > > +
> > >  	if (!memcg_kmem_enabled())
> > >  		return NULL;
> > > -	return __mem_cgroup_from_kmem(ptr);
> > > +	page = virt_to_head_page(ptr);
> > > +	return page->mem_cgroup;
> > >  }
> > 
> > virt_to_head_page() is defined in <linux/mm.h> but you don't include it,
> > and the commit breaks the build for me (on v4.3-rc5-mmotm-2015-10-15-15-20).
> > 
> >   CC      arch/x86/kernel/asm-offsets.s
> > In file included from /home/kas/linux/mm/include/linux/swap.h:8:0,
> >                  from /home/kas/linux/mm/include/linux/suspend.h:4,
> >                  from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
> > /home/kas/linux/mm/include/linux/memcontrol.h: In function ‘mem_cgroup_from_kmem’:
> > /home/kas/linux/mm/include/linux/memcontrol.h:841:9: error: implicit declaration of function ‘virt_to_head_page’ [-Werror=implicit-function-declaration]
> >   page = virt_to_head_page(ptr);
> >          ^
> > /home/kas/linux/mm/include/linux/memcontrol.h:841:7: warning: assignment makes pointer from integer without a cast [-Wint-conversion]
> >   page = virt_to_head_page(ptr);
> >        ^
> > In file included from /home/kas/linux/mm/include/linux/suspend.h:8:0,
> >                  from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
> > /home/kas/linux/mm/include/linux/mm.h: At top level:
> > /home/kas/linux/mm/include/linux/mm.h:452:28: error: conflicting types for ‘virt_to_head_page’
> >  static inline struct page *virt_to_head_page(const void *x)
> >                             ^
> > In file included from /home/kas/linux/mm/include/linux/swap.h:8:0,
> >                  from /home/kas/linux/mm/include/linux/suspend.h:4,
> >                  from /home/kas/linux/mm/arch/x86/kernel/asm-offsets.c:12:
> > /home/kas/linux/mm/include/linux/memcontrol.h:841:9: note: previous implicit declaration of ‘virt_to_head_page’ was here
> >   page = virt_to_head_page(ptr);
> >          ^
> > cc1: some warnings being treated as errors
> 
> Oops, in my config I have CONFIG_CGROUP_WRITEBACK enabled, which results
> in mm.h being included into memcontrol.h indirectly:
> 
>  linux/memcontrol.h
>   linux/writeback.h
>    linux/bio.h
>     linux/highmem.h
>      linux/mm.h
> 
> That's why I didn't notice this. Sorry about that.
> 
> > 
> > The patch below fixes it for me (and for allmodconfig on x86-64), but I'm not
> > sure if it has any side effects on other configurations.
> 
> It should work OK with any config, otherwise CONFIG_CGROUP_WRITEBACK
> would be broken too.
> 
> Andrew, could you please merge the fix by Kirill into
> memcg-simplify-and-inline-__mem_cgroup_from_kmem.patch
> 
> Thanks,
> Vladimir
> 
> > 
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 47677acb4516..e8e52e502c20 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -26,6 +26,7 @@
> >  #include <linux/page_counter.h>
> >  #include <linux/vmpressure.h>
> >  #include <linux/eventfd.h>
> > +#include <linux/mm.h>
> >  #include <linux/mmzone.h>
> >  #include <linux/writeback.h>
> >  

Are you expecting to use mem_cgroup_from_kmem() from other places
in future?  Seems possible; but at present it's called from only
one place, and (given how memcontrol.h has somehow managed to avoid
including mm.h all these years), I thought it would be nice to avoid
it for just this; and fixed my build with the patch below last night.
Whatever you all think best: just wanted to point out an alternative.

Hugh

--- 4035m/include/linux/memcontrol.h	2015-10-15 15:26:59.503568644 -0700
+++ 4035M/include/linux/memcontrol.h	2015-10-16 03:09:10.000000000 -0700
@@ -831,16 +831,6 @@ static __always_inline void memcg_kmem_p
 	if (memcg_kmem_enabled())
 		__memcg_kmem_put_cache(cachep);
 }
-
-static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
-{
-	struct page *page;
-
-	if (!memcg_kmem_enabled())
-		return NULL;
-	page = virt_to_head_page(ptr);
-	return page->mem_cgroup;
-}
 #else
 #define for_each_memcg_cache_index(_idx)	\
 	for (; NULL; )
@@ -886,11 +876,5 @@ memcg_kmem_get_cache(struct kmem_cache *
 static inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
 {
 }
-
-static inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
-{
-	return NULL;
-}
 #endif /* CONFIG_MEMCG_KMEM */
 #endif /* _LINUX_MEMCONTROL_H */
-
--- 4035m/mm/list_lru.c	2015-10-15 15:26:59.835572128 -0700
+++ 4035M/mm/list_lru.c	2015-10-16 03:11:51.000000000 -0700
@@ -63,6 +63,16 @@ list_lru_from_memcg_idx(struct list_lru_
 	return &nlru->lru;
 }
 
+static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
+{
+	struct page *page;
+
+	if (!memcg_kmem_enabled())
+		return NULL;
+	page = virt_to_head_page(ptr);
+	return page->mem_cgroup;
+}
+
 static inline struct list_lru_one *
 list_lru_from_kmem(struct list_lru_node *nlru, void *ptr)
 {


* Re: [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-16 22:12       ` Hugh Dickins
@ 2015-10-16 22:19         ` Johannes Weiner
  2015-10-16 22:21         ` Andrew Morton
                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 17+ messages in thread
From: Johannes Weiner @ 2015-10-16 22:19 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Vladimir Davydov, Kirill A. Shutemov, Andrew Morton,
	Michal Hocko, Arnd Bergmann, linux-mm, linux-kernel

On Fri, Oct 16, 2015 at 03:12:23PM -0700, Hugh Dickins wrote:
> --- 4035m/mm/list_lru.c	2015-10-15 15:26:59.835572128 -0700
> +++ 4035M/mm/list_lru.c	2015-10-16 03:11:51.000000000 -0700
> @@ -63,6 +63,16 @@ list_lru_from_memcg_idx(struct list_lru_
>  	return &nlru->lru;
>  }
>  
> +static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
> +{
> +	struct page *page;
> +
> +	if (!memcg_kmem_enabled())
> +		return NULL;
> +	page = virt_to_head_page(ptr);
> +	return page->mem_cgroup;
> +}
> +
>  static inline struct list_lru_one *
>  list_lru_from_kmem(struct list_lru_node *nlru, void *ptr)
>  {

I like this better than the mm.h include, too.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org


* Re: [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-16 22:12       ` Hugh Dickins
  2015-10-16 22:19         ` Johannes Weiner
@ 2015-10-16 22:21         ` Andrew Morton
  2015-10-17 14:58         ` Vladimir Davydov
  2015-10-19  8:08         ` Michal Hocko
  3 siblings, 0 replies; 17+ messages in thread
From: Andrew Morton @ 2015-10-16 22:21 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Vladimir Davydov, Kirill A. Shutemov, Johannes Weiner,
	Michal Hocko, Arnd Bergmann, linux-mm, linux-kernel

On Fri, 16 Oct 2015 15:12:23 -0700 (PDT) Hugh Dickins <hughd@google.com> wrote:

> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -26,6 +26,7 @@
> > >  #include <linux/page_counter.h>
> > >  #include <linux/vmpressure.h>
> > >  #include <linux/eventfd.h>
> > > +#include <linux/mm.h>
> > >  #include <linux/mmzone.h>
> > >  #include <linux/writeback.h>
> > >  
> 
> Are you expecting to use mem_cgroup_from_kmem() from other places
> in future?  Seems possible; but at present it's called from only
> one place, and (given how memcontrol.h has somehow managed to avoid
> including mm.h all these years), I thought it would be nice to avoid
> it for just this;

Yes, I was wondering about that.  I figured that anything which
includes memcontrol.h is already including mm.h, and gcc is pretty
efficient at handling the #ifdef FOO_H_INCLUDED guards.

> and fixed my build with the patch below last night.
> Whatever you all think best: just wanted to point out an alternative.

Yes, that's neater - let's go that way.



* Re: [PATCH 2/3] memcg: unify slab and other kmem pages charging
  2015-10-04 22:21 ` [PATCH 2/3] memcg: unify slab and other kmem pages charging Vladimir Davydov
  2015-10-08 15:10   ` Michal Hocko
@ 2015-10-17  0:19   ` Johannes Weiner
  2015-10-17 15:05     ` Vladimir Davydov
  2015-10-19  8:09     ` Michal Hocko
  1 sibling, 2 replies; 17+ messages in thread
From: Johannes Weiner @ 2015-10-17  0:19 UTC (permalink / raw)
  To: Vladimir Davydov; +Cc: Andrew Morton, Michal Hocko, linux-mm, linux-kernel

Hi Vladimir,

I'm late, but fwiw these patches are great simplifications. Thanks!

One nit below:

On Mon, Oct 05, 2015 at 01:21:42AM +0300, Vladimir Davydov wrote:
> @@ -2404,36 +2376,59 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep)
>  		css_put(&cachep->memcg_params.memcg->css);
>  }
>  
> -int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
> +/*
> + * If @memcg != NULL, charge to @memcg, otherwise charge to the memcg the
> + * current task belongs to.
> + */
> +int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
> +			struct mem_cgroup *memcg)
>  {
> -	struct mem_cgroup *memcg;
> -	int ret;
> -
> -	memcg = get_mem_cgroup_from_mm(current->mm);
> +	struct page_counter *counter;
> +	unsigned int nr_pages = 1 << order;
> +	bool put = false;
> +	int ret = 0;
>  
> -	if (!memcg_kmem_is_active(memcg)) {
> -		css_put(&memcg->css);
> -		return 0;
> +	if (!memcg) {
> +		memcg = get_mem_cgroup_from_mm(current->mm);
> +		put = true;

I think it'd be better to have an outer function than a magic
parameter for the memcg lookup. Could we fold this in there?

---

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 include/linux/memcontrol.h |  7 ++++---
 mm/memcontrol.c            | 36 ++++++++++++++++++------------------
 mm/slab.h                  |  4 ++--
 3 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 47677ac..730a65d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -756,8 +756,9 @@ static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg)
  * conditions, but because they are pretty simple, they are expected to be
  * fast.
  */
-int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
-			struct mem_cgroup *memcg);
+int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
+			      struct mem_cgroup *memcg);
+int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
 void __memcg_kmem_uncharge(struct page *page, int order);
 
 /*
@@ -797,7 +798,7 @@ static __always_inline int memcg_kmem_charge(struct page *page,
 {
 	if (__memcg_kmem_bypass(gfp))
 		return 0;
-	return __memcg_kmem_charge(page, gfp, order, NULL);
+	return __memcg_kmem_charge(page, gfp, order);
 }
 
 /**
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 15db655..6fc9959 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2378,39 +2378,39 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep)
 		css_put(&cachep->memcg_params.memcg->css);
 }
 
-/*
- * If @memcg != NULL, charge to @memcg, otherwise charge to the memcg the
- * current task belongs to.
- */
-int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
-			struct mem_cgroup *memcg)
+int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
+			      struct mem_cgroup *memcg)
 {
-	struct page_counter *counter;
 	unsigned int nr_pages = 1 << order;
-	bool put = false;
+	struct page_counter *counter;
 	int ret = 0;
 
-	if (!memcg) {
-		memcg = get_mem_cgroup_from_mm(current->mm);
-		put = true;
-	}
 	if (!memcg_kmem_is_active(memcg))
-		goto out;
+		return 0;
 
 	ret = page_counter_try_charge(&memcg->kmem, nr_pages, &counter);
 	if (ret)
-		goto out;
+		return ret;
 
 	ret = try_charge(memcg, gfp, nr_pages);
 	if (ret) {
 		page_counter_uncharge(&memcg->kmem, nr_pages);
-		goto out;
+		return ret;
 	}
 
 	page->mem_cgroup = memcg;
-out:
-	if (put)
-		css_put(&memcg->css);
+
+	return 0;
+}
+
+int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
+{
+	struct mem_cgroup *memcg;
+	int ret;
+
+	memcg = get_mem_cgroup_from_mm(current->mm);
+	ret = __memcg_kmem_charge_memcg(page, gfp, order, memcg);
+	css_put(&memcg->css);
 	return ret;
 }
 
diff --git a/mm/slab.h b/mm/slab.h
index 3d667a4..27492eb 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -244,8 +244,8 @@ static __always_inline int memcg_charge_slab(struct page *page,
 		return 0;
 	if (is_root_cache(s))
 		return 0;
-	return __memcg_kmem_charge(page, gfp, order,
-				   s->memcg_params.memcg);
+	return __memcg_kmem_charge_memcg(page, gfp, order,
+					 s->memcg_params.memcg);
 }
 
 extern void slab_init_memcg_params(struct kmem_cache *);
-- 
2.6.1



* Re: [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-16 22:12       ` Hugh Dickins
  2015-10-16 22:19         ` Johannes Weiner
  2015-10-16 22:21         ` Andrew Morton
@ 2015-10-17 14:58         ` Vladimir Davydov
  2015-10-19  8:08         ` Michal Hocko
  3 siblings, 0 replies; 17+ messages in thread
From: Vladimir Davydov @ 2015-10-17 14:58 UTC (permalink / raw)
  To: Hugh Dickins, Andrew Morton
  Cc: Kirill A. Shutemov, Johannes Weiner, Michal Hocko, Arnd Bergmann,
	linux-mm, linux-kernel

On Fri, Oct 16, 2015 at 03:12:23PM -0700, Hugh Dickins wrote:
...
> Are you expecting to use mem_cgroup_from_kmem() from other places
> in future?  Seems possible; but at present it's called from only

Not in the near future. At least, currently I can't think of any other
use for it except list_lru_from_kmem.

> one place, and (given how memcontrol.h has somehow managed to avoid
> including mm.h all these years), I thought it would be nice to avoid
> it for just this; and fixed my build with the patch below last night.
> Whatever you all think best: just wanted to point out an alternative.

Makes sense, thanks!

I would even inline mem_cgroup_from_kmem into list_lru_from_kmem:

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 47677acb4516..2077b9bb4883 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -831,16 +831,6 @@ static __always_inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
 	if (memcg_kmem_enabled())
 		__memcg_kmem_put_cache(cachep);
 }
-
-static __always_inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
-{
-	struct page *page;
-
-	if (!memcg_kmem_enabled())
-		return NULL;
-	page = virt_to_head_page(ptr);
-	return page->mem_cgroup;
-}
 #else
 #define for_each_memcg_cache_index(_idx)	\
 	for (; NULL; )
@@ -886,11 +876,6 @@ memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
 static inline void memcg_kmem_put_cache(struct kmem_cache *cachep)
 {
 }
-
-static inline struct mem_cgroup *mem_cgroup_from_kmem(void *ptr)
-{
-	return NULL;
-}
 #endif /* CONFIG_MEMCG_KMEM */
 #endif /* _LINUX_MEMCONTROL_H */
 
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 28237476b055..00d0a70af70a 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -68,10 +68,10 @@ list_lru_from_kmem(struct list_lru_node *nlru, void *ptr)
 {
 	struct mem_cgroup *memcg;
 
-	if (!nlru->memcg_lrus)
+	if (!memcg_kmem_enabled() || !nlru->memcg_lrus)
 		return &nlru->lru;
 
-	memcg = mem_cgroup_from_kmem(ptr);
+	memcg = virt_to_head_page(ptr)->mem_cgroup;
 	if (!memcg)
 		return &nlru->lru;
 



* Re: [PATCH 2/3] memcg: unify slab and other kmem pages charging
  2015-10-17  0:19   ` Johannes Weiner
@ 2015-10-17 15:05     ` Vladimir Davydov
  2015-10-19  8:09     ` Michal Hocko
  1 sibling, 0 replies; 17+ messages in thread
From: Vladimir Davydov @ 2015-10-17 15:05 UTC (permalink / raw)
  To: Johannes Weiner, Andrew Morton; +Cc: Michal Hocko, linux-mm, linux-kernel

On Fri, Oct 16, 2015 at 05:19:32PM -0700, Johannes Weiner wrote:
...
> I think it'd be better to have an outer function than a magic
> parameter for the memcg lookup. Could we fold this in there?

Yeah, that looks neater. Thanks!

Andrew, could you please fold this one too?

> 
> ---
> 
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  include/linux/memcontrol.h |  7 ++++---
>  mm/memcontrol.c            | 36 ++++++++++++++++++------------------
>  mm/slab.h                  |  4 ++--
>  3 files changed, 24 insertions(+), 23 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 47677ac..730a65d 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -756,8 +756,9 @@ static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg)
>   * conditions, but because they are pretty simple, they are expected to be
>   * fast.
>   */
> -int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
> -			struct mem_cgroup *memcg);
> +int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
> +			      struct mem_cgroup *memcg);
> +int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
>  void __memcg_kmem_uncharge(struct page *page, int order);
>  
>  /*
> @@ -797,7 +798,7 @@ static __always_inline int memcg_kmem_charge(struct page *page,
>  {
>  	if (__memcg_kmem_bypass(gfp))
>  		return 0;
> -	return __memcg_kmem_charge(page, gfp, order, NULL);
> +	return __memcg_kmem_charge(page, gfp, order);
>  }
>  
>  /**
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 15db655..6fc9959 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2378,39 +2378,39 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep)
>  		css_put(&cachep->memcg_params.memcg->css);
>  }
>  
> -/*
> - * If @memcg != NULL, charge to @memcg, otherwise charge to the memcg the
> - * current task belongs to.
> - */
> -int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
> -			struct mem_cgroup *memcg)
> +int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
> +			      struct mem_cgroup *memcg)
>  {
> -	struct page_counter *counter;
>  	unsigned int nr_pages = 1 << order;
> -	bool put = false;
> +	struct page_counter *counter;
>  	int ret = 0;
>  
> -	if (!memcg) {
> -		memcg = get_mem_cgroup_from_mm(current->mm);
> -		put = true;
> -	}
>  	if (!memcg_kmem_is_active(memcg))
> -		goto out;
> +		return 0;
>  
>  	ret = page_counter_try_charge(&memcg->kmem, nr_pages, &counter);
>  	if (ret)
> -		goto out;
> +		return ret;
>  
>  	ret = try_charge(memcg, gfp, nr_pages);
>  	if (ret) {
>  		page_counter_uncharge(&memcg->kmem, nr_pages);
> -		goto out;
> +		return ret;
>  	}
>  
>  	page->mem_cgroup = memcg;
> -out:
> -	if (put)
> -		css_put(&memcg->css);
> +
> +	return 0;
> +}
> +
> +int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
> +{
> +	struct mem_cgroup *memcg;
> +	int ret;
> +
> +	memcg = get_mem_cgroup_from_mm(current->mm);
> +	ret = __memcg_kmem_charge_memcg(page, gfp, order, memcg);
> +	css_put(&memcg->css);
>  	return ret;
>  }
>  
> diff --git a/mm/slab.h b/mm/slab.h
> index 3d667a4..27492eb 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -244,8 +244,8 @@ static __always_inline int memcg_charge_slab(struct page *page,
>  		return 0;
>  	if (is_root_cache(s))
>  		return 0;
> -	return __memcg_kmem_charge(page, gfp, order,
> -				   s->memcg_params.memcg);
> +	return __memcg_kmem_charge_memcg(page, gfp, order,
> +					 s->memcg_params.memcg);
>  }
>  
>  extern void slab_init_memcg_params(struct kmem_cache *);



* Re: [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem
  2015-10-16 22:12       ` Hugh Dickins
                           ` (2 preceding siblings ...)
  2015-10-17 14:58         ` Vladimir Davydov
@ 2015-10-19  8:08         ` Michal Hocko
  3 siblings, 0 replies; 17+ messages in thread
From: Michal Hocko @ 2015-10-19  8:08 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Vladimir Davydov, Kirill A. Shutemov, Andrew Morton,
	Johannes Weiner, Arnd Bergmann, linux-mm, linux-kernel

On Fri 16-10-15 15:12:23, Hugh Dickins wrote:
[...]
> Are you expecting to use mem_cgroup_from_kmem() from other places
> in future?  Seems possible; but at present it's called from only
> one place, and (given how memcontrol.h has somehow managed to avoid
> including mm.h all these years), I thought it would be nice to avoid
> it for just this; and fixed my build with the patch below last night.
> Whatever you all think best: just wanted to point out an alternative.

Yes, this is better.

Thanks!
-- 
Michal Hocko
SUSE Labs



* Re: [PATCH 2/3] memcg: unify slab and other kmem pages charging
  2015-10-17  0:19   ` Johannes Weiner
  2015-10-17 15:05     ` Vladimir Davydov
@ 2015-10-19  8:09     ` Michal Hocko
  1 sibling, 0 replies; 17+ messages in thread
From: Michal Hocko @ 2015-10-19  8:09 UTC (permalink / raw)
  To: Johannes Weiner; +Cc: Vladimir Davydov, Andrew Morton, linux-mm, linux-kernel

On Fri 16-10-15 17:19:32, Johannes Weiner wrote:
[...]
> I think it'd be better to have an outer function than a magic
> parameter for the memcg lookup. Could we fold this in there?
> 
> ---
> 

Yes this makes sense.
Thanks!

> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> ---
>  include/linux/memcontrol.h |  7 ++++---
>  mm/memcontrol.c            | 36 ++++++++++++++++++------------------
>  mm/slab.h                  |  4 ++--
>  3 files changed, 24 insertions(+), 23 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 47677ac..730a65d 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -756,8 +756,9 @@ static inline bool memcg_kmem_is_active(struct mem_cgroup *memcg)
>   * conditions, but because they are pretty simple, they are expected to be
>   * fast.
>   */
> -int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
> -			struct mem_cgroup *memcg);
> +int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
> +			      struct mem_cgroup *memcg);
> +int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order);
>  void __memcg_kmem_uncharge(struct page *page, int order);
>  
>  /*
> @@ -797,7 +798,7 @@ static __always_inline int memcg_kmem_charge(struct page *page,
>  {
>  	if (__memcg_kmem_bypass(gfp))
>  		return 0;
> -	return __memcg_kmem_charge(page, gfp, order, NULL);
> +	return __memcg_kmem_charge(page, gfp, order);
>  }
>  
>  /**
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 15db655..6fc9959 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2378,39 +2378,39 @@ void __memcg_kmem_put_cache(struct kmem_cache *cachep)
>  		css_put(&cachep->memcg_params.memcg->css);
>  }
>  
> -/*
> - * If @memcg != NULL, charge to @memcg, otherwise charge to the memcg the
> - * current task belongs to.
> - */
> -int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order,
> -			struct mem_cgroup *memcg)
> +int __memcg_kmem_charge_memcg(struct page *page, gfp_t gfp, int order,
> +			      struct mem_cgroup *memcg)
>  {
> -	struct page_counter *counter;
>  	unsigned int nr_pages = 1 << order;
> -	bool put = false;
> +	struct page_counter *counter;
>  	int ret = 0;
>  
> -	if (!memcg) {
> -		memcg = get_mem_cgroup_from_mm(current->mm);
> -		put = true;
> -	}
>  	if (!memcg_kmem_is_active(memcg))
> -		goto out;
> +		return 0;
>  
>  	ret = page_counter_try_charge(&memcg->kmem, nr_pages, &counter);
>  	if (ret)
> -		goto out;
> +		return ret;
>  
>  	ret = try_charge(memcg, gfp, nr_pages);
>  	if (ret) {
>  		page_counter_uncharge(&memcg->kmem, nr_pages);
> -		goto out;
> +		return ret;
>  	}
>  
>  	page->mem_cgroup = memcg;
> -out:
> -	if (put)
> -		css_put(&memcg->css);
> +
> +	return 0;
> +}
> +
> +int __memcg_kmem_charge(struct page *page, gfp_t gfp, int order)
> +{
> +	struct mem_cgroup *memcg;
> +	int ret;
> +
> +	memcg = get_mem_cgroup_from_mm(current->mm);
> +	ret = __memcg_kmem_charge_memcg(page, gfp, order, memcg);
> +	css_put(&memcg->css);
>  	return ret;
>  }
>  
> diff --git a/mm/slab.h b/mm/slab.h
> index 3d667a4..27492eb 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -244,8 +244,8 @@ static __always_inline int memcg_charge_slab(struct page *page,
>  		return 0;
>  	if (is_root_cache(s))
>  		return 0;
> -	return __memcg_kmem_charge(page, gfp, order,
> -				   s->memcg_params.memcg);
> +	return __memcg_kmem_charge_memcg(page, gfp, order,
> +					 s->memcg_params.memcg);
>  }
>  
>  extern void slab_init_memcg_params(struct kmem_cache *);
> -- 
> 2.6.1

-- 
Michal Hocko
SUSE Labs



end of thread, other threads:[~2015-10-19  8:09 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-10-04 22:21 [PATCH 1/3] memcg: simplify charging kmem pages Vladimir Davydov
2015-10-04 22:21 ` [PATCH 2/3] memcg: unify slab and other kmem pages charging Vladimir Davydov
2015-10-08 15:10   ` Michal Hocko
2015-10-17  0:19   ` Johannes Weiner
2015-10-17 15:05     ` Vladimir Davydov
2015-10-19  8:09     ` Michal Hocko
2015-10-04 22:21 ` [PATCH 3/3] memcg: simplify and inline __mem_cgroup_from_kmem Vladimir Davydov
2015-10-08 15:13   ` Michal Hocko
2015-10-16 13:17   ` Kirill A. Shutemov
2015-10-16 13:51     ` Vladimir Davydov
2015-10-16 22:12       ` Hugh Dickins
2015-10-16 22:19         ` Johannes Weiner
2015-10-16 22:21         ` Andrew Morton
2015-10-17 14:58         ` Vladimir Davydov
2015-10-19  8:08         ` Michal Hocko
2015-10-16 14:36     ` Michal Hocko
2015-10-08 14:40 ` [PATCH 1/3] memcg: simplify charging kmem pages Michal Hocko
