From: Roman Gushchin <guro@fb.com>
To: <linux-mm@kvack.org>
Cc: Michal Hocko <mhocko@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	<linux-kernel@vger.kernel.org>, <kernel-team@fb.com>,
	Shakeel Butt <shakeelb@google.com>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Waiman Long <longman@redhat.com>,
	Christoph Lameter <cl@linux.com>, Roman Gushchin <guro@fb.com>
Subject: [PATCH 04/16] mm: memcg/slab: allocate space for memcg ownership data for non-root slabs
Date: Thu, 17 Oct 2019 17:28:08 -0700	[thread overview]
Message-ID: <20191018002820.307763-5-guro@fb.com> (raw)
In-Reply-To: <20191018002820.307763-1-guro@fb.com>

Allocate and release memory for storing the memcg ownership data.
For each slab page, allocate space sufficient to hold number_of_objects
pointers to struct mem_cgroup_ptr.

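For a sense of the overhead: the vector is just an array of per-object
pointers, so on a 64-bit kernel a slab page with 32 objects costs an
extra 32 * 8 = 256 bytes. A rough sizing sketch (illustrative only; the
real allocation is done by memcg_alloc_page_memcg_vec() below):

	/* Illustrative sizing only, not part of this patch. */
	unsigned int objects = 32;	/* e.g. cachep->num or oo_objects(oo) */
	size_t sz = objects * sizeof(struct mem_cgroup_ptr *);	/* 256 bytes */
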
The mem_cgroup field of struct page isn't used for slab pages, so let's
reuse that space to store the pointer to the allocated vector.

This commit makes sure that the space is allocated and released, but
nobody is actually using it yet. Following commits in the series will
make use of it.
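
To make the intent concrete: each object in a slab page gets a slot in
the vector, indexed by the object's position within the page. A rough
sketch of the eventual lookup, assuming the obj_to_index() helper that a
later patch in the series adds for SLUB (names are illustrative; this
patch adds no such lookup):

	/* Illustrative only -- later patches add the real per-object lookup. */
	static inline struct mem_cgroup_ptr *obj_memcg_ptr(struct kmem_cache *s,
							   struct page *page,
							   void *obj)
	{
		unsigned int idx = obj_to_index(s, page, obj);

		return page->mem_cgroup_vec[idx];
	}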

Signed-off-by: Roman Gushchin <guro@fb.com>
---
 include/linux/mm_types.h |  5 ++++-
 mm/slab.c                |  3 ++-
 mm/slab.h                | 37 ++++++++++++++++++++++++++++++++++++-
 mm/slub.c                |  2 +-
 4 files changed, 43 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 2222fa795284..4d99ee5a9c53 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -198,7 +198,10 @@ struct page {
 	atomic_t _refcount;
 
 #ifdef CONFIG_MEMCG
-	struct mem_cgroup *mem_cgroup;
+	union {
+		struct mem_cgroup *mem_cgroup;
+		struct mem_cgroup_ptr **mem_cgroup_vec;
+	};
 #endif
 
 	/*
diff --git a/mm/slab.c b/mm/slab.c
index f1e1840af533..ffa16dd966ef 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1370,7 +1370,8 @@ static struct page *kmem_getpages(struct kmem_cache *cachep, gfp_t flags,
 		return NULL;
 	}
 
-	if (charge_slab_page(page, flags, cachep->gfporder, cachep)) {
+	if (charge_slab_page(page, flags, cachep->gfporder, cachep,
+			     cachep->num)) {
 		__free_pages(page, cachep->gfporder);
 		return NULL;
 	}
diff --git a/mm/slab.h b/mm/slab.h
index 03833b02b9ae..8620a0a1d5fa 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -406,6 +406,23 @@ static __always_inline void memcg_uncharge_slab(struct page *page, int order,
 	percpu_ref_put_many(&s->memcg_params.refcnt, 1 << order);
 }
 
+static inline int memcg_alloc_page_memcg_vec(struct page *page, gfp_t gfp,
+					     unsigned int objects)
+{
+	page->mem_cgroup_vec = kmalloc(sizeof(struct mem_cgroup_ptr *) *
+				       objects, gfp | __GFP_ZERO);
+	if (!page->mem_cgroup_vec)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static inline void memcg_free_page_memcg_vec(struct page *page)
+{
+	kfree(page->mem_cgroup_vec);
+	page->mem_cgroup_vec = NULL;
+}
+
 extern void slab_init_memcg_params(struct kmem_cache *);
 extern void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg);
 
@@ -455,6 +472,16 @@ static inline void memcg_uncharge_slab(struct page *page, int order,
 {
 }
 
+static inline int memcg_alloc_page_memcg_vec(struct page *page, gfp_t gfp,
+					     unsigned int objects)
+{
+	return 0;
+}
+
+static inline void memcg_free_page_memcg_vec(struct page *page)
+{
+}
+
 static inline void slab_init_memcg_params(struct kmem_cache *s)
 {
 }
@@ -479,14 +506,21 @@ static inline struct kmem_cache *virt_to_cache(const void *obj)
 
 static __always_inline int charge_slab_page(struct page *page,
 					    gfp_t gfp, int order,
-					    struct kmem_cache *s)
+					    struct kmem_cache *s,
+					    unsigned int objects)
 {
+	int ret;
+
 	if (is_root_cache(s)) {
 		mod_node_page_state(page_pgdat(page), cache_vmstat_idx(s),
 				    PAGE_SIZE << order);
 		return 0;
 	}
 
+	ret = memcg_alloc_page_memcg_vec(page, gfp, objects);
+	if (ret)
+		return ret;
+
 	return memcg_charge_slab(page, gfp, order, s);
 }
 
@@ -499,6 +533,7 @@ static __always_inline void uncharge_slab_page(struct page *page, int order,
 		return;
 	}
 
+	memcg_free_page_memcg_vec(page);
 	memcg_uncharge_slab(page, order, s);
 }
 
diff --git a/mm/slub.c b/mm/slub.c
index bd902d65a71c..e810582f5b86 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1518,7 +1518,7 @@ static inline struct page *alloc_slab_page(struct kmem_cache *s,
 	else
 		page = __alloc_pages_node(node, flags, order);
 
-	if (page && charge_slab_page(page, flags, order, s)) {
+	if (page && charge_slab_page(page, flags, order, s, oo_objects(oo))) {
 		__free_pages(page, order);
 		page = NULL;
 	}
-- 
2.21.0



Thread overview: 36+ messages
2019-10-18  0:28 [PATCH 00/16] The new slab memory controller Roman Gushchin
2019-10-18  0:28 ` [PATCH 01/16] mm: memcg: introduce mem_cgroup_ptr Roman Gushchin
2019-10-18  0:28 ` [PATCH 02/16] mm: vmstat: use s32 for vm_node_stat_diff in struct per_cpu_nodestat Roman Gushchin
2019-10-20 22:44   ` Christopher Lameter
2019-10-21  1:15     ` Roman Gushchin
2019-10-21 18:09       ` Christopher Lameter
2019-10-20 22:51   ` Christopher Lameter
2019-10-21  1:21     ` Roman Gushchin
2019-10-18  0:28 ` [PATCH 03/16] mm: vmstat: convert slab vmstat counter to bytes Roman Gushchin
2019-10-18  0:28 ` [PATCH 04/16] mm: memcg/slab: allocate space for memcg ownership data for non-root slabs Roman Gushchin [this message]
2019-10-18  0:28 ` [PATCH 05/16] mm: slub: implement SLUB version of obj_to_index() Roman Gushchin
2019-10-18  0:28 ` [PATCH 06/16] mm: memcg/slab: save memcg ownership data for non-root slab objects Roman Gushchin
2019-10-18  0:28 ` [PATCH 07/16] mm: memcg: move memcg_kmem_bypass() to memcontrol.h Roman Gushchin
2019-10-18  0:28 ` [PATCH 08/16] mm: memcg: introduce __mod_lruvec_memcg_state() Roman Gushchin
2019-10-18  0:28 ` [PATCH 09/16] mm: memcg/slab: charge individual slab objects instead of pages Roman Gushchin
2019-10-25 19:41   ` Johannes Weiner
2019-10-25 20:00     ` Roman Gushchin
2019-10-25 20:52       ` Johannes Weiner
2019-10-31  1:52     ` Roman Gushchin
2019-10-31 14:23       ` Johannes Weiner
2019-10-31 14:41       ` Johannes Weiner
2019-10-31 15:07         ` Roman Gushchin
2019-10-31 18:50           ` Johannes Weiner
2019-10-18  0:28 ` [PATCH 10/16] mm: memcg: move get_mem_cgroup_from_current() to memcontrol.h Roman Gushchin
2019-10-18  0:28 ` [PATCH 11/16] mm: memcg/slab: replace memcg_from_slab_page() with memcg_from_slab_obj() Roman Gushchin
2019-10-18  0:28 ` [PATCH 13/16] mm: memcg/slab: deprecate memory.kmem.slabinfo Roman Gushchin
2019-10-18  0:28 ` [PATCH 14/16] mm: memcg/slab: use one set of kmem_caches for all memory cgroups Roman Gushchin
2019-10-18  0:28 ` [PATCH 15/16] tools/cgroup: make slabinfo.py compatible with new slab controller Roman Gushchin
2019-10-18  0:28 ` [PATCH 16/16] mm: slab: remove redundant check in memcg_accumulate_slabinfo() Roman Gushchin
2019-10-18 17:03 ` [PATCH 00/16] The new slab memory controller Waiman Long
2019-10-18 17:12   ` Roman Gushchin
2019-10-22 13:22 ` Michal Hocko
2019-10-22 13:28   ` Michal Hocko
2019-10-22 15:48     ` Roman Gushchin
2019-10-22 13:31 ` Michal Hocko
2019-10-22 15:59   ` Roman Gushchin
