Date: Thu, 06 Aug 2020 23:20:52 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, cl@linux.com, guro@fb.com, hannes@cmpxchg.org,
 linux-mm@kvack.org, mhocko@kernel.org, mm-commits@vger.kernel.org,
 shakeelb@google.com, tj@kernel.org, torvalds@linux-foundation.org,
 vbabka@suse.cz
Subject: [patch 068/163] mm: memcg/slab: allocate obj_cgroups for non-root slab pages
Message-ID: <20200807062052.tZ84SDxmn%akpm@linux-foundation.org>
In-Reply-To: <20200806231643.a2711a608dd0f18bff2caf2b@linux-foundation.org>

From: Roman Gushchin <guro@fb.com>
Subject: mm: memcg/slab: allocate obj_cgroups for non-root slab pages

Allocate and release memory to store obj_cgroup pointers for each non-root
slab page.  Reuse the page->mem_cgroup pointer to store a pointer to the
allocated space.

This commit temporarily increases the memory footprint of kernel memory
accounting: storing obj_cgroup pointers requires room for one pointer per
allocated object.  However, the following patches in the series will enable
sharing of slab pages between memory cgroups, which will dramatically
increase the total slab utilization, so the final memory footprint will be
significantly smaller than before.

To distinguish between obj_cgroups and memcg pointers in cases where it's
not obvious which one is used (as in page_cgroup_ino()), always set the
lowest bit of the pointer in the obj_cgroup case.

The original (untagged) obj_cgroups pointer is marked to be ignored by
kmemleak, which would otherwise report a memory leak for each allocated
vector.
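
For illustration, the low-bit tagging scheme works as in the following
minimal user-space sketch (tag_objcgs(), is_objcgs() and untag_objcgs() are
hypothetical names introduced here for clarity; the kernel open-codes these
bit operations in page_obj_cgroups() and memcg_alloc_page_obj_cgroups()):

	#include <assert.h>
	#include <stdint.h>
	#include <stdlib.h>

	struct obj_cgroup;		/* stand-in for the kernel type */

	/*
	 * kcalloc()'d vectors are at least word-aligned, so the lowest bit
	 * of a real address is always zero and can carry a type tag:
	 * bit set => obj_cgroups vector, bit clear => mem_cgroup pointer.
	 */
	static void *tag_objcgs(struct obj_cgroup **vec)
	{
		return (void *)((uintptr_t)vec | 0x1UL);
	}

	static int is_objcgs(void *ptr)
	{
		return ((uintptr_t)ptr & 0x1UL) != 0;
	}

	static struct obj_cgroup **untag_objcgs(void *ptr)
	{
		return (struct obj_cgroup **)((uintptr_t)ptr & ~(uintptr_t)0x1UL);
	}

	int main(void)
	{
		struct obj_cgroup **vec = calloc(16, sizeof(*vec));
		void *tagged;

		if (!vec)
			return 1;
		tagged = tag_objcgs(vec);
		assert(is_objcgs(tagged));		/* the page_cgroup_ino() check */
		assert(untag_objcgs(tagged) == vec);	/* the page_obj_cgroups() untag */
		free(vec);
		return 0;
	}
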
Link: http://lkml.kernel.org/r/20200623174037.3951353-8-guro@fb.com
Signed-off-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/mm_types.h |    5 ++-
 include/linux/slab_def.h |    6 ++++
 include/linux/slub_def.h |    5 +++
 mm/memcontrol.c          |   17 +++++++++---
 mm/slab.h                |   52 +++++++++++++++++++++++++++++++++++++
 5 files changed, 81 insertions(+), 4 deletions(-)

--- a/include/linux/mm_types.h~mm-memcg-slab-allocate-obj_cgroups-for-non-root-slab-pages
+++ a/include/linux/mm_types.h
@@ -198,7 +198,10 @@ struct page {
 	atomic_t _refcount;
 
 #ifdef CONFIG_MEMCG
-	struct mem_cgroup *mem_cgroup;
+	union {
+		struct mem_cgroup *mem_cgroup;
+		struct obj_cgroup **obj_cgroups;
+	};
 #endif
 
 	/*
--- a/include/linux/slab_def.h~mm-memcg-slab-allocate-obj_cgroups-for-non-root-slab-pages
+++ a/include/linux/slab_def.h
@@ -114,4 +114,10 @@ static inline unsigned int obj_to_index(
 	return reciprocal_divide(offset, cache->reciprocal_buffer_size);
 }
 
+static inline int objs_per_slab_page(const struct kmem_cache *cache,
+				     const struct page *page)
+{
+	return cache->num;
+}
+
 #endif /* _LINUX_SLAB_DEF_H */
--- a/include/linux/slub_def.h~mm-memcg-slab-allocate-obj_cgroups-for-non-root-slab-pages
+++ a/include/linux/slub_def.h
@@ -198,4 +198,9 @@ static inline unsigned int obj_to_index(
 	return __obj_to_index(cache, page_address(page), obj);
 }
 
+static inline int objs_per_slab_page(const struct kmem_cache *cache,
+				     const struct page *page)
+{
+	return page->objects;
+}
 #endif /* _LINUX_SLUB_DEF_H */
--- a/mm/memcontrol.c~mm-memcg-slab-allocate-obj_cgroups-for-non-root-slab-pages
+++ a/mm/memcontrol.c
@@ -569,10 +569,21 @@ ino_t page_cgroup_ino(struct page *page)
 	unsigned long ino = 0;
 
 	rcu_read_lock();
-	if (PageSlab(page) && !PageTail(page))
+	if (PageSlab(page) && !PageTail(page)) {
 		memcg = memcg_from_slab_page(page);
-	else
-		memcg = READ_ONCE(page->mem_cgroup);
+	} else {
+		memcg = page->mem_cgroup;
+
+		/*
+		 * The lowest bit set means that memcg isn't a valid
+		 * memcg pointer, but an obj_cgroups pointer.
+		 * In this case the page is shared and doesn't belong
+		 * to any specific memory cgroup.
+		 */
+		if ((unsigned long) memcg & 0x1UL)
+			memcg = NULL;
+	}
+
 	while (memcg && !(memcg->css.flags & CSS_ONLINE))
 		memcg = parent_mem_cgroup(memcg);
 	if (memcg)
--- a/mm/slab.h~mm-memcg-slab-allocate-obj_cgroups-for-non-root-slab-pages
+++ a/mm/slab.h
@@ -109,6 +109,7 @@ struct memcg_cache_params {
 #include <linux/memcontrol.h>
 #include <linux/fault-inject.h>
 #include <linux/kasan.h>
+#include <linux/kmemleak.h>
 
 /*
  * State of the slab allocator.
@@ -348,6 +349,18 @@ static inline struct kmem_cache *memcg_r
 	return s->memcg_params.root_cache;
 }
 
+static inline struct obj_cgroup **page_obj_cgroups(struct page *page)
+{
+	/*
+	 * page->mem_cgroup and page->obj_cgroups are sharing the same
+	 * space. To distinguish between them in case we don't know for sure
+	 * that the page is a slab page (e.g. page_cgroup_ino()), let's
+	 * always set the lowest bit of obj_cgroups.
+	 */
+	return (struct obj_cgroup **)
+		((unsigned long)page->obj_cgroups & ~0x1UL);
+}
+
 /*
  * Expects a pointer to a slab page. Please note, that PageSlab() check
  * isn't sufficient, as it returns true also for tail compound slab pages,
@@ -435,6 +448,28 @@ static __always_inline void memcg_unchar
 	percpu_ref_put_many(&s->memcg_params.refcnt, nr_pages);
 }
 
+static inline int memcg_alloc_page_obj_cgroups(struct page *page,
+					       struct kmem_cache *s, gfp_t gfp)
+{
+	unsigned int objects = objs_per_slab_page(s, page);
+	void *vec;
+
+	vec = kcalloc_node(objects, sizeof(struct obj_cgroup *), gfp,
+			   page_to_nid(page));
+	if (!vec)
+		return -ENOMEM;
+
+	kmemleak_not_leak(vec);
+	page->obj_cgroups = (struct obj_cgroup **) ((unsigned long)vec | 0x1UL);
+	return 0;
+}
+
+static inline void memcg_free_page_obj_cgroups(struct page *page)
+{
+	kfree(page_obj_cgroups(page));
+	page->obj_cgroups = NULL;
+}
+
 extern void slab_init_memcg_params(struct kmem_cache *);
 extern void memcg_link_cache(struct kmem_cache *s, struct mem_cgroup *memcg);
 
@@ -484,6 +519,16 @@ static inline void memcg_uncharge_slab(s
 {
 }
 
+static inline int memcg_alloc_page_obj_cgroups(struct page *page,
+					       struct kmem_cache *s, gfp_t gfp)
+{
+	return 0;
+}
+
+static inline void memcg_free_page_obj_cgroups(struct page *page)
+{
+}
+
 static inline void slab_init_memcg_params(struct kmem_cache *s)
 {
 }
@@ -510,12 +555,18 @@ static __always_inline int charge_slab_p
 						    gfp_t gfp, int order,
 						    struct kmem_cache *s)
 {
+	int ret;
+
 	if (is_root_cache(s)) {
 		mod_node_page_state(page_pgdat(page),
 				    cache_vmstat_idx(s),
 				    PAGE_SIZE << order);
 		return 0;
 	}
 
+	ret = memcg_alloc_page_obj_cgroups(page, s, gfp);
+	if (ret)
+		return ret;
+
 	return memcg_charge_slab(page, gfp, order, s);
 }
 
@@ -528,6 +579,7 @@ static __always_inline void uncharge_sla
 		return;
 	}
 
+	memcg_free_page_obj_cgroups(page);
 	memcg_uncharge_slab(page, order, s);
 }
 
_