From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 1 Jul 2021 08:12:23 +0100
From: Christoph Hellwig
To: "Matthew Wilcox (Oracle)"
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, Johannes Weiner, Michal Hocko, Vladimir Davydov
Subject: Re: [PATCH v3 06/18] mm/memcg: Add folio_memcg() and related functions
References: <20210630040034.1155892-1-willy@infradead.org> <20210630040034.1155892-7-willy@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210630040034.1155892-7-willy@infradead.org>

On Wed, Jun 30, 2021 at 05:00:22AM +0100, Matthew Wilcox (Oracle) wrote:
> memcg information is only stored in the head page, so the memcg
> subsystem needs to assure that all accesses are to the head page.
> The first step is converting page_memcg() to folio_memcg().
>
> Retain page_memcg() as a wrapper around folio_memcg() and PageMemcgKmem()
> as a wrapper around folio_memcg_kmem() but convert __page_memcg() to
> __folio_memcg() and __page_objcg() to __folio_objcg().
>
> Signed-off-by: Matthew Wilcox (Oracle)
> ---
>  include/linux/memcontrol.h | 105 +++++++++++++++++++++----------------
>  mm/memcontrol.c            |  21 ++++----
>  2 files changed, 73 insertions(+), 53 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 6d66037be646..92689fb2dab4 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -372,6 +372,7 @@ enum page_memcg_data_flags {
>  #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1)
>
>  static inline bool PageMemcgKmem(struct page *page);
> +static inline bool folio_memcg_kmem(struct folio *folio);
>
>  /*
>   * After the initialization objcg->memcg is always pointing at
> @@ -386,73 +387,78 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
>  }
>
>  /*
> - * __page_memcg - get the memory cgroup associated with a non-kmem page
> - * @page: a pointer to the page struct
> + * __folio_memcg - Get the memory cgroup associated with a non-kmem folio
> + * @folio: Pointer to the folio.
>   *
> - * Returns a pointer to the memory cgroup associated with the page,
> - * or NULL. This function assumes that the page is known to have a
> + * Returns a pointer to the memory cgroup associated with the folio,
> + * or NULL. This function assumes that the folio is known to have a
>   * proper memory cgroup pointer. It's not safe to call this function
> - * against some type of pages, e.g. slab pages or ex-slab pages or
> - * kmem pages.
> + * against some type of folios, e.g. slab folios or ex-slab folios or
> + * kmem folios.
>   */
> -static inline struct mem_cgroup *__page_memcg(struct page *page)
> +static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
>  {
> -	unsigned long memcg_data = page->memcg_data;
> +	unsigned long memcg_data = folio->memcg_data;
>
> -	VM_BUG_ON_PAGE(PageSlab(page), page);
> -	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
> -	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_KMEM, page);
> +	VM_BUG_ON_FOLIO(folio_slab(folio), folio);
> +	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
> +	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
>
>  	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
>  }
>
>  /*
> - * __page_objcg - get the object cgroup associated with a kmem page
> - * @page: a pointer to the page struct
> + * __folio_objcg - get the object cgroup associated with a kmem folio.
> + * @folio: Pointer to the folio.
>   *
> - * Returns a pointer to the object cgroup associated with the page,
> - * or NULL. This function assumes that the page is known to have a
> + * Returns a pointer to the object cgroup associated with the folio,
> + * or NULL. This function assumes that the folio is known to have a
>   * proper object cgroup pointer. It's not safe to call this function
> - * against some type of pages, e.g. slab pages or ex-slab pages or
> - * LRU pages.
> + * against some type of folios, e.g. slab folios or ex-slab folios or
> + * LRU folios.
>   */
> -static inline struct obj_cgroup *__page_objcg(struct page *page)
> +static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
>  {
> -	unsigned long memcg_data = page->memcg_data;
> +	unsigned long memcg_data = folio->memcg_data;
>
> -	VM_BUG_ON_PAGE(PageSlab(page), page);
> -	VM_BUG_ON_PAGE(memcg_data & MEMCG_DATA_OBJCGS, page);
> -	VM_BUG_ON_PAGE(!(memcg_data & MEMCG_DATA_KMEM), page);
> +	VM_BUG_ON_FOLIO(folio_slab(folio), folio);
> +	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
> +	VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);
>
>  	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
>  }
>
>  /*
> - * page_memcg - get the memory cgroup associated with a page
> - * @page: a pointer to the page struct
> + * folio_memcg - Get the memory cgroup associated with a folio.
> + * @folio: Pointer to the folio.
>   *
> - * Returns a pointer to the memory cgroup associated with the page,
> - * or NULL. This function assumes that the page is known to have a
> + * Returns a pointer to the memory cgroup associated with the folio,
> + * or NULL. This function assumes that the folio is known to have a
>   * proper memory cgroup pointer. It's not safe to call this function
> - * against some type of pages, e.g. slab pages or ex-slab pages.
> + * against some type of folios, e.g. slab folios or ex-slab folios.
>   *
> - * For a non-kmem page any of the following ensures page and memcg binding
> + * For a non-kmem folio any of the following ensures folio and memcg binding
>   * stability:
>   *
> - * - the page lock
> + * - the folio lock
>   * - LRU isolation
>   * - lock_page_memcg()
>   * - exclusive reference
>   *
> - * For a kmem page a caller should hold an rcu read lock to protect memcg
> - * associated with a kmem page from being released.
> + * For a kmem folio a caller should hold an rcu read lock to protect memcg
> + * associated with a kmem folio from being released.
>   */
> -static inline struct mem_cgroup *page_memcg(struct page *page)
> +static inline struct mem_cgroup *folio_memcg(struct folio *folio)
>  {
> -	if (PageMemcgKmem(page))
> -		return obj_cgroup_memcg(__page_objcg(page));
> +	if (folio_memcg_kmem(folio))
> +		return obj_cgroup_memcg(__folio_objcg(folio));
>  	else
> +		return __folio_memcg(folio);

Nit: it might be worth dropping the pointless else while you're at it.

Otherwise looks good:

Reviewed-by: Christoph Hellwig