Date: Thu, 16 Jan 2020 16:50:56 +0100
From: Michal Hocko
To: Yafang Shao
Cc: dchinner@redhat.com, akpm@linux-foundation.org, linux-mm@kvack.org, Roman Gushchin
Subject: Re: [PATCH] mm: verify page type before getting memcg from it
Message-ID: <20200116155056.GA19428@dhcp22.suse.cz>
References: <1579183811-1898-1-git-send-email-laoar.shao@gmail.com>
In-Reply-To: <1579183811-1898-1-git-send-email-laoar.shao@gmail.com>
User-Agent: Mutt/1.12.2 (2019-09-21)

[Cc Roman]

On Thu 16-01-20 09:10:11, Yafang Shao wrote:
> Per discussion with Dave[1], we always assume we only ever put objects from
> memcg-associated slab pages on the list_lru. In list_lru_from_kmem() we
> call memcg_from_slab_page(), which makes no attempt to verify that the
> page is actually a slab page. But currently the binder code (in
> drivers/android/binder_alloc.c) stores normal pages on the list_lru rather
> than slab objects. The only reason binder does not hit this issue is that
> its list_lru is not configured to be memcg aware.
>
> To make this more robust, verify the page type before getting the memcg
> from it. This patch introduces a new helper and modifies the old one, so
> we end up with two helpers, as below:
>
> struct mem_cgroup *__memcg_from_slab_page(struct page *page);
> struct mem_cgroup *memcg_from_slab_page(struct page *page);
>
> The first helper is used when we are sure the page is a slab page and also
> a head page, while the second is used when the page type is not known.
>
> [1].
> https://lore.kernel.org/linux-mm/20200106213103.GJ23195@dread.disaster.area/
>
> Suggested-by: Dave Chinner
> Signed-off-by: Yafang Shao
> ---
>  mm/memcontrol.c |  7 ++-----
>  mm/slab.h       | 24 +++++++++++++++++++++++-
>  2 files changed, 25 insertions(+), 6 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 9bd4ea7..7658b8e 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -460,10 +460,7 @@ ino_t page_cgroup_ino(struct page *page)
>  	unsigned long ino = 0;
>
>  	rcu_read_lock();
> -	if (PageSlab(page) && !PageTail(page))
> -		memcg = memcg_from_slab_page(page);
> -	else
> -		memcg = READ_ONCE(page->mem_cgroup);
> +	memcg = memcg_from_slab_page(page);
>  	while (memcg && !(memcg->css.flags & CSS_ONLINE))
>  		memcg = parent_mem_cgroup(memcg);
>  	if (memcg)
> @@ -748,7 +745,7 @@ void __mod_lruvec_slab_state(void *p, enum node_stat_item idx, int val)
>  	struct lruvec *lruvec;
>
>  	rcu_read_lock();
> -	memcg = memcg_from_slab_page(page);
> +	memcg = __memcg_from_slab_page(page);
>
>  	/* Untracked pages have no memcg, no lruvec. Update only the node */
>  	if (!memcg || memcg == root_mem_cgroup) {
> diff --git a/mm/slab.h b/mm/slab.h
> index 7e94700..2444ae4 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -329,7 +329,7 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
>   * The kmem_cache can be reparented asynchronously. The caller must ensure
>   * the memcg lifetime, e.g. by taking rcu_read_lock() or cgroup_mutex.
>   */
> -static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
> +static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
>  {
>  	struct kmem_cache *s;
>
> @@ -341,6 +341,23 @@ static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
>  }
>
>  /*
> + * If we are not sure whether the page can pass PageSlab() && !PageTail(),
> + * we should use this function. That's the difference between this helper
> + * and the above one.
> + */
> +static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
> +{
> +	struct mem_cgroup *memcg;
> +
> +	if (PageSlab(page) && !PageTail(page))
> +		memcg = __memcg_from_slab_page(page);
> +	else
> +		memcg = READ_ONCE(page->mem_cgroup);
> +
> +	return memcg;
> +}
> +
> +/*
>   * Charge the slab page belonging to the non-root kmem_cache.
>   * Can be called for non-root kmem_caches only.
>   */
> @@ -438,6 +455,11 @@ static inline struct kmem_cache *memcg_root_cache(struct kmem_cache *s)
>  	return s;
>  }
>
> +static inline struct mem_cgroup *__memcg_from_slab_page(struct page *page)
> +{
> +	return NULL;
> +}
> +
>  static inline struct mem_cgroup *memcg_from_slab_page(struct page *page)
>  {
>  	return NULL;
> --
> 1.8.3.1

-- 
Michal Hocko
SUSE Labs