From: Alexander Duyck
Date: Fri, 31 Jul 2020 07:20:46 -0700
Subject: Re: [PATCH v17 18/21] mm/lru: introduce the relock_page_lruvec function
In-Reply-To: <3345bfbf-ebe9-b5e0-a731-77dd7d76b0c9@linux.alibaba.com>
To: Alex Shi
Cc: Andrew Morton, Mel Gorman, Tejun Heo, Hugh Dickins, Konstantin Khlebnikov,
 Daniel Jordan, Yang Shi, Matthew Wilcox, Johannes Weiner, kbuild test robot,
 linux-mm, LKML, cgroups@vger.kernel.org, Shakeel Butt, Joonsoo Kim, Wei Yang,
 "Kirill A. Shutemov", Rong Chen, Thomas Gleixner, Andrey Ryabinin

On Wed, Jul 29, 2020 at 11:08 PM Alex Shi wrote:
>
> On 2020/7/30 at 1:52 AM, Alexander Duyck wrote:
> >> +        rcu_read_lock();
> >> +        locked = mem_cgroup_page_lruvec(page, pgdat) == locked_lruvec;
> >> +        rcu_read_unlock();
> >> +
> >> +        if (locked)
> >> +                return locked_lruvec;
> >> +
> >> +        if (locked_lruvec)
> >> +                unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
> >> +
> >> +        return lock_page_lruvec_irqsave(page, flags);
> >> +}
> >> +
> >
> > So looking these over they seem to be pretty inefficient for what they
> > do. Basically in the worst case (locked_lruvec == NULL) you end up
> > calling mem_cgroup_page_lruvec and all the rcu_read_lock/unlock a couple
> > of times for a single page. It might make more sense to structure this
> > like:
> >
> > if (locked_lruvec) {
>
> Uh, we still need to check this page's lruvec, and that needs rcu_read_lock.
> To save a mem_cgroup_page_lruvec call, we have to open-code lock_page_lruvec
> as you mentioned before.
>
> >         if (lruvec_holds_page_lru_lock(page, locked_lruvec))
> >                 return locked_lruvec;
> >
> >         unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
> > }
> > return lock_page_lruvec_irqsave(page, flags);
> >
> > The other piece that has me scratching my head is that I wonder if we
> > couldn't do this without needing the rcu_read_lock. For example, what
> > if we were to compare the page mem_cgroup pointer to the memcg back
> > pointer stored in the mem_cgroup_per_node? It seems like ordering
> > things this way would significantly reduce the overhead due to the
> > pointer chasing to see if the page is in the locked lruvec or not.
> >
>
> If page->mem_cgroup is always charged, the following could be better:
>
> +/* Don't lock again iff page's lruvec is already locked */
> +static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
> +                struct lruvec *locked_lruvec, unsigned long *flags)
> +{
> +        struct lruvec *lruvec;
> +
> +        if (mem_cgroup_disabled())
> +                return locked_lruvec;
> +
> +        /* user pages are always charged */
> +        VM_BUG_ON_PAGE(!page->mem_cgroup, page);
> +
> +        rcu_read_lock();
> +        if (likely(lruvec_memcg(locked_lruvec) == page->mem_cgroup)) {
> +                rcu_read_unlock();
> +                return locked_lruvec;
> +        }
> +
> +        if (locked_lruvec)
> +                unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
> +
> +        lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
> +        spin_lock_irqsave(&lruvec->lru_lock, *flags);
> +        rcu_read_unlock();
> +        lruvec_memcg_debug(lruvec, page);
> +
> +        return lruvec;
> +}
> +

I understand that you have to use the rcu_lock when you want to acquire
the lruvec via mem_cgroup_page_lruvec(). That is why I didn't do away
with the call to lock_page_lruvec_irqsave() at the end of the function.
However, it doesn't make sense to take it when you are already holding
the locked_lruvec and are simply getting the container of it in order to
compare pointer values.

One thing I was getting at with the lruvec_holds_page_lru_lock() function
I had introduced in my example is that the code paths for the two relock
functions are very similar. If we could move all the logic for identifying
whether we can reuse the lock into a single function, it would cut down on
the redundancy quite a bit as well. In addition, by testing for
locked_lruvec != NULL before we do the comparison, we can save ourselves
some unnecessary testing in the case where locked_lruvec is NULL.

The thought I had was to try to avoid the rcu_lock entirely in the lock
reuse case. Basically you just need to compare the pgdat value and the
memcg between the page and the lruvec. As long as they both point to the
same values, you should have the correct lruvec and there is no need to
relock. There is no need to take the rcu_lock as long as you aren't
dereferencing anything; if you are just comparing the pointers, that
should be safe. The fallback if mem_cgroup_disabled() is to make certain
that the page's pgdat->__lruvec is the address belonging to the lruvec.
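To make that concrete, here is a rough sketch of the helper I am
describing. It is completely untested; it leans on the memcg back
pointer already kept in mem_cgroup_per_node and the __lruvec embedded
in pglist_data, and it still needs the root_mem_cgroup fallback while
uncharged readahead pages can reach the LRU:

static inline bool lruvec_holds_page_lru_lock(struct page *page,
                                              struct lruvec *lruvec)
{
        pg_data_t *pgdat = page_pgdat(page);
        struct mem_cgroup_per_node *mz;
        struct mem_cgroup *memcg;

        if (mem_cgroup_disabled())
                return lruvec == &pgdat->__lruvec;

        /*
         * Pointer comparisons only: the lruvec is pinned by the lock the
         * caller already holds and the page is pinned by the caller, so
         * nothing here needs rcu_read_lock.
         */
        mz = container_of(lruvec, struct mem_cgroup_per_node, lruvec);
        memcg = page->mem_cgroup ? : root_mem_cgroup;

        return lruvec_pgdat(lruvec) == pgdat && mz->memcg == memcg;
}

With that in place both relock variants collapse to the same shape (the
irqsave one shown; the plain variant would differ only in the lock and
unlock calls):

static inline struct lruvec *relock_page_lruvec_irqsave(struct page *page,
                struct lruvec *locked_lruvec, unsigned long *flags)
{
        if (locked_lruvec) {
                if (lruvec_holds_page_lru_lock(page, locked_lruvec))
                        return locked_lruvec;

                unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
        }

        return lock_page_lruvec_irqsave(page, flags);
}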
> The user page is always charged since the readahead page is charged now,
> and it looks like we can also apply this patch. I will test it to see if
> there is any other exception.

Yes, that would simplify things a bit, as the code I had was having to
use a ternary to test for root_mem_cgroup if page->mem_cgroup was NULL.
I should be able to finish up testing today and will submit a few
clean-up patches as RFC to get your thoughts/feedback.

> commit 826128346e50f6c60c513e166998466b593becad
> Author: Alex Shi
> Date:   Thu Jul 30 13:58:38 2020 +0800
>
>     mm/memcg: remove useless check on page->mem_cgroup
>
>     Since readahead pages will be charged on memcg too, we don't need to
>     check for this exception now.
>
>     Signed-off-by: Alex Shi
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index af96217f2ec5..0c7f6bed199b 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1336,12 +1336,6 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
>
>         VM_BUG_ON_PAGE(PageTail(page), page);
>         memcg = READ_ONCE(page->mem_cgroup);
> -       /*
> -        * Swapcache readahead pages are added to the LRU - and
> -        * possibly migrated - before they are charged.
> -        */
> -       if (!memcg)
> -               memcg = root_mem_cgroup;
>
>         mz = mem_cgroup_page_nodeinfo(memcg, page);
>         lruvec = &mz->lruvec;
> @@ -6962,10 +6956,7 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
>         if (newpage->mem_cgroup)
>                 return;
>
> -       /* Swapcache readahead pages can get replaced before being charged */
>         memcg = oldpage->mem_cgroup;
> -       if (!memcg)
> -               return;
>
>         /* Force-charge the new page. The old one will be freed soon */
>         nr_pages = thp_nr_pages(newpage);
> @@ -7160,10 +7151,6 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
>
>         memcg = page->mem_cgroup;
>
> -       /* Readahead page, never charged */
> -       if (!memcg)
> -               return;
> -
>         /*
>          * In case the memcg owning these pages has been offlined and doesn't
>          * have an ID allocated to it anymore, charge the closest online
>
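As an aside, the batching pattern relock_page_lruvec_irqsave() is meant
to serve in the callers looks roughly like this. This is only an
illustration, not code from the series; walk_pages and the loop body are
made up, and only the relock/unlock calls matter:

static void walk_pages(struct pagevec *pvec)
{
        struct lruvec *lruvec = NULL;
        unsigned long flags;
        int i;

        for (i = 0; i < pagevec_count(pvec); i++) {
                struct page *page = pvec->pages[i];

                /* reuses the held lru_lock when page is in the same lruvec */
                lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);

                /* ... operate on page under lruvec->lru_lock ... */
        }

        if (lruvec)
                unlock_page_lruvec_irqrestore(lruvec, flags);
}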