From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [PATCH v16 13/22] mm/lru: introduce TestClearPageLRU
From: Alex Shi
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
 hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
 yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
 lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
 richard.weiyang@gmail.com, kirill@shutemov.name
Cc: Michal Hocko, Vladimir Davydov
References: <1594429136-20002-1-git-send-email-alex.shi@linux.alibaba.com>
 <1594429136-20002-14-git-send-email-alex.shi@linux.alibaba.com>
Message-ID: <00b5e5ef-fd05-baae-49c7-e1b1a973b864@linux.alibaba.com>
In-Reply-To: <1594429136-20002-14-git-send-email-alex.shi@linux.alibaba.com>
Date: Thu, 16 Jul 2020 17:06:53 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Hi Johannes,

The patchset looks good from both a logic and a testing standpoint.
Do you have any concerns about any of the patches?

Thanks
Alex

On 2020/7/11 8:58 AM, Alex Shi wrote:
> Combine the PageLRU check and ClearPageLRU into one function, the
> newly introduced TestClearPageLRU. This function will be used as the
> page isolation precondition, preventing isolation attempts elsewhere
> from racing with it. As a consequence, non-PageLRU pages may now
> appear on an lru list, so the BUG checks that assumed otherwise need
> to be removed.
>
> Hugh Dickins pointed out that __page_cache_release and release_pages
> have no need to clear the bit atomically, since there is no other
> user of the page at that moment; and that isolate_lru_page needs no
> get_page() before clearing the lru bit, since it "(1) Must be called
> with an elevated refcount on the page".
>
> As Andrew Morton mentioned, this change dirties the cacheline even
> for a page that isn't on the LRU. But that cost is acceptable, per
> Rong Chen's report:
> https://lkml.org/lkml/2020/3/4/173
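To make the race that TestClearPageLRU closes concrete, here is a
minimal userspace sketch of the idea -- illustrative only, not kernel
code; the lru flag is modeled as bit 0 of a plain atomic word:

/*
 * Sketch of the TestClearPageLRU() idea.  With a separate PageLRU()
 * check followed by ClearPageLRU(), two racing isolators can both
 * observe the bit set and both proceed to unlink the page; with one
 * atomic test-and-clear, exactly one caller wins the bit and, with
 * it, the right to isolate.  Build: cc -std=c11 tclru.c
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define PG_lru 0

static atomic_ulong page_flags = 1UL << PG_lru; /* page starts on the LRU */

/* Models TestClearPageLRU(): a single atomic read-modify-write. */
static bool test_clear_page_lru(void)
{
	unsigned long old = atomic_fetch_and(&page_flags, ~(1UL << PG_lru));

	return old & (1UL << PG_lru);	/* true only for the single winner */
}

int main(void)
{
	/* Two "isolators" racing on one page: only the first may proceed. */
	printf("isolator A wins: %d\n", test_clear_page_lru());	/* prints 1 */
	printf("isolator B wins: %d\n", test_clear_page_lru());	/* prints 0 */
	return 0;
}

That single-winner guarantee is also what lets the patch drop the
PageLRU() re-check under lru_lock in isolate_lru_page() below.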
>
> Suggested-by: Johannes Weiner
> Signed-off-by: Alex Shi
> Cc: Hugh Dickins
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Cc: Vladimir Davydov
> Cc: Andrew Morton
> Cc: linux-kernel@vger.kernel.org
> Cc: cgroups@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  include/linux/page-flags.h |  1 +
>  mm/mlock.c                 |  3 +--
>  mm/swap.c                  |  6 ++----
>  mm/vmscan.c                | 26 +++++++++++---------------
>  4 files changed, 15 insertions(+), 21 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 6be1aa559b1e..9554ed1387dc 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -326,6 +326,7 @@ static inline void page_init_poison(struct page *page, size_t size)
>  PAGEFLAG(Dirty, dirty, PF_HEAD) TESTSCFLAG(Dirty, dirty, PF_HEAD)
>  	__CLEARPAGEFLAG(Dirty, dirty, PF_HEAD)
>  PAGEFLAG(LRU, lru, PF_HEAD) __CLEARPAGEFLAG(LRU, lru, PF_HEAD)
> +	TESTCLEARFLAG(LRU, lru, PF_HEAD)
>  PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD)
>  	TESTCLEARFLAG(Active, active, PF_HEAD)
>  PAGEFLAG(Workingset, workingset, PF_HEAD)
> diff --git a/mm/mlock.c b/mm/mlock.c
> index f8736136fad7..228ba5a8e0a5 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -108,13 +108,12 @@ void mlock_vma_page(struct page *page)
>   */
>  static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
>  {
> -	if (PageLRU(page)) {
> +	if (TestClearPageLRU(page)) {
>  		struct lruvec *lruvec;
>
>  		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
>  		if (getpage)
>  			get_page(page);
> -		ClearPageLRU(page);
>  		del_page_from_lru_list(page, lruvec, page_lru(page));
>  		return true;
>  	}
> diff --git a/mm/swap.c b/mm/swap.c
> index f645965fde0e..5092fe9c8c47 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -83,10 +83,9 @@ static void __page_cache_release(struct page *page)
>  		struct lruvec *lruvec;
>  		unsigned long flags;
>
> +		__ClearPageLRU(page);
>  		spin_lock_irqsave(&pgdat->lru_lock, flags);
>  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
> -		VM_BUG_ON_PAGE(!PageLRU(page), page);
> -		__ClearPageLRU(page);
>  		del_page_from_lru_list(page, lruvec, page_off_lru(page));
>  		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
>  	}
> @@ -878,9 +877,8 @@ void release_pages(struct page **pages, int nr)
>  			spin_lock_irqsave(&locked_pgdat->lru_lock, flags);
>  		}
>
> -		lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
> -		VM_BUG_ON_PAGE(!PageLRU(page), page);
>  		__ClearPageLRU(page);
> +		lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
>  		del_page_from_lru_list(page, lruvec, page_off_lru(page));
>  	}
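The two mm/swap.c hunks above rely on Hugh's point that the page has
no remaining users when it is released. A sketch of that reasoning --
again an illustrative userspace model with a made-up fake_page type,
not the kernel implementation:

/*
 * Once the last reference is gone nobody else can reach the page, so
 * a plain (non-atomic) flags update cannot race with anything.
 */
#include <assert.h>
#include <stdio.h>

#define PG_lru (1UL << 0)

struct fake_page {
	unsigned long flags;
	int refcount;			/* models page_count() */
};

static void fake_page_cache_release(struct fake_page *page)
{
	/* precondition: the caller just dropped the final reference */
	assert(page->refcount == 0);
	page->flags &= ~PG_lru;		/* plain RMW; no racers possible */
	/* ...then take lru_lock and unlink from the list, as above */
}

int main(void)
{
	struct fake_page page = { .flags = PG_lru, .refcount = 0 };

	fake_page_cache_release(&page);
	printf("flags after release: %#lx\n", page.flags);
	return 0;
}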
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c1c4259b4de5..18986fefd49b 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1548,16 +1548,16 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode)
>  {
>  	int ret = -EINVAL;
>
> -	/* Only take pages on the LRU. */
> -	if (!PageLRU(page))
> -		return ret;
> -
>  	/* Compaction should not handle unevictable pages but CMA can do so */
>  	if (PageUnevictable(page) && !(mode & ISOLATE_UNEVICTABLE))
>  		return ret;
>
>  	ret = -EBUSY;
>
> +	/* Only take pages on the LRU. */
> +	if (!PageLRU(page))
> +		return ret;
> +
>  	/*
>  	 * To minimise LRU disruption, the caller can indicate that it only
>  	 * wants to isolate pages it will be able to operate on without
> @@ -1671,8 +1671,6 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  		page = lru_to_page(src);
>  		prefetchw_prev_lru_page(page, src, flags);
>
> -		VM_BUG_ON_PAGE(!PageLRU(page), page);
> -
>  		nr_pages = compound_nr(page);
>  		total_scan += nr_pages;
>
> @@ -1769,21 +1767,19 @@ int isolate_lru_page(struct page *page)
>  	VM_BUG_ON_PAGE(!page_count(page), page);
>  	WARN_RATELIMIT(PageTail(page), "trying to isolate tail page");
>
> -	if (PageLRU(page)) {
> +	if (TestClearPageLRU(page)) {
>  		pg_data_t *pgdat = page_pgdat(page);
>  		struct lruvec *lruvec;
> +		int lru = page_lru(page);
>
> -		spin_lock_irq(&pgdat->lru_lock);
> +		get_page(page);
>  		lruvec = mem_cgroup_page_lruvec(page, pgdat);
> -		if (PageLRU(page)) {
> -			int lru = page_lru(page);
> -			get_page(page);
> -			ClearPageLRU(page);
> -			del_page_from_lru_list(page, lruvec, lru);
> -			ret = 0;
> -		}
> +		spin_lock_irq(&pgdat->lru_lock);
> +		del_page_from_lru_list(page, lruvec, lru);
>  		spin_unlock_irq(&pgdat->lru_lock);
> +		ret = 0;
>  	}
> +
>  	return ret;
> }
>
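For completeness, a sketch of the caller contract that the reworked
isolate_lru_page() above relies on -- an illustrative userspace model
with made-up types, not the kernel code:

/*
 * Because the caller already holds a reference ("(1) Must be called
 * with an elevated refcount on the page"), the page cannot be freed
 * between winning the lru bit and taking lru_lock, so get_page() no
 * longer has to happen before the bit is cleared.
 */
#include <stdbool.h>
#include <stdio.h>

#define PG_lru (1UL << 0)

struct fake_page {
	unsigned long flags;
	int refcount;
};

static bool fake_isolate_lru_page(struct fake_page *page)
{
	if (page->refcount < 1)
		return false;		/* contract violated: no caller ref */

	if (page->flags & PG_lru) {	/* models TestClearPageLRU() */
		page->flags &= ~PG_lru;
		page->refcount++;	/* isolation takes its own reference */
		/* ...then lock lru_lock and del_page_from_lru_list() */
		return true;
	}
	return false;
}

int main(void)
{
	struct fake_page page = { .flags = PG_lru, .refcount = 1 };

	if (fake_isolate_lru_page(&page))
		printf("isolated; refcount is now %d\n", page.refcount);
	return 0;
}

The design point in all three sketches is the same: the atomic lru bit
acts as the isolation ticket, and lru_lock only has to serialize the
list manipulation itself.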