From: Hillf Danton <hdanton@sina.com>
To: js1304@gmail.com
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    Johannes Weiner, Michal Hocko, Hugh Dickins, Minchan Kim,
    Vlastimil Babka, Mel Gorman, kernel-team@lge.com, Joonsoo Kim
Subject: Re: [PATCH 9/9] mm/swap: count a new anonymous page as a reclaim_state's rotate
Date: Wed, 12 Feb 2020 11:35:34 +0800
Message-Id: <20200212033534.3744-1-hdanton@sina.com>
In-Reply-To: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1581401993-20041-1-git-send-email-iamjoonsoo.kim@lge.com>

On Mon, 10 Feb 2020 22:20:37 -0800 (PST)
> From: Joonsoo Kim
>
> reclaim_stat's rotate is used to control the ratio of pages scanned
> from the file and anonymous LRUs. Before this patch, all new anonymous
> pages were counted toward rotate, which protected anonymous pages on
> the active LRU and made reclaim on the anonymous LRU happen less often
> than on the file LRU.
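[ For context: rotate feeds the recent_rotated/recent_scanned counters
that get_scan_count() in mm/vmscan.c uses to balance scan pressure
between the two LRUs. The updater in mm/swap.c of this era is roughly
the following (abridged sketch): ]

        static void update_page_reclaim_stat(struct lruvec *lruvec,
                                             int file, int rotated)
        {
                struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;

                reclaim_stat->recent_scanned[file]++;
                if (rotated)
                        reclaim_stat->recent_rotated[file]++;
        }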
>
> Now the situation has changed: new anonymous pages are no longer added
> to the active LRU, so rotate would be far lower than before. Reclaim
> on the anonymous LRU would therefore happen more often, which could
> hurt systems tuned for the previous behaviour.
>
> Therefore, this patch counts a new anonymous page toward reclaim_stat's
> rotate. Although adding this count to rotate does not strictly fit the
> current algorithm, reducing the regression is more important.
>
> I found this regression in a kernel-build test; it amounts to roughly
> 2~5% performance degradation. With this workaround, performance is
> completely restored.
>
> Signed-off-by: Joonsoo Kim
> ---
>  mm/swap.c | 27 ++++++++++++++++++++++++++-
>  1 file changed, 26 insertions(+), 1 deletion(-)
>
> diff --git a/mm/swap.c b/mm/swap.c
> index 18b2735..c3584af 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -187,6 +187,9 @@ int get_kernel_page(unsigned long start, int write, struct page **pages)
>  }
>  EXPORT_SYMBOL_GPL(get_kernel_page);
>
> +static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
> +                                void *arg);
> +
>  static void pagevec_lru_move_fn(struct pagevec *pvec,
>          void (*move_fn)(struct page *page, struct lruvec *lruvec, void *arg),
>          void *arg)
> @@ -207,6 +210,19 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
>                  spin_lock_irqsave(&pgdat->lru_lock, flags);
>          }
>
> +        if (move_fn == __pagevec_lru_add_fn) {
> +                struct list_head *entry = &page->lru;
> +                unsigned long next = (unsigned long)entry->next;
> +                unsigned long rotate = next & 2;
> +
> +                if (rotate) {
> +                        VM_BUG_ON(arg);
> +
> +                        next = next & ~2;
> +                        entry->next = (struct list_head *)next;
> +                        arg = (void *)rotate;
> +                }
> +        }
>          lruvec = mem_cgroup_page_lruvec(page, pgdat);
>          (*move_fn)(page, lruvec, arg);
> @@ -475,6 +491,14 @@ void lru_cache_add_inactive_or_unevictable(struct page *page,
>                          hpage_nr_pages(page));
>                  count_vm_event(UNEVICTABLE_PGMLOCKED);
>          }
> +
> +        if (PageSwapBacked(page) && evictable) {
> +                struct list_head *entry = &page->lru;
> +                unsigned long next = (unsigned long)entry->next;
> +
> +                next = next | 2;
> +                entry->next = (struct list_head *)next;
> +        }
>          lru_cache_add(page);
>  }
>
> @@ -927,6 +951,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
>  {
>          enum lru_list lru;
>          int was_unevictable = TestClearPageUnevictable(page);
> +        unsigned long rotate = (unsigned long)arg;
>
>          VM_BUG_ON_PAGE(PageLRU(page), page);
>
> @@ -962,7 +987,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
>          if (page_evictable(page)) {
>                  lru = page_lru(page);
>                  update_page_reclaim_stat(lruvec, page_is_file_cache(page),
> -                                         PageActive(page));
> +                                         PageActive(page) | rotate);

Is it likely to rotate a page when we know it's not active?

        update_page_reclaim_stat(lruvec, page_is_file_cache(page),
-                                PageActive(page));
+                                PageActive(page) ||
+                                !page_is_file_cache(page));

>                  if (was_unevictable)
>                          count_vm_event(UNEVICTABLE_PGRESCUED);
>          } else {
> -- 
> 2.7.4
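The tagging in the patch works because a list_head pointer is
word-aligned, so bit 1 of page->lru.next is otherwise always zero and
can carry the "new anonymous page" flag from
lru_cache_add_inactive_or_unevictable() to pagevec_lru_move_fn(). A
minimal user-space sketch of the same trick (hypothetical names, not
the kernel API):

        #include <assert.h>
        #include <stdint.h>

        struct list_head { struct list_head *next, *prev; };

        #define ROTATE_TAG 2UL  /* bit 1, always clear in an aligned pointer */

        /* Stash the flag in the unused low bit of entry->next. */
        static void tag_rotate(struct list_head *entry)
        {
                entry->next = (struct list_head *)
                                ((uintptr_t)entry->next | ROTATE_TAG);
        }

        /* Retrieve the flag and restore the pointer before the list is walked. */
        static unsigned long untag_rotate(struct list_head *entry)
        {
                uintptr_t next = (uintptr_t)entry->next;

                entry->next = (struct list_head *)(next & ~ROTATE_TAG);
                return next & ROTATE_TAG;
        }

        int main(void)
        {
                struct list_head e = { &e, &e };

                assert(untag_rotate(&e) == 0);          /* untagged by default */
                tag_rotate(&e);
                assert(untag_rotate(&e) == ROTATE_TAG); /* flag recovered once */
                assert(e.next == &e);                   /* pointer restored intact */
                return 0;
        }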