From: Alex Shi
To: Alexander Duyck
Cc: Andrew Morton, Mel Gorman, Tejun Heo, Hugh Dickins, Konstantin Khlebnikov,
 Daniel Jordan, Yang Shi, Matthew Wilcox, Johannes Weiner, kbuild test robot,
 linux-mm, LKML, cgroups@vger.kernel.org, Shakeel Butt, Joonsoo Kim, Wei Yang,
 "Kirill A. Shutemov", Rong Chen, Michal Hocko, Vladimir Davydov
Subject: Re: [PATCH v17 17/21] mm/lru: replace pgdat lru_lock with lruvec lock
References: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com>
 <1595681998-19193-18-git-send-email-alex.shi@linux.alibaba.com>
Message-ID: <1fd45e69-3a50-aae8-bcc4-47d891a5e263@linux.alibaba.com>
Date: Tue, 28 Jul 2020 23:39:31 +0800
Shutemov" , Rong Chen , Michal Hocko , Vladimir Davydov References: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com> <1595681998-19193-18-git-send-email-alex.shi@linux.alibaba.com> From: Alex Shi Message-ID: <1fd45e69-3a50-aae8-bcc4-47d891a5e263@linux.alibaba.com> Date: Tue, 28 Jul 2020 23:39:31 +0800 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0) Gecko/20100101 Thunderbird/68.7.0 MIME-Version: 1.0 In-Reply-To: Content-Type: text/plain; charset=utf-8 X-Rspamd-Queue-Id: 04F031800B8A1 X-Spamd-Result: default: False [0.00 / 100.00] X-Rspamd-Server: rspam05 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: =E5=9C=A8 2020/7/28 =E4=B8=8A=E5=8D=887:34, Alexander Duyck =E5=86=99=E9=81= =93: > It might make more sense to look at modifying > compact_unlock_should_abort and compact_lock_irqsave (which always > returns true so should probably be a void) to address the deficiencies > they have that make them unusable for you. One of possible reuse for the func compact_unlock_should_abort, could be like the following, the locked parameter reused different in 2 places. but, it's seems no this style usage in kernel, isn't it? Thanks Alex >From 41d5ce6562f20f74bc6ac2db83e226ac28d56e90 Mon Sep 17 00:00:00 2001 From: Alex Shi Date: Tue, 28 Jul 2020 21:19:32 +0800 Subject: [PATCH] compaction polishing Signed-off-by: Alex Shi --- mm/compaction.c | 71 ++++++++++++++++++++++++---------------------------= ------ 1 file changed, 30 insertions(+), 41 deletions(-) diff --git a/mm/compaction.c b/mm/compaction.c index c28a43481f01..36fce988de3e 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -479,20 +479,20 @@ static bool test_and_set_skip(struct compact_contro= l *cc, struct page *page, * * Always returns true which makes it easier to track lock state in call= ers. 
  */
-static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
+static void compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 						struct compact_control *cc)
 	__acquires(lock)
 {
 	/* Track if the lock is contended in async mode */
 	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
 		if (spin_trylock_irqsave(lock, *flags))
-			return true;
+			return;
 
 		cc->contended = true;
 	}
 
 	spin_lock_irqsave(lock, *flags);
-	return true;
+	return;
 }
 
 /*
@@ -511,11 +511,11 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
  * scheduled)
  */
 static bool compact_unlock_should_abort(spinlock_t *lock,
-		unsigned long flags, bool *locked, struct compact_control *cc)
+		unsigned long flags, void **locked, struct compact_control *cc)
 {
 	if (*locked) {
 		spin_unlock_irqrestore(lock, flags);
-		*locked = false;
+		*locked = NULL;
 	}
 
 	if (fatal_signal_pending(current)) {
@@ -543,7 +543,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 	int nr_scanned = 0, total_isolated = 0;
 	struct page *cursor;
 	unsigned long flags = 0;
-	bool locked = false;
+	struct compact_control *locked = NULL;
 	unsigned long blockpfn = *start_pfn;
 	unsigned int order;
 
@@ -565,7 +565,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		 */
 		if (!(blockpfn % SWAP_CLUSTER_MAX)
 		    && compact_unlock_should_abort(&cc->zone->lock, flags,
-								&locked, cc))
+								(void**)&locked, cc))
 			break;
 
 		nr_scanned++;
@@ -599,8 +599,8 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
 		 * recheck as well.
 		 */
 		if (!locked) {
-			locked = compact_lock_irqsave(&cc->zone->lock,
-								&flags, cc);
+			compact_lock_irqsave(&cc->zone->lock, &flags, cc);
+			locked = cc;
 
 			/* Recheck this is a buddy page under lock */
 			if (!PageBuddy(page))
@@ -787,7 +787,7 @@ static bool too_many_isolated(pg_data_t *pgdat)
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct lruvec *lruvec;
 	unsigned long flags = 0;
-	struct lruvec *locked_lruvec = NULL;
+	struct lruvec *locked = NULL;
 	struct page *page = NULL, *valid_page = NULL;
 	unsigned long start_pfn = low_pfn;
 	bool skip_on_failure = false;
@@ -847,21 +847,11 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		 * contention, to give chance to IRQs. Abort completely if
 		 * a fatal signal is pending.
 		 */
-		if (!(low_pfn % SWAP_CLUSTER_MAX)) {
-			if (locked_lruvec) {
-				unlock_page_lruvec_irqrestore(locked_lruvec,
-									flags);
-				locked_lruvec = NULL;
-			}
-
-			if (fatal_signal_pending(current)) {
-				cc->contended = true;
-
-				low_pfn = 0;
-				goto fatal_pending;
-			}
-
-			cond_resched();
+		if (!(low_pfn % SWAP_CLUSTER_MAX)
+		    && compact_unlock_should_abort(&locked->lru_lock, flags,
+								(void**)&locked, cc)) {
+			low_pfn = 0;
+			goto fatal_pending;
 		}
 
 		if (!pfn_valid_within(low_pfn))
@@ -932,9 +922,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		 */
 		if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
-			if (locked_lruvec) {
-				unlock_page_lruvec_irqrestore(locked_lruvec, flags);
-				locked_lruvec = NULL;
+			if (locked) {
+				unlock_page_lruvec_irqrestore(locked, flags);
+				locked = NULL;
 			}
 
 			if (!isolate_movable_page(page, isolate_mode))
@@ -979,13 +969,13 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		/* If we already hold the lock, we can skip some rechecking */
-		if (lruvec != locked_lruvec) {
-			if (locked_lruvec)
-				unlock_page_lruvec_irqrestore(locked_lruvec,
+		if (lruvec != locked) {
+			if (locked)
+				unlock_page_lruvec_irqrestore(locked,
 									flags);
 
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
-			locked_lruvec = lruvec;
+			locked = lruvec;
 			rcu_read_unlock();
 
 			lruvec_memcg_debug(lruvec, page);
@@ -1041,9 +1031,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
-		if (locked_lruvec) {
-			unlock_page_lruvec_irqrestore(locked_lruvec, flags);
-			locked_lruvec = NULL;
+		if (locked) {
+			unlock_page_lruvec_irqrestore(locked, flags);
+			locked = NULL;
 		}
 		put_page(page);
 
@@ -1057,10 +1047,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 			 * page anyway.
 			 */
 			if (nr_isolated) {
-				if (locked_lruvec) {
-					unlock_page_lruvec_irqrestore(locked_lruvec,
-									flags);
-					locked_lruvec = NULL;
+				if (locked) {
+					unlock_page_lruvec_irqrestore(locked, flags);
+					locked = NULL;
 				}
 				putback_movable_pages(&cc->migratepages);
 				cc->nr_migratepages = 0;
@@ -1087,8 +1076,8 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		page = NULL;
 
 isolate_abort:
-	if (locked_lruvec)
-		unlock_page_lruvec_irqrestore(locked_lruvec, flags);
+	if (locked)
+		unlock_page_lruvec_irqrestore(locked, flags);
 	if (page) {
 		SetPageLRU(page);
 		put_page(page);
-- 
1.8.3.1
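
BTW, to make the trick easier to see outside the kernel context: below is a
minimal user-space sketch of the same "void **locked" sentinel pattern, with
plain pthreads and made-up names (owner_a/owner_b, unlock_if_held), not any
kernel API. The single void * both remembers whether the lock is held and
which object owns it, which is what lets one unlock helper serve both the
zone->lock and the lru_lock call sites in the patch above:

/* sketch.c - build with: cc -pthread sketch.c -o sketch */
#include <stdio.h>
#include <pthread.h>

/* Two unrelated lock owners, standing in for compact_control and lruvec. */
struct owner_a { pthread_mutex_t lock; };
struct owner_b { pthread_mutex_t lock; };

/*
 * One helper for both owner types: a non-NULL *locked means the lock is
 * held, so drop it and clear the sentinel. This mirrors what the patch
 * does inside compact_unlock_should_abort() with its void **locked.
 */
static void unlock_if_held(pthread_mutex_t *lock, void **locked)
{
	if (*locked) {
		pthread_mutex_unlock(lock);
		*locked = NULL;
	}
}

int main(void)
{
	struct owner_a a = { PTHREAD_MUTEX_INITIALIZER };
	struct owner_b b = { PTHREAD_MUTEX_INITIALIZER };
	void *locked = NULL;	/* NULL: nothing held; else: the lock owner */

	pthread_mutex_lock(&a.lock);
	locked = &a;			/* first call site's owner type */
	unlock_if_held(&a.lock, &locked);

	pthread_mutex_lock(&b.lock);
	locked = &b;			/* second call site's owner type */
	unlock_if_held(&b.lock, &locked);

	printf("held after cleanup: %p\n", locked);	/* prints a null pointer */
	return 0;
}

In the sketch, locked is declared as a plain void * so no cast is needed;
in the patch the callers keep typed locals, hence the (void**)&locked casts.
Those rely on all object pointers sharing one representation, which holds on
Linux targets (and the kernel builds with -fno-strict-aliasing), but it may
be part of why this style has no precedent in the kernel.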