Subject: Re: [PATCH v8 03/10] mm/lru: replace pgdat lru_lock with lruvec lock
To: Johannes Weiner
Cc: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, shakeelb@google.com,
    Michal Hocko, Vladimir Davydov, Roman Gushchin, Chris Down,
    Thomas Gleixner, Vlastimil Babka, Qian Cai, Andrey Ryabinin,
Shutemov" , =?UTF-8?B?SsOpcsO0bWUgR2xpc3Nl?= , Andrea Arcangeli , David Rientjes , "Aneesh Kumar K.V" , swkhack , "Potyra, Stefan" , Mike Rapoport , Stephen Rothwell , Colin Ian King , Jason Gunthorpe , Mauro Carvalho Chehab , Peng Fan , Nikolay Borisov , Ira Weiny , Kirill Tkhai , Yafang Shao References: <1579143909-156105-1-git-send-email-alex.shi@linux.alibaba.com> <1579143909-156105-4-git-send-email-alex.shi@linux.alibaba.com> <20200116215222.GA64230@cmpxchg.org> From: Alex Shi Message-ID: <9ee80b68-a78f-714a-c727-1f6d2b4f87ea@linux.alibaba.com> Date: Mon, 20 Jan 2020 20:58:09 +0800 User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0) Gecko/20100101 Thunderbird/68.3.1 MIME-Version: 1.0 In-Reply-To: <20200116215222.GA64230@cmpxchg.org> Content-Type: text/plain; charset=gbk Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: =D4=DA 2020/1/17 =C9=CF=CE=E75:52, Johannes Weiner =D0=B4=B5=C0: > You simply cannot serialize on page->mem_cgroup->lruvec when > page->mem_cgroup isn't stable. You need to serialize on the page > itself, one way or another, to make this work. >=20 >=20 > So here is a crazy idea that may be worth exploring: >=20 > Right now, pgdat->lru_lock protects both PageLRU *and* the lruvec's > linked list. >=20 > Can we make PageLRU atomic and use it to stabilize the lru_lock > instead, and then use the lru_lock only serialize list operations? >=20 Hi Johannes, I am trying to figure out the solution of atomic PageLRU, but is=20 blocked by the following sitations, when PageLRU and lru list was protect= ed together under lru_lock, the PageLRU could be a indicator if page on lru = list But now seems it can't be the indicator anymore. Could you give more clues of stabilization usage of PageLRU? =20 __page_cache_release/release_pages/compaction __pagevec_lru_ad= d if (TestClearPageLRU(page)) if (!PageLRU()) lruvec_lo= ck(); list_add(= ); lruvec_unlock(); SetPageLRU() //position 1 lock_page_lruvec_irqsave(page, &flags); del_page_from_lru_list(page, lruvec, ..); unlock_page_lruvec_irqrestore(lruvec, flags); SetPageLR= U() //position 2 Thanks a lot! Alex > I.e. in compaction, you'd do >=20 > if (!TestClearPageLRU(page)) > goto isolate_fail; > /* > * We isolated the page's LRU state and thereby locked out all > * other isolators, including cgroup page moving, page reclaim, > * page freeing etc. That means page->mem_cgroup is now stable > * and we can safely look up the correct lruvec and take the > * page off its physical LRU list. > */ > lruvec =3D mem_cgroup_page_lruvec(page); > spin_lock_irq(&lruvec->lru_lock); > del_page_from_lru_list(page, lruvec, page_lru(page)); >=20 > Putback would mostly remain the same (although you could take the > PageLRU setting out of the list update locked section, as long as it's > set after the page is physically linked): >=20 > /* LRU isolation pins page->mem_cgroup */ > lruvec =3D mem_cgroup_page_lruvec(page) > spin_lock_irq(&lruvec->lru_lock); > add_page_to_lru_list(...); > spin_unlock_irq(&lruvec->lru_lock); >=20 > SetPageLRU(page); >=20 > And you'd have to carefully review and rework other sites that rely on > PageLRU: reclaim, __page_cache_release(), __activate_page() etc. >=20 > Especially things like activate_page(), which used to only check > PageLRU to shuffle the page on the LRU list would now have to briefly > clear PageLRU and then set it again afterwards. 
>
> However, aside from a bit more churn in those cases, and the
> unfortunate additional atomic operations, I currently can't think of a
> fundamental reason why this wouldn't work.
>
> Hugh, what do you think?
>
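
Also, to check my understanding of the __activate_page() rework you
describe above: would it end up looking roughly like the sketch below?
This is untested and only my reading of your description, reusing the
helpers from your compaction example (the pagevec plumbing and the
vmstat/tracing updates are left out):

	static void __activate_page(struct page *page)
	{
		struct lruvec *lruvec;

		if (PageActive(page) || PageUnevictable(page))
			return;

		/*
		 * Clearing PageLRU pins page->mem_cgroup, as in the
		 * compaction example; if someone else has already
		 * isolated the page, skip it.
		 */
		if (!TestClearPageLRU(page))
			return;

		lruvec = mem_cgroup_page_lruvec(page);
		spin_lock_irq(&lruvec->lru_lock);

		/* page_lru() gives the inactive lru here ... */
		del_page_from_lru_list(page, lruvec, page_lru(page));
		SetPageActive(page);
		/* ... and the active lru after SetPageActive() */
		add_page_to_lru_list(page, lruvec, page_lru(page));

		spin_unlock_irq(&lruvec->lru_lock);

		/* make the page visible to isolators again only now */
		SetPageLRU(page);
	}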