Subject: Re: [PATCH v21 17/19] mm/lru: replace pgdat lru_lock with lruvec lock
From: Alex Shi <alex.shi@linux.alibaba.com>
To: Vlastimil Babka, akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org, hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name, alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com, vdavydov.dev@gmail.com, shy828301@gmail.com
Cc: Michal Hocko, Yang Shi
References: <1604566549-62481-1-git-send-email-alex.shi@linux.alibaba.com> <1604566549-62481-18-git-send-email-alex.shi@linux.alibaba.com>
Date: Thu, 12 Nov 2020 22:19:33 +0800

On 2020/11/12 8:19 PM, Vlastimil Babka wrote:
> On 11/5/20 9:55 AM, Alex Shi wrote:
>> This patch moves per node
>> lru_lock into lruvec, thus bringing a lru_lock for
>> each memcg per node. So on a large machine, each memcg doesn't
>> have to suffer from per-node pgdat->lru_lock contention; they can
>> go fast with their own lru_lock.
>>
>> After moving the memcg charge before lru inserting, page isolation can
>> serialize the page's memcg, so the per-memcg lruvec lock is stable and can
>> replace the per-node lru lock.
>>
>> In the function isolate_migratepages_block, compact_unlock_should_abort and
>> lock_page_lruvec_irqsave are open coded to work with compact_control.
>> Also add a debug function in the locking code which may give some clues if
>> something gets out of hand.
>>
>> Daniel Jordan's testing shows a 62% improvement on a modified readtwice case
>> on his 2P * 10 core * 2 HT Broadwell box.
>> https://lore.kernel.org/lkml/20200915165807.kpp7uhiw7l3loofu@ca-dmjordan1.us.oracle.com/
>>
>> On a large machine with memcg enabled but not used, looking up the page's
>> lruvec goes through a few extra pointers, which may increase lru_lock
>> holding time and cause a slight regression.
>>
>> Hugh Dickins helped polish the patch, thanks!
>>
>> Signed-off-by: Alex Shi
>> Acked-by: Hugh Dickins
>> Cc: Rong Chen
>> Cc: Hugh Dickins
>> Cc: Andrew Morton
>> Cc: Johannes Weiner
>> Cc: Michal Hocko
>> Cc: Vladimir Davydov
>> Cc: Yang Shi
>> Cc: Matthew Wilcox
>> Cc: Konstantin Khlebnikov
>> Cc: Tejun Heo
>> Cc: linux-kernel@vger.kernel.org
>> Cc: linux-mm@kvack.org
>> Cc: cgroups@vger.kernel.org
>
> I think I need some explanation about the rcu_read_lock() usage in
> lock_page_lruvec*() (and places effectively opencoding it).
> Preferably in the form of a code comment, but that can also be added as an
> additional patch later; I don't want to block the series.

Hi Vlastimil,

Thanks for the comments! Yes, we did talk about the rcu_read_lock(), which
is used to block memcg destruction while we take the lock; and the spin_lock
actually includes a rcu_read_lock(). We could add comments on this later.
> mem_cgroup_page_lruvec() comment says
>
>  * This function relies on page->mem_cgroup being stable - see the
>  * access rules in commit_charge().
>
> commit_charge() comment:
>
>         * Any of the following ensures page->mem_cgroup stability:
>         *
>         * - the page lock
>         * - LRU isolation
>         * - lock_page_memcg()
>         * - exclusive reference
>
> "LRU isolation" used to be quite clear, but now is it after
> TestClearPageLRU(page) or after deleting from the lru list as well?
> Also it doesn't mention rcu_read_lock(), should it?

LRU isolation is still the same concept as before: a set of actions that take
a page off an lru list, and commit_charge does need such isolation for the
page. But the conditions for page_memcg could change, since we no longer rely
on lru isolation for it. The comments could be updated later.

>
> So what exactly are we protecting by rcu_read_lock() in e.g. lock_page_lruvec()?
>
>         rcu_read_lock();
>         lruvec = mem_cgroup_page_lruvec(page, pgdat);
>         spin_lock(&lruvec->lru_lock);
>         rcu_read_unlock();
>
> Looks like we are protecting the lruvec from going away and it can't go
> away anymore after we take the lru_lock?
>
> But then e.g. in __munlock_pagevec() we are doing this without an
> rcu_read_lock():
>
>     new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));

TestClearPageLRU can block the page from memcg migration/destruction.
Thanks
Alex

>
> where new_lruvec is potentially not the one that we have locked
>
> And the last thing mem_cgroup_page_lruvec() is doing is:
>
>         if (unlikely(lruvec->pgdat != pgdat))
>                 lruvec->pgdat = pgdat;
>         return lruvec;
>
> So without the rcu_read_lock() is this potentially accessing the pgdat
> field of a lruvec that might have just gone away?
>
> Thanks,
> Vlastimil