Subject: Re: [PATCH v18 00/32] per memcg lru_lock: reviews
From: Alex Shi <alex.shi@linux.alibaba.com>
To: Hugh Dickins
Cc: Andrew Morton, mgorman@techsingularity.net, tj@kernel.org, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com, willy@infradead.org, hannes@cmpxchg.org, lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com, richard.weiyang@gmail.com, kirill@shutemov.name, alexander.duyck@gmail.com, rong.a.chen@intel.com, mhocko@suse.com, vdavydov.dev@gmail.com, shy828301@gmail.com, vbabka@suse.cz, minchan@kernel.org, cai@lca.pw
Date: Sun, 13 Sep 2020 22:22:48 +0800
Message-ID: <2fed13c0-1d78-fa92-b607-e997b1bc5fb2@linux.alibaba.com>
References: <1598273705-69124-1-git-send-email-alex.shi@linux.alibaba.com> <20200824114204.cc796ca182db95809dd70a47@linux-foundation.org> <61a42a87-eec9-e300-f710-992756f70de6@linux.alibaba.com>
On 2020/9/12 4:38 PM, Hugh Dickins wrote:
> On Wed, 9 Sep 2020, Alex Shi wrote:
>> On 2020/9/9 7:41 AM, Hugh Dickins wrote:
>>>
>>> The use of lock_page_memcg() in __munlock_pagevec() in 20/32,
>>> introduced in patchset v17, looks good but it isn't: I was lucky that
>>> systemd at reboot did some munlocking that exposed the problem to lockdep.
>>> The first time into the loop, lock_page_memcg() is done before lru_lock
>>> (as 06/32 has allowed); but the second time around the loop, it is done
>>> while still holding lru_lock.
>>
>> I don't know the details of the lockdep report. Just wondering, could it
>> be possible to fix the move_lock/lru_lock ordering?
>> Or try another blocking approach, like the one mentioned in commit_charge()?
>>
>>>
>>> lock_page_memcg() really needs to be absorbed into (a variant of)
>>> relock_page_lruvec(), and I do have that (it's awkward because of
>>> the different ways in which the IRQ flags are handled). And out of
>>> curiosity, I've also tried using that in mm/swap.c too, instead of the
>>> TestClearPageLRU technique: lockdep is happy, but an update_lru_size()
>>> warning showed that it cannot safely be mixed with the TestClearPageLRU
>>> technique (that I'd left in isolate_lru_page()). So I'll stash away
>>> that relock_page_lruvec(), and consider what's best for mm/mlock.c:
>>> now that I've posted these comments so far, that's my priority, then
>>> to get the result under testing again, before resuming these comments.
>>
>> No idea of your solution yet, but looking forward to your good news! :)
>
> Yes, it is good news, and simpler than anything suggested above.

Awesome!

>
> The main difficulties will probably be to look good in the 80 columns
> (I know that limit has been lifted recently, but some of us use xterms
> side by side), and to explain it.
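To make the inversion concrete, here is a hypothetical, non-compilable sketch of the shape the v17 __munlock_pagevec() loop could have taken (this is illustrative pseudocode, not the actual mm/mlock.c source):

```c
/* Hypothetical sketch only -- not the real mm/mlock.c code. */
static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
{
	struct lruvec *lruvec = NULL;
	int i;

	for (i = 0; i < pagevec_count(pvec); i++) {
		struct page *page = pvec->pages[i];

		/*
		 * 1st iteration: lruvec is NULL, no lru_lock held yet,
		 * so taking the memcg move_lock here obeys the order
		 * that 06/32 allows (move_lock before lru_lock).
		 * 2nd iteration: relock_page_lruvec_irq() below kept
		 * lru_lock held across the loop, so this nests
		 * move_lock *inside* lru_lock -- the opposite order
		 * from the 1st pass, which is what lockdep reported.
		 */
		lock_page_memcg(page);
		lruvec = relock_page_lruvec_irq(page, lruvec);
		if (TestClearPageMlocked(page))
			del_page_from_lru_list(page, lruvec, page_lru(page));
		unlock_page_memcg(page);
	}
	if (lruvec)
		unlock_page_lruvec_irq(lruvec);
}
```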
>
> mm/mlock.c has not been kept up-to-date very well: and in particular,
> you have taken too seriously that "Serialize with any parallel
> __split_huge_page_refcount()..." comment that you updated to two
> comments "Serialize split tail pages in __split_huge_page_tail()...".
>
> Delete them! The original comment was by Vlastimil for v3.14 in 2014.
> But Kirill redesigned THP refcounting for v4.5 in 2016: that's when
> __split_huge_page_refcount() went away. And with the new refcounting,
> the THP splitting races that lru_lock protected munlock_vma_page()
> and __munlock_pagevec() from: those races have become impossible.
>
> Or maybe there never was such a race in __munlock_pagevec(): you
> have added the comment there, assuming lru_lock was for that purpose,
> but that was probably just the convenient place to take it,
> to cover all the del_page_from_lru()s.
>
> Observe how split_huge_page_to_list() uses unmap_page() to remove
> all pmds and all ptes for the huge page being split, and remap_page()
> only replaces the migration entries (used for anon but not for shmem
> or file) after doing all of the __split_huge_page_tail()s, before
> unlocking any of the pages. Recall that munlock_vma_page() and
> __munlock_pagevec() are being applied to pages found mapped
> into userspace, by ptes or pmd: there are none of those while
> __split_huge_page_tail() is being used, so no race to protect from.
>
> (Could a newly detached tail be freshly faulted into userspace just
> before __split_huge_page() has reached the head? Not quite, the
> fault has to wait to get the tail's page lock. But even if it
> could, how would that be a problem for __munlock_pagevec()?)
>
> There's lots more that could be said: for example, PageMlocked will
> always be clear on the THP head during __split_huge_page_tail(),
> because the last unmap of a PageMlocked page does clear_page_mlock().
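The unmap/remap ordering relied on above can be summarized with a simplified, non-compilable sketch of split_huge_page_to_list() (details elided; this is not the actual function body):

```c
/* Simplified sketch of the ordering, not real kernel source. */
int split_huge_page_to_list(struct page *page, struct list_head *list)
{
	struct page *head = compound_head(page);

	unmap_page(head);	/* removes ALL pmds and ptes for the THP */

	/* ... freeze the refcount, then for each tail page: ... */
	/* __split_huge_page_tail(head, i, ...); */

	remap_page(head);	/* migration entries replaced only now */

	/*
	 * Tail pages are unlocked only after remap_page(), so while
	 * __split_huge_page_tail() runs, the page is mapped nowhere:
	 * munlock_vma_page() and __munlock_pagevec(), which operate
	 * only on pages found mapped by pte or pmd, cannot race with
	 * the split.  Hence no "Serialize" comment is needed.
	 */
	return 0;
}
```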
> But that's not required to prove the case, it's just another argument
> against the "Serialize" comment you have in __munlock_pagevec().
>
> So, no need for the problematic lock_page_memcg(page) there in
> __munlock_pagevec(), nor to lock (or relock) lruvec just below it.
> __munlock_pagevec() still needs lru_lock to del_page_from_lru_list(),
> of course, but that must be done after your TestClearPageMlocked has
> stabilized page->memcg. Use relock_page_lruvec_irq() here? I suppose
> that will be easiest, but notice how __munlock_pagevec_fill() has
> already made sure that all the pages in the pagevec are from the same
> zone (and it cannot do the same for memcg without locking page memcg);
> so some of relock's work will be redundant.

It sounds reasonable to me.

>
> Otherwise, I'm much happier with your mm/mlock.c since looking at it
> in more detail: a couple of nits though - drop the clear_page_mlock()
> hunk from 25/32 - kernel style says do it the way you are undoing by
> -	if (!isolate_lru_page(page)) {
> +	if (!isolate_lru_page(page))
> 		putback_lru_page(page);
> -	} else {
> +	else {
> I don't always follow that over-braced style when making changes,
> but you should not touch otherwise untouched code just to make it
> go against the approved style. And in munlock_vma_page(),
> -	if (!TestClearPageMlocked(page)) {
> +	if (!TestClearPageMlocked(page))
> 		/* Potentially, PTE-mapped THP: do not skip the rest PTEs */
> -		nr_pages = 1;
> -		goto unlock_out;
> -	}
> +		return 0;
> please restore the braces: with that comment line in there,
> the compiler does not need the braces, but the human eye does.

Yes, it's better to keep the braces there.

Thanks
Alex