Subject: Re: [PATCH next] mm/swap.c: reduce lock contention in lru_cache_add
To: Yu Zhao
Cc: Konstantin Khlebnikov, Andrew Morton, Hugh Dickins, Michal Hocko,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
References: <1605860847-47445-1-git-send-email-alex.shi@linux.alibaba.com>
 <20201126045234.GA1014081@google.com>
 <20201126072402.GA1047005@google.com>
From: Alex Shi <alex.shi@linux.alibaba.com>
Message-ID: <0e14f1dc-31bb-5965-4711-9e59c51ee36d@linux.alibaba.com>
Date: Thu, 26 Nov 2020 16:09:29 +0800
In-Reply-To: <20201126072402.GA1047005@google.com>
On 2020/11/26 at 3:24 PM, Yu Zhao wrote:
> Oh, no, I'm not against your idea. I was saying it doesn't seem
> necessary to sort -- a nested loop would just do the job given
> pagevec is small.
>
> diff --git a/mm/swap.c b/mm/swap.c
> index cb3794e13b48..1d238edc2907 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -996,15 +996,26 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
>   */
>  void __pagevec_lru_add(struct pagevec *pvec)
>  {
> -	int i;
> +	int i, j;
>  	struct lruvec *lruvec = NULL;
>  	unsigned long flags = 0;
>
>  	for (i = 0; i < pagevec_count(pvec); i++) {
>  		struct page *page = pvec->pages[i];
>
> +		if (!page)
> +			continue;
> +
>  		lruvec = relock_page_lruvec_irqsave(page, lruvec, &flags);
> -		__pagevec_lru_add_fn(page, lruvec);
> +
> +		for (j = i; j < pagevec_count(pvec); j++) {
> +			if (page_to_nid(pvec->pages[j]) != page_to_nid(page) ||
> +			    page_memcg(pvec->pages[j]) != page_memcg(page))
> +				continue;
> +
> +			__pagevec_lru_add_fn(pvec->pages[j], lruvec);
> +			pvec->pages[j] = NULL;
> +		}

I have to say your method is better than mine, and it could be reused
for all the relock_page_lruvec callers. I expect this could speed up
lru performance a lot!

>  	}
>  	if (lruvec)
>  		unlock_page_lruvec_irqrestore(lruvec, flags);
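
For readers following along outside the kernel tree, the batching idea in
the quoted diff can be shown in isolation: for the first unhandled entry,
take its lock once, then sweep the rest of the small batch and drain every
entry that shares the same key, setting drained slots to NULL so later
outer iterations skip them. Below is a minimal user-space C sketch of that
pattern under assumed names -- struct item, its key field, and the printf
calls merely stand in for struct page, the (node, memcg) pair, the lruvec
lock, and __pagevec_lru_add_fn(); none of them are kernel APIs.

#include <stdio.h>
#include <stddef.h>

/* Hypothetical stand-in for a page: key plays the role of the
 * (node, memcg) pair that selects the lruvec. */
struct item {
	int key;
	int value;
};

#define VEC_SIZE 8	/* pagevec-like small batch, so O(n^2) stays cheap */

static void process_batch(struct item *vec[], size_t n)
{
	size_t i, j;

	for (i = 0; i < n; i++) {
		int key;

		if (!vec[i])
			continue;	/* already drained by an earlier group */

		key = vec[i]->key;
		/* one "lock" per distinct key, standing in for the lruvec lock */
		printf("lock bucket %d\n", key);

		for (j = i; j < n; j++) {
			if (!vec[j] || vec[j]->key != key)
				continue;

			/* per-item work, standing in for __pagevec_lru_add_fn() */
			printf("  handle value %d\n", vec[j]->value);
			vec[j] = NULL;	/* mark handled */
		}

		printf("unlock bucket %d\n", key);
	}
}

int main(void)
{
	struct item items[VEC_SIZE] = {
		{ 0, 10 }, { 1, 11 }, { 0, 12 }, { 2, 13 },
		{ 1, 14 }, { 0, 15 }, { 2, 16 }, { 1, 17 },
	};
	struct item *vec[VEC_SIZE];
	size_t i;

	for (i = 0; i < VEC_SIZE; i++)
		vec[i] = &items[i];

	process_batch(vec, VEC_SIZE);
	return 0;
}

One detail of the sketch: the inner sweep also skips slots that are
already NULL, because a group started at a later index can run into
entries drained by an earlier group. The sketch also locks and unlocks
once per group for simplicity, whereas the kernel code above only
switches locks when the lruvec actually changes and unlocks once at the
end.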