Subject: Re: [PATCH v17 17/21] mm/lru: replace pgdat lru_lock with lruvec lock
From: Alex Shi <alex.shi@linux.alibaba.com>
To: akpm@linux-foundation.org, mgorman@techsingularity.net, tj@kernel.org,
    hughd@google.com, khlebnikov@yandex-team.ru, daniel.m.jordan@oracle.com,
    yang.shi@linux.alibaba.com, willy@infradead.org, hannes@cmpxchg.org,
    lkp@intel.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    cgroups@vger.kernel.org, shakeelb@google.com, iamjoonsoo.kim@lge.com,
    richard.weiyang@gmail.com, kirill@shutemov.name,
    alexander.duyck@gmail.com, rong.a.chen@intel.com
Cc: Michal Hocko, Vladimir Davydov
Date: Thu, 6 Aug 2020 15:41:28 +0800
References: <1595681998-19193-1-git-send-email-alex.shi@linux.alibaba.com>
    <1595681998-19193-18-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1595681998-19193-18-git-send-email-alex.shi@linux.alibaba.com>

Hi Johannes, Michal,

Going from a page to its lruvec takes a few memory accesses under the
lock, which causes extra cost. Would you like to save the per-memcg
lruvec pointer in page->private? A rough sketch of what I mean is
inlined below the first quoted hunk.

Thanks
Alex

On 2020/7/25 8:59 PM, Alex Shi wrote:
>  /**
>   * mem_cgroup_page_lruvec - return lruvec for isolating/putting an LRU page
>   * @page: the page
> @@ -1215,7 +1228,8 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
>  		goto out;
>  	}
> 
> -	memcg = page->mem_cgroup;
> +	VM_BUG_ON_PAGE(PageTail(page), page);
> +	memcg = READ_ONCE(page->mem_cgroup);
>  	/*
>  	 * Swapcache readahead pages are added to the LRU - and
>  	 * possibly migrated - before they are charged.
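
Just to make the page->private question concrete, here is a rough sketch
of the idea (my illustration only, not part of this series; it ignores
the existing users of page->private such as buffer heads and swap
entries, and the charge-move races that the memcg lookup has to care
about, so it sketches the question rather than a working patch):

#include <linux/memcontrol.h>
#include <linux/mm.h>
#include <linux/rcupdate.h>

/* Hypothetical helper: look the lruvec up once and remember it. */
static struct lruvec *lock_page_lruvec_cached(struct page *page)
{
	struct lruvec *lruvec;

	rcu_read_lock();
	lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
	/* Cache the lookup result for later lockers of this page. */
	set_page_private(page, (unsigned long)lruvec);
	spin_lock(&lruvec->lru_lock);
	rcu_read_unlock();

	return lruvec;
}

/*
 * Hypothetical helper: relock via the cached pointer, skipping the
 * page->mem_cgroup -> memcg -> per-node lruvec chain of loads.
 */
static struct lruvec *relock_page_lruvec_cached(struct page *page)
{
	struct lruvec *lruvec = (struct lruvec *)page_private(page);

	spin_lock(&lruvec->lru_lock);
	return lruvec;
}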
> @@ -1236,6 +1250,51 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
>  	return lruvec;
>  }
> 
> +struct lruvec *lock_page_lruvec(struct page *page)
> +{
> +	struct lruvec *lruvec;
> +	struct pglist_data *pgdat = page_pgdat(page);
> +
> +	rcu_read_lock();
> +	lruvec = mem_cgroup_page_lruvec(page, pgdat);
> +	spin_lock(&lruvec->lru_lock);
> +	rcu_read_unlock();
> +
> +	lruvec_memcg_debug(lruvec, page);
> +
> +	return lruvec;
> +}
> +
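
For completeness, a caller of the new helper could look roughly like the
sketch below. This is only my reading of lock_page_lruvec() as quoted
above: it uses the plain, non-irq-disabling variant and drops lru_lock
directly, while the real isolation paths need the irq-safe variants and
the usual refcount checks.

#include <linux/memcontrol.h>
#include <linux/mm.h>
#include <linux/mm_inline.h>

/* Usage sketch: take one page off its LRU under the per-lruvec lock. */
static bool isolate_page_sketch(struct page *page)
{
	struct lruvec *lruvec;
	bool isolated = false;

	lruvec = lock_page_lruvec(page);
	if (PageLRU(page)) {
		get_page(page);		/* hold the page while it is off the list */
		ClearPageLRU(page);
		del_page_from_lru_list(page, lruvec, page_lru(page));
		isolated = true;
	}
	spin_unlock(&lruvec->lru_lock);

	return isolated;
}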