From: Alex Shi <alex.shi@linux.alibaba.com>
To: cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Andrew Morton, Mel Gorman, Tejun Heo
Cc: Alex Shi, Michal Hocko, "Kirill A. Shutemov", Hugh Dickins,
	Vlastimil Babka, Jérôme Glisse, Andrea Arcangeli,
	"Aneesh Kumar K.V", David Rientjes, Souptick Joarder
Subject: [PATCH 05/14] lru/huge_page: use per lruvec lock in __split_huge_page
Date: Tue, 20 Aug 2019 17:48:28 +0800
Message-Id: <1566294517-86418-6-git-send-email-alex.shi@linux.alibaba.com>
In-Reply-To: <1566294517-86418-1-git-send-email-alex.shi@linux.alibaba.com>
References: <1566294517-86418-1-git-send-email-alex.shi@linux.alibaba.com>

Use the per-lruvec lru_lock in __split_huge_page() and
split_huge_page_to_list() in place of the node-wide pgdat lru_lock.
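For reference, here is a minimal userspace sketch of the locking pattern
(plain C with pthreads; the struct layout and the page_lruvec() /
split_page_locked() names are illustrative stand-ins, not kernel API; in
the patch itself the lookup is mem_cgroup_page_lruvec() and the lock is
a spinlock taken with spin_lock_irqsave()):

#include <pthread.h>
#include <stdio.h>

/* Each lruvec carries its own lock, instead of one lock per node. */
struct lruvec {
	pthread_mutex_t lru_lock;
	int nr_pages;		/* stand-in for the LRU lists */
};

struct page {
	struct lruvec *lruvec;	/* lruvec that currently owns this page */
};

/* Analogue of mem_cgroup_page_lruvec(): resolve the owning lruvec. */
static struct lruvec *page_lruvec(struct page *page)
{
	return page->lruvec;
}

/* Analogue of the patched split path: lock only the owning lruvec. */
static void split_page_locked(struct page *page)
{
	struct lruvec *lruvec = page_lruvec(page);

	pthread_mutex_lock(&lruvec->lru_lock);
	lruvec->nr_pages++;	/* placeholder for the real split work */
	pthread_mutex_unlock(&lruvec->lru_lock);
}

int main(void)
{
	struct lruvec vec = { PTHREAD_MUTEX_INITIALIZER, 0 };
	struct page page = { &vec };

	split_page_locked(&page);
	printf("nr_pages = %d\n", vec.nr_pages);
	return 0;
}

The change is purely about lock granularity: contention moves from one
lock per node to one lock per memcg per node, while the critical section
stays the same.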
Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Michal Hocko
Cc: "Kirill A. Shutemov"
Cc: Hugh Dickins
Cc: Vlastimil Babka
Cc: "Jérôme Glisse"
Cc: Andrea Arcangeli
Cc: "Aneesh Kumar K.V"
Cc: David Rientjes
Cc: Souptick Joarder
Cc: Andrew Morton
Cc: Tejun Heo
Cc: cgroups@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 mm/huge_memory.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3a483deee807..9a96c0944b4d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2529,7 +2529,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_unlock(&head->mapping->i_pages);
 	}

-	spin_unlock_irqrestore(&pgdat->lruvec.lru_lock, flags);
+	spin_unlock_irqrestore(&lruvec->lru_lock, flags);

 	remap_page(head);

@@ -2671,6 +2671,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	struct pglist_data *pgdata = NODE_DATA(page_to_nid(head));
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
+	struct lruvec *lruvec;
 	int count, mapcount, extra_pins, ret;
 	bool mlocked;
 	unsigned long flags;
@@ -2739,8 +2740,10 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	if (mlocked)
 		lru_add_drain();

+	lruvec = mem_cgroup_page_lruvec(head, pgdata);
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
-	spin_lock_irqsave(&pgdata->lruvec.lru_lock, flags);
+	spin_lock_irqsave(&lruvec->lru_lock, flags);
+	sync_lruvec_pgdat(lruvec, pgdata);

 	if (mapping) {
 		XA_STATE(xas, &mapping->i_pages, page_index(head));
@@ -2785,7 +2788,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		spin_unlock(&pgdata->split_queue_lock);
 fail:		if (mapping)
 			xa_unlock(&mapping->i_pages);
-		spin_unlock_irqrestore(&pgdata->lruvec.lru_lock, flags);
+		spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 		remap_page(head);
 		ret = -EBUSY;
 	}
-- 
1.8.3.1