From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 31 Jul 2021 00:39:36 -0600
In-Reply-To: <20210731063938.1391602-1-yuzhao@google.com>
Message-Id: 
<20210731063938.1391602-2-yuzhao@google.com>
Mime-Version: 1.0
References: <20210731063938.1391602-1-yuzhao@google.com>
X-Mailer: git-send-email 2.32.0.554.ge1b32706d8-goog
Subject: [PATCH 1/3] mm: don't take lru lock when splitting isolated thp
From: Yu Zhao
To: linux-mm@kvack.org
Cc: Andrew Morton, Hugh Dickins, "Kirill A . Shutemov", Matthew Wilcox,
    Vlastimil Babka, Yang Shi, Zi Yan, linux-kernel@vger.kernel.org,
    Yu Zhao, Shuang Zhai
Content-Type: text/plain; charset="UTF-8"

When splitting an isolated thp under reclaim or migration, its tail
pages are not put back on the lru; therefore there is no need to take
the lru lock.
Signed-off-by: Yu Zhao
Tested-by: Shuang Zhai
---
 mm/huge_memory.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index afff3ac87067..d8b655856e79 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2342,17 +2342,14 @@ static void remap_page(struct page *page, unsigned int nr)
 	}
 }
 
-static void lru_add_page_tail(struct page *head, struct page *tail,
-		struct lruvec *lruvec, struct list_head *list)
+static void lru_add_page_tail(struct page *head, struct page *tail, struct list_head *list)
 {
 	VM_BUG_ON_PAGE(!PageHead(head), head);
 	VM_BUG_ON_PAGE(PageCompound(tail), head);
 	VM_BUG_ON_PAGE(PageLRU(tail), head);
-	lockdep_assert_held(&lruvec->lru_lock);
 
 	if (list) {
-		/* page reclaim is reclaiming a huge page */
-		VM_WARN_ON(PageLRU(head));
+		/* page reclaim or migration is splitting an isolated thp */
 		get_page(tail);
 		list_add_tail(&tail->lru, list);
 	} else {
@@ -2363,8 +2360,7 @@ static void lru_add_page_tail(struct page *head, struct page *tail,
 	}
 }
 
-static void __split_huge_page_tail(struct page *head, int tail,
-		struct lruvec *lruvec, struct list_head *list)
+static void __split_huge_page_tail(struct page *head, int tail, struct list_head *list)
 {
 	struct page *page_tail = head + tail;
 
@@ -2425,19 +2421,21 @@ static void __split_huge_page_tail(struct page *head, int tail,
 	 * pages to show after the currently processed elements - e.g.
 	 * migrate_pages
 	 */
-	lru_add_page_tail(head, page_tail, lruvec, list);
+	lru_add_page_tail(head, page_tail, list);
 }
 
 static void __split_huge_page(struct page *page, struct list_head *list,
 		pgoff_t end)
 {
 	struct page *head = compound_head(page);
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = NULL;
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
 	unsigned int nr = thp_nr_pages(head);
 	int i;
 
+	VM_BUG_ON_PAGE(list && PageLRU(head), head);
+
 	/* complete memcg works before add pages to LRU */
 	split_page_memcg(head, nr);
 
@@ -2450,10 +2448,11 @@
 	}
 
 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
-	lruvec = lock_page_lruvec(head);
+	if (!list)
+		lruvec = lock_page_lruvec(head);
 
 	for (i = nr - 1; i >= 1; i--) {
-		__split_huge_page_tail(head, i, lruvec, list);
+		__split_huge_page_tail(head, i, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
 		if (head[i].index >= end) {
 			ClearPageDirty(head + i);
@@ -2471,7 +2470,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 
 	ClearPageCompound(head);
-	unlock_page_lruvec(lruvec);
+	if (lruvec)
+		unlock_page_lruvec(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, nr);
@@ -2645,6 +2645,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
 	VM_BUG_ON_PAGE(!PageLocked(head), head);
 	VM_BUG_ON_PAGE(!PageCompound(head), head);
+	VM_BUG_ON_PAGE(list && PageLRU(head), head);
 
 	if (PageWriteback(head))
 		return -EBUSY;
-- 
2.32.0.554.ge1b32706d8-goog