From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yu Zhao
To: linux-mm@kvack.org
Cc: Alex Shi, Andi Kleen, Andrew Morton, Benjamin Manes, Dave Chinner,
    Dave Hansen, Hillf Danton, Jens Axboe, Johannes Weiner,
    Jonathan Corbet, Joonsoo Kim, Matthew Wilcox, Mel Gorman,
    Miaohe Lin, Michael Larabel, Michal Hocko, Michel Lespinasse,
    Rik van Riel, Roman Gushchin, Rong Chen, SeongJae Park, Tim Chen,
    Vlastimil Babka, Yang Shi, Ying Huang, Zi Yan,
    linux-kernel@vger.kernel.org, lkp@lists.01.org,
    page-reclaim@google.com, Yu Zhao
Subject: [PATCH v2 05/16] mm/swap.c: export activate_page()
Date: Tue, 13 Apr 2021 00:56:22 -0600
Message-Id: <20210413065633.2782273-6-yuzhao@google.com>
In-Reply-To: <20210413065633.2782273-1-yuzhao@google.com>
References: <20210413065633.2782273-1-yuzhao@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
X-Mailer: git-send-email 2.31.1.295.g9ea45b61b8-goog

activate_page() is needed to activate pages that are already on an LRU
list or queued in lru_pvecs.lru_add. The exported function is a merger
of the existing activate_page() and __lru_cache_activate_page().

Signed-off-by: Yu Zhao
---
 include/linux/swap.h |  1 +
 mm/swap.c            | 28 +++++++++++++++-------------
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4cc6ec3bf0ab..de2bbbf181ba 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -344,6 +344,7 @@ extern void lru_add_drain_cpu(int cpu);
 extern void lru_add_drain_cpu_zone(struct zone *zone);
 extern void lru_add_drain_all(void);
 extern void rotate_reclaimable_page(struct page *page);
+extern void activate_page(struct page *page);
 extern void deactivate_file_page(struct page *page);
 extern void deactivate_page(struct page *page);
 extern void mark_page_lazyfree(struct page *page);
diff --git a/mm/swap.c b/mm/swap.c
index 31b844d4ed94..f20ed56ebbbf 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -334,7 +334,7 @@ static bool need_activate_page_drain(int cpu)
 	return pagevec_count(&per_cpu(lru_pvecs.activate_page, cpu)) != 0;
 }
 
-static void activate_page(struct page *page)
+static void activate_page_on_lru(struct page *page)
 {
 	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
@@ -354,7 +354,7 @@ static inline void activate_page_drain(int cpu)
 {
 }
 
-static void activate_page(struct page *page)
+static void activate_page_on_lru(struct page *page)
 {
 	struct lruvec *lruvec;
 
@@ -368,11 +368,22 @@ static void activate_page(struct page *page)
 }
 #endif
 
-static void __lru_cache_activate_page(struct page *page)
+/*
+ * If the page is on the LRU, queue it for activation via
+ * lru_pvecs.activate_page. Otherwise, assume the page is on a
+ * pagevec, mark it active and it'll be moved to the active
+ * LRU on the next drain.
+ */
+void activate_page(struct page *page)
{
 	struct pagevec *pvec;
 	int i;
 
+	if (PageLRU(page)) {
+		activate_page_on_lru(page);
+		return;
+	}
+
 	local_lock(&lru_pvecs.lock);
 	pvec = this_cpu_ptr(&lru_pvecs.lru_add);
 
@@ -421,16 +432,7 @@ void mark_page_accessed(struct page *page)
 	 * evictable page accessed has no effect.
 	 */
 	} else if (!PageActive(page)) {
-		/*
-		 * If the page is on the LRU, queue it for activation via
-		 * lru_pvecs.activate_page. Otherwise, assume the page is on a
-		 * pagevec, mark it active and it'll be moved to the active
-		 * LRU on the next drain.
-		 */
-		if (PageLRU(page))
-			activate_page(page);
-		else
-			__lru_cache_activate_page(page);
+		activate_page(page);
 		ClearPageReferenced(page);
 		workingset_activation(page);
 	}
-- 
2.31.1.295.g9ea45b61b8-goog
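
For context, a minimal sketch of a hypothetical out-of-tree caller of the
newly exported activate_page(); example_mark_page_hot() below is
illustrative only and is not part of this series:

/* Hypothetical caller, illustration only, not part of this patch. */
#include <linux/mm.h>
#include <linux/swap.h>

static void example_mark_page_hot(struct page *page)
{
	/*
	 * The exported activate_page() folds in the old
	 * __lru_cache_activate_page() path: a page already on an LRU
	 * list is queued on lru_pvecs.activate_page, while a page still
	 * sitting on a pagevec is marked PG_active and moved to the
	 * active LRU on the next drain.
	 */
	if (!PageUnevictable(page))
		activate_page(page);
}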