From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 8 Jan 2021 07:58:12 -0800
In-Reply-To: <20210108155813.2914586-1-shakeelb@google.com>
Message-Id: <20210108155813.2914586-2-shakeelb@google.com>
Mime-Version: 1.0
References: <20210108155813.2914586-1-shakeelb@google.com>
X-Mailer: git-send-email 2.29.2.729.g45daf8777d-goog
Subject: [PATCH v2 2/3] mm: fix numa stats for thp migration
From: Shakeel Butt
To: Johannes Weiner, Roman Gushchin, Michal Hocko, Yang Shi
Cc: Andrew Morton,
    linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
    Shakeel Butt, stable@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

Currently the kernel does not correctly update the NUMA stats for
NR_FILE_PAGES and NR_SHMEM on THP migration: the counters are adjusted
by a single page instead of by the number of subpages. Fix that.

For NR_FILE_DIRTY and NR_ZONE_WRITE_PENDING there is no need to handle
THP migration yet, since the kernel does not support write access to
file THPs, but to be more future proof this patch adds THP support for
those stats as well.

Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
Signed-off-by: Shakeel Butt
Acked-by: Yang Shi
Reviewed-by: Roman Gushchin
Cc: stable@vger.kernel.org
---
Changes since v1:
- Fixed a typo

 mm/migrate.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 613794f6a433..c0efe921bca5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	struct zone *oldzone, *newzone;
 	int dirty;
 	int expected_count = expected_page_refs(mapping, page) + extra_count;
+	int nr = thp_nr_pages(page);
 
 	if (!mapping) {
 		/* Anonymous page without mapping */
@@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 */
 	newpage->index = page->index;
 	newpage->mapping = page->mapping;
-	page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
+	page_ref_add(newpage, nr); /* add cache reference */
 	if (PageSwapBacked(page)) {
 		__SetPageSwapBacked(newpage);
 		if (PageSwapCache(page)) {
@@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	if (PageTransHuge(page)) {
 		int i;
 
-		for (i = 1; i < HPAGE_PMD_NR; i++) {
+		for (i = 1; i < nr; i++) {
 			xas_next(&xas);
 			xas_store(&xas, newpage);
 		}
@@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
 	 * to one less reference.
 	 * We know this isn't the last reference.
 	 */
-	page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
+	page_ref_unfreeze(page, expected_count - nr);
 
 	xas_unlock(&xas);
 	/* Leave irq disabled to prevent preemption while updating stats */
@@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
 		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
 
-		__dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
-		__inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
+		__mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
+		__mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
 		if (PageSwapBacked(page) && !PageSwapCache(page)) {
-			__dec_lruvec_state(old_lruvec, NR_SHMEM);
-			__inc_lruvec_state(new_lruvec, NR_SHMEM);
+			__mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
+			__mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
 		}
 		if (dirty && mapping_can_writeback(mapping)) {
-			__dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
-			__dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
-			__inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
-			__inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
+			__mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
+			__mod_zone_page_state(oldzone, NR_ZONE_WRITE_PENDING, -nr);
+			__mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
+			__mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
 		}
 	}
 	local_irq_enable();
-- 
2.29.2.729.g45daf8777d-goog
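
The counters touched above are exported per NUMA node, so the effect of the
fix is visible from userspace: migrating one shmem THP should now move
nr_shmem and nr_file_pages in /sys/devices/system/node/node<N>/vmstat by the
full number of subpages (512 on x86-64 with 2 MiB THPs) rather than by one.
Below is a minimal test sketch, not part of the patch, that exercises the
migration path with libnuma's move_pages(). The target node, the 2 MiB THP
size, and an enabled shmem THP policy (shmem_enabled = always/advise) are
assumptions about the test machine, and the mapping is not forced to be
THP-aligned, so getting a huge page is best-effort.

/*
 * Migrate a shmem THP to another NUMA node.
 * Build with: gcc -O2 -o thp_migrate thp_migrate.c -lnuma
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <numaif.h>		/* move_pages(), MPOL_MF_MOVE */

#define THP_SIZE (2UL << 20)	/* assumed PMD-sized THP: 2 MiB */

int main(void)
{
	/* MAP_SHARED|MAP_ANONYMOUS memory is shmem-backed, i.e. a file THP candidate. */
	void *buf = mmap(NULL, THP_SIZE, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	madvise(buf, THP_SIZE, MADV_HUGEPAGE);	/* ask for a THP */
	memset(buf, 0, THP_SIZE);		/* fault the pages in */

	/* Move the head page to node 1; the kernel migrates the whole THP. */
	void *pages[1] = { buf };
	int nodes[1] = { 1 };			/* assumed target node */
	int status[1];
	if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) < 0) {
		perror("move_pages");
		return 1;
	}
	printf("page now on node %d\n", status[0]);
	return 0;
}

Compare the per-node counters before and after running it, e.g. with
grep nr_shmem /sys/devices/system/node/node*/vmstat.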