From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yang Shi
Date: Mon, 28 Dec 2020 09:31:45 -0800
Subject: Re: [PATCH 2/2] mm: fix numa stats for thp migration
To: Shakeel Butt
Cc: Muchun Song, Naoya Horiguchi, Andrew Morton, "Kirill A. Shutemov",
 Johannes Weiner, Roman Gushchin, Cgroups, Linux MM, LKML, stable
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
References: <20201227181310.3235210-1-shakeelb@google.com>
 <20201227181310.3235210-2-shakeelb@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Sun, Dec 27, 2020 at 10:16 AM Shakeel Butt wrote:
>
> On Sun, Dec 27, 2020 at 10:14 AM Shakeel Butt wrote:
> >
> > Currently the kernel is not correctly updating the numa stats for
> > NR_FILE_PAGES and NR_SHMEM on THP migration. Fix that. For NR_FILE_DIRTY
> > and NR_ZONE_WRITE_PENDING, although at the moment there is no need to
> > handle THP migration as kernel still does not have write support for
> > file THP but to be more future proof, this patch adds the THP support
> > for those stats as well.
> >
> > Fixes: e71769ae52609 ("mm: enable thp migration for shmem thp")
> > Signed-off-by: Shakeel Butt
> > Cc:
> > ---
> >  mm/migrate.c | 23 ++++++++++++-----------
> >  1 file changed, 12 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index 613794f6a433..ade163c6ecdf 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -402,6 +402,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >         struct zone *oldzone, *newzone;
> >         int dirty;
> >         int expected_count = expected_page_refs(mapping, page) + extra_count;
> > +       int nr = thp_nr_pages(page);
> >
> >         if (!mapping) {
> >                 /* Anonymous page without mapping */
> > @@ -437,7 +438,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >          */
> >         newpage->index = page->index;
> >         newpage->mapping = page->mapping;
> > -       page_ref_add(newpage, thp_nr_pages(page)); /* add cache reference */
> > +       page_ref_add(newpage, nr); /* add cache reference */
> >         if (PageSwapBacked(page)) {
> >                 __SetPageSwapBacked(newpage);
> >                 if (PageSwapCache(page)) {
> > @@ -459,7 +460,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >         if (PageTransHuge(page)) {
> >                 int i;
> >
> > -               for (i = 1; i < HPAGE_PMD_NR; i++) {
> > +               for (i = 1; i < nr; i++) {
> >                         xas_next(&xas);
> >                         xas_store(&xas, newpage);
> >                 }
> > @@ -470,7 +471,7 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >          * to one less reference.
> >          * We know this isn't the last reference.
> >          */
> > -       page_ref_unfreeze(page, expected_count - thp_nr_pages(page));
> > +       page_ref_unfreeze(page, expected_count - nr);
> >
> >         xas_unlock(&xas);
> >         /* Leave irq disabled to prevent preemption while updating stats */
> > @@ -493,17 +494,17 @@ int migrate_page_move_mapping(struct address_space *mapping,
> >                 old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
> >                 new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
> >
> > -               __dec_lruvec_state(old_lruvec, NR_FILE_PAGES);
> > -               __inc_lruvec_state(new_lruvec, NR_FILE_PAGES);
> > +               __mod_lruvec_state(old_lruvec, NR_FILE_PAGES, -nr);
> > +               __mod_lruvec_state(new_lruvec, NR_FILE_PAGES, nr);
> >                 if (PageSwapBacked(page) && !PageSwapCache(page)) {
> > -                       __dec_lruvec_state(old_lruvec, NR_SHMEM);
> > -                       __inc_lruvec_state(new_lruvec, NR_SHMEM);
> > +                       __mod_lruvec_state(old_lruvec, NR_SHMEM, -nr);
> > +                       __mod_lruvec_state(new_lruvec, NR_SHMEM, nr);
> >                 }
> >                 if (dirty && mapping_can_writeback(mapping)) {
> > -                       __dec_lruvec_state(old_lruvec, NR_FILE_DIRTY);
> > -                       __dec_zone_state(oldzone, NR_ZONE_WRITE_PENDING);
> > -                       __inc_lruvec_state(new_lruvec, NR_FILE_DIRTY);
> > -                       __inc_zone_state(newzone, NR_ZONE_WRITE_PENDING);
> > +                       __mod_lruvec_state(old_lruvec, NR_FILE_DIRTY, -nr);
> > +                       __mod_zone_page_tate(oldzone, NR_ZONE_WRITE_PENDING, -nr);
>
> This should be __mod_zone_page_state(). I fixed locally but sent the
> older patch by mistake.

Acked-by: Yang Shi

> > +                       __mod_lruvec_state(new_lruvec, NR_FILE_DIRTY, nr);
> > +                       __mod_zone_page_state(newzone, NR_ZONE_WRITE_PENDING, nr);
> >                 }
> >         }
> >         local_irq_enable();
> >
> > --
> > 2.29.2.729.g45daf8777d-goog
> >
>
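For readers skimming the archive, the accounting skew the patch fixes is easy to see in isolation: the old __dec_lruvec_state()/__inc_lruvec_state() pair moved the NUMA counters by a single page per migration even when the migrated unit was a THP, so a PMD-sized THP left the source node's NR_FILE_PAGES too high and the destination's too low by nr - 1 pages. The stand-alone C sketch below only models the per-node counters with plain integers; it assumes HPAGE_PMD_NR is 512 (a 2 MB THP with 4 KB base pages) and is not kernel code nor the kernel's stats API.

#include <stdio.h>

/* Toy model only: plain per-node counters standing in for the kernel's
 * per-lruvec NR_FILE_PAGES stat. Assumes a PMD-sized THP of 512 base
 * pages; nothing here is kernel API. */
#define HPAGE_PMD_NR 512

struct node_stats {
        long nr_file_pages;
};

/* Pre-fix behaviour: the dec/inc helpers adjust the stat by one page. */
static void migrate_old(struct node_stats *src, struct node_stats *dst)
{
        src->nr_file_pages -= 1;
        dst->nr_file_pages += 1;
}

/* Post-fix behaviour: adjust by nr = thp_nr_pages(page), i.e. the real
 * number of base pages that moved. */
static void migrate_new(struct node_stats *src, struct node_stats *dst, int nr)
{
        src->nr_file_pages -= nr;
        dst->nr_file_pages += nr;
}

int main(void)
{
        struct node_stats node0 = { .nr_file_pages = HPAGE_PMD_NR };
        struct node_stats node1 = { .nr_file_pages = 0 };

        migrate_old(&node0, &node1);
        printf("old accounting after THP migration: node0=%ld node1=%ld (skew %d)\n",
               node0.nr_file_pages, node1.nr_file_pages, HPAGE_PMD_NR - 1);

        /* Reset and replay with the fixed accounting. */
        node0.nr_file_pages = HPAGE_PMD_NR;
        node1.nr_file_pages = 0;
        migrate_new(&node0, &node1, HPAGE_PMD_NR);
        printf("new accounting after THP migration: node0=%ld node1=%ld\n",
               node0.nr_file_pages, node1.nr_file_pages);

        return 0;
}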