Date: Fri, 8 May 2020 14:22:13 -0700
Message-Id: <20200508212215.181307-1-shakeelb@google.com>
Subject: [PATCH 1/3] mm: swap: fix vmstats for huge pages
From: Shakeel Butt <shakeelb@google.com>
To: Mel Gorman, Johannes Weiner, Roman Gushchin, Michal Hocko
Cc: Andrew Morton, Yafang Shao, linux-mm@kvack.org, cgroups@vger.kernel.org,
    linux-kernel@vger.kernel.org, Shakeel Butt

Many of the callbacks called by pagevec_lru_move_fn() do not correctly
update the vmstats for huge pages. Fix that. Also make
__pagevec_lru_add_fn() use the irq-unsafe alternative to update the
stat, as interrupts are already disabled at that point.

Signed-off-by: Shakeel Butt <shakeelb@google.com>
---
 mm/swap.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index a37bd7b202ac..3dbef6517cac 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -225,7 +225,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec,
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		ClearPageActive(page);
 		add_page_to_lru_list_tail(page, lruvec, page_lru(page));
-		(*pgmoved)++;
+		(*pgmoved) += hpage_nr_pages(page);
 	}
 }
 
@@ -285,7 +285,7 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 		add_page_to_lru_list(page, lruvec, lru);
 		trace_mm_lru_activate(page);
 
-		__count_vm_event(PGACTIVATE);
+		__count_vm_events(PGACTIVATE, hpage_nr_pages(page));
 		update_page_reclaim_stat(lruvec, file, 1);
 	}
 }
@@ -503,6 +503,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 {
 	int lru, file;
 	bool active;
+	int nr_pages = hpage_nr_pages(page);
 
 	if (!PageLRU(page))
 		return;
@@ -536,11 +537,11 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
 		 * We moves tha page into tail of inactive.
 		 */
 		add_page_to_lru_list_tail(page, lruvec, lru);
-		__count_vm_event(PGROTATED);
+		__count_vm_events(PGROTATED, nr_pages);
 	}
 
 	if (active)
-		__count_vm_event(PGDEACTIVATE);
+		__count_vm_events(PGDEACTIVATE, nr_pages);
 	update_page_reclaim_stat(lruvec, file, 0);
 }
 
@@ -929,6 +930,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 {
 	enum lru_list lru;
 	int was_unevictable = TestClearPageUnevictable(page);
+	int nr_pages = hpage_nr_pages(page);
 
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
@@ -966,13 +968,13 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 		update_page_reclaim_stat(lruvec, page_is_file_lru(page),
 					 PageActive(page));
 		if (was_unevictable)
-			count_vm_event(UNEVICTABLE_PGRESCUED);
+			__count_vm_events(UNEVICTABLE_PGRESCUED, nr_pages);
 	} else {
 		lru = LRU_UNEVICTABLE;
 		ClearPageActive(page);
 		SetPageUnevictable(page);
 		if (!was_unevictable)
-			count_vm_event(UNEVICTABLE_PGCULLED);
+			__count_vm_events(UNEVICTABLE_PGCULLED, nr_pages);
 	}
 
 	add_page_to_lru_list(page, lruvec, lru);
-- 
2.26.2.645.ge9eca65c58-goog
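
For readers not familiar with the vmstat counters, the effect of the change
can be modeled with a small standalone userspace sketch (plain C, not kernel
code; struct fake_page, HPAGE_NR and the counter names below are invented for
the illustration). Pre-patch, each callback bumped the event counter by one
per struct page it moved, so a 2 MiB THP, which stands for 512 base pages with
4 KiB pages, was counted as a single page; post-patch the counter is bumped by
hpage_nr_pages(page) instead.

#include <stdio.h>

#define HPAGE_NR 512	/* base pages in a 2 MiB THP with 4 KiB pages; illustrative constant */

struct fake_page {	/* stand-in for struct page, for this sketch only */
	int nr_pages;	/* 1 for a base page, HPAGE_NR for a huge page */
};

static unsigned long pgactivate_old;	/* one event per struct page (pre-patch behaviour) */
static unsigned long pgactivate_new;	/* counted in base pages (post-patch behaviour) */

static void activate(const struct fake_page *page)
{
	pgactivate_old += 1;			/* models __count_vm_event(PGACTIVATE) */
	pgactivate_new += page->nr_pages;	/* models __count_vm_events(PGACTIVATE, hpage_nr_pages(page)) */
}

int main(void)
{
	struct fake_page base = { .nr_pages = 1 };
	struct fake_page thp  = { .nr_pages = HPAGE_NR };

	activate(&base);
	activate(&thp);

	printf("old accounting: %lu\n", pgactivate_old);	/* prints 2 */
	printf("new accounting: %lu\n", pgactivate_new);	/* prints 513 */
	return 0;
}

Built with a plain "cc" invocation, the sketch reports 2 pages activated under
the old per-struct-page accounting versus 513 under page-based accounting,
which is the kind of discrepancy the patch removes from /proc/vmstat.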