Date: Fri, 6 Mar 2020 14:22:35 -0800 (PST)
From: David Rientjes
To: Andrew Morton
Cc: Yang Shi, "Kirill A. Shutemov", Mike Rapoport, Jeremy Cline,
    Linux Kernel Mailing List, Linux MM
Subject: [patch 2/2] mm, thp: track fallbacks due to failed memcg charges separately

The thp_fault_fallback and thp_file_fallback vmstats are incremented if
either the hugepage allocation fails through the page allocator or the
hugepage charge fails through the mem cgroup.  This patch leaves those
counters untouched and adds two new counters,
thp_{fault,file}_fallback_charge, which are incremented only when the
mem cgroup charge fails.

This distinguishes between attempted hugepage allocations that fail due
to fragmentation (or low memory conditions) and those that fail due to
mem cgroup limits.  The new counters can therefore be used to determine
the impact of fragmentation on the system by excluding faults that fell
back only because of memcg usage.

Signed-off-by: David Rientjes
---
 Documentation/admin-guide/mm/transhuge.rst | 10 ++++++++++
 include/linux/vm_event_item.h              |  3 +++
 mm/huge_memory.c                           |  2 ++
 mm/shmem.c                                 |  4 +++-
 mm/vmstat.c                                |  2 ++
 5 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -310,6 +310,11 @@ thp_fault_fallback
 	is incremented if a page fault fails to allocate
 	a huge page and instead falls back to using small pages.

+thp_fault_fallback_charge
+	is incremented if a page fault fails to charge a huge page and
+	instead falls back to using small pages even though the
+	allocation was successful.
+
 thp_collapse_alloc_failed
 	is incremented if khugepaged found a range
 	of pages that should be collapsed into one huge page but failed
@@ -323,6 +328,11 @@ thp_file_fallback
 	is incremented if a file huge page is attempted to be allocated
 	but fails and instead falls back to using small pages.

+thp_file_fallback_charge
+	is incremented if a file huge page cannot be charged and instead
+	falls back to using small pages even though the allocation was
+	successful.
+
 thp_file_mapped
 	is incremented every time a file huge page is mapped into
 	user address space.
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -73,10 +73,12 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 		THP_FAULT_ALLOC,
 		THP_FAULT_FALLBACK,
+		THP_FAULT_FALLBACK_CHARGE,
 		THP_COLLAPSE_ALLOC,
 		THP_COLLAPSE_ALLOC_FAILED,
 		THP_FILE_ALLOC,
 		THP_FILE_FALLBACK,
+		THP_FILE_FALLBACK_CHARGE,
 		THP_FILE_MAPPED,
 		THP_SPLIT_PAGE,
 		THP_SPLIT_PAGE_FAILED,
@@ -117,6 +119,7 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
 #ifndef CONFIG_TRANSPARENT_HUGEPAGE
 #define THP_FILE_ALLOC ({ BUILD_BUG(); 0; })
 #define THP_FILE_FALLBACK ({ BUILD_BUG(); 0; })
+#define THP_FILE_FALLBACK_CHARGE ({ BUILD_BUG(); 0; })
 #define THP_FILE_MAPPED ({ BUILD_BUG(); 0; })
 #endif

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -597,6 +597,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 	if (mem_cgroup_try_charge_delay(page, vma->vm_mm, gfp, &memcg, true)) {
 		put_page(page);
 		count_vm_event(THP_FAULT_FALLBACK);
+		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
 		return VM_FAULT_FALLBACK;
 	}

@@ -1406,6 +1407,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		put_page(page);
 		ret |= VM_FAULT_FALLBACK;
 		count_vm_event(THP_FAULT_FALLBACK);
+		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
 		goto out;
 	}

diff --git a/mm/shmem.c b/mm/shmem.c
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1871,8 +1871,10 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 	error = mem_cgroup_try_charge_delay(page, charge_mm, gfp, &memcg,
 					    PageTransHuge(page));
 	if (error) {
-		if (PageTransHuge(page))
+		if (PageTransHuge(page)) {
 			count_vm_event(THP_FILE_FALLBACK);
+			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		}
 		goto unacct;
 	}
 	error = shmem_add_to_page_cache(page, mapping, hindex,
diff --git a/mm/vmstat.c b/mm/vmstat.c
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1254,10 +1254,12 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	"thp_fault_alloc",
 	"thp_fault_fallback",
+	"thp_fault_fallback_charge",
 	"thp_collapse_alloc",
 	"thp_collapse_alloc_failed",
 	"thp_file_alloc",
 	"thp_file_fallback",
+	"thp_file_fallback_charge",
 	"thp_file_mapped",
 	"thp_split_page",
 	"thp_split_page_failed",
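
Editor's note, not part of the patch: the changelog suggests estimating the
impact of fragmentation by excluding the fallbacks that were caused only by
memcg limits.  A minimal userspace sketch of that computation follows,
assuming nothing beyond the standard /proc/vmstat interface and the counter
names discussed above; the read_vmstat() helper is hypothetical and exists
only for illustration.

#include <stdio.h>
#include <string.h>

/* Hypothetical helper: look up one counter by name in /proc/vmstat. */
static long long read_vmstat(const char *name)
{
	char key[64];
	long long val;
	FILE *f = fopen("/proc/vmstat", "r");

	if (!f)
		return -1;
	while (fscanf(f, "%63s %lld", key, &val) == 2) {
		if (!strcmp(key, name)) {
			fclose(f);
			return val;
		}
	}
	fclose(f);
	return -1;
}

int main(void)
{
	long long fallback = read_vmstat("thp_fault_fallback");
	long long charge = read_vmstat("thp_fault_fallback_charge");

	if (fallback < 0 || charge < 0)
		return 1;
	/*
	 * thp_fault_fallback counts all fallbacks; subtracting the
	 * charge-failure cases leaves those caused by allocation
	 * failure (fragmentation or low memory).
	 */
	printf("thp fault fallbacks from failed allocations: %lld\n",
	       fallback - charge);
	return 0;
}

The same subtraction applies to thp_file_fallback and
thp_file_fallback_charge for the shmem/file path.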