From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Wilcox
Date: Sat, 27 Jun 2020 18:46:42 +0000
Subject: [PATCH 9/8] mm: Account PMD tables like PTE tables
Message-Id: <20200627184642.GF25039@casper.infradead.org>
References: <20200627143453.31835-1-rppt@kernel.org>
In-Reply-To: <20200627143453.31835-1-rppt@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Mike Rapoport
Cc: linux-kernel@vger.kernel.org, Abdul Haleem, Andrew Morton,
    Andy Lutomirski, Arnd Bergmann, Christophe Leroy, Joerg Roedel,
    Max Filippov, Mike Rapoport, Peter Zijlstra, Satheesh Rajendran,
    Stafford Horne, Stephen Rothwell, Steven Rostedt,
    linux-alpha@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
    linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
    linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
    linux-mm@kvack.org, linux-parisc@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, linux-snps-arc@lists.infradead.org,
    linux-um@lists.infradead.org, linux-xtensa@linux-xtensa.org,
    linuxppc-dev@lists.ozlabs.org, openrisc@lists.librecores.org,
    sparclinux@vger.kernel.org

We account the PTE level of the page tables to the process in order to
make smarter OOM decisions and help diagnose why memory is fragmented.
For these same reasons, we should account pages allocated for PMDs.
With larger process address spaces and ASLR, the number of PMDs in use
is higher than it used to be so the inaccuracy is starting to matter.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/mm.h | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index dc7b87310c10..b283e25fcffa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2271,7 +2271,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(pmd_to_page(pmd));
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page)
+static inline bool pmd_ptlock_init(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	page->pmd_huge_pte = NULL;
@@ -2279,7 +2279,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)
 	return ptlock_init(page);
 }
 
-static inline void pgtable_pmd_page_dtor(struct page *page)
+static inline void pmd_ptlock_free(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
@@ -2296,8 +2296,8 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 
-static inline bool pgtable_pmd_page_ctor(struct page *page) { return true; }
-static inline void pgtable_pmd_page_dtor(struct page *page) {}
+static inline bool pmd_ptlock_init(struct page *page) { return true; }
+static inline void pmd_ptlock_free(struct page *page) {}
 
 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
 
@@ -2310,6 +2310,22 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)
 	return ptl;
 }
 
+static inline bool pgtable_pmd_page_ctor(struct page *page)
+{
+	if (!pmd_ptlock_init(page))
+		return false;
+	__SetPageTable(page);
+	inc_zone_page_state(page, NR_PAGETABLE);
+	return true;
+}
+
+static inline void pgtable_pmd_page_dtor(struct page *page)
+{
+	pmd_ptlock_free(page);
+	__ClearPageTable(page);
+	dec_zone_page_state(page, NR_PAGETABLE);
+}
+
 /*
  * No scalability reason to split PUD locks yet, but follow the same pattern
  * as the PMD locks to make it easier if we decide to.  The VM should not be
-- 
2.27.0
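
For context, the ctor/dtor pair above is meant to be called from each
architecture's PMD page allocation and free paths. The sketch below is a
minimal illustration of such a caller, loosely modeled on the generic
pmd_alloc_one()/pmd_free() helpers in <asm-generic/pgalloc.h> from the
series this patch replies to; it is not part of the patch itself, and
individual architectures may differ in the details.

static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
{
	struct page *page;
	gfp_t gfp = GFP_PGTABLE_USER;

	if (mm == &init_mm)
		gfp = GFP_PGTABLE_KERNEL;
	page = alloc_pages(gfp, 0);
	if (!page)
		return NULL;
	/*
	 * With this patch applied, the ctor also marks the page as a
	 * PageTable page and increments NR_PAGETABLE, so PMD pages are
	 * accounted just like PTE pages.
	 */
	if (!pgtable_pmd_page_ctor(page)) {
		__free_pages(page, 0);
		return NULL;
	}
	return (pmd_t *)page_address(page);
}

static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
{
	BUG_ON((unsigned long)pmd & (PAGE_SIZE - 1));
	/*
	 * Clears PageTable and decrements NR_PAGETABLE before the page
	 * goes back to the allocator.
	 */
	pgtable_pmd_page_dtor(virt_to_page(pmd));
	free_page((unsigned long)pmd);
}

Any architecture that already pairs pgtable_pmd_page_ctor() with
pgtable_pmd_page_dtor() in this way should pick up the new accounting
with no further changes; the counter feeds the PageTables line in
/proc/meminfo.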