From: Matthew Wilcox <willy@infradead.org>
To: Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Andy Lutomirski <luto@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
	Borislav Petkov <bp@alien8.de>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Christopher Lameter <cl@linux.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	David Hildenbrand <david@redhat.com>,
	Elena Reshetova <elena.reshetova@intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
	James Bottomley <jejb@linux.ibm.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Mark Rutland <mark.rutland@arm.com>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Michael Kerrisk <mtk.manpages@gmail.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Roman Gushchin <guro@fb.com>, Shakeel Butt <shakeelb@google.com>,
	Shuah Khan <shuah@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Tycho Andersen <tycho@tycho.ws>, Will Deacon <will@kernel.org>,
	linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org,
	x86@kernel.org, Hagen Paul Pfeifer <hagen@jauu.net>,
	Palmer Dabbelt <palmerdabbelt@google.com>
Subject: Re: [PATCH v16 08/11] secretmem: add memcg accounting
Date: Mon, 25 Jan 2021 16:17:06 +0000	[thread overview]
Message-ID: <20210125161706.GE308988@casper.infradead.org> (raw)
In-Reply-To: <20210121122723.3446-9-rppt@kernel.org>

On Thu, Jan 21, 2021 at 02:27:20PM +0200, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Account memory consumed by secretmem to memcg. The accounting is updated
> when the memory is actually allocated and freed.

I think this is wrong.  It fails to account subsequent allocators from
the same PMD: the whole chunk is charged to whichever memcg happened to
trigger the pool refill, and later users of the same chunk are never
billed.  If you want to track like this, you need separate pools per
memcg.
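
To make the gap concrete, here is a toy userspace model of what the
patch does today.  This is not kernel code; the counters and helper
names are made up purely for illustration:

/* Toy model of the accounting gap: the whole 2MB chunk is charged to
 * whichever memcg triggers the pool refill; later sub-allocations from
 * the same chunk are charged to nobody. */
#include <stdio.h>

#define PMD_SIZE	(2UL << 20)	/* one 2MB pool chunk */
#define PAGE_SIZE	4096UL

static unsigned long charged[2];	/* bytes charged per "memcg" */
static unsigned long pool_free;		/* bytes left in the shared chunk */

/* Models secretmem_pool_increase(): memcg_kmem_charge_page() bills the
 * full PMD_PAGE_ORDER chunk to the task that happened to refill. */
static void pool_increase(int memcg)
{
	charged[memcg] += PMD_SIZE;
	pool_free += PMD_SIZE;
}

/* Models a later allocation served from the existing chunk: the
 * page_is_secretmem() test makes __add_to_page_cache_locked() skip
 * mem_cgroup_charge(), so the allocating memcg is never billed. */
static void pool_alloc_page(int memcg)
{
	(void)memcg;			/* deliberately unused */
	pool_free -= PAGE_SIZE;
}

int main(void)
{
	pool_increase(0);		/* memcg 0 pays for the whole 2MB */
	for (int i = 0; i < 256; i++)
		pool_alloc_page(1);	/* memcg 1 consumes 1MB of it */

	printf("memcg0 charged: %lu KB\n", charged[0] >> 10);	/* 2048 */
	printf("memcg1 charged: %lu KB\n", charged[1] >> 10);	/* 0 */
	return 0;
}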

I think you shouldn't try to track like this; better to just track on
a per-page basis.  After all, the page allocator doesn't charge order-10
pages to the memcg that initially caused them to be split.
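
A minimal sketch of that per-page alternative, reusing only the memcg
calls your patch already makes (pool_take_page()/pool_put_page() below
are hypothetical stand-ins for whatever the gen_pool front-end ends up
looking like):

/* Sketch only: charge order-0 pages as they leave the shared pool,
 * instead of charging the whole PMD chunk to its first user. */
static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
					 gfp_t gfp)
{
	struct page *page = pool_take_page(ctx);	/* hypothetical */

	if (!page)
		return NULL;
	if (memcg_kmem_charge_page(page, gfp, 0)) {	/* per page, order 0 */
		pool_put_page(ctx, page);		/* hypothetical */
		return NULL;
	}
	return page;
}

static void secretmem_free_page(struct secretmem_ctx *ctx,
				struct page *page)
{
	memcg_kmem_uncharge_page(page, 0);
	pool_put_page(ctx, page);
}

The NR_SLAB_UNRECLAIMABLE_B adjustment could move into these helpers
too, at order 0, and the chunk-wide uncharge in secretmem_cleanup_chunk()
would then go away.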

> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> Acked-by: Roman Gushchin <guro@fb.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christopher Lameter <cl@linux.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Elena Reshetova <elena.reshetova@intel.com>
> Cc: Hagen Paul Pfeifer <hagen@jauu.net>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: James Bottomley <jejb@linux.ibm.com>
> Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Michael Kerrisk <mtk.manpages@gmail.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tycho Andersen <tycho@tycho.ws>
> Cc: Will Deacon <will@kernel.org>
> ---
>  mm/filemap.c   |  3 ++-
>  mm/secretmem.c | 36 +++++++++++++++++++++++++++++++++++-
>  2 files changed, 37 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 2d0c6721879d..bb28dd6d9e22 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -42,6 +42,7 @@
>  #include <linux/psi.h>
>  #include <linux/ramfs.h>
>  #include <linux/page_idle.h>
> +#include <linux/secretmem.h>
>  #include "internal.h"
>  
>  #define CREATE_TRACE_POINTS
> @@ -839,7 +840,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
>  	page->mapping = mapping;
>  	page->index = offset;
>  
> -	if (!huge) {
> +	if (!huge && !page_is_secretmem(page)) {
>  		error = mem_cgroup_charge(page, current->mm, gfp);
>  		if (error)
>  			goto error;
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 469211c7cc3a..05026460e2ee 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -18,6 +18,7 @@
>  #include <linux/memblock.h>
>  #include <linux/pseudo_fs.h>
>  #include <linux/secretmem.h>
> +#include <linux/memcontrol.h>
>  #include <linux/set_memory.h>
>  #include <linux/sched/signal.h>
>  
> @@ -44,6 +45,32 @@ struct secretmem_ctx {
>  
>  static struct cma *secretmem_cma;
>  
> +static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
> +{
> +	int err;
> +
> +	err = memcg_kmem_charge_page(page, gfp, order);
> +	if (err)
> +		return err;
> +
> +	/*
> +	 * secretmem caches are unreclaimable kernel allocations, so treat
> +	 * them as unreclaimable slab memory for VM statistics purposes
> +	 */
> +	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> +			      PAGE_SIZE << order);
> +
> +	return 0;
> +}
> +
> +static void secretmem_unaccount_pages(struct page *page, int order)
> +{
> +
> +	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> +			      -PAGE_SIZE << order);
> +	memcg_kmem_uncharge_page(page, order);
> +}
> +
>  static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>  {
>  	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> @@ -56,6 +83,10 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>  	if (!page)
>  		return -ENOMEM;
>  
> +	err = secretmem_account_pages(page, gfp, PMD_PAGE_ORDER);
> +	if (err)
> +		goto err_cma_release;
> +
>  	/*
>  	 * clear the data left from the prevoius user before dropping the
>  	 * pages from the direct map
> @@ -65,7 +96,7 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>  
>  	err = set_direct_map_invalid_noflush(page, nr_pages);
>  	if (err)
> -		goto err_cma_release;
> +		goto err_memcg_uncharge;
>  
>  	addr = (unsigned long)page_address(page);
>  	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
> @@ -83,6 +114,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>  	 * won't fail
>  	 */
>  	set_direct_map_default_noflush(page, nr_pages);
> +err_memcg_uncharge:
> +	secretmem_unaccount_pages(page, PMD_PAGE_ORDER);
>  err_cma_release:
>  	cma_release(secretmem_cma, page, nr_pages);
>  	return err;
> @@ -314,6 +347,7 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
>  	int i;
>  
>  	set_direct_map_default_noflush(page, nr_pages);
> +	secretmem_unaccount_pages(page, PMD_PAGE_ORDER);
>  
>  	for (i = 0; i < nr_pages; i++)
>  		clear_highpage(page + i);
> -- 
> 2.28.0
> 