From: Matthew Wilcox <willy@infradead.org>
To: Mike Rapoport <rppt@linux.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Mike Rapoport <rppt@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Andy Lutomirski <luto@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
	Borislav Petkov <bp@alien8.de>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Christopher Lameter <cl@linux.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	David Hildenbrand <david@redhat.com>,
	Elena Reshetova <elena.reshetova@intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Idan Yaniv <idan.yaniv@ibm.com>,
	Ingo Molnar <mingo@redhat.com>,
	James Bottomley <jejb@linux.ibm.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Mark Rutland <mark.rutland@arm.com>,
	Michael Kerrisk <mtk.manpages@gmail.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Shuah Khan <shuah@kernel.org>, Tycho Andersen <tycho@tycho.ws>,
	Will Deacon <will@kernel.org>,
	linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org,
	x86@kernel.org
Subject: Re: [PATCH v6 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation
Date: Wed, 30 Sep 2020 16:09:28 +0100	[thread overview]
Message-ID: <20200930150928.GR20115@casper.infradead.org> (raw)
In-Reply-To: <20200930102745.GC3226834@linux.ibm.com>

On Wed, Sep 30, 2020 at 01:27:45PM +0300, Mike Rapoport wrote:
> On Tue, Sep 29, 2020 at 05:15:52PM +0200, Peter Zijlstra wrote:
> > On Tue, Sep 29, 2020 at 05:58:13PM +0300, Mike Rapoport wrote:
> > > On Tue, Sep 29, 2020 at 04:12:16PM +0200, Peter Zijlstra wrote:
> > 
> > > > It will drop them down to 4k pages. Given enough inodes, and allocating
> > > > only a single sekrit page per pmd, we'll shatter the directmap into 4k.
> > > 
> > > Why? Secretmem allocates PMD-size page per inode and uses it as a pool
> > > of 4K pages for that inode. This way it ensures that
> > > __kernel_map_pages() is always called on PMD boundaries.
> > 
> > Oh, you unmap the 2m page upfront? I read it like you did the unmap at
> > the sekrit page alloc, not the pool alloc side of things.
> > 
> > Then yes, but then you're wasting gobs of memory. Basically you can pin
> > 2M per inode while only accounting a single page.
> 
> Right, quite like THP :)

Huh?  THP accounts every page it allocates.  If you allocate 2MB,
it accounts 512 pages.  And THPs are reclaimable by vmscan; this is
obviously not.
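
For readers skimming the thread, here is a minimal sketch of the pooling
scheme being debated.  It is not the code from the series: the names
secretmem_pool, pool_refill() and pool_alloc_page() are hypothetical and
the bookkeeping is reduced to a bump counter, while PMD_PAGE_ORDER is the
constant added in patch 1/6.  What it illustrates is that the direct map
is only modified for a whole PMD-size chunk when the per-inode pool is
filled, and individual 4K pages are then handed out from that
already-unmapped chunk.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical per-inode pool; the real series keeps a gen_pool in the inode. */
struct secretmem_pool {
	struct page *chunk;	/* PMD-size, naturally aligned chunk */
	unsigned int used;	/* number of 4K pages already handed out */
};

static int pool_refill(struct secretmem_pool *pool)
{
	/* One PMD-size allocation per inode (2M, i.e. order 9, on x86-64). */
	struct page *page = alloc_pages(GFP_KERNEL, PMD_PAGE_ORDER);

	if (!page)
		return -ENOMEM;

	/*
	 * Drop the whole chunk from the direct map in a single call, so
	 * the direct map is only ever changed on PMD boundaries and is
	 * never shattered into 4K mappings.
	 */
	__kernel_map_pages(page, 1 << PMD_PAGE_ORDER, 0);

	pool->chunk = page;
	pool->used = 0;
	return 0;
}

static struct page *pool_alloc_page(struct secretmem_pool *pool)
{
	/*
	 * Hand out 4K pages from the already-unmapped chunk.  This is the
	 * behaviour objected to above: as soon as one page is in use, the
	 * full 2M chunk is pinned, while only that page is accounted.
	 */
	if (pool->used >= (1 << PMD_PAGE_ORDER))
		return NULL;

	return pool->chunk + pool->used++;
}

By contrast, a 2MB THP allocation is charged as all 512 of its subpages,
and the huge page can be split and reclaimed by vmscan, which is the
distinction drawn in the reply above.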

Thread overview: 236+ messages
2020-09-24 13:28 [PATCH v6 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2020-09-24 13:28 ` [PATCH v6 1/6] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
2020-09-24 13:29 ` [PATCH v6 2/6] mmap: make mlock_future_check() global Mike Rapoport
2020-09-24 13:29 ` [PATCH v6 3/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2020-09-29  4:58   ` Edgecombe, Rick P
2020-09-29 13:06     ` Mike Rapoport
2020-09-29 20:06       ` Edgecombe, Rick P
2020-09-30 10:35         ` Mike Rapoport
2020-09-30 20:11           ` Edgecombe, Rick P
2020-10-11  9:42             ` Mike Rapoport
2020-09-24 13:29 ` [PATCH v6 4/6] arch, mm: wire up memfd_secret system call were relevant Mike Rapoport
2020-09-24 13:29 ` [PATCH v6 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
2020-09-25  7:41   ` Peter Zijlstra
2020-09-25  9:00     ` David Hildenbrand
2020-09-25  9:50       ` Peter Zijlstra
2020-09-25 10:31         ` Mark Rutland
2020-09-25 14:57           ` Tycho Andersen
2020-09-29 14:04           ` Mike Rapoport
2020-09-29 13:07         ` Mike Rapoport
2020-09-29 13:06       ` Mike Rapoport
2020-09-29 13:05     ` Mike Rapoport
2020-09-29 14:12       ` Peter Zijlstra
2020-09-29 14:31         ` Dave Hansen
2020-09-29 14:58         ` Mike Rapoport
2020-09-29 15:15           ` Peter Zijlstra
2020-09-30 10:27             ` Mike Rapoport
2020-09-30 14:39               ` James Bottomley
2020-09-30 14:45                 ` David Hildenbrand
2020-09-30 15:17                   ` James Bottomley
2020-09-30 15:25                     ` David Hildenbrand
2020-09-30 15:09               ` Matthew Wilcox [this message]
2020-10-01  8:14                 ` Mike Rapoport
2020-09-29 15:03         ` James Bottomley
2020-09-30 10:20         ` Mike Rapoport
2020-09-30 10:43           ` Peter Zijlstra
2020-09-24 13:29 ` [PATCH v6 6/6] secretmem: test: add basic selftest for memfd_secret(2) Mike Rapoport
2020-09-24 13:35 ` [PATCH] man2: new page describing memfd_secret() system call Mike Rapoport
2020-09-24 14:55   ` Alejandro Colomar
2020-10-03  9:32     ` Alejandro Colomar
2020-10-05  7:32       ` Mike Rapoport
2020-11-16 21:01         ` [PATCH v2] memfd_secret.2: New " Alejandro Colomar
2020-11-17  6:26           ` Mike Rapoport
2020-11-21 21:46             ` Alejandro Colomar (man-pages)
2020-11-22  7:03               ` Mike Rapoport
2020-09-25  2:34 ` [PATCH v6 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Andrew Morton
2020-09-25  6:42   ` Mike Rapoport
2020-11-01 11:09 ` Hagen Paul Pfeifer
2020-11-02 15:40   ` Mike Rapoport
2020-11-03 13:52     ` Hagen Paul Pfeifer
2020-11-03 16:30       ` Mike Rapoport
2020-11-04 11:39         ` Hagen Paul Pfeifer
2020-11-04 17:02           ` Mike Rapoport
2020-11-09 10:41             ` Hagen Paul Pfeifer
2020-11-02  9:11 ` David Hildenbrand
2020-11-02  9:31   ` David Hildenbrand
2020-11-02 17:43   ` Mike Rapoport
2020-11-02 17:51     ` David Hildenbrand
2020-11-03  9:52       ` Mike Rapoport
2020-11-03 10:11         ` David Hildenbrand

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20200930150928.GR20115@casper.infradead.org \
    --to=willy@infradead.org \
    --cc=akpm@linux-foundation.org \
    --cc=arnd@arndb.de \
    --cc=bp@alien8.de \
    --cc=catalin.marinas@arm.com \
    --cc=cl@linux.com \
    --cc=dave.hansen@linux.intel.com \
    --cc=david@redhat.com \
    --cc=elena.reshetova@intel.com \
    --cc=hpa@zytor.com \
    --cc=idan.yaniv@ibm.com \
    --cc=jejb@linux.ibm.com \
    --cc=kirill@shutemov.name \
    --cc=linux-api@vger.kernel.org \
    --cc=linux-arch@vger.kernel.org \
    --cc=luto@kernel.org \
    --cc=mark.rutland@arm.com \
    --cc=mingo@redhat.com \
    --cc=mtk.manpages@gmail.com \
    --cc=palmer@dabbelt.com \
    --cc=paul.walmsley@sifive.com \
    --cc=peterz@infradead.org \
    --cc=rppt@kernel.org \
    --cc=rppt@linux.ibm.com \
    --cc=shuah@kernel.org \
    --cc=tglx@linutronix.de \
    --cc=tycho@tycho.ws \
    --cc=viro@zeniv.linux.org.uk \
    --cc=will@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
