From: David Hildenbrand <david@redhat.com>
To: Michal Hocko <mhocko@suse.com>, Mike Rapoport <rppt@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Andy Lutomirski <luto@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
	Borislav Petkov <bp@alien8.de>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Christopher Lameter <cl@linux.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Elena Reshetova <elena.reshetova@intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>,
	James Bottomley <jejb@linux.ibm.com>,
	"Kirill A. Shutemov" <kirill@shutemov.name>,
	Matthew Wilcox <willy@infradead.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Michael Kerrisk <mtk.manpages@gmail.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	Roman Gushchin <guro@fb.com>, Shakeel Butt <shakeelb@google.com>,
	Shuah Khan <shuah@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Tycho Andersen <tycho@tycho.ws>, Will Deacon <will@kernel.org>,
	linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-riscv@lists.infradead.org,
	x86@kernel.org, Hagen Paul Pfeifer <hagen@jauu.net>,
	Palmer Dabbelt <palmerdabbelt@google.com>
Subject: Re: [PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
Date: Tue, 26 Jan 2021 12:56:48 +0100	[thread overview]
Message-ID: <303f348d-e494-e386-d1f5-14505b5da254@redhat.com> (raw)
In-Reply-To: <20210126114657.GL827@dhcp22.suse.cz>

On 26.01.21 12:46, Michal Hocko wrote:
> On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
>> From: Mike Rapoport <rppt@linux.ibm.com>
>>
>> Removing a PAGE_SIZE page from the direct map every time such a page is
>> allocated for a secret memory mapping will cause severe fragmentation of
>> the direct map. This fragmentation can be reduced by using PMD-size pages
>> as a pool of small pages for secret memory mappings.
>>
>> Add a gen_pool per secretmem inode and lazily populate this pool with
>> PMD-size pages.
>>
>> As pages allocated by secretmem become unmovable, use CMA to back the
>> large page caches so that the page allocator won't be surprised by a
>> failing attempt to migrate these pages.
>>
>> The CMA area used by secretmem is controlled by the "secretmem=" kernel
>> parameter. This allows explicit control over the memory available for
>> secretmem and provides a hard upper limit on secretmem consumption.
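The scheme described above boils down to a per-inode gen_pool that is
refilled one PMD-size CMA chunk at a time and then carved into PAGE_SIZE
pages at fault time. A rough sketch of that allocation path, with error
handling omitted - names like secretmem_pool_alloc(), ctx->pool and
secretmem_cma are placeholders and not the actual code from the patch:

static unsigned long secretmem_pool_alloc(struct secretmem_ctx *ctx)
{
	unsigned long addr;
	struct page *page;
	int i;

	/* Fast path: hand out a small page from the per-inode pool. */
	addr = gen_pool_alloc(ctx->pool, PAGE_SIZE);
	if (addr)
		return addr;

	/* Refill lazily with a single PMD-size chunk from the CMA area. */
	page = cma_alloc(secretmem_cma, PMD_SIZE / PAGE_SIZE,
			 PMD_PAGE_ORDER, false);
	if (!page)
		return 0;	/* fault-time failure: the problem discussed below */

	/* Drop the chunk from the direct map (the series batches this). */
	for (i = 0; i < PMD_SIZE / PAGE_SIZE; i++)
		set_direct_map_invalid_noflush(nth_page(page, i));

	/* Let the gen_pool carve it into PAGE_SIZE pieces from now on. */
	gen_pool_add(ctx->pool, (unsigned long)page_to_virt(page),
		     PMD_SIZE, NUMA_NO_NODE);

	return gen_pool_alloc(ctx->pool, PAGE_SIZE);
}

The pool is capped by whatever was set aside at boot (e.g. something like
secretmem=1G on the kernel command line), so cma_alloc() failing here is
exactly where the fault-time guarantee question comes up.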
> 
> OK, so I have finally had a closer look at this, and it is really not
> acceptable. I have already mentioned this in a response to another patch,
> but any task is able to deprive other tasks of access to secret memory
> and trigger the OOM killer, which would never really recover and could
> potentially panic the system. Now, you could be less drastic and only
> deliver SIGBUS on fault, but that would still be quite terrible. There is
> a very good reason why hugetlb implements its non-trivial reservation
> system to avoid exactly these problems.
> 
> So unless I am really misreading the code
> Nacked-by: Michal Hocko <mhocko@suse.com>
> 
> That doesn't mean I reject the whole idea. There are some details to
> sort out, as mentioned elsewhere, but you cannot really depend on a
> pre-allocated pool that can fail at fault time like this.

So, to do it similarly to hugetlbfs (e.g., with CMA), there would have to
be a mechanism to actually try pre-reserving (e.g., from the CMA area),
at which point the pages would get moved to the secretmem pool, and a
mechanism for mmap() etc. to "reserve" from that secretmem pool, such
that there are guarantees at fault time?
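Something along these lines would be needed on top - purely illustrative,
every name below is made up and none of this exists in the series:

static DEFINE_SPINLOCK(secretmem_resv_lock);
static unsigned long secretmem_resv_pages;	/* pre-pulled from CMA */

/* Called at mmap() time, so failure happens early instead of at fault. */
static int secretmem_reserve(unsigned long npages)
{
	int ret = -ENOMEM;

	spin_lock(&secretmem_resv_lock);
	if (secretmem_resv_pages >= npages) {
		secretmem_resv_pages -= npages;
		ret = 0;
	}
	spin_unlock(&secretmem_resv_lock);
	return ret;
}

The fault handler would then only ever consume pages that were reserved
up front, which is roughly what hugetlb's reservation code guarantees.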

What we have right now feels like some kind of overcommit (read: like
overcommitting huge pages, so we might get SIGBUS at fault time).

TBH, the SIGBUS thing doesn't sound terrible to me - as long as
applications using this expect that behavior right now and can handle
it, there are simply no guarantees. I fully agree that some kind of
reservation/guarantee mechanism would be preferable.
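From the application side, "no guarantees" means being prepared for
SIGBUS when touching the mapping. A minimal sketch of what that looks
like with the proposed syscall - the __NR_memfd_secret number below is an
assumption and has to match whatever the running kernel wires up:

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447	/* assumed syscall number */
#endif

static void bus_handler(int sig)
{
	/* The backing pool could not be refilled at fault time. */
	(void)sig;
	write(STDERR_FILENO, "secretmem fault failed\n", 23);
	_exit(1);
}

int main(void)
{
	size_t len = 2UL * 1024 * 1024;
	int fd = syscall(__NR_memfd_secret, 0);
	char *p;

	if (fd < 0) {
		perror("memfd_secret");
		return 1;
	}
	if (ftruncate(fd, len)) {
		perror("ftruncate");
		return 1;
	}
	signal(SIGBUS, bus_handler);

	p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(p, 0xa5, len);	/* pages are faulted in here; may SIGBUS */

	munmap(p, len);
	close(fd);
	return 0;
}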

-- 
Thanks,

David / dhildenb

Thread overview: 318+ messages
2021-01-21 12:27 [PATCH v16 00/11] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2021-01-21 12:27 ` [PATCH v16 01/11] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
2021-01-21 12:27 ` [PATCH v16 02/11] mmap: make mlock_future_check() global Mike Rapoport
2021-01-21 12:27 ` [PATCH v16 03/11] riscv/Kconfig: make direct map manipulation options depend on MMU Mike Rapoport
2021-01-21 12:27 ` [PATCH v16 04/11] set_memory: allow set_direct_map_*_noflush() for multiple pages Mike Rapoport
2021-01-21 12:27 ` [PATCH v16 05/11] set_memory: allow querying whether set_direct_map_*() is actually enabled Mike Rapoport
2021-01-21 12:27 ` [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2021-01-25 17:01   ` Michal Hocko
2021-01-25 21:36     ` Mike Rapoport
2021-01-26  7:16       ` Michal Hocko
2021-01-26  8:33         ` Mike Rapoport
2021-01-26  9:00           ` Michal Hocko
2021-01-26  9:20             ` Mike Rapoport
2021-01-26  9:49               ` Michal Hocko
2021-01-26  9:53                 ` David Hildenbrand
2021-01-26 10:19                   ` Michal Hocko
2021-01-26  9:20             ` Michal Hocko
2021-02-03 12:15   ` Michal Hocko
2021-02-04 11:34     ` Mike Rapoport
2021-01-21 12:27 ` [PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
2021-01-26 11:46   ` Michal Hocko
2021-01-26 11:56     ` David Hildenbrand [this message]
2021-01-26 12:08       ` Michal Hocko
2021-01-28  9:22         ` Mike Rapoport
2021-01-28 13:01           ` Michal Hocko
2021-01-28 13:28             ` Christoph Lameter
2021-01-28 13:49               ` Michal Hocko
2021-01-28 15:56                 ` Christoph Lameter
2021-01-28 16:23                   ` Michal Hocko
2021-01-28 15:28             ` James Bottomley
2021-01-29  7:03               ` Mike Rapoport
2021-01-28 21:05             ` James Bottomley
2021-01-29  7:53               ` Michal Hocko
2021-01-29  8:23               ` Michal Hocko
2021-02-01 16:56                 ` James Bottomley
2021-02-02  9:35                   ` Michal Hocko
2021-02-02 12:48                     ` Mike Rapoport
2021-02-02 13:14                       ` David Hildenbrand
2021-02-02 13:32                         ` Michal Hocko
2021-02-02 14:12                           ` David Hildenbrand
2021-02-02 14:22                             ` Michal Hocko
2021-02-02 14:26                               ` David Hildenbrand
2021-02-02 14:32                                 ` Michal Hocko
2021-02-02 14:34                                   ` David Hildenbrand
2021-02-02 18:15                                     ` Mike Rapoport
2021-02-02 18:55                                       ` James Bottomley
2021-02-03 12:09                                         ` Michal Hocko
2021-02-04 11:31                                           ` Mike Rapoport
2021-02-02 13:27                       ` Michal Hocko
2021-02-02 19:10                         ` Mike Rapoport
2021-02-03  9:12                           ` Michal Hocko
2021-02-04  9:58                             ` Mike Rapoport
2021-02-04 13:02                               ` Michal Hocko
2021-01-29  7:21             ` Mike Rapoport
2021-01-29  8:51               ` Michal Hocko
2021-02-02 14:42                 ` David Hildenbrand
2021-01-21 12:27 ` [PATCH v16 08/11] secretmem: add memcg accounting Mike Rapoport
2021-01-25 16:17   ` Matthew Wilcox
2021-01-25 17:18     ` Shakeel Butt
2021-01-25 21:35       ` Mike Rapoport
2021-01-28 15:07         ` Shakeel Butt
2021-01-25 16:54   ` Michal Hocko
2021-01-25 21:38     ` Mike Rapoport
2021-01-26  7:31       ` Michal Hocko
2021-01-26  8:56         ` Mike Rapoport
2021-01-26  9:15           ` Michal Hocko
2021-01-26 14:48       ` Matthew Wilcox
2021-01-26 15:05         ` Michal Hocko
2021-01-27 18:42           ` Roman Gushchin
2021-01-28  7:58             ` Michal Hocko
2021-01-28 14:05               ` Shakeel Butt
2021-01-28 14:22                 ` Michal Hocko
2021-01-28 14:57                   ` Shakeel Butt
2021-01-21 12:27 ` [PATCH v16 09/11] PM: hibernate: disable when there are active secretmem users Mike Rapoport
2021-01-21 12:27 ` [PATCH v16 10/11] arch, mm: wire up memfd_secret system call where relevant Mike Rapoport
2021-01-25 18:18   ` Catalin Marinas
2021-01-21 12:27 ` [PATCH v16 11/11] secretmem: test: add basic selftest for memfd_secret(2) Mike Rapoport
2021-01-21 22:18 ` [PATCH v16 00/11] mm: introduce memfd_secret system call to create "secret" memory areas Andrew Morton
