From: Vlastimil Babka <vbabka@suse.cz>
To: Mike Rapoport <rppt@kernel.org>, Dave Hansen <dave.hansen@intel.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	Andy Lutomirski <luto@kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Ira Weiny <ira.weiny@intel.com>,
	Kees Cook <keescook@chromium.org>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Rick Edgecombe <rick.p.edgecombe@intel.com>,
	x86@kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 4/4] x86/mm: write protect (most) page tables
Date: Thu, 26 Aug 2021 11:01:18 +0200
Message-ID: <de31a1c7-1a35-831f-3eed-e4a6e77f9e44@suse.cz>
In-Reply-To: <YSdKsTX3EQQqgj0y@kernel.org>

On 8/26/21 10:02, Mike Rapoport wrote:
> On Mon, Aug 23, 2021 at 04:50:10PM -0700, Dave Hansen wrote:
>> On 8/23/21 6:25 AM, Mike Rapoport wrote:
>> >  void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
>> >  {
>> > +	enable_pgtable_write(page_address(pte));
>> >  	pgtable_pte_page_dtor(pte);
>> >  	paravirt_release_pte(page_to_pfn(pte));
>> >  	paravirt_tlb_remove_table(tlb, pte);
>> > @@ -69,6 +73,7 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
>> >  #ifdef CONFIG_X86_PAE
>> >  	tlb->need_flush_all = 1;
>> >  #endif
>> > +	enable_pgtable_write(pmd);
>> >  	pgtable_pmd_page_dtor(page);
>> >  	paravirt_tlb_remove_table(tlb, page);
>> >  }
>> 
>> I'm also cringing a bit at hacking this into the page allocator.   A
>> *lot* of what you're trying to do with getting large allocations out and
>> splitting them up is done very well today by the slab allocators.  It
>> might take some rearrangement of 'struct page' metadata to be more slab
>> friendly, but it does seem like a close enough fit to warrant investigating.
> 
> I thought more about using slab, but it seems to me to be the least suitable
> option. The use cases at hand (page tables, secretmem, SEV/TDX) allocate at
> page granularity, and some of them use struct page metadata, so even its
> rearrangement won't help. And adding support for 2M slabs to SLUB would be
> quite intrusive.

Agree, and there would be unnecessary memory overhead too; SLUB would be happy
to cache a 2MB block on each CPU, etc.
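(For reference, the struct page metadata in question is things like the split
page-table lock and the PageTable page type; the pte ctor/dtor behind the hunk
above look roughly like this in include/linux/mm.h of this era, paraphrased
from memory, and none of that would come for free with a slab-backed object:)

static inline bool pgtable_pte_page_ctor(struct page *page)
{
	if (!ptlock_init(page))		/* split ptlock state sits in struct page */
		return false;
	__SetPageTable(page);		/* page type is marked on the page itself */
	inc_lruvec_page_state(page, NR_PAGETABLE);
	return true;
}

static inline void pgtable_pte_page_dtor(struct page *page)
{
	ptlock_free(page);
	__ClearPageTable(page);
	dec_lruvec_page_state(page, NR_PAGETABLE);
}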

> I think that better options are moving such a cache deeper into buddy or
> using e.g. genalloc instead of a list to deal with higher-order allocations.
> 
> The choice between these two will mostly depend on the API selection, i.e.
> a GFP flag or a dedicated alloc/free.

Implementing on top of buddy still seems like the better option to me.
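
Just to spell out the two API shapes from the caller's side (the GFP flag is
the __GFP_PTE_MAPPED one that patch 3/4 introduces; the dedicated helpers in
the second variant are made-up names, purely for illustration):

	struct page *page;

	/* Option A: a GFP flag resolved inside the buddy allocator
	 * (__GFP_PTE_MAPPED from patch 3/4 of this series).
	 */
	page = alloc_page(GFP_PGTABLE_USER | __GFP_PTE_MAPPED);
	__free_page(page);		/* freed through the normal page free path */

	/* Option B: a dedicated alloc/free pair layered on buddy or genalloc.
	 * pte_mapped_{alloc,free}_page() are invented names, not an existing API.
	 */
	page = pte_mapped_alloc_page(GFP_PGTABLE_USER);
	pte_mapped_free_page(page);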

Thread overview: 28+ messages
2021-08-23 13:25 [RFC PATCH 0/4] mm/page_alloc: cache pte-mapped allocations Mike Rapoport
2021-08-23 13:25 ` [RFC PATCH 1/4] list: Support getting most recent element in list_lru Mike Rapoport
2021-08-23 13:25 ` [RFC PATCH 2/4] list: Support list head not in object for list_lru Mike Rapoport
2021-08-23 13:25 ` [RFC PATCH 3/4] mm/page_alloc: introduce __GFP_PTE_MAPPED flag to allocate pte-mapped pages Mike Rapoport
2021-08-23 20:29   ` Edgecombe, Rick P
2021-08-24 13:02     ` Mike Rapoport
2021-08-24 16:38       ` Edgecombe, Rick P
2021-08-24 16:54         ` Mike Rapoport
2021-08-24 17:23           ` Edgecombe, Rick P
2021-08-24 17:37             ` Mike Rapoport
2021-08-24 16:12   ` Vlastimil Babka
2021-08-25  8:43   ` David Hildenbrand
2021-08-23 13:25 ` [RFC PATCH 4/4] x86/mm: write protect (most) page tables Mike Rapoport
2021-08-23 20:08   ` Edgecombe, Rick P
2021-08-23 23:50   ` Dave Hansen
2021-08-24  3:34     ` Andy Lutomirski
2021-08-25 14:59       ` Dave Hansen
2021-08-24 13:32     ` Mike Rapoport
2021-08-25  8:38     ` David Hildenbrand
2021-08-26  8:02     ` Mike Rapoport
2021-08-26  9:01       ` Vlastimil Babka [this message]
2021-08-24  5:32   ` Nadav Amit
2021-08-24  5:34     ` Nadav Amit
2021-08-24 13:36       ` Mike Rapoport
2021-08-23 20:02 ` [RFC PATCH 0/4] mm/page_alloc: cache pte-mapped allocations Edgecombe, Rick P
2021-08-24 13:03   ` Mike Rapoport
2021-08-24 16:09 ` Vlastimil Babka
2021-08-29  7:06   ` Mike Rapoport
