linux-kernel.vger.kernel.org archive mirror
From: Dave Hansen <dave.hansen@intel.com>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
	Sean Christopherson <seanjc@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Joerg Roedel <jroedel@suse.de>, Ard Biesheuvel <ardb@kernel.org>
Cc: Andi Kleen <ak@linux.intel.com>,
	Kuppuswamy Sathyanarayanan 
	<sathyanarayanan.kuppuswamy@linux.intel.com>,
	David Rientjes <rientjes@google.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Ingo Molnar <mingo@redhat.com>,
	Varad Gautam <varad.gautam@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv2 1/7] mm: Add support for unaccepted memory
Date: Tue, 11 Jan 2022 11:46:37 -0800	[thread overview]
Message-ID: <3a68fabd-eaff-2164-5609-3a71fd4a7257@intel.com> (raw)
In-Reply-To: <20220111113314.27173-2-kirill.shutemov@linux.intel.com>

> diff --git a/mm/memblock.c b/mm/memblock.c
> index 1018e50566f3..6dfa594192de 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1400,6 +1400,7 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
>   		 */
>   		kmemleak_alloc_phys(found, size, 0, 0);
>   
> +	accept_memory(found, found + size);
>   	return found;
>   }

This could use a comment.

Looking at this, I also have to wonder if accept_memory() is a bit too 
generic.  Should it perhaps be: cc_accept_memory() or 
cc_guest_accept_memory()?

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index c5952749ad40..5707b4b5f774 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1064,6 +1064,7 @@ static inline void __free_one_page(struct page *page,
>   	unsigned int max_order;
>   	struct page *buddy;
>   	bool to_tail;
> +	bool offline = PageOffline(page);
>   
>   	max_order = min_t(unsigned int, MAX_ORDER - 1, pageblock_order);
>   
> @@ -1097,6 +1098,10 @@ static inline void __free_one_page(struct page *page,
>   			clear_page_guard(zone, buddy, order, migratetype);
>   		else
>   			del_page_from_free_list(buddy, zone, order);
> +
> +		if (PageOffline(buddy))
> +			offline = true;
> +
>   		combined_pfn = buddy_pfn & pfn;
>   		page = page + (combined_pfn - pfn);
>   		pfn = combined_pfn;
> @@ -1130,6 +1135,9 @@ static inline void __free_one_page(struct page *page,
>   done_merging:
>   	set_buddy_order(page, order);
>   
> +	if (offline)
> +		__SetPageOffline(page);
> +
>   	if (fpi_flags & FPI_TO_TAIL)
>   		to_tail = true;
>   	else if (is_shuffle_order(order))

This is touching some pretty hot code paths.  You mention that 
accepting memory is both slow and expensive, yet you're doing it in the 
core allocator.

That needs at least some discussion in the changelog.

> @@ -1155,7 +1163,8 @@ static inline void __free_one_page(struct page *page,
>   static inline bool page_expected_state(struct page *page,
>   					unsigned long check_flags)
>   {
> -	if (unlikely(atomic_read(&page->_mapcount) != -1))
> +	if (unlikely(atomic_read(&page->_mapcount) != -1) &&
> +	    !PageOffline(page))
>   		return false;

Looking at stuff like this, I can't help but think that a:

	#define PageUnaccepted PageOffline

and some other renaming would be a fine idea.  I get that the Offline 
bit can be reused, but I'm not sure that the "Offline" *naming* should 
be reused.  What you're doing here is logically distinct from existing 
offlining.

>   	if (unlikely((unsigned long)page->mapping |
> @@ -1734,6 +1743,8 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
>   {
>   	if (early_page_uninitialised(pfn))
>   		return;
> +
> +	maybe_set_page_offline(page, order);
>   	__free_pages_core(page, order);
>   }
>   
> @@ -1823,10 +1834,12 @@ static void __init deferred_free_range(unsigned long pfn,
>   	if (nr_pages == pageblock_nr_pages &&
>   	    (pfn & (pageblock_nr_pages - 1)) == 0) {
>   		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> +		maybe_set_page_offline(page, pageblock_order);
>   		__free_pages_core(page, pageblock_order);
>   		return;
>   	}
>   
> +	accept_memory(pfn << PAGE_SHIFT, (pfn + nr_pages) << PAGE_SHIFT);
>   	for (i = 0; i < nr_pages; i++, page++, pfn++) {
>   		if ((pfn & (pageblock_nr_pages - 1)) == 0)
>   			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> @@ -2297,6 +2310,9 @@ static inline void expand(struct zone *zone, struct page *page,
>   		if (set_page_guard(zone, &page[size], high, migratetype))
>   			continue;
>   
> +		if (PageOffline(page))
> +			__SetPageOffline(&page[size]);

Yeah, this is really begging for comments.  Please add some.

>   		add_to_free_list(&page[size], zone, high, migratetype);
>   		set_buddy_order(&page[size], high);
>   	}
> @@ -2393,6 +2409,9 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>   	 */
>   	kernel_unpoison_pages(page, 1 << order);
>   
> +	if (PageOffline(page))
> +		accept_and_clear_page_offline(page, order);
> +
>   	/*
>   	 * As memory initialization might be integrated into KASAN,
>   	 * kasan_alloc_pages and kernel_init_free_pages must be

I guess once there are no more PageOffline() pages in the allocator, the 
only impact from these patches will be a bunch of conditional branches 
from the "if (PageOffline(page))" that always have the same result.  The 
branch predictors should do a good job with that.

*BUT*, that overhead is going to be universally inflicted on all users 
on x86, even those without TDX.  I guess the compiler will save non-x86 
users because they'll have an empty stub for 
accept_and_clear_page_offline() which the compiler will optimize away.

It sure would be nice to have some changelog material about why this is 
OK, though.  This is especially true since there's a global spinlock 
hidden in accept_and_clear_page_offline() wrapping a slow and "costly" 
operation.

Thread overview: 25+ messages
2022-01-11 11:33 [PATCHv2 0/7] Implement support for unaccepted memory Kirill A. Shutemov
2022-01-11 11:33 ` [PATCHv2 1/7] mm: Add " Kirill A. Shutemov
2022-01-11 19:46   ` Dave Hansen [this message]
2022-01-12 11:31     ` David Hildenbrand
2022-01-12 19:15       ` Kirill A. Shutemov
2022-01-14 13:22         ` David Hildenbrand
2022-01-12 18:30     ` Kirill A. Shutemov
2022-01-12 18:40       ` Dave Hansen
2022-01-13  7:42         ` Mike Rapoport
2022-01-11 11:33 ` [PATCHv2 2/7] efi/x86: Get full memory map in allocate_e820() Kirill A. Shutemov
2022-01-11 11:33 ` [PATCHv2 3/7] efi/x86: Implement support for unaccepted memory Kirill A. Shutemov
2022-01-11 17:17   ` Dave Hansen
2022-01-12 19:29     ` Kirill A. Shutemov
2022-01-12 19:35       ` Dave Hansen
2022-01-11 11:33 ` [PATCHv2 4/7] x86/boot/compressed: Handle " Kirill A. Shutemov
2022-01-11 11:33 ` [PATCHv2 5/7] x86/mm: Reserve unaccepted memory bitmap Kirill A. Shutemov
2022-01-11 19:10   ` Dave Hansen
2022-01-12 19:43     ` Kirill A. Shutemov
2022-01-12 19:53       ` Dave Hansen
2022-01-15 18:46         ` Mike Rapoport
2022-01-11 11:33 ` [PATCHv2 6/7] x86/mm: Provide helpers for unaccepted memory Kirill A. Shutemov
2022-01-11 20:01   ` Dave Hansen
2022-01-12 19:43     ` Kirill A. Shutemov
2022-01-11 11:33 ` [PATCHv2 7/7] x86/tdx: Unaccepted memory support Kirill A. Shutemov
2022-01-18 21:05 ` [PATCHv2 0/7] Implement support for unaccepted memory Brijesh Singh
