From: "Kirill A. Shutemov" <kirill@shutemov.name>
To: David Hildenbrand <david@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
	Sean Christopherson <seanjc@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Joerg Roedel <jroedel@suse.de>, Ard Biesheuvel <ardb@kernel.org>,
	Andi Kleen <ak@linux.intel.com>,
	Kuppuswamy Sathyanarayanan
	<sathyanarayanan.kuppuswamy@linux.intel.com>,
	David Rientjes <rientjes@google.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Ingo Molnar <mingo@redhat.com>,
	Varad Gautam <varad.gautam@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Brijesh Singh <brijesh.singh@amd.com>,
	Mike Rapoport <rppt@kernel.org>,
	x86@kernel.org, linux-mm@kvack.org, linux-coco@lists.linux.dev,
	linux-efi@vger.kernel.org, linux-kernel@vger.kernel.org,
	Mike Rapoport <rppt@linux.ibm.com>
Subject: Re: [PATCHv4 1/8] mm: Add support for unaccepted memory
Date: Wed, 13 Apr 2022 14:30:24 +0300	[thread overview]
Message-ID: <20220413113024.ycvocn6ynerl3b7m@box.shutemov.name> (raw)
In-Reply-To: <a458c13f-9994-b227-ff61-bfdfec10bc27@redhat.com>

On Wed, Apr 13, 2022 at 12:36:11PM +0200, David Hildenbrand wrote:
> On 12.04.22 18:08, Dave Hansen wrote:
> > On 4/12/22 01:15, David Hildenbrand wrote:
> >> Can we simply automate this using a kthread or smth like that, which
> >> just traverses the free page lists and accepts pages (similar, but
> >> different to free page reporting)?
> > 
> > That's definitely doable.
> > 
> > The downside is that this will force premature consumption of physical
> > memory resources that the guest may never use.  That's a particular
> > problem on TDX systems since there is no way for a VMM to reclaim guest
> > memory short of killing the guest.
> 
> IIRC, the hypervisor will usually effectively populate all guest RAM
> either way right now.

No, it is not the usual case. By default QEMU/KVM backs guest RAM with an
anonymous mapping and faults memory in on demand.

Yes, there's an option to pre-populate guest memory, but it is not the
default.
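
To make the distinction concrete, here is a minimal userspace sketch (not
QEMU code, just an illustration of anonymous memory semantics). The
pre-populate knob I mean is roughly QEMU's -mem-prealloc / prealloc=on on
the memory backend:

#include <string.h>
#include <sys/mman.h>

#define GUEST_RAM_SIZE	(1UL << 30)	/* 1 GiB, illustrative */

int main(void)
{
	/* Anonymous mapping: no host memory is consumed at mmap() time. */
	char *ram = mmap(NULL, GUEST_RAM_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (ram == MAP_FAILED)
		return 1;

	/* Host memory is allocated only when the "guest" touches it. */
	memset(ram, 0, 4096);		/* exactly one page gets populated */

	/*
	 * Pre-population would pass MAP_POPULATE above (or use
	 * madvise(MADV_POPULATE_WRITE) on newer kernels), which is the
	 * moral equivalent of QEMU's prealloc option.
	 */
	return 0;
}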

> So yes, for hypervisors that might optimize for
> that, that statement would be true. But I lost track how helpful it
> would be in the near future e.g., with the fd-based private guest memory
> -- maybe they already optimize for delayed acceptance of memory, turning
> it into delayed population.
> 
> > 
> > In other words, I can see a good argument either way:
> > 1. The kernel should accept everything to avoid the perf nastiness
> > 2. The kernel should accept only what it needs in order to reduce memory
> >    use
> > 
> > I'm kinda partial to #1 though, if I had to pick only one.
> > 
> > The other option might be to tie this all to DEFERRED_STRUCT_PAGE_INIT.
> >  Have the rule that everything that gets a 'struct page' must be
> > accepted.  If you want to do delayed acceptance, you do it via
> > DEFERRED_STRUCT_PAGE_INIT.
> 
> That could also be an option, yes. At least being able to chose would be
> good. But IIRC, DEFERRED_STRUCT_PAGE_INIT will still make the system get
> stuck during boot and wait until everything was accepted.

Right. The deferred page init has to be finished before init runs.
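
For reference, the deferred init is driven roughly like this (simplified
sketch based on page_alloc_init_late() of that era; exact details differ
between kernel versions):

#include <linux/kthread.h>
#include <linux/nodemask.h>
#include <linux/completion.h>

/*
 * Simplified sketch: one "pgdatinit" kthread per memory node initializes
 * the deferred struct pages, and boot waits for all of them to finish
 * before userspace init can run.
 */
static void page_alloc_init_late_sketch(void)
{
	int nid;

	for_each_node_state(nid, N_MEMORY)
		kthread_run(deferred_init_memmap, NODE_DATA(nid),
			    "pgdatinit%d", nid);

	/* Blocks until every node has initialized its memmap. */
	wait_for_completion(&pgdat_init_all_done_comp);
}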

> I see the following variants:
> 
> 1) Slow boot; after boot, all memory is already accepted.
> 2) Fast boot; after boot, all memory will slowly but steadily get
>    accepted in the background. After a while, all memory is accepted and
>    can be signaled to user space.
> 3) Fast boot; after boot, memory gets accepted on demand. This is what
>    we have in this series.
> 
> I somehow don't quite like 3), but with deferred population in the
> hypervisor, it might just make sense.

Conceptually, option 3 is no different from what happens now. The first
time a normal VM touches a page (e.g. when handling __GFP_ZERO), the page
gets allocated on the host. That can take a very long time if it kicks in
direct reclaim on the host.

The only difference is that accepting memory is *usually* slower.
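
In allocator terms, accept-on-demand is roughly a hook on the allocation
path that consults the unaccepted-memory tracking (helper names as in the
later patches of this series; the hook placement here is simplified):

#include <linux/mm.h>

/*
 * Rough sketch: when the allocator hands out a page range for the first
 * time, make sure the underlying physical range has been accepted, i.e.
 * converted from unaccepted to usable private guest memory.
 */
static void maybe_accept_page(struct page *page, unsigned int order)
{
	phys_addr_t start = page_to_phys(page);
	phys_addr_t end = start + (PAGE_SIZE << order);

	if (range_contains_unaccepted_memory(start, end))
		accept_memory(start, end);
}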

I guess we can make a case for offering option 1 as a knob, to match the
pre-populated use case for normal VMs.

Frankly, I think option 2 is the worst one. It still steals CPU cycles
from the workload after boot to do work that may or may not be needed. It
is a half-measure that helps nobody.
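
For completeness, option 2 (the kthread idea from the top of the thread)
would look roughly like the sketch below. get_next_unaccepted_range() is a
made-up placeholder, not something from this series:

#include <linux/kthread.h>
#include <linux/sched.h>

/*
 * Illustrative only: a background thread that walks unaccepted ranges
 * and accepts them chunk by chunk until nothing is left, burning CPU
 * that the workload could otherwise use.
 */
static int background_accept_thread(void *unused)
{
	phys_addr_t start, end;

	while (!kthread_should_stop()) {
		/* get_next_unaccepted_range() is hypothetical. */
		if (!get_next_unaccepted_range(&start, &end))
			break;		/* everything has been accepted */

		accept_memory(start, end);
		cond_resched();
	}
	return 0;
}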

-- 
 Kirill A. Shutemov


Thread overview: 66+ messages
2022-04-05 23:43 [PATCHv4 0/8] mm, x86/cc: Implement support for unaccepted memory Kirill A. Shutemov
2022-04-05 23:43 ` [PATCHv4 1/8] mm: Add " Kirill A. Shutemov
2022-04-08 18:55   ` Dave Hansen
2022-04-09 15:54     ` Kirill A. Shutemov
2022-04-11  6:38       ` Dave Hansen
2022-04-11 10:07         ` Mike Rapoport
2022-04-13 11:40           ` Kirill A. Shutemov
2022-04-13 14:48             ` Mike Rapoport
2022-04-13 15:15               ` Kirill A. Shutemov
2022-04-13 20:06                 ` Mike Rapoport
2022-04-11  8:47       ` David Hildenbrand
2022-04-08 19:04   ` David Hildenbrand
2022-04-08 19:11   ` Dave Hansen
2022-04-09 17:52     ` Kirill A. Shutemov
2022-04-11  6:41       ` Dave Hansen
2022-04-11 15:55         ` Borislav Petkov
2022-04-11 16:27           ` Dave Hansen
2022-04-11 18:55             ` Tom Lendacky
2022-04-12  8:15     ` David Hildenbrand
2022-04-12 16:08       ` Dave Hansen
2022-04-13 10:36         ` David Hildenbrand
2022-04-13 11:30           ` Kirill A. Shutemov [this message]
2022-04-13 11:32             ` David Hildenbrand
2022-04-13 15:36             ` Dave Hansen
2022-04-13 16:07               ` David Hildenbrand
2022-04-13 16:13                 ` Dave Hansen
2022-04-13 16:24               ` Kirill A. Shutemov
2022-04-13 14:39           ` Mike Rapoport
2022-04-05 23:43 ` [PATCHv4 2/8] efi/x86: Get full memory map in allocate_e820() Kirill A. Shutemov
     [not found]   ` <Ylae+bejPzRMPrDw@zn.tnic>
2022-04-13 11:45     ` Kirill A. Shutemov
2022-04-05 23:43 ` [PATCHv4 3/8] efi/x86: Implement support for unaccepted memory Kirill A. Shutemov
2022-04-08 17:26   ` Dave Hansen
2022-04-09 19:41     ` Kirill A. Shutemov
2022-04-14 15:55     ` Borislav Petkov
2022-04-15 22:24   ` Borislav Petkov
2022-04-18 15:55     ` Kirill A. Shutemov
2022-04-18 16:38       ` Borislav Petkov
2022-04-18 20:24         ` Kirill A. Shutemov
2022-04-18 21:01           ` Borislav Petkov
2022-04-18 23:50             ` Kirill A. Shutemov
2022-04-19  7:39               ` Borislav Petkov
2022-04-19 15:30                 ` Kirill A. Shutemov
2022-04-19 16:38                   ` Dave Hansen
2022-04-19 19:23                   ` Borislav Petkov
2022-04-21 12:26                 ` Borislav Petkov
2022-04-22  0:21                 ` Kirill A. Shutemov
2022-04-22  9:30                   ` Borislav Petkov
2022-04-22 13:26                     ` Kirill A. Shutemov
2022-04-05 23:43 ` [PATCHv4 4/8] x86/boot/compressed: Handle " Kirill A. Shutemov
2022-04-08 17:57   ` Dave Hansen
2022-04-09 20:20     ` Kirill A. Shutemov
2022-04-11  6:49       ` Dave Hansen
2022-04-05 23:43 ` [PATCHv4 5/8] x86/mm: Reserve unaccepted memory bitmap Kirill A. Shutemov
2022-04-08 18:08   ` Dave Hansen
2022-04-09 20:43     ` Kirill A. Shutemov
2022-04-05 23:43 ` [PATCHv4 6/8] x86/mm: Provide helpers for unaccepted memory Kirill A. Shutemov
2022-04-08 18:15   ` Dave Hansen
2022-04-08 19:21   ` Dave Hansen
2022-04-13 16:08     ` Kirill A. Shutemov
2022-04-05 23:43 ` [PATCHv4 7/8] x86/tdx: Unaccepted memory support Kirill A. Shutemov
2022-04-08 18:28   ` Dave Hansen
2022-04-05 23:43 ` [PATCHv4 8/8] mm/vmstat: Add counter for memory accepting Kirill A. Shutemov
2022-04-12  8:18   ` David Hildenbrand
2022-04-08 17:02 ` [PATCHv4 0/8] mm, x86/cc: Implement support for unaccepted memory Dave Hansen
2022-04-09 23:44   ` Kirill A. Shutemov
2022-04-21 12:29     ` Borislav Petkov
