From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 2 Feb 2021 19:02:05 +0300
From: "Kirill A. Shutemov"
To: David Rientjes
Cc: Borislav Petkov, Andy Lutomirski, Sean Christopherson, Andrew Morton,
	Andi Kleen, Brijesh Singh, Tom Lendacky, Jon Grimm, Thomas Gleixner,
	Christoph Hellwig, Peter Zijlstra, Paolo Bonzini, Ingo Molnar,
	Joerg Roedel, x86@kernel.org, linux-mm@kvack.org
Subject: Re: AMD SEV-SNP/Intel TDX: validation of memory pages
Message-ID: <20210202160205.3wfchtibq2sd7pe5@black.fi.intel.com>
In-Reply-To: <7515a81a-19e-b063-2081-3f5e79f0f7a8@google.com>

On Mon, Feb 01, 2021 at 05:51:09PM -0800, David Rientjes wrote:
> Hi everybody,
>
> I'd like to kick-start the discussion on lazy validation of guest memory
> for the purposes of AMD SEV-SNP and
> Intel TDX.
>
> Both AMD SEV-SNP and Intel TDX require validation of guest memory before
> it may be used by the guest. This is needed for integrity protection
> from a potentially malicious hypervisor or other host components.
>
> For AMD SEV-SNP, the hypervisor assigns a page to the guest using the
> new RMPUPDATE instruction. The guest then transitions the page to a
> usable state with the new PVALIDATE instruction[1]. This sets the
> Validated flag in the Reverse Map Table (RMP) for a guest addressable
> page, which opts into hardware and firmware integrity protection. This
> may only be done by the guest itself and until that time, the guest
> cannot access the page.
>
> The guest can only PVALIDATE memory for a gPA once; the RMP then
> guarantees for each hPA that there is only a single gPA mapping. This
> validation can either be done all up front at the time the guest is
> booted or it can be done lazily at runtime on fault if the guest keeps
> track of Valid vs Invalid pages. Because doing PVALIDATE for all guest
> memory at boot would be extremely lengthy, I'd like to discuss the
> options for doing it lazily.
>
> Similarly, for Intel TDX, the hypervisor unmaps the gPA from the shared
> EPT and invalidates the TLB and all caches for the TD's vcpus; it then
> adds a page to the gPA address space for a TD by using the new
> TDH.MEM.PAGE.AUG call. The TDG.MEM.PAGE.ACCEPT TDCALL[2] then allows a
> guest to accept a guest page for a gPA and initialize it using the
> private key for that TD. This may only be done by the TD itself and
> until that time, the gPA cannot be used within the TD.
>
> Both AMD SEV-SNP and Intel TDX support hugepages. SEV-SNP supports 2MB
> whereas TDX has accept TDCALL support for 2MB and 1GB.
>
> I believe the UEFI ECR[3] for the unaccepted memory type to
> EFI_MEMORY_TYPE was accepted in December.
> This should enable the guest
> to learn what memory has not yet been validated (or accepted) by the
> firmware if all guest memory is not done completely up front.
>
> This likely requires a pre-validation of all memory that can be accessed
> when handling a #VC (or #VE for TDX) such as IST stacks, including
> memory in the x86 boot sequence that must be validated before the core
> mm subsystem is up and running to handle the lazy validation. I believe
> lazy validation can be done by the core mm after that, perhaps by
> maintaining a new "validated" bit in struct page flags.
>
> Has anybody looked into this or, even better, is anybody currently
> working on this?

It's likely I'm going to do this on the Intel side, but I have not looked
deeply into it yet.

> I think quite invasive changes are needed for the guest to support lazy
> validation/acceptance in core areas that lots of people on the recipient
> list have strong opinions about. Some things that come to mind:
>
> - Annotations for pages that must be pre-validated in the x86 boot
>   sequence, including IST stacks
>
> - Proliferation of these annotations throughout any kernel code that can
>   access memory for #VC or #VE
>
> - Handling lazy validation of guest memory through the core mm layer,
>   most likely involving a bit in struct page flags to track their status
>
> - Any need for validating memory that is not backed by struct page that
>   needs to be special-cased
>
> - Any concerns about this for the DMA layer
>
> One possibility for minimal disruption to the boot entry code is to
> require the guest BIOS to validate 4GB and below, and then leave 4GB and
> above to be done lazily (the true amount of memory will actually be less
> due to the MMIO hole).

[ As I haven't looked into the actual code, I may say total garbage
  below... ]

Pre-validating 4GB would indeed be the easiest way to go, but it's going
to be too slow.
More realistic is for the BIOS to pre-validate the memory where the
kernel and initrd are placed, plus a few dozen megs for runtime. It means
the decompression code would need to be aware of the validation.

The critical thing is that once memory is validated we must not validate
it again: a second validation is a possible VMM->guest attack vector. We
must track precisely what memory has been validated and stop the guest on
detecting an unexpected second validation request.

It also means that we have to keep this information when control gets
passed from the decompression code to the real kernel. A page flag is no
good for this.

My initial thought is that we can use the e820/EFI memmap to keep track
of the information: remove the unaccepted memory flag from the ranges
that got accepted. The decompression code validates the memory it needs
for decompression, modifies the memmap accordingly and passes control to
the main kernel. The main kernel may accept memory via #VE/#VC, but
ideally it needs to stay within the memory accepted by the decompression
code for the initial boot.

I think the bulk of memory validation can be done via existing machinery:
we already have deferred struct page initialization code in the kernel
and I believe we can hook into it for this purpose.

Any comments?

-- 
 Kirill A. Shutemov