Date: Wed, 14 Jul 2021 22:25:03 +0000
From: Sean Christopherson
To: Brijesh Singh
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
	linux-efi@vger.kernel.org, platform-driver-x86@vger.kernel.org,
	linux-coco@lists.linux.dev, linux-mm@kvack.org,
	linux-crypto@vger.kernel.org, Thomas Gleixner, Ingo Molnar,
	Joerg Roedel, Tom Lendacky, "H. Peter Anvin", Ard Biesheuvel,
	Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Andy Lutomirski, Dave Hansen, Sergio Lopez, Peter Gonda,
	Peter Zijlstra, Srinivas Pandruvada, David Rientjes, Dov Murik,
	Tobin Feldman-Fitzthum, Borislav Petkov, Michael Roth,
	Vlastimil Babka, tony.luck@intel.com, npmccallum@redhat.com,
	brijesh.ksingh@gmail.com
Subject: Re: [PATCH Part2 RFC v4 07/40] x86/sev: Split the physmap when adding the page in RMP table
References: <20210707183616.5620-1-brijesh.singh@amd.com>
	<20210707183616.5620-8-brijesh.singh@amd.com>
In-Reply-To: <20210707183616.5620-8-brijesh.singh@amd.com>

On Wed, Jul 07, 2021, Brijesh Singh wrote:
> The integrity guarantee of SEV-SNP is enforced through the RMP table.
> The RMP is used in conjunction with standard x86 and IOMMU page
> tables to enforce memory restrictions and page access rights. The
> RMP is indexed by system physical address, and is checked at the end
> of CPU and IOMMU table walks. The RMP check is enforced as soon as
> SEV-SNP is enabled globally in the system. Not every memory access
> requires an RMP check. In particular, read accesses from the
> hypervisor do not require RMP checks because data confidentiality
> is already protected via memory encryption. When hardware encounters
> an RMP check failure, it raises a page-fault exception. The RMP bit in
> the fault error code can be used to determine if the fault was due to
> an RMP check failure.
>
> A write from the hypervisor goes through the RMP checks. When the
> hypervisor writes to pages, hardware checks to ensure that the assigned
> bit in the RMP is zero (i.e. the page is shared). If the page table
> entry that gives the sPA indicates that the target page size is a large
> page, then all RMP entries for the constituent 4KB pages of the target
> must have the assigned bit 0. If one of the entries does not have the
> assigned bit 0, then hardware will raise an RMP violation. To resolve
> it, split the page table entry leading to the target page into 4K.

Isn't the above just saying:

  All RMP entries covered by a large page must match the shared vs. encrypted
  state of the page, e.g. host large pages must have assigned=0 for all
  relevant RMP entries.

> This poses a challenge in the Linux memory model. The Linux kernel
> creates a direct mapping of all the physical memory -- referred to as
> the physmap. The physmap may contain a valid mapping of guest-owned
> pages. During the page table walk, a host access may run into the
> situation where one of the pages within the large page is owned by the
> guest (i.e. its assigned bit is set in the RMP). A write to a non-guest
> page within the large page will raise an RMP violation. Call
> set_memory_4k() to split the physmap before adding the page in the RMP
> table. This ensures that the pages added to the RMP table are mapped as
> 4K in the physmap.
>
> Signed-off-by: Brijesh Singh
> ---
>  arch/x86/kernel/sev.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
> index 949efe530319..a482e01f880a 100644
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -2375,6 +2375,12 @@ int rmpupdate(struct page *page, struct rmpupdate *val)
>  	if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
>  		return -ENXIO;
>
> +	ret = set_memory_4k((unsigned long)page_to_virt(page), 1);

IIUC, this shatters the direct map for any page that's assigned to an SNP
guest, and the large pages are never recovered?

I believe a better approach would be to do something similar to
memfd_secret[*], which encountered a similar problem with the direct map.
Instead of forcing the direct map to be forever 4k, unmap the direct map
when making a page guest private, and restore the direct map when it's
made shared (or freed).

I thought memfd_secret had also solved the problem of restoring large
pages in the direct map, but at a glance I can't tell if that's actually
implemented anywhere.  But, even if it's not currently implemented, I
think it makes sense to mimic the memfd_secret approach so that both
features can benefit if large page preservation/restoration is ever
added.
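
Roughly something like the below, as a completely untested sketch (the
snp_adjust_direct_map() helper and the val->assigned field name are
hypothetical; set_direct_map_invalid_noflush(), set_direct_map_default_noflush()
and flush_tlb_kernel_range() are the existing helpers memfd_secret builds on):

	/* Needs <linux/set_memory.h> and <asm/tlbflush.h>. */
	static int snp_adjust_direct_map(struct page *page, bool assigned)
	{
		unsigned long vaddr = (unsigned long)page_address(page);
		int ret;

		/*
		 * Drop the direct map entry when the page becomes guest
		 * private, restore it when the page goes back to shared,
		 * instead of permanently splitting the direct map with
		 * set_memory_4k().
		 */
		if (assigned)
			ret = set_direct_map_invalid_noflush(page);
		else
			ret = set_direct_map_default_noflush(page);
		if (ret)
			return ret;

		flush_tlb_kernel_range(vaddr, vaddr + PAGE_SIZE);
		return 0;
	}

	int rmpupdate(struct page *page, struct rmpupdate *val)
	{
		int ret;

		if (!cpu_feature_enabled(X86_FEATURE_SEV_SNP))
			return -ENXIO;

		/* Unmap on private, remap on shared/free. */
		ret = snp_adjust_direct_map(page, val->assigned);
		if (ret)
			return ret;

		/* ... existing RMPUPDATE retry loop ... */
		return 0;
	}

Error handling and ordering (e.g. remapping only after the RMPUPDATE has
actually succeeded) would obviously need more care, the above is just meant
to illustrate the shape of the memfd_secret-style approach.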
[*] https://lkml.kernel.org/r/20210518072034.31572-5-rppt@kernel.org

> +	if (ret) {
> +		pr_err("Failed to split physical address 0x%lx (%d)\n", spa, ret);
> +		return ret;
> +	}
> +
>  	/* Retry if another processor is modifying the RMP entry. */
>  	do {
>  		/* Binutils version 2.36 supports the RMPUPDATE mnemonic. */
> --
> 2.17.1
>