From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 28 Mar 2022 18:58:35 +0000
From: Sean Christopherson
To: Quentin Perret
Cc: Chao Peng, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
	qemu-devel@nongnu.org, Paolo Bonzini, Jonathan Corbet, Vitaly Kuznetsov,
	Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
	Jeff Layton, "J. Bruce Fields", Andrew Morton, Mike Rapoport,
	Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve,
	Yu Zhang, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com,
	dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com,
	maz@kernel.org, will@kernel.org
Subject: Re: [PATCH v5 00/13] KVM: mm: fd-based approach for supporting KVM guest private memory
Message-ID:
References: <20220310140911.50924-1-chao.p.peng@linux.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:

On Mon, Mar 28, 2022, Quentin Perret wrote:
> Hi Sean,
> 
> Thanks for the reply, this helps a lot.
> 
> On Monday 28 Mar 2022 at 17:13:10 (+0000), Sean Christopherson wrote:
> > On Thu, Mar 24, 2022, Quentin Perret wrote:
> > > For Protected KVM (and I suspect most other confidential computing
> > > solutions), guests have the ability to share some of their pages back
> > > with the host kernel using a dedicated hypercall. This is necessary
> > > for e.g. virtio communications, so these shared pages need to be mapped
> > > back into the VMM's address space. I'm a bit confused about how that
> > > would work with the approach proposed here. What is going to be the
> > > approach for TDX?
> > >
> > > It feels like the most 'natural' thing would be to have a KVM exit
> > > reason describing which pages have been shared back by the guest, and to
> > > then allow the VMM to mmap those specific pages in response in the
> > > memfd. Is this something that has been discussed or considered?
> > 
> > The proposed solution is to exit to userspace with a new exit reason,
> > KVM_EXIT_MEMORY_ERROR, when the guest makes the hypercall to request
> > conversion[1].  The private fd itself will never allow mapping memory into
> > userspace, instead userspace will need to punch a hole in the private fd
> > backing store.  The absence of a valid mapping in the private fd is how KVM
> > detects that a pfn is "shared" (memslots without a private fd are always
> > shared)[2].
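
To make that flow concrete, the userspace side of a guest-requested
private=>shared conversion would look something like the sketch below.  How
exactly the gpa range gets communicated via kvm_run is still being sorted out,
so convert_to_shared() just takes the range as parameters, and the gpa=>offset
math assumes the memslot's private offset is zero; those bits are illustrative,
not final uAPI.  The hole punch itself is just a regular fallocate():

  #define _GNU_SOURCE
  #include <err.h>
  #include <fcntl.h>      /* fallocate(), FALLOC_FL_* */
  #include <stdint.h>

  /*
   * Rough sketch: convert a gpa range to shared by punching a hole in the
   * private fd bound to the memslot.  Assumes the memslot's private offset is
   * zero, i.e. offset N in the fd backs gpa (slot_base_gpa + N).
   */
  static void convert_to_shared(int private_fd, uint64_t slot_base_gpa,
                                uint64_t gpa, uint64_t size)
  {
          uint64_t offset = gpa - slot_base_gpa;

          /*
           * Freeing the private backing pages _is_ the conversion: the absence
           * of a valid mapping in the private fd is what tells KVM the range
           * is shared, so subsequent guest accesses are resolved through the
           * normal (shared) hva mapping instead.
           */
          if (fallocate(private_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                        offset, size))
                  err(1, "failed to punch hole in private fd");
  }

In other words, from userspace's perspective the private=>shared conversion is
nothing more than the hole punch; there is no separate ioctl to flip a page's
state.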

> 
> Right. I'm still a bit confused about how the VMM is going to get the
> shared page mapped in its page-table. Once it has punched a hole into
> the private fd, how is it supposed to access the actual physical page
> that the guest shared?

The guest doesn't share a _host_ physical page, the guest shares a _guest_
physical page.  Until host userspace converts the gfn to shared and thus maps
the gfn=>hva via mmap(), the guest is blocked and can't read/write/exec the
memory.

AFAIK, no architecture allows in-place decryption of guest private memory.
s390 allows a page to be "made accessible" to the host for the purposes of
swap, and other architectures will have similar behavior for migrating a
protected VM, but those scenarios are not sharing the page (and they also make
the page inaccessible to the guest).

> Is there an assumption somewhere that the VMM should have this page mapped in
> via an alias that it can legally access only once it has punched a hole at
> the corresponding offset in the private fd or something along those lines?

Yes, the VMM must have a completely separate VMA.  The VMM doesn't have to wait
until the conversion to mmap() the shared variant, though obviously it will
potentially consume double the memory if the VMM actually populates both the
private and shared backing stores.

> > The key point is that KVM never decides to convert between shared and
> > private, it's always a userspace decision.  Like normal memslots, where
> > userspace has full control over what gfns are valid, this gives userspace
> > full control over whether a gfn is shared or private at any given time.
> 
> I'm understanding this as 'the VMM is allowed to punch holes in the
> private fd whenever it wants'. Is this correct?

From the kernel's perspective, yes, the VMM can punch holes at any time.  From
a "do I want to DoS my guest" perspective, the VMM must honor its contract with
the guest and not spuriously unmap private memory.

> What happens if it does so for a page that a guest hasn't shared back?

When the hole is punched, KVM will unmap the corresponding private SPTEs.  If
the guest is still accessing the page as private, the next access will fault
and KVM will exit to userspace with KVM_EXIT_MEMORY_ERROR.  Of course the guest
is probably hosed if the hole punch was truly spurious, as at least
hardware-based protected VMs effectively destroy data when a private page is
unmapped from the guest private SPTEs.  E.g. Linux guests for TDX and SNP will
panic/terminate in such a scenario as they will get a fault (injected by
trusted hardware/firmware) saying that the guest is trying to access an
unaccepted/unvalidated page (TDX and SNP require the guest to explicitly accept
all private pages that aren't part of the guest's initial pre-boot image).
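
Stepping back to the "completely separate VMA" point, wiring up the two
independent backing stores might look roughly like the below.  Big disclaimer:
MFD_INACCESSIBLE, KVM_MEM_PRIVATE and the kvm_userspace_memory_region_ext
layout are what this series currently proposes, none of it is final uAPI, and
setup_memslot()/SLOT_SIZE are illustrative (error handling omitted):

  #define _GNU_SOURCE
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/kvm.h>
  #include <linux/memfd.h>

  #define SLOT_SIZE (1UL << 30)   /* 1GiB of guest memory for the slot */

  static void setup_memslot(int vm_fd, uint64_t guest_phys_addr)
  {
          /*
           * Shared variant: an ordinary, mmap()able VMA.  The VMM can create
           * (and even populate) this long before any conversion happens; it
           * backs whatever gfns in the slot are currently shared.
           */
          void *shared_hva = mmap(NULL, SLOT_SIZE, PROT_READ | PROT_WRITE,
                                  MAP_SHARED | MAP_ANONYMOUS, -1, 0);

          /*
           * Private variant: the fd-based backing store from this series.
           * MFD_INACCESSIBLE is the proposed flag; the fd is never mmap()ed
           * by the VMM.
           */
          int private_fd = memfd_create("guest_private", MFD_INACCESSIBLE);

          /*
           * Bind both to the same gfn range.  The _ext struct and the
           * KVM_MEM_PRIVATE flag approximate the current proposal.
           */
          struct kvm_userspace_memory_region_ext region = {
                  .region = {
                          .slot            = 0,
                          .flags           = KVM_MEM_PRIVATE,
                          .guest_phys_addr = guest_phys_addr,
                          .memory_size     = SLOT_SIZE,
                          .userspace_addr  = (uint64_t)(uintptr_t)shared_hva,
                  },
                  .private_fd     = private_fd,
                  .private_offset = 0,
          };

          ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
  }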

> > Another important detail is that this approach means the kernel and KVM
> > treat the shared backing store and private backing store as independent,
> > albeit related, entities.  This is very deliberate as it makes it easier to
> > reason about what is and isn't allowed/required.  E.g. the kernel only
> > needs to handle freeing private memory, there is no special handling for
> > conversion to shared because no such path exists as far as host pfns are
> > concerned.  And userspace doesn't need any new "rules" for protecting
> > itself against a malicious guest, e.g. userspace already needs to ensure
> > that it has a valid mapping prior to accessing guest memory (or be able to
> > handle any resulting signals).  A malicious guest can DoS itself by
> > instructing userspace to communicate over memory that is currently mapped
> > private, but there are no novel attack vectors from the host's perspective
> > as coercing the host into accessing an invalid mapping after
> > shared=>private conversion is just a variant of a use-after-free.
> 
> Interesting. I was (maybe incorrectly) assuming that it would be
> difficult to handle illegal host accesses w/ TDX. IOW, this would
> essentially crash the host. Is this remotely correct or did I get that
> wrong?

Handling illegal host kernel accesses for both TDX and SEV-SNP is extremely
difficult, bordering on impossible.  That's one of the biggest, if not _the_
biggest, motivations for the private fd approach.

On "conversion", the page that is used to back the shared variant is a
completely different, unrelated host physical page.  Whether or not the
private/shared backing page is freed is orthogonal to which version is mapped
into the guest.  E.g. if the guest converts a 4KiB chunk of a 2MiB hugepage,
the private backing store could keep the physical page on hole punch (example
only, I don't know if this is the actual proposed implementation).

The idea is that it'll be much, much more difficult for the host to perform an
illegal access if the actual private memory is not mapped anywhere (modulo the
kernel's direct map, which we may or may not leave intact).  The private
backing store just needs to ensure it properly sanitizes pages before freeing
them.
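
And to close the loop on your original question about how the VMM actually
accesses what the guest shared: once the range is converted, the VMM simply
dereferences the shared VMA like it would for any other memslot.  The sketch
below assumes the shared mapping covers the whole slot starting at offset zero;
gpa_to_shared_hva() and read_shared_buf() are illustrative helpers, not
anything from the series:

  #include <stdint.h>
  #include <string.h>

  /*
   * Sketch: translate a gpa that has been converted to shared into an hva in
   * the shared VMA.  The host physical page behind this hva has no
   * relationship to whatever page (if any) used to back the same gpa in the
   * private fd.
   */
  static inline void *gpa_to_shared_hva(void *shared_hva_base,
                                        uint64_t slot_base_gpa, uint64_t gpa)
  {
          return (char *)shared_hva_base + (gpa - slot_base_gpa);
  }

  /* e.g. copying out a virtio buffer the guest just shared back: */
  static void read_shared_buf(void *shared_hva_base, uint64_t slot_base_gpa,
                              uint64_t buf_gpa, void *dst, size_t len)
  {
          memcpy(dst, gpa_to_shared_hva(shared_hva_base, slot_base_gpa,
                                        buf_gpa), len);
  }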