Subject: Re: [RFC] KVM: mm: fd-based approach for supporting KVM guest private memory
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
To: Sean Christopherson
Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
    kvm@vger.kernel.org, linux-kernel@vger.kernel.org, Borislav Petkov,
    Andy Lutomirski, Andrew Morton, Andi Kleen, David Rientjes,
    Vlastimil Babka, Tom Lendacky, Thomas Gleixner, Peter Zijlstra,
    Ingo Molnar, Varad Gautam, Dario Faggioli, x86@kernel.org,
    linux-mm@kvack.org, linux-coco@lists.linux.dev, Kirill A. Shutemov,
    Kuppuswamy Sathyanarayanan, Dave Hansen, Yu Zhang
Date: Wed, 1 Sep 2021 10:09:07 +0200
References: <20210824005248.200037-1-seanjc@google.com>
    <307d385a-a263-276f-28eb-4bc8dd287e32@redhat.com>
    <61ea53ce-2ba7-70cc-950d-ca128bcb29c5@redhat.com>

>> Do we have to protect from that? How would KVM protect from user space
>> replacing private pages by shared pages in any of the models we discuss?
>
> The overarching rule is that KVM needs to guarantee a given pfn is never
> mapped[*] as both private and shared, where "shared" also incorporates any
> mapping from the host. Essentially it boils down to the kernel ensuring
> that a pfn is unmapped before it's converted to/from private, and KVM
> ensuring that it honors any unmap notifications from the kernel, e.g. via
> mmu_notifier or via a direct callback as proposed in this RFC.

Okay, so the fallocate(FALLOC_FL_PUNCH_HOLE) from user space could trigger
the respective unmapping and freeing of the backing storage.
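Just to spell out the flow I have in mind -- a rough sketch only, every
structure and callback name below is invented for illustration, nothing of
this exists today:

struct guest_mem_ops {
        /* Invoked before the pages backing [start, end) are freed. */
        void (*invalidate_range)(void *owner, pgoff_t start, pgoff_t end);
};

static long guest_mem_punch_hole(struct guest_mem *gmem, loff_t offset,
                                 loff_t len)
{
        pgoff_t start = offset >> PAGE_SHIFT;
        pgoff_t end = (offset + len) >> PAGE_SHIFT;

        /* 1) Downstream MMUs (KVM) unmap/zap the affected pfns ... */
        gmem->ops->invalidate_range(gmem->owner, start, end);

        /* 2) ... only then is it safe to free the backing storage. */
        truncate_inode_pages_range(gmem->mapping, offset, offset + len - 1);
        return 0;
}

That way the PUNCH_HOLE semantics would match what shmem does today, just
with an explicit notification channel instead of going via the rmap.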
> As it pertains to PUNCH_HOLE, the responsibilities are no different than
> when the backing-store is destroyed; the backing-store needs to notify
> downstream MMUs (a.k.a. KVM) to unmap the pfn(s) before freeing the
> associated memory.

Right.

> [*] Whether or not the kernel's direct mapping needs to be removed is
>     debatable, but my argument is that that behavior is not visible to
>     userspace and thus out of scope for this discussion, e.g.
>     zapping/restoring the direct map can be added/removed without
>     impacting the userspace ABI.

Right. Removing it also shouldn't be required IMHO. There are other ways to
teach the kernel not to read/write some online pages (filter /proc/kcore,
disable hibernation, strict access checks for /dev/mem ...).
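For example, something along these lines -- PageGuestPrivate() is a made-up
page flag that the fd provider would set on its pages; /proc/kcore already
skips offline and hwpoisoned pages in a similar fashion:

/* Illustrative only: generic accessors would refuse to touch
 * guest-private pages via the direct mapping. */
static bool kernel_page_is_accessible(struct page *page)
{
        if (!page || PageOffline(page) || PageHWPoison(page))
                return false;
        if (PageGuestPrivate(page))     /* made-up flag, see above */
                return false;
        return true;
}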
>>>> Define "ordinary" user memory slots as overlay on top of "encrypted"
>>>> memory slots. Inside KVM, bail out if you encounter such a VMA inside a
>>>> normal user memory slot. When creating an "encrypted" user memory slot,
>>>> require that the whole VMA is covered at creation time. You know the
>>>> VMA can't change later.
>>>
>>> This can work for the basic use cases, but even then I'd strongly prefer
>>> not to tie memslot correctness to the VMAs. KVM doesn't truly care what
>>> lies behind the virtual address of a memslot, and when it does care, it
>>> tends to do poorly, e.g. see the whole PFNMAP snafu. KVM cares about the
>>> pfn<->gfn mappings, and that's reflected in the infrastructure. E.g. KVM
>>> relies on the mmu_notifiers to handle mprotect()/munmap()/etc...
>>
>> Right, and for the existing use cases this worked. But encrypted memory
>> breaks many assumptions we once made ...
>>
>> I have somewhat mixed feelings about pages that are mapped into $WHATEVER
>> page tables but not actually mapped into user space page tables. There is
>> no way to reach these via the rmap.
>>
>> We have something like that already via vfio. And that is fundamentally
>> broken when it comes to mmu notifiers, page pinning, page migration, ...
>
> I'm not super familiar with VFIO internals, but the idea with the fd-based
> approach is that the backing-store would be in direct communication with
> KVM and would handle those operations through that direct channel.

Right. The problem I am seeing is that, e.g., try_to_unmap() might not be
able to actually fully unmap a page, because some non-synchronized KVM MMU
still maps the page. It would be great to evaluate how the fd callbacks
would fit into the whole picture, including the current rmap. I guess I'm
missing the bigger picture of how it all fits together on the !KVM side.

>>> As is, I don't think KVM would get any kind of notification if userspace
>>> unmaps the VMA for a private memslot that does not have any entries in
>>> the host page tables. I'm sure it's a solvable problem, e.g. by ensuring
>>> at least one page is touched by the backing store, but I don't think the
>>> end result would be any prettier than a dedicated API for KVM to consume.
>>>
>>> Relying on VMAs, and thus the mmu_notifiers, also doesn't provide line
>>> of sight to page migration or swap. For those types of operations, KVM
>>> currently just reacts to invalidation notifications by zapping guest
>>> PTEs, and then gets the new pfn when the guest re-faults on the page.
>>> That sequence doesn't work for TDX or SEV-SNP because the trusted agent
>>> needs to do the memcpy() of the page contents, i.e. the host needs to
>>> call into KVM for the actual migration.
>>
>> Right, but I still think this is a kernel internal. You can do such a
>> handshake later in the kernel IMHO.
>
> It is kernel internal, but AFAICT it will be ugly because KVM "needs" to
> do the migration and that would invert the mmu_notifier API, e.g. instead
> of "telling" secondary MMUs to invalidate/change a mapping, the mm would
> be "asking" secondary MMUs "can you move this?". More below.

In my thinking, the rmap via mmu notifiers would do the unmapping just as we
know it (from primary MMU -> secondary MMU). Once try_to_unmap() succeeded,
the fd provider could kick off the migration via whatever callback.
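To make that ordering concrete -- again only a sketch, the migrate callback
and its signature are invented; only try_to_unmap()/page_mapped() are real:

static int guest_mem_migrate_page(struct guest_mem *gmem, pgoff_t index,
                                  struct page *old, struct page *new)
{
        /* 1) Unmap from the primary MMU; the mmu notifiers zap the
         *    secondary MMU mappings (KVM's SPTEs) in the process. */
        try_to_unmap(old, TTU_IGNORE_MLOCK);
        if (page_mapped(old))
                return -EBUSY;

        /* 2) Only then hand off to KVM, which alone can copy the
         *    contents; on TDX this would end up in a SEAMCALL (gfn +
         *    mapping level), issued under kvm->mmu_lock. */
        return gmem->kvm_ops->migrate_private_page(gmem->owner, index,
                                                   old, new);
}

Whether the copy can race with a memslot change would then be KVM's problem,
serialized by its mmu_lock -- matching what you describe below.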
>> But I also already thought: is it really KVM that is to perform the
>> migration or is it the fd provider that performs the migration? Who says
>> memfd_encrypted() doesn't default to a TDX "backend" on Intel CPUs that
>> just knows how to migrate such a page?
>>
>> I'd love to have some details on how that's supposed to work, and which
>> information we'd need to migrate/swap/... in addition to the EPFN and a
>> new SPFN.
>
> KVM "needs" to do the migration. On TDX, the migration will be a SEAMCALL,
> a post-VMXON instruction that transfers control to the TDX-Module, that at
> minimum needs a per-VM identifier, the gfn, and the page table level. The call

The per-VM identifier and the GFN would be easy to grab. Page table level,
not so sure -- do you mean the general page table depth? Or whether it's
mapped as 4k vs. 2M ...? The latter could be answered by the fd provider
already, I assume.

Does the page still have to be mapped into the secondary MMU when performing
the migration via TDX? I assume not, which would simplify things a lot.

> into the TDX-Module would also need to take a KVM lock (probably KVM's
> mmu_lock) to satisfy TDX's concurrency requirement, e.g. to avoid
> "spurious" errors due to the backing-store attempting to migrate memory
> that KVM is unmapping due to a memslot change.

Something like that might be handled by making private memory slots fixed,
similar to my draft, right?

> The per-VM identifier may not apply to SEV-SNP, but I believe everything
> else holds true.

Thanks!

-- 
Thanks,

David / dhildenb