Date: Tue, 31 Aug 2021 20:45:10 +0000
From: Sean Christopherson
To: David Hildenbrand
Cc: Andy Lutomirski, Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel, kvm list, Linux Kernel Mailing List, Borislav Petkov,
    Andrew Morton, Andi Kleen, David Rientjes, Vlastimil Babka, Tom Lendacky,
    Thomas Gleixner, "Peter Zijlstra (Intel)", Ingo Molnar, Varad Gautam,
    Dario Faggioli, the arch/x86 maintainers, linux-mm@kvack.org,
    linux-coco@lists.linux.dev, "Kirill A. Shutemov",
    Sathyanarayanan Kuppuswamy, Dave Hansen, Yu Zhang
Subject: Re: [RFC] KVM: mm: fd-based approach for supporting KVM guest private memory
References: <20210824005248.200037-1-seanjc@google.com>
            <307d385a-a263-276f-28eb-4bc8dd287e32@redhat.com>
            <40af9d25-c854-8846-fdab-13fe70b3b279@kernel.org>
            <73319f3c-6f5e-4f39-a678-7be5fddd55f2@www.fastmail.com>
            <949e6d95-266d-0234-3b86-6bd3c5267333@redhat.com>
In-Reply-To: <949e6d95-266d-0234-3b86-6bd3c5267333@redhat.com>

On Tue, Aug 31, 2021, David Hildenbrand wrote:
> On 28.08.21 00:28, Sean Christopherson wrote:
> > On Fri, Aug 27, 2021, Andy Lutomirski wrote:
> > > On Thu, Aug 26, 2021, at 2:26 PM, David Hildenbrand wrote:
> > > > On 26.08.21 19:05, Andy Lutomirski wrote:
> > > > > Oof. That's quite a requirement. What's the point of the VMA once all
> > > > > this is done?
> > > >
> > > > You can keep using things like mbind(), madvise(), ... and the GUP code
> > > > with a special flag might mostly just do what you want. You won't have
> > > > to reinvent too many wheels on the page fault logic side at least.
> >
> > Ya, Kirill's RFC more or less proved a special GUP flag would indeed Just Work.
> > However, the KVM page fault side of things would require only a handful of small
> > changes to send private memslots down a different path. Compared to the rest of
> > the enabling, it's quite minor.
> >
> > The counter to that is other KVM architectures would need to learn how to use the
> > new APIs, though I suspect that there will be a fair bit of arch enabling regardless
> > of what route we take.
> >
> > > You can keep calling the functions. The implementations working is a
> > > different story: you can't just unmap (pte_numa-style or otherwise) a private
> > > guest page to quiesce it, move it with memcpy(), and then fault it back in.
> >
> > Ya, I brought this up in my earlier reply. Even the initial implementation (without
> > real NUMA support) would likely be painful, e.g. the KVM TDX RFC/PoC adds dedicated
> > logic in KVM to handle the case where NUMA balancing zaps a _pinned_ page and then
> > KVM faults in the same pfn. It's not thaaat ugly, but it's arguably more invasive
> > to KVM's page fault flows than a new fd-based private memslot scheme.
>
> I might have a different mindset, but less code churn doesn't necessarily
> translate to "better approach".

I wasn't referring to code churn. By "invasive" I mean the number of touchpoints
in KVM as well as the nature of the touchpoints, e.g. poking into how KVM uses
available bits in its shadow PTEs and adding multiple checks throughout KVM's
page fault handler, versus two callbacks to get the PFN and page size.

> I'm certainly not pushing for what I proposed (it's a rough, broken sketch).
> I'm much rather trying to come up with alternatives that try solving the
> same issue, handling the identified requirements.
>
> I have a gut feeling that the list of requirements might not be complete
> yet. For example, I wonder if we have to protect against user space
> replacing private pages by shared pages or punching random holes into the
> encrypted memory fd.

Replacing a private page with a shared page for a given GFN is very much a
requirement, as it's expected behavior for all VMM+guests when converting guest
memory between shared and private.

Punching holes is a sort of optional requirement. It's a "requirement" in that
it's allowed if the backing store supports such a behavior, optional in that
support wouldn't be strictly necessary and/or could come with constraints. The
expected use case is that host userspace would punch a hole to free unreachable
private memory, e.g. after the corresponding GFN(s) is converted to shared, so
that it doesn't consume 2x memory for the guest.