From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 18 Oct 2022 00:33:48 +0000
From: Sean Christopherson
To: Fuad Tabba
Cc: Chao Peng, David Hildenbrand, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
	linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini,
	Jonathan Corbet, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org,
	"H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields",
	Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price,
	"Maciej S. Szmigiero",
	Vlastimil Babka, Vishal Annapurve, Yu Zhang, "Kirill A. Shutemov",
	luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com,
	ak@linux.intel.com, aarcange@redhat.com, ddutile@redhat.com,
	dhildenb@redhat.com, Quentin Perret, Michael Roth, mhocko@suse.com,
	Muchun Song, wei.w.wang@intel.com, Will Deacon, Marc Zyngier
Subject: Re: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd
References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com>
	<20220915142913.2213336-2-chao.p.peng@linux.intel.com>
	<20220926142330.GC2658254@chaop.bj.intel.com>

On Fri, Sep 30, 2022, Fuad Tabba wrote:
> > > > > pKVM would also need a way to make an fd accessible again
> > > > > when shared back, which I think isn't possible with this patch.
> > > >
> > > > But does pKVM really want to mmap/munmap a new region at the page
> > > > level?  That can cause VMA fragmentation if the conversion is
> > > > frequent, as I see it.  Even with a KVM ioctl for mapping as mentioned
> > > > below, I think there will be the same issue.
> > >
> > > pKVM doesn't really need to unmap the memory.  What is really important
> > > is that the memory is not GUP'able.
> >
> > Well, not entirely unguppable, just unguppable without a magic FOLL_* flag,
> > otherwise KVM wouldn't be able to get the PFN to map into guest memory.
> >
> > The problem is that gup() and "mapped" are tied together.  So yes, pKVM
> > doesn't strictly need to unmap memory _in the untrusted host_, but since
> > mapped==guppable, the end result is the same.
> >
> > Emphasis above because pKVM still needs to unmap the memory _somewhere_.
> > IIUC, the current approach is to do that only in the stage-2 page tables,
> > i.e. only in the context of the hypervisor.  Which is also the source of
> > the gup() problems; the untrusted kernel is blissfully unaware that the
> > memory is inaccessible.
> >
> > Any approach that moves some of that information into the untrusted kernel
> > so that the kernel can protect itself will incur fragmentation in the
> > VMAs.  Well, unless all of guest memory becomes unguppable, but that's
> > likely not a viable option.
>
> Actually, for pKVM, there is no need for the guest memory to be GUP'able at
> all if we use the new inaccessible_get_pfn().

Ya, I was referring to pKVM without UPM / inaccessible memory.

Jumping back to blocking gup(), what about using the same tricks as secretmem
to block gup()?  E.g. compare vm_ops to block regular gup(), and a_ops to block
fast gup() on struct page?  With a Kconfig that's selected by pKVM (which would
also need its own Kconfig), e.g. CONFIG_INACCESSIBLE_MAPPABLE_MEM, there would
be zero performance overhead for non-pKVM kernels, i.e. hooking gup() shouldn't
be controversial.

I suspect the fast gup() path could even be optimized to avoid the
page_mapping() lookup by adding a PG_inaccessible flag that's defined iff the
TBD Kconfig is selected.  I'm guessing pKVM isn't expected to be deployed on
massive NUMA systems anytime soon, so there should be plenty of page flags to
go around.

Blocking gup() instead of trying to play refcount games when converting back
to private would eliminate the need to put heavy restrictions on mapping, as
the goal of those was purely to simplify the KVM implementation, e.g. the "one
mapping per memslot" thing would go away entirely.
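Concretely, the hooks could mirror secretmem's vma_is_secretmem() and
page_is_secretmem() checks in mm/gup.c.  Completely untested sketch, and all
of the "inaccessible" names (inaccessible_vm_ops, inaccessible_aops, the
Kconfig) are strawmen:

  /* In the inaccessible memfd backend (cf. mm/secretmem.c): */
  bool vma_is_inaccessible(struct vm_area_struct *vma)
  {
  	return vma->vm_ops == &inaccessible_vm_ops;
  }

  static inline bool page_is_inaccessible(struct page *page)
  {
  	struct address_space *mapping;

  	/*
  	 * page_mapping() is too heavy for the fast gup() path, so do a
  	 * racy check against page->mapping directly, exactly as
  	 * page_is_secretmem() does today.
  	 */
  	if (PageCompound(page) || !PageLRU(page))
  		return false;

  	mapping = (struct address_space *)
  		((unsigned long)page->mapping & ~PAGE_MAPPING_FLAGS);
  	if (!mapping || mapping != page->mapping)
  		return false;

  	return mapping->a_ops == &inaccessible_aops;
  }

  /* In check_vma_flags() (mm/gup.c), to block regular gup(): */
  	if (vma_is_inaccessible(vma))
  		return -EFAULT;

  /* In gup_pte_range() (mm/gup.c), to block fast gup(): */
  	if (page_is_inaccessible(page))
  		goto pte_unmap;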
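And the PG_inaccessible optimization could follow the PG_hwpoison pattern of
defining the flag iff the (TBD) Kconfig is selected, so the check compiles to
a single flag test on pKVM kernels and to nothing elsewhere.  Again untested,
names are placeholders:

  /* include/linux/page-flags.h */
  enum pageflags {
  	...
  #ifdef CONFIG_INACCESSIBLE_MAPPABLE_MEM
  	PG_inaccessible,
  #endif
  	...
  };

  #ifdef CONFIG_INACCESSIBLE_MAPPABLE_MEM
  PAGEFLAG(Inaccessible, inaccessible, PF_ANY)
  #else
  PAGEFLAG_FALSE(Inaccessible, inaccessible)
  #endif

  /* gup_pte_range() then avoids the page->mapping dance entirely: */
  	if (PageInaccessible(page))
  		goto pte_unmap;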
> This of course goes back to what I'd mentioned before in v7; it seems that
> representing the memslot memory as a file descriptor should be orthogonal to
> whether the memory is shared or private, rather than a private_fd for private
> memory and the userspace_addr for shared memory.

I also explored the idea of backing any guest memory with an fd, but came to
the conclusion that private memory needs a separate handle[1], at least on x86.

For SNP and TDX, even though the GPA is the same (ignoring the fact that SNP
and TDX steal GPA bits to differentiate private vs. shared), the two types
need to be treated as separate mappings[2].  Post-boot, converting is lossy in
both directions, so even conceptually they are two distinct pages that just
happen to share (some) GPA bits.

To allow conversions, i.e. changing which mapping to use, without memslot
updates, KVM needs to let userspace provide both mappings in a single memslot.

So while fd-based memory is an orthogonal concept, e.g. we could add fd-based
shared memory, KVM would still need a dedicated private handle.

For pKVM, the fd doesn't strictly need to be mutually exclusive with the
existing userspace_addr, but since the private_fd is going to be added for
x86, I think it makes sense to use that instead of adding generic fd-based
memory for pKVM's use case (which is arguably still "private" memory but with
special semantics).

[1] https://lore.kernel.org/all/YulTH7bL4MwT5v5K@google.com
[2] https://lore.kernel.org/all/869622df-5bf6-0fbb-cac4-34c6ae7df119@kernel.org

> The host can then map or unmap the shared/private memory using the fd, which
> allows it more freedom in even choosing to unmap shared memory when not
> needed, for example.
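Going back to "provide both mappings in a single memslot" above: at the uAPI
level that is roughly what the series being discussed does, i.e. the existing
userspace_addr keeps providing the shared mapping and a new fd+offset pair
provides the private one.  Layout recalled loosely from the v8 patches, so
treat the exact fields/padding as approximate:

  struct kvm_userspace_memory_region_ext {
  	/* slot, flags, guest_phys_addr, memory_size, userspace_addr (shared) */
  	struct kvm_userspace_memory_region region;
  	__u64 private_offset;	/* offset into private_fd */
  	__u32 private_fd;	/* inaccessible memfd backing private memory */
  	__u32 pad1;
  	__u64 pad2[14];
  };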