Date: Mon, 23 Jan 2023 23:38:07 +0000
From: Sean Christopherson
To: "Huang, Kai"
Cc: chao.p.peng@linux.intel.com, vbabka@suse.cz, tglx@linutronix.de, linux-arch@vger.kernel.org, kvm@vger.kernel.org, jmattson@google.com, "Lutomirski, Andy", ak@linux.intel.com, kirill.shutemov@linux.intel.com, "Hocko, Michal", tabba@google.com, qemu-devel@nongnu.org, david@redhat.com, michael.roth@amd.com, corbet@lwn.net, linux-kernel@vger.kernel.org, dhildenb@redhat.com, bfields@fieldses.org, linux-fsdevel@vger.kernel.org, x86@kernel.org, bp@alien8.de, ddutile@redhat.com, rppt@kernel.org, shuah@kernel.org, vkuznets@redhat.com, naoya.horiguchi@nec.com, linux-api@vger.kernel.org, qperret@google.com, arnd@arndb.de, pbonzini@redhat.com, "Annapurve, Vishal", mail@maciej.szmigiero.name, wanpengli@tencent.com, yu.c.zhang@linux.intel.com, hughd@google.com, aarcange@redhat.com, mingo@redhat.com, hpa@zytor.com, "Nakajima, Jun", jlayton@kernel.org, joro@8bytes.org, linux-mm@kvack.org, "Wang, Wei W", steven.price@arm.com, linux-doc@vger.kernel.org, "Hansen, Dave", akpm@linux-foundation.org, linmiaohe@huawei.com
Subject: Re: [PATCH v10 1/9] mm: Introduce memfd_restricted system call to create restricted user memory
References: <20221202061347.1070246-2-chao.p.peng@linux.intel.com> <5c6e2e516f19b0a030eae9bf073d555c57ca1f21.camel@intel.com> <20221219075313.GB1691829@chaop.bj.intel.com> <20221220072228.GA1724933@chaop.bj.intel.com> <126046ce506df070d57e6fe5ab9c92cdaf4cf9b7.camel@intel.com> <20221221133905.GA1766136@chaop.bj.intel.com> <010a330c-a4d5-9c1a-3212-f9107d1c5f4e@suse.cz> <0959c72ec635688f4b6c1b516815f79f52543b31.camel@intel.com>
In-Reply-To: <0959c72ec635688f4b6c1b516815f79f52543b31.camel@intel.com>
List-ID: linux-api@vger.kernel.org

On Mon, Jan 23, 2023, Huang, Kai wrote:
> On Mon, 2023-01-23 at 15:03 +0100, Vlastimil Babka wrote:
> > On 12/22/22 01:37, Huang, Kai wrote:
> > > > > I argue that this page pinning (or page migration prevention) is not
> > > > > tied to where the page comes from, instead related to how the page will
> > > > > be used. Whether the page is restrictedmem backed or GUP() backed, once
> > > > > it's used by current version of TDX then the page pinning is needed. So
> > > > > such page migration prevention is really TDX thing, even not KVM generic
> > > > > thing (that's why I think we don't need change the existing logic of
> > > > > kvm_release_pfn_clean()).
> > > > >
> > > This essentially boils down to who "owns" page migration handling, and sadly,
> > > page migration is kinda "owned" by the core-kernel, i.e. KVM cannot handle page
> > > migration by itself -- it's just a passive receiver.
> > >
> > > For normal pages, page migration is totally done by the core-kernel (i.e. it
> > > unmaps page from VMA, allocates a new page, and uses migrate_page() or
> > > a_ops->migrate_page() to actually migrate the page).
> > >
> > > In the sense of TDX, conceptually it should be done in the same way. The more
> > > important thing is: yes KVM can use get_page() to prevent page migration, but
> > > when KVM wants to support it, KVM cannot just remove get_page(), as the
> > > core-kernel will still just do migrate_page() which won't work for TDX (given
> > > restricted_memfd doesn't have a_ops->migrate_page() implemented).
> > >
> > > So I think the restricted_memfd filesystem should own page migration handling
> > > (i.e. by implementing a_ops->migrate_page() to either just reject page migration
> > > or somehow support it).
> >
> > While this thread seems to be settled on refcounts already,
>
> I am not sure but will let Sean/Paolo to decide.

My preference is whatever is most performant without being hideous :-)

> > just wanted
> > to point out that it wouldn't be ideal to prevent migrations by
> > a_ops->migrate_page() rejecting them. It would mean cputime wasted (i.e.
> > by memory compaction) by isolating the pages for migration and then
> > releasing them after the callback rejects it (at least we wouldn't waste
> > time creating and undoing migration entries in the userspace page tables
> > as there's no mmap). Elevated refcount on the other hand is detected
> > very early in compaction so no isolation is attempted, so from that
> > aspect it's optimal.
>
> I am probably missing something,

Heh, me too, I could have sworn that using refcounts was the least efficient way to block migration.

> but IIUC the checking of refcount happens at very last stage of page migration too