From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 90C3CC32771 for ; Mon, 26 Sep 2022 16:01:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S235019AbiIZQBZ (ORCPT ); Mon, 26 Sep 2022 12:01:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42148 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234061AbiIZQAj (ORCPT ); Mon, 26 Sep 2022 12:00:39 -0400 Received: from new2-smtp.messagingengine.com (new2-smtp.messagingengine.com [66.111.4.224]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 632F45FAB; Mon, 26 Sep 2022 07:49:02 -0700 (PDT) Received: from compute1.internal (compute1.nyi.internal [10.202.2.41]) by mailnew.nyi.internal (Postfix) with ESMTP id 032D7580B47; Mon, 26 Sep 2022 10:49:00 -0400 (EDT) Received: from mailfrontend1 ([10.202.2.162]) by compute1.internal (MEProxy); Mon, 26 Sep 2022 10:49:00 -0400 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov.name; h=cc:cc:content-type:date:date:from:from:in-reply-to :in-reply-to:message-id:mime-version:references:reply-to:sender :subject:subject:to:to; s=fm1; t=1664203739; x=1664210939; bh=hj yaiyVncGPyAcE40V9D6BFeUzHrJx4Z5KsQEWvTx6U=; b=BLkeAJVrmw7qkBJfm/ r53DCtBwngiqsuCp5gakhvZnWCECk6zFydkZLbd6I8KxKeonxY3Gm2ku4yAZq3DX mPH+dIkbEqtuCewZbPVPywT1t0sXTaj/vq5aWDsA3hnT3F2YHzOqpGx2urUlL2EE U/hsjExpJ6BXKte28MPI5f1X6kuOVEYyYkbgR5w4fjqPJgjr0ryrQrGWgyK8slPz V/GfWGk7sO7BfhbizEnnaZSoWp2rG5Ulecm6xMUgTQ/3Ts/kUndBbbKfd96QtbpK y35I54Zb50espMcXYT+wIyCvf2tgXT1nBc1qW+qbQ7b9KMnF2w45hTyMQ89w4C0O Z9XA== DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d= messagingengine.com; h=cc:cc:content-type:date:date:feedback-id :feedback-id:from:from:in-reply-to:in-reply-to:message-id :mime-version:references:reply-to:sender:subject:subject:to:to :x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s= fm2; t=1664203739; x=1664210939; bh=hjyaiyVncGPyAcE40V9D6BFeUzHr Jx4Z5KsQEWvTx6U=; b=BaxqKASgIiPw7rncxCNijATsatPiyiSIoD2+I/65CLct UspOGb5FxKqzGRfzDSh/1qeoclfaerzVKVPkO7dTdAv6IAGsbKNmy/SPFTmF7jhG Exx30tbN5Diyf5+E0W3bbdGQCcMN7mN/w6o2fRRAfCHlkk8Xue3EdsTbGmzJ62gQ gYN7Xpcg3R0YPsq5pkPdB0omh+B0Bunz5x5wCqzDQx8hectKi6K7L8H2ynFFDGWH VoDwHTyQpwEIBYhcy/LGZUvL4O6aToBkJ3zMlQsX5z51hh8mbTuDd5nIXCjXUStv nChvh3skGQpP75NdIiZ8rjn+N8NVQM/ev7r6o/yLaA== X-ME-Sender: X-ME-Received: X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvfedrfeegvddgkeduucetufdoteggodetrfdotf fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne cujfgurhepfffhvfevuffkfhggtggujgesthdttddttddtvdenucfhrhhomhepfdfmihhr ihhllhcutedrucfuhhhuthgvmhhovhdfuceokhhirhhilhhlsehshhhuthgvmhhovhdrnh grmhgvqeenucggtffrrghtthgvrhhnpeelgffhfeetlefhveffleevfffgtefffeelfedu udfhjeduteeggfeiheefteehjeenucffohhmrghinhepkhgvrhhnvghlrdhorhhgnecuve hluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepkhhirhhilhhl sehshhhuthgvmhhovhdrnhgrmhgv X-ME-Proxy: Feedback-ID: ie3994620:Fastmail Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon, 26 Sep 2022 10:48:57 -0400 (EDT) Received: by box.shutemov.name (Postfix, from userid 1000) id 03E28104928; Mon, 26 Sep 2022 17:48:54 +0300 (+03) Date: Mon, 26 Sep 2022 17:48:54 +0300 From: "Kirill A. Shutemov" To: David Hildenbrand Cc: "Kirill A . Shutemov" , Paolo Bonzini , Sean Christopherson , Chao Peng , kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org, Jonathan Corbet , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: Re: [PATCH v8 1/8] mm/memfd: Introduce userspace inaccessible memfd Message-ID: <20220926144854.dyiacztlpx4fkjs5@box.shutemov.name> References: <20220915142913.2213336-1-chao.p.peng@linux.intel.com> <20220915142913.2213336-2-chao.p.peng@linux.intel.com> <20220923005808.vfltoecttoatgw5o@box.shutemov.name> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Sep 26, 2022 at 12:35:34PM +0200, David Hildenbrand wrote: > On 23.09.22 02:58, Kirill A . Shutemov wrote: > > On Mon, Sep 19, 2022 at 11:12:46AM +0200, David Hildenbrand wrote: > > > > diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h > > > > index 6325d1d0e90f..9d066be3d7e8 100644 > > > > --- a/include/uapi/linux/magic.h > > > > +++ b/include/uapi/linux/magic.h > > > > @@ -101,5 +101,6 @@ > > > > #define DMA_BUF_MAGIC 0x444d4142 /* "DMAB" */ > > > > #define DEVMEM_MAGIC 0x454d444d /* "DMEM" */ > > > > #define SECRETMEM_MAGIC 0x5345434d /* "SECM" */ > > > > +#define INACCESSIBLE_MAGIC 0x494e4143 /* "INAC" */ > > > > > > > > > [...] > > > > > > > + > > > > +int inaccessible_get_pfn(struct file *file, pgoff_t offset, pfn_t *pfn, > > > > + int *order) > > > > +{ > > > > + struct inaccessible_data *data = file->f_mapping->private_data; > > > > + struct file *memfd = data->memfd; > > > > + struct page *page; > > > > + int ret; > > > > + > > > > + ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE); > > > > + if (ret) > > > > + return ret; > > > > + > > > > + *pfn = page_to_pfn_t(page); > > > > + *order = thp_order(compound_head(page)); > > > > + SetPageUptodate(page); > > > > + unlock_page(page); > > > > + > > > > + return 0; > > > > +} > > > > +EXPORT_SYMBOL_GPL(inaccessible_get_pfn); > > > > + > > > > +void inaccessible_put_pfn(struct file *file, pfn_t pfn) > > > > +{ > > > > + struct page *page = pfn_t_to_page(pfn); > > > > + > > > > + if (WARN_ON_ONCE(!page)) > > > > + return; > > > > + > > > > + put_page(page); > > > > +} > > > > +EXPORT_SYMBOL_GPL(inaccessible_put_pfn); > > > > > > Sorry, I missed your reply regarding get/put interface. > > > > > > https://lore.kernel.org/linux-mm/20220810092532.GD862421@chaop.bj.intel.com/ > > > > > > "We have a design assumption that somedays this can even support non-page > > > based backing stores." > > > > > > As long as there is no such user in sight (especially how to get the memfd > > > from even allocating such memory which will require bigger changes), I > > > prefer to keep it simple here and work on pages/folios. No need to > > > over-complicate it for now. > > > > Sean, Paolo , what is your take on this? Do you have conrete use case of > > pageless backend for the mechanism in sight? Maybe DAX? > > The problem I'm having with this is how to actually get such memory into the > memory backend (that triggers notifiers) and what the semantics are at all > with memory that is not managed by the buddy. > > memfd with fixed PFNs doesn't make too much sense. What do you mean by "fixed PFN". It is as fixed as struct page/folio, no? PFN covers more possible backends. > When using DAX, what happens with the shared <->private conversion? Which > "type" is supposed to use dax, which not? > > In other word, I'm missing too many details on the bigger picture of how > this would work at all to see why it makes sense right now to prepare for > that. IIUC, KVM doesn't really care about pages or folios. They need PFN to populate SEPT. Returning page/folio would make KVM do additional steps to extract PFN and one more place to have a bug. -- Kiryl Shutsemau / Kirill A. Shutemov