Date: Wed, 13 Jul 2022 15:44:58 +0800
From: Chao Peng
To: "Gupta, Pankaj"
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-api@vger.kernel.org,
 linux-doc@vger.kernel.org, qemu-devel@nongnu.org,
 linux-kselftest@vger.kernel.org, Paolo Bonzini, Jonathan Corbet,
 Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
 Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
 x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton,
 "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
 Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve,
 Yu Zhang, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com,
 dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com,
 aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com,
 Quentin Perret, Michael Roth, mhocko@suse.com, Muchun Song
Subject: Re: [PATCH v7 04/14] mm/shmem: Support memfile_notifier
Message-ID: <20220713074458.GB2831541@chaop.bj.intel.com>
References: <20220706082016.2603916-1-chao.p.peng@linux.intel.com>
 <20220706082016.2603916-5-chao.p.peng@linux.intel.com>

On Tue, Jul 12, 2022 at 08:02:34PM +0200, Gupta, Pankaj wrote:
> On 7/6/2022 10:20 AM, Chao Peng wrote:
> > From: "Kirill A. Shutemov"
> >
> > Implement shmem as a memfile_notifier backing store.
> > Essentially it
> > interacts with the memfile_notifier feature flags for userspace
> > access/page migration/page reclaiming and implements the necessary
> > memfile_backing_store callbacks.
> >
> > Signed-off-by: Kirill A. Shutemov
> > Signed-off-by: Chao Peng
> > ---
> >  include/linux/shmem_fs.h |   2 +
> >  mm/shmem.c               | 109 ++++++++++++++++++++++++++++++++++++++-
> >  2 files changed, 110 insertions(+), 1 deletion(-)

...

> > +#ifdef CONFIG_MIGRATION
> > +static int shmem_migrate_page(struct address_space *mapping,
> > +			      struct page *newpage, struct page *page,
> > +			      enum migrate_mode mode)
> > +{
> > +	struct inode *inode = mapping->host;
> > +	struct shmem_inode_info *info = SHMEM_I(inode);
> > +
> > +	if (info->memfile_node.flags & MEMFILE_F_UNMOVABLE)
> > +		return -EOPNOTSUPP;
> > +	return migrate_page(mapping, newpage, page, mode);
>
> Wondering how well page migrate would work for private pages
> on shmem memfd based backend?

From a high level:

- KVM unsets the MEMFILE_F_UNMOVABLE bit to indicate it is capable of
  migrating a page.
- Introduce new 'migrate' callback(s) to memfile_notifier_ops for KVM
  to register.
- The callback is hooked to migrate_page() here.
- Once page migration is requested, shmem calls into the 'migrate'
  callback(s) to perform additional steps for encrypted memory (for
  TDX we will call TDH.MEM.PAGE.RELOCATE).

A rough sketch of this flow is appended at the end of this mail.

Chao

>
> > +}
> > +#endif
> > +
> >  const struct address_space_operations shmem_aops = {
> >  	.writepage	= shmem_writepage,
> >  	.dirty_folio	= noop_dirty_folio,
> > @@ -3814,7 +3872,7 @@ const struct address_space_operations shmem_aops = {
> >  	.write_end	= shmem_write_end,
> >  #endif
> >  #ifdef CONFIG_MIGRATION
> > -	.migratepage	= migrate_page,
> > +	.migratepage	= shmem_migrate_page,
> >  #endif
> >  	.error_remove_page = shmem_error_remove_page,
> >  };
> > @@ -3931,6 +3989,51 @@ static struct file_system_type shmem_fs_type = {
> >  	.fs_flags	= FS_USERNS_MOUNT,
> >  };
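
For illustration, here is a minimal sketch of how the steps above could
fit together. Note this is not part of the posted series: the 'migrate'
member of memfile_notifier_ops and the memfile_notifier_migrate() helper
are hypothetical names, and the TDX-specific work (TDH.MEM.PAGE.RELOCATE)
would live entirely behind the callback that KVM registers.

struct memfile_notifier_ops {
	/* ... existing callbacks ... */

	/*
	 * Hypothetical: called before shmem migrates a backing page,
	 * so the owner (KVM) can relocate the encrypted contents
	 * first (e.g. via TDH.MEM.PAGE.RELOCATE on TDX).
	 */
	int (*migrate)(struct memfile_notifier *notifier,
		       struct page *newpage, struct page *page,
		       enum migrate_mode mode);
};

static int shmem_migrate_page(struct address_space *mapping,
			      struct page *newpage, struct page *page,
			      enum migrate_mode mode)
{
	struct inode *inode = mapping->host;
	struct shmem_inode_info *info = SHMEM_I(inode);
	int ret;

	/* Owner never cleared MEMFILE_F_UNMOVABLE: page stays put. */
	if (info->memfile_node.flags & MEMFILE_F_UNMOVABLE)
		return -EOPNOTSUPP;

	/*
	 * Hypothetical helper: walk the notifiers registered on this
	 * inode and invoke their 'migrate' callbacks, failing the
	 * migration if any of them cannot relocate the page.
	 */
	ret = memfile_notifier_migrate(&info->memfile_node, newpage,
				       page, mode);
	if (ret)
		return ret;

	return migrate_page(mapping, newpage, page, mode);
}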