Date: Tue, 8 Feb 2022 20:29:56 +0200
From: Mike Rapoport <rppt@kernel.org>
To: Chao Peng <chao.p.peng@linux.intel.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org, Paolo Bonzini,
	Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li,
	Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
	Jeff Layton, "J. Bruce Fields", Andrew Morton, Yu Zhang,
	"Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com,
	dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com
Subject: Re: [PATCH v4 04/12] mm/shmem: Support memfile_notifier
References: <20220118132121.31388-1-chao.p.peng@linux.intel.com>
 <20220118132121.31388-5-chao.p.peng@linux.intel.com>
In-Reply-To: <20220118132121.31388-5-chao.p.peng@linux.intel.com>

Hi,

On Tue, Jan 18, 2022 at 09:21:13PM +0800, Chao Peng wrote:
> It maintains a memfile_notifier list in shmem_inode_info structure and
> implements memfile_pfn_ops callbacks defined by memfile_notifier. It
> then exposes them to memfile_notifier via
> shmem_get_memfile_notifier_info.
>
> We use SGP_NOALLOC in shmem_get_lock_pfn since the pages should be
> allocated by userspace for private memory. If there are no pages
> allocated at the offset then an error should be returned so KVM knows
> that the memory is not private memory.
>
> Signed-off-by: Kirill A. Shutemov
> Signed-off-by: Chao Peng
> ---
>  include/linux/shmem_fs.h |  4 ++
>  mm/memfile_notifier.c    | 12 +++++-
>  mm/shmem.c               | 81 ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 96 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 166158b6e917..461633587eaf 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -9,6 +9,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  /* inode in-kernel data */
>
> @@ -24,6 +25,9 @@ struct shmem_inode_info {
> 	struct shared_policy policy;		/* NUMA memory alloc policy */
> 	struct simple_xattrs xattrs;		/* list of xattrs */
> 	atomic_t stop_eviction;			/* hold when working on inode */
> +#ifdef CONFIG_MEMFILE_NOTIFIER
> +	struct memfile_notifier_list memfile_notifiers;
> +#endif
> 	struct inode vfs_inode;
>  };
>
> diff --git a/mm/memfile_notifier.c b/mm/memfile_notifier.c
> index 8171d4601a04..b4699cbf629e 100644
> --- a/mm/memfile_notifier.c
> +++ b/mm/memfile_notifier.c
> @@ -41,11 +41,21 @@ void memfile_notifier_fallocate(struct memfile_notifier_list *list,
> 	srcu_read_unlock(&srcu, id);
>  }
>
> +#ifdef CONFIG_SHMEM
> +extern int shmem_get_memfile_notifier_info(struct inode *inode,
> +					   struct memfile_notifier_list **list,
> +					   struct memfile_pfn_ops **ops);
> +#endif
> +
>  static int memfile_get_notifier_info(struct inode *inode,
> 				     struct memfile_notifier_list **list,
> 				     struct memfile_pfn_ops **ops)
>  {
> -	return -EOPNOTSUPP;
> +	int ret = -EOPNOTSUPP;
> +#ifdef CONFIG_SHMEM
> +	ret = shmem_get_memfile_notifier_info(inode,
> 					      list, ops);
> +#endif

This looks backwards. Can we have some register method for the memory
backing store and call it from shmem.c?

> +	return ret;
>  }
>
>  int memfile_register_notifier(struct inode *inode,

-- 
Sincerely yours,
Mike.