From: Chao Peng <chao.p.peng@linux.intel.com>
To: Mike Rapoport <rppt@kernel.org>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
Sean Christopherson <seanjc@google.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
x86@kernel.org, "H . Peter Anvin" <hpa@zytor.com>,
Hugh Dickins <hughd@google.com>, Jeff Layton <jlayton@kernel.org>,
"J . Bruce Fields" <bfields@fieldses.org>,
Andrew Morton <akpm@linux-foundation.org>,
Yu Zhang <yu.c.zhang@linux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com,
ak@linux.intel.com, david@redhat.com
Subject: Re: [PATCH v4 04/12] mm/shmem: Support memfile_notifier
Date: Thu, 17 Feb 2022 21:10:36 +0800 [thread overview]
Message-ID: <20220217131036.GC32679@chaop.bj.intel.com> (raw)
In-Reply-To: <YgK2pDB34AsqCHd0@kernel.org>
On Tue, Feb 08, 2022 at 08:29:56PM +0200, Mike Rapoport wrote:
> Hi,
>
> On Tue, Jan 18, 2022 at 09:21:13PM +0800, Chao Peng wrote:
> > It maintains a memfile_notifier list in the shmem_inode_info structure
> > and implements the memfile_pfn_ops callbacks defined by
> > memfile_notifier. It then exposes them to memfile_notifier via
> > shmem_get_memfile_notifier_info.
> >
> > We use SGP_NOALLOC in shmem_get_lock_pfn since the pages should be
> > allocated by userspace for private memory. If no page is allocated at
> > the offset, an error should be returned so KVM knows that the memory
> > is not private memory.
> >
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> > ---
> > include/linux/shmem_fs.h | 4 ++
> > mm/memfile_notifier.c | 12 +++++-
> > mm/shmem.c | 81 ++++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 96 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> > index 166158b6e917..461633587eaf 100644
> > --- a/include/linux/shmem_fs.h
> > +++ b/include/linux/shmem_fs.h
> > @@ -9,6 +9,7 @@
> > #include <linux/percpu_counter.h>
> > #include <linux/xattr.h>
> > #include <linux/fs_parser.h>
> > +#include <linux/memfile_notifier.h>
> >
> > /* inode in-kernel data */
> >
> > @@ -24,6 +25,9 @@ struct shmem_inode_info {
> > struct shared_policy policy; /* NUMA memory alloc policy */
> > struct simple_xattrs xattrs; /* list of xattrs */
> > atomic_t stop_eviction; /* hold when working on inode */
> > +#ifdef CONFIG_MEMFILE_NOTIFIER
> > + struct memfile_notifier_list memfile_notifiers;
> > +#endif
> > struct inode vfs_inode;
> > };
> >
> > diff --git a/mm/memfile_notifier.c b/mm/memfile_notifier.c
> > index 8171d4601a04..b4699cbf629e 100644
> > --- a/mm/memfile_notifier.c
> > +++ b/mm/memfile_notifier.c
> > @@ -41,11 +41,21 @@ void memfile_notifier_fallocate(struct memfile_notifier_list *list,
> > srcu_read_unlock(&srcu, id);
> > }
> >
> > +#ifdef CONFIG_SHMEM
> > +extern int shmem_get_memfile_notifier_info(struct inode *inode,
> > + struct memfile_notifier_list **list,
> > + struct memfile_pfn_ops **ops);
> > +#endif
> > +
> > static int memfile_get_notifier_info(struct inode *inode,
> > struct memfile_notifier_list **list,
> > struct memfile_pfn_ops **ops)
> > {
> > - return -EOPNOTSUPP;
> > + int ret = -EOPNOTSUPP;
> > +#ifdef CONFIG_SHMEM
> > + ret = shmem_get_memfile_notifier_info(inode, list, ops);
> > +#endif
>
> This looks backwards. Can we have some register method for memory backing
> store and call it from shmem.c?
Agreed. That would be clearer.
Chao
>
> > + return ret;
> > }
> >
> > int memfile_register_notifier(struct inode *inode,
>
> --
> Sincerely yours,
> Mike.