From: Chao Peng <chao.p.peng@linux.intel.com>
To: "Gupta, Pankaj" <pankaj.gupta@amd.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
linux-api@vger.kernel.org, linux-doc@vger.kernel.org,
qemu-devel@nongnu.org, linux-kselftest@vger.kernel.org,
Paolo Bonzini <pbonzini@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
Sean Christopherson <seanjc@google.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
x86@kernel.org, "H . Peter Anvin" <hpa@zytor.com>,
Hugh Dickins <hughd@google.com>, Jeff Layton <jlayton@kernel.org>,
"J . Bruce Fields" <bfields@fieldses.org>,
Andrew Morton <akpm@linux-foundation.org>,
Shuah Khan <shuah@kernel.org>, Mike Rapoport <rppt@kernel.org>,
Steven Price <steven.price@arm.com>,
"Maciej S . Szmigiero" <mail@maciej.szmigiero.name>,
Vlastimil Babka <vbabka@suse.cz>,
Vishal Annapurve <vannapurve@google.com>,
Yu Zhang <yu.c.zhang@linux.intel.com>,
"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com,
ak@linux.intel.com, david@redhat.com, aarcange@redhat.com,
ddutile@redhat.com, dhildenb@redhat.com,
Quentin Perret <qperret@google.com>,
Michael Roth <michael.roth@amd.com>,
mhocko@suse.com, Muchun Song <songmuchun@bytedance.com>
Subject: Re: [PATCH v7 07/14] KVM: Use gfn instead of hva for mmu_notifier_retry
Date: Mon, 18 Jul 2022 21:29:50 +0800 [thread overview]
Message-ID: <20220718132950.GA38104@chaop.bj.intel.com> (raw)
In-Reply-To: <d480a850-601b-cda2-b671-04d839c98429@amd.com>
On Fri, Jul 15, 2022 at 01:36:15PM +0200, Gupta, Pankaj wrote:
> > Currently in mmu_notifier validate path, hva range is recorded and then
> > checked in the mmu_notifier_retry_hva() from page fault path. However
> > for the to be introduced private memory, a page fault may not have a hva
>
> As this patch appeared in v7, just wondering did you see an actual bug
> because of it? And not having corresponding 'hva' occurs only with private
> memory because it's not mapped to host userspace?
The problem being addressed is not new in this version; previous
versions also had code to handle it (just in a different way). The
problem is: mmu_notifier/memfile_notifier may be in the middle of
invalidating a pfn that was obtained earlier in the page fault handler;
when that happens, we should retry the fault. In v6 I used the global
mmu_notifier_retry() for memfile_notifier, but that can block unrelated
mmu_notifier invalidations which have an hva range specified.

Sean gave a comment at https://lkml.org/lkml/2022/6/17/1001 suggesting
that memfile_notifier be separated from mmu_notifier, but during the
implementation I realized we can actually reuse the same code for
shared and private memory if both use a gpa range, and that simplifies
the handling in kvm_zap_gfn_range and some other code (e.g. we don't
need two versions for memfile_notifier/mmu_notifier).

Adding a gpa range for private memory invalidation also relieves the
above blocking issue between private memory page faults and
mmu_notifier.
Chao
>
> Thanks,
> Pankaj
>
> > associated, checking gfn(gpa) makes more sense. For existing non private
> > memory case, gfn is expected to continue to work.
> >
> > The patch also fixes a potential bug in kvm_zap_gfn_range() which has
> > already been using gfn when calling kvm_inc/dec_notifier_count() in
> > current code.
> >
> > Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> > ---
> > arch/x86/kvm/mmu/mmu.c | 2 +-
> > include/linux/kvm_host.h | 18 ++++++++----------
> > virt/kvm/kvm_main.c | 6 +++---
> > 3 files changed, 12 insertions(+), 14 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index f7fa4c31b7c5..0d882fad4bc1 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4182,7 +4182,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
> > return true;
> > return fault->slot &&
> > - mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva);
> > + mmu_notifier_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn);
> > }
> > static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 0bdb6044e316..e9153b54e2a4 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -767,8 +767,8 @@ struct kvm {
> > struct mmu_notifier mmu_notifier;
> > unsigned long mmu_notifier_seq;
> > long mmu_notifier_count;
> > - unsigned long mmu_notifier_range_start;
> > - unsigned long mmu_notifier_range_end;
> > + gfn_t mmu_notifier_range_start;
> > + gfn_t mmu_notifier_range_end;
> > #endif
> > struct list_head devices;
> > u64 manual_dirty_log_protect;
> > @@ -1362,10 +1362,8 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc);
> > void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc);
> > #endif
> > -void kvm_inc_notifier_count(struct kvm *kvm, unsigned long start,
> > - unsigned long end);
> > -void kvm_dec_notifier_count(struct kvm *kvm, unsigned long start,
> > - unsigned long end);
> > +void kvm_inc_notifier_count(struct kvm *kvm, gfn_t start, gfn_t end);
> > +void kvm_dec_notifier_count(struct kvm *kvm, gfn_t start, gfn_t end);
> > long kvm_arch_dev_ioctl(struct file *filp,
> > unsigned int ioctl, unsigned long arg);
> > @@ -1923,9 +1921,9 @@ static inline int mmu_notifier_retry(struct kvm *kvm, unsigned long mmu_seq)
> > return 0;
> > }
> > -static inline int mmu_notifier_retry_hva(struct kvm *kvm,
> > +static inline int mmu_notifier_retry_gfn(struct kvm *kvm,
> > unsigned long mmu_seq,
> > - unsigned long hva)
> > + gfn_t gfn)
> > {
> > lockdep_assert_held(&kvm->mmu_lock);
> > /*
> > @@ -1935,8 +1933,8 @@ static inline int mmu_notifier_retry_hva(struct kvm *kvm,
> > * positives, due to shortcuts when handing concurrent invalidations.
> > */
> > if (unlikely(kvm->mmu_notifier_count) &&
> > - hva >= kvm->mmu_notifier_range_start &&
> > - hva < kvm->mmu_notifier_range_end)
> > + gfn >= kvm->mmu_notifier_range_start &&
> > + gfn < kvm->mmu_notifier_range_end)
> > return 1;
> > if (kvm->mmu_notifier_seq != mmu_seq)
> > return 1;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index da263c370d00..4d7f0e72366f 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -536,8 +536,7 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
> > typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
> > -typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
> > - unsigned long end);
> > +typedef void (*on_lock_fn_t)(struct kvm *kvm, gfn_t start, gfn_t end);
> > typedef void (*on_unlock_fn_t)(struct kvm *kvm);
> > @@ -624,7 +623,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
> > locked = true;
> > KVM_MMU_LOCK(kvm);
> > if (!IS_KVM_NULL_FN(range->on_lock))
> > - range->on_lock(kvm, range->start, range->end);
> > + range->on_lock(kvm, gfn_range.start,
> > + gfn_range.end);
> > if (IS_KVM_NULL_FN(range->handler))
> > break;
> > }