From: Sean Christopherson <seanjc@google.com>
To: Marc Zyngier <maz@kernel.org>,
Huacai Chen <chenhuacai@kernel.org>,
Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
Paul Mackerras <paulus@ozlabs.org>,
Paolo Bonzini <pbonzini@redhat.com>
Cc: James Morse <james.morse@arm.com>,
Julien Thierry <julien.thierry.kdev@gmail.com>,
Suzuki K Poulose <suzuki.poulose@arm.com>,
Sean Christopherson <seanjc@google.com>,
Vitaly Kuznetsov <vkuznets@redhat.com>,
Wanpeng Li <wanpengli@tencent.com>,
Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
linux-arm-kernel@lists.infradead.org,
kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
kvm@vger.kernel.org, kvm-ppc@vger.kernel.org,
linux-kernel@vger.kernel.org, Ben Gardon <bgardon@google.com>
Subject: [PATCH v2 00/10] KVM: Consolidate and optimize MMU notifiers
Date: Thu, 1 Apr 2021 17:56:48 -0700 [thread overview]
Message-ID: <20210402005658.3024832-1-seanjc@google.com> (raw)

The end goal of this series is to optimize the MMU notifiers to take
mmu_lock if and only if the notification is relevant to KVM, i.e. the hva
range overlaps a memslot. Large VMs (hundreds of vCPUs) are very
sensitive to mmu_lock being taken for write at inopportune times, and
such VMs also tend to be "static", e.g. backed by HugeTLB with minimal
page shenanigans. The vast majority of notifications for these VMs will
be spurious (for KVM), and eliding mmu_lock for spurious notifications
avoids an otherwise unacceptable disruption to the guest.

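The gist of the elision can be sketched in a handful of lines. The struct, function name, and PAGE_SHIFT value below are simplified stand-ins for KVM's real memslot machinery (kvm_for_each_memslot() over struct kvm_memory_slot), not its actual API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SHIFT 12	/* assumes 4KiB pages */

/* Simplified stand-in for struct kvm_memory_slot: an hva range backing guest memory. */
struct memslot {
	unsigned long userspace_addr;	/* hva of the slot's first page */
	unsigned long npages;
};

/*
 * Return true if the hva range [start, end) overlaps any memslot.  The
 * notifier handler only needs to take mmu_lock when this returns true; a
 * miss means the notification cannot affect guest mappings.
 */
static bool hva_range_overlaps_memslot(const struct memslot *slots, size_t nr,
				       unsigned long start, unsigned long end)
{
	for (size_t i = 0; i < nr; i++) {
		unsigned long hva_start = slots[i].userspace_addr;
		unsigned long hva_end = hva_start + (slots[i].npages << PAGE_SHIFT);

		if (start < hva_end && end > hva_start)
			return true;
	}
	return false;
}
```

A spurious notification, e.g. for a VMA that doesn't back guest memory, then returns without ever touching mmu_lock.
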
To get there without potentially degrading performance, e.g. due to
multiple memslot lookups, especially on non-x86 where the use cases are
largely unknown (from my perspective), first consolidate the MMU notifier
logic by moving the hva->gfn lookups into common KVM.

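The hva->gfn translation being hoisted into common code is simple arithmetic once the overlapping slot is known; this mirrors KVM's hva_to_gfn_memslot(), though the struct and field set below are a trimmed-down illustration rather than the kernel's definitions:

```c
#include <assert.h>

#define PAGE_SHIFT 12	/* assumes 4KiB pages */

/* Trimmed-down stand-in for the struct kvm_memory_slot fields used here. */
struct memslot {
	unsigned long long base_gfn;	/* first guest frame number of the slot */
	unsigned long userspace_addr;	/* hva backing base_gfn */
	unsigned long npages;
};

/* Offset of @hva within the slot, in pages, added to the slot's base gfn. */
static unsigned long long hva_to_gfn(const struct memslot *slot, unsigned long hva)
{
	return slot->base_gfn + ((hva - slot->userspace_addr) >> PAGE_SHIFT);
}
```

With the slot lookup and this translation living in common code, each architecture only has to supply a gfn-based callback instead of re-implementing the hva walk.
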
Based on kvm/queue, commit 5f986f748438 ("KVM: x86: dump_vmcs should
include the autoload/autostore MSR lists").

Well tested on Intel and AMD. Compile tested for arm64, MIPS, PPC,
PPC e500, and s390. Absolutely needs to be tested for real on non-x86,
I give it even odds that I introduced an off-by-one bug somewhere.

v2:
- Drop the patches that have already been pushed to kvm/queue.
- Drop two selftest changes that had snuck in via "git commit -a".
- Add a patch to assert that mmu_notifier_count is elevated when
.change_pte() runs. [Paolo]
- Split out moving KVM_MMU_(UN)LOCK() to __kvm_handle_hva_range() to a
separate patch. Opted not to squash it with the introduction of the
common hva walkers (patch 02), as that prevented sharing code between
the old and new APIs. [Paolo]
- Tweak the comment in kvm_vm_destroy() above the smashing of the new
slots lock. [Paolo]
- Make mmu_notifier_slots_lock unconditional to avoid #ifdefs. [Paolo]

v1:
- https://lkml.kernel.org/r/20210326021957.1424875-1-seanjc@google.com

Sean Christopherson (10):
KVM: Assert that notifier count is elevated in .change_pte()
KVM: Move x86's MMU notifier memslot walkers to generic code
KVM: arm64: Convert to the gfn-based MMU notifier callbacks
KVM: MIPS/MMU: Convert to the gfn-based MMU notifier callbacks
KVM: PPC: Convert to the gfn-based MMU notifier callbacks
KVM: Kill off the old hva-based MMU notifier callbacks
KVM: Move MMU notifier's mmu_lock acquisition into common helper
KVM: Take mmu_lock when handling MMU notifier iff the hva hits a
memslot
KVM: Don't take mmu_lock for range invalidation unless necessary
KVM: x86/mmu: Allow yielding during MMU notifier unmap/zap, if
possible

arch/arm64/kvm/mmu.c                   | 117 +++------
arch/mips/kvm/mmu.c                    |  97 ++------
arch/powerpc/include/asm/kvm_book3s.h  |  12 +-
arch/powerpc/include/asm/kvm_ppc.h     |   9 +-
arch/powerpc/kvm/book3s.c              |  18 +-
arch/powerpc/kvm/book3s.h              |  10 +-
arch/powerpc/kvm/book3s_64_mmu_hv.c    |  98 ++------
arch/powerpc/kvm/book3s_64_mmu_radix.c |  25 +-
arch/powerpc/kvm/book3s_hv.c           |  12 +-
arch/powerpc/kvm/book3s_pr.c           |  56 ++---
arch/powerpc/kvm/e500_mmu_host.c       |  27 +-
arch/x86/kvm/mmu/mmu.c                 | 127 ++++------
arch/x86/kvm/mmu/tdp_mmu.c             | 245 +++++++------------
arch/x86/kvm/mmu/tdp_mmu.h             |  14 +-
include/linux/kvm_host.h               |  22 +-
virt/kvm/kvm_main.c                    | 325 +++++++++++++++++++------
16 files changed, 552 insertions(+), 662 deletions(-)
--
2.31.0.208.g409f899ff0-goog
Thread overview: 84+ messages
2021-04-02  0:56 Sean Christopherson [this message]
2021-04-02  0:56 ` [PATCH v2 01/10] KVM: Assert that notifier count is elevated in .change_pte() Sean Christopherson
2021-04-02 11:08   ` Paolo Bonzini
2021-04-02  0:56 ` [PATCH v2 02/10] KVM: Move x86's MMU notifier memslot walkers to generic code Sean Christopherson
2021-04-02  0:56 ` [PATCH v2 03/10] KVM: arm64: Convert to the gfn-based MMU notifier callbacks Sean Christopherson
2021-04-12 10:12   ` Marc Zyngier
2021-04-02  0:56 ` [PATCH v2 04/10] KVM: MIPS/MMU: " Sean Christopherson
2021-04-02  0:56 ` [PATCH v2 05/10] KVM: PPC: " Sean Christopherson
2021-04-02  0:56 ` [PATCH v2 06/10] KVM: Kill off the old hva-based " Sean Christopherson
2021-04-02  0:56 ` [PATCH v2 07/10] KVM: Move MMU notifier's mmu_lock acquisition into common helper Sean Christopherson
2021-04-02  9:35   ` Paolo Bonzini
2021-04-02 14:59     ` Sean Christopherson
2021-04-02  0:56 ` [PATCH v2 08/10] KVM: Take mmu_lock when handling MMU notifier iff the hva hits a memslot Sean Christopherson
2021-04-02  0:56 ` [PATCH v2 09/10] KVM: Don't take mmu_lock for range invalidation unless necessary Sean Christopherson
2021-04-02  9:34   ` Paolo Bonzini
2021-04-02 14:59     ` Sean Christopherson
2021-04-19  8:49   ` Wanpeng Li
2021-04-19 13:50     ` Paolo Bonzini
2021-04-19 15:09       ` Sean Christopherson
2021-04-19 22:09         ` Paolo Bonzini
2021-04-20  1:17           ` Sean Christopherson
2021-04-02  0:56 ` [PATCH v2 10/10] KVM: x86/mmu: Allow yielding during MMU notifier unmap/zap, if possible Sean Christopherson
2021-04-02 12:17 ` [PATCH v2 00/10] KVM: Consolidate and optimize MMU notifiers Paolo Bonzini
2021-04-12 10:27 ` Marc Zyngier