* [PATCH 0/2] KVM: arm/arm64: Fix guest's PMR synchronization when blocking on WFI
@ 2019-08-02 10:37 Marc Zyngier
  2019-08-02 10:37 ` [PATCH 1/2] KVM: arm/arm64: Sync ICH_VMCR_EL2 back when about to block Marc Zyngier
  2019-08-02 10:37 ` [PATCH 2/2] KVM: Call kvm_arch_vcpu_blocking early into the blocking sequence Marc Zyngier
From: Marc Zyngier @ 2019-08-02 10:37 UTC
  To: Paolo Bonzini, Radim Krčmář,
	Julien Thierry, Suzuki K Poulose, James Morse, Joerg Roedel,
	Suravee Suthikulpanit, Tangnianyao
  Cc: kvmarm, kvm

It recently came to light that if we run a guest that actively uses
interrupt priorities to mask interrupts, vcpus can end up being
blocked when they shouldn't be, leading to an unresponsive guest (a
slightly less than desirable outcome).

Patch #1 fixes the issue (which has been with us since 4.12); I plan
to take it in for 5.3, with an immediate backport to stable.

Patch #2 is more of an RFC, as it also impacts the SVM AVIC support. It
moves the kvm_arch_vcpu_blocking callback to happen earlier, leading to
much better performance on ARM and allowing the above fix to be
applied at the best possible spot. I'd welcome any comments/testing on
this, especially on non-ARM systems.

Marc Zyngier (2):
  KVM: arm/arm64: Sync ICH_VMCR_EL2 back when about to block
  KVM: Call kvm_arch_vcpu_blocking early into the blocking sequence

 include/kvm/arm_vgic.h      |  1 +
 virt/kvm/arm/arm.c          | 11 +++++++++++
 virt/kvm/arm/vgic/vgic-v2.c |  9 ++++++++-
 virt/kvm/arm/vgic/vgic-v3.c |  7 ++++++-
 virt/kvm/arm/vgic/vgic.c    | 11 +++++++++++
 virt/kvm/arm/vgic/vgic.h    |  2 ++
 virt/kvm/kvm_main.c         |  7 +++----
 7 files changed, 42 insertions(+), 6 deletions(-)

-- 
2.20.1

* [PATCH 1/2] KVM: arm/arm64: Sync ICH_VMCR_EL2 back when about to block
  2019-08-02 10:37 [PATCH 0/2] KVM: arm/arm64: Fix guest's PMR synchronization when blocking on WFI Marc Zyngier
@ 2019-08-02 10:37 ` Marc Zyngier
  2019-08-02 10:37 ` [PATCH 2/2] KVM: Call kvm_arch_vcpu_blocking early into the blocking sequence Marc Zyngier
From: Marc Zyngier @ 2019-08-02 10:37 UTC
  To: Paolo Bonzini, Radim Krčmář,
	Julien Thierry, Suzuki K Poulose, James Morse, Joerg Roedel,
	Suravee Suthikulpanit, Tangnianyao
  Cc: kvmarm, kvm

Since commit 328e56647944 ("KVM: arm/arm64: vgic: Defer touching
GICH_VMCR to vcpu_load/put"), we leave ICH_VMCR_EL2 (or its GICv2
equivalent) loaded for as long as we can, only syncing it back when
we're scheduled out.

There is a small snag with that, though: kvm_vgic_vcpu_pending_irq(),
which is indirectly called from kvm_vcpu_check_block(), needs to
evaluate the guest's view of ICC_PMR_EL1. At the point where we call
kvm_vcpu_check_block(), the vcpu is still loaded, and any change to
PMR is not visible in memory until we do a vcpu_put().

Things go really south if the guest does the following:

	mov x0, #0	// or any small value masking interrupts
	msr ICC_PMR_EL1, x0

	[vcpu preempted, then rescheduled, VMCR sampled]

	mov x0, #0xff	// allow all interrupts
	msr ICC_PMR_EL1, x0
	wfi		// traps to EL2, no sampling of VMCR

	[interrupt arrives just after WFI]

Here, the hypervisor's view of PMR is zero, while the guest has enabled
its interrupts. kvm_vgic_vcpu_pending_irq() will then say that no
interrupts are pending (despite an interrupt being received) and we'll
block for no reason. If the guest doesn't have a periodic interrupt
firing once it has blocked, it will stay there forever.
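
To make the failure mode concrete, here is a minimal sketch of the
comparison at the heart of the pending-interrupt check. This is not
the actual vgic code (the function name and signature are made up for
illustration), but the priority rule is the architectural GIC one:

	/*
	 * A GIC interrupt is only signalled if its priority is
	 * strictly higher (i.e. numerically lower) than the PMR.
	 * With a stale cached PMR of 0, no priority can satisfy
	 * this, so every interrupt looks masked and the vcpu
	 * blocks forever, as described above.
	 */
	static bool irq_can_fire(u8 irq_priority, u8 cached_pmr)
	{
		return irq_priority < cached_pmr;
	}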

To avoid this unfortunate situation, let's resync VMCR from
kvm_arch_vcpu_blocking(), ensuring that a subsequent
kvm_vcpu_check_block() will observe the latest value of PMR.
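
For reference, a simplified sketch of the resulting flow; the function
names are the kernel ones touched by the diff below, with everything
else elided:

	/*
	 * kvm_vcpu_block()
	 *   ... halt polling ...
	 *   kvm_arch_vcpu_blocking()
	 *     kvm_vgic_vmcr_sync()          // PMR now visible in memory
	 *   for (;;)
	 *     kvm_vcpu_check_block()
	 *       kvm_vgic_vcpu_pending_irq() // sees the guest's real PMR
	 */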

This was found by booting an arm64 Linux guest with the pseudo-NMI
feature enabled, which uses interrupt priorities to mask interrupts
instead of the usual PSTATE masking.

Cc: stable@vger.kernel.org # 4.12
Fixes: 328e56647944 ("KVM: arm/arm64: vgic: Defer touching GICH_VMCR to vcpu_load/put")
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 include/kvm/arm_vgic.h      |  1 +
 virt/kvm/arm/arm.c          | 11 +++++++++++
 virt/kvm/arm/vgic/vgic-v2.c |  9 ++++++++-
 virt/kvm/arm/vgic/vgic-v3.c |  7 ++++++-
 virt/kvm/arm/vgic/vgic.c    | 11 +++++++++++
 virt/kvm/arm/vgic/vgic.h    |  2 ++
 6 files changed, 39 insertions(+), 2 deletions(-)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 46bbc949c20a..7a30524a80ee 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -350,6 +350,7 @@ int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu);
 
 void kvm_vgic_load(struct kvm_vcpu *vcpu);
 void kvm_vgic_put(struct kvm_vcpu *vcpu);
+void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu);
 
 #define irqchip_in_kernel(k)	(!!((k)->arch.vgic.in_kernel))
 #define vgic_initialized(k)	((k)->arch.vgic.initialized)
diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index acc43242a310..d9a650bfaf22 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -323,6 +323,17 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
 
 void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
 {
+	/*
+	 * If we're about to block (most likely because we've just hit a
+	 * WFI), we need to sync back the state of the GIC CPU interface
+	 * so that we have the latest PMR and group enables. This ensures
+	 * that kvm_arch_vcpu_runnable has up-to-date data to decide
+	 * whether we have pending interrupts.
+	 */
+	preempt_disable();
+	kvm_vgic_vmcr_sync(vcpu);
+	preempt_enable();
+
 	kvm_vgic_v4_enable_doorbell(vcpu);
 }
 
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index 6dd5ad706c92..96aab77d0471 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -484,10 +484,17 @@ void vgic_v2_load(struct kvm_vcpu *vcpu)
 		       kvm_vgic_global_state.vctrl_base + GICH_APR);
 }
 
-void vgic_v2_put(struct kvm_vcpu *vcpu)
+void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
 
 	cpu_if->vgic_vmcr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VMCR);
+}
+
+void vgic_v2_put(struct kvm_vcpu *vcpu)
+{
+	struct vgic_v2_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v2;
+
+	vgic_v2_vmcr_sync(vcpu);
 	cpu_if->vgic_apr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_APR);
 }
diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
index c2c9ce009f63..0c653a1e5215 100644
--- a/virt/kvm/arm/vgic/vgic-v3.c
+++ b/virt/kvm/arm/vgic/vgic-v3.c
@@ -662,12 +662,17 @@ void vgic_v3_load(struct kvm_vcpu *vcpu)
 		__vgic_v3_activate_traps(vcpu);
 }
 
-void vgic_v3_put(struct kvm_vcpu *vcpu)
+void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu)
 {
 	struct vgic_v3_cpu_if *cpu_if = &vcpu->arch.vgic_cpu.vgic_v3;
 
 	if (likely(cpu_if->vgic_sre))
 		cpu_if->vgic_vmcr = kvm_call_hyp_ret(__vgic_v3_read_vmcr);
+}
+
+void vgic_v3_put(struct kvm_vcpu *vcpu)
+{
+	vgic_v3_vmcr_sync(vcpu);
 
 	kvm_call_hyp(__vgic_v3_save_aprs, vcpu);
 
diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
index 04786c8ec77e..13d4b38a94ec 100644
--- a/virt/kvm/arm/vgic/vgic.c
+++ b/virt/kvm/arm/vgic/vgic.c
@@ -919,6 +919,17 @@ void kvm_vgic_put(struct kvm_vcpu *vcpu)
 		vgic_v3_put(vcpu);
 }
 
+void kvm_vgic_vmcr_sync(struct kvm_vcpu *vcpu)
+{
+	if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+		return;
+
+	if (kvm_vgic_global_state.type == VGIC_V2)
+		vgic_v2_vmcr_sync(vcpu);
+	else
+		vgic_v3_vmcr_sync(vcpu);
+}
+
 int kvm_vgic_vcpu_pending_irq(struct kvm_vcpu *vcpu)
 {
 	struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
index 3b7525deec80..797e05004d80 100644
--- a/virt/kvm/arm/vgic/vgic.h
+++ b/virt/kvm/arm/vgic/vgic.h
@@ -193,6 +193,7 @@ int vgic_register_dist_iodev(struct kvm *kvm, gpa_t dist_base_address,
 void vgic_v2_init_lrs(void);
 void vgic_v2_load(struct kvm_vcpu *vcpu);
 void vgic_v2_put(struct kvm_vcpu *vcpu);
+void vgic_v2_vmcr_sync(struct kvm_vcpu *vcpu);
 
 void vgic_v2_save_state(struct kvm_vcpu *vcpu);
 void vgic_v2_restore_state(struct kvm_vcpu *vcpu);
@@ -223,6 +224,7 @@ bool vgic_v3_check_base(struct kvm *kvm);
 
 void vgic_v3_load(struct kvm_vcpu *vcpu);
 void vgic_v3_put(struct kvm_vcpu *vcpu);
+void vgic_v3_vmcr_sync(struct kvm_vcpu *vcpu);
 
 bool vgic_has_its(struct kvm *kvm);
 int kvm_vgic_register_its_device(void);
-- 
2.20.1

* [PATCH 2/2] KVM: Call kvm_arch_vcpu_blocking early into the blocking sequence
  2019-08-02 10:37 [PATCH 0/2] KVM: arm/arm64: Fix guest's PMR synchronization when blocking on WFI Marc Zyngier
  2019-08-02 10:37 ` [PATCH 1/2] KVM: arm/arm64: Sync ICH_VMCR_EL2 back when about to block Marc Zyngier
@ 2019-08-02 10:37 ` Marc Zyngier
  2019-08-02 10:46   ` Paolo Bonzini
From: Marc Zyngier @ 2019-08-02 10:37 UTC
  To: Paolo Bonzini, Radim Krčmář,
	Julien Thierry, Suzuki K Poulose, James Morse, Joerg Roedel,
	Suravee Suthikulpanit, Tangnianyao
  Cc: kvmarm, kvm

When a vcpu is about to block by calling kvm_vcpu_block(), we call
back into the arch code to allow any form of synchronization that
may be required at this point (SVM stops the AVIC, ARM synchronises
the VMCR and enables GICv4 doorbells). But this synchronization
comes in quite late, as we've potentially waited for halt_poll_ns
to expire.

Instead, let's move kvm_arch_vcpu_blocking() to the beginning of
kvm_vcpu_block(), which on ARM has several benefits (see the sketch
after this list):

- VMCR gets synchronised early, meaning that any interrupt delivered
  during the polling window will be evaluated with the correct guest
  PMR
- GICv4 doorbells are enabled, which means that any guest interrupt
  directly injected during that window will be immediately recognised
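
A minimal before/after sketch of the hook placement (heavily
simplified; only the ordering matters, and the real code is in the
diff below):

	void kvm_vcpu_block(struct kvm_vcpu *vcpu)
	{
		kvm_arch_vcpu_blocking(vcpu);	/* moved up by this patch */

		/* halt-poll window: kvm_vcpu_check_block() now runs
		 * with a synced VMCR and live GICv4 doorbells */

		/* swait loop, then fall through to: */
		kvm_arch_vcpu_unblocking(vcpu);	/* now also on the out: path */
	}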

Tang Nianyao ran some tests on a GICv4 machine to evaluate this
change, and reported up to a 10% improvement for netperf:

<quote>
	netperf result:
	D06 as server, intel 8180 server as client
	with change:
	package 512 bytes - 5500 Mbits/s
	package 64 bytes - 760 Mbits/s
	without change:
	package 512 bytes - 5000 Mbits/s
	package 64 bytes - 710 Mbits/s
</quote>

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 virt/kvm/kvm_main.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 887f3b0c2b60..90d429c703cb 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2322,6 +2322,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 	bool waited = false;
 	u64 block_ns;
 
+	kvm_arch_vcpu_blocking(vcpu);
+
 	start = cur = ktime_get();
 	if (vcpu->halt_poll_ns && !kvm_arch_no_poll(vcpu)) {
 		ktime_t stop = ktime_add_ns(ktime_get(), vcpu->halt_poll_ns);
@@ -2342,8 +2344,6 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 		} while (single_task_running() && ktime_before(cur, stop));
 	}
 
-	kvm_arch_vcpu_blocking(vcpu);
-
 	for (;;) {
 		prepare_to_swait_exclusive(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
 
@@ -2356,9 +2356,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
 
 	finish_swait(&vcpu->wq, &wait);
 	cur = ktime_get();
-
-	kvm_arch_vcpu_unblocking(vcpu);
 out:
+	kvm_arch_vcpu_unblocking(vcpu);
 	block_ns = ktime_to_ns(cur) - ktime_to_ns(start);
 
 	if (!vcpu_valid_wakeup(vcpu))
-- 
2.20.1

* Re: [PATCH 2/2] KVM: Call kvm_arch_vcpu_blocking early into the blocking sequence
  2019-08-02 10:37 ` [PATCH 2/2] KVM: Call kvm_arch_vcpu_blocking early into the blocking sequence Marc Zyngier
@ 2019-08-02 10:46   ` Paolo Bonzini
  2019-08-18 17:53     ` Marc Zyngier
From: Paolo Bonzini @ 2019-08-02 10:46 UTC
  To: Marc Zyngier, Radim Krčmář,
	Julien Thierry, Suzuki K Poulose, James Morse, Joerg Roedel,
	Suravee Suthikulpanit, Tangnianyao
  Cc: kvmarm, kvm

On 02/08/19 12:37, Marc Zyngier wrote:
> [...]

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

* Re: [PATCH 2/2] KVM: Call kvm_arch_vcpu_blocking early into the blocking sequence
  2019-08-02 10:46   ` Paolo Bonzini
@ 2019-08-18 17:53     ` Marc Zyngier
From: Marc Zyngier @ 2019-08-18 17:53 UTC
  To: Paolo Bonzini; +Cc: kvm, Joerg Roedel, kvmarm

On Fri, 2 Aug 2019 12:46:33 +0200
Paolo Bonzini <pbonzini@redhat.com> wrote:

> On 02/08/19 12:37, Marc Zyngier wrote:
> > When a vpcu is about to block by calling kvm_vcpu_block, we call
> > back into the arch code to allow any form of synchronization that
> > may be required at this point (SVN stops the AVIC, ARM synchronises
> > the VMCR and enables GICv4 doorbells). But this synchronization
> > comes in quite late, as we've potentially waited for halt_poll_ns
> > to expire.
> > 
> > Instead, let's move kvm_arch_vcpu_blocking() to the beginning of
> > kvm_vcpu_block(), which on ARM has several benefits:
> > 
> > - VMCR gets synchronised early, meaning that any interrupt delivered
> >   during the polling window will be evaluated with the correct guest
> >   PMR
> > - GICv4 doorbells are enabled, which means that any guest interrupt
> >   directly injected during that window will be immediately recognised
> > 
> > Tang Nianyao ran some tests on a GICv4 machine to evaluate such
> > change, and reported up to a 10% improvement for netperf:
> > 
> > <quote>
> > 	netperf result:
> > 	D06 as server, intel 8180 server as client
> > 	with change:
> > 	package 512 bytes - 5500 Mbits/s
> > 	package 64 bytes - 760 Mbits/s
> > 	without change:
> > 	package 512 bytes - 5000 Mbits/s
> > 	package 64 bytes - 710 Mbits/s
> > </quote>
> > 
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  virt/kvm/kvm_main.c | 7 +++----
> >  1 file changed, 3 insertions(+), 4 deletions(-)
> > 
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 887f3b0c2b60..90d429c703cb 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -2322,6 +2322,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
> >  	bool waited = false;
> >  	u64 block_ns;
> >  
> > +	kvm_arch_vcpu_blocking(vcpu);
> > +
> >  	start = cur = ktime_get();
> >  	if (vcpu->halt_poll_ns && !kvm_arch_no_poll(vcpu)) {
> >  		ktime_t stop = ktime_add_ns(ktime_get(), vcpu->halt_poll_ns);
> > @@ -2342,8 +2344,6 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
> >  		} while (single_task_running() && ktime_before(cur, stop));
> >  	}
> >  
> > -	kvm_arch_vcpu_blocking(vcpu);
> > -
> >  	for (;;) {
> >  		prepare_to_swait_exclusive(&vcpu->wq, &wait, TASK_INTERRUPTIBLE);
> >  
> > @@ -2356,9 +2356,8 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
> >  
> >  	finish_swait(&vcpu->wq, &wait);
> >  	cur = ktime_get();
> > -
> > -	kvm_arch_vcpu_unblocking(vcpu);
> >  out:
> > +	kvm_arch_vcpu_unblocking(vcpu);
> >  	block_ns = ktime_to_ns(cur) - ktime_to_ns(start);
> >  
> >  	if (!vcpu_valid_wakeup(vcpu))
> >   
> 
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Thanks for that. I've pushed this patch into -next so that it gets a
bit of exposure (I haven't heard from the AMD folks, and I'd like to
make sure it doesn't regress their platforms).

	M.
-- 
Without deviation from the norm, progress is not possible.