* [PATCH] KVM: arm64: Allow in-atomic injection of SPIs
From: Marc Zyngier @ 2020-05-26 16:11 UTC
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, Eric Auger, kernel-team

On a system that uses SPIs to implement MSIs (as would be
the case on a GICv2 system exposing a GICv2m to its guests),
we deny the possibility of injecting SPIs on the in-atomic
fast-path.

This results in a very large number of context switches
(roughly equivalent to twice the interrupt rate) on the host,
and suboptimal performance for the guest (as measured with
a test workload involving a virtio interface backed by vhost-net).
Given that GICv2 systems are usually at the low end of the
performance spectrum, they could do without the aggravation.

We solved this for GICv3+ITS by having a translation cache. But
SPIs do not need any extra infrastructure, and can be immediately
injected into the virtual distributor, as the locking is already
heavy enough that we don't need to worry about anything.

This halves the number of context switches for the same workload.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
 arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c b/arch/arm64/kvm/vgic/vgic-irqfd.c
index d8cdfea5cc96..11a9f81115ab 100644
--- a/arch/arm64/kvm/vgic/vgic-irqfd.c
+++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
@@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
 			      struct kvm *kvm, int irq_source_id, int level,
 			      bool line_status)
 {
-	if (e->type == KVM_IRQ_ROUTING_MSI && vgic_has_its(kvm) && level) {
+	if (!level)
+		return -EWOULDBLOCK;
+
+	switch (e->type) {
+	case KVM_IRQ_ROUTING_MSI: {
 		struct kvm_msi msi;
 
+		if (!vgic_has_its(kvm))
+			return -EINVAL;
+
 		kvm_populate_msi(e, &msi);
-		if (!vgic_its_inject_cached_translation(kvm, &msi))
-			return 0;
+		return vgic_its_inject_cached_translation(kvm, &msi);
 	}
 
-	return -EWOULDBLOCK;
+	case KVM_IRQ_ROUTING_IRQCHIP:
+		/* Injecting SPIs is always possible in atomic context */
+		return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);
+
+	default:
+		return -EWOULDBLOCK;
+	}
 }
 
 int kvm_vgic_setup_default_irq_routing(struct kvm *kvm)
diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
index c012a52b19f5..40cbaca81333 100644
--- a/arch/arm64/kvm/vgic/vgic-its.c
+++ b/arch/arm64/kvm/vgic/vgic-its.c
@@ -757,9 +757,8 @@ int vgic_its_inject_cached_translation(struct kvm *kvm, struct kvm_msi *msi)
 
 	db = (u64)msi->address_hi << 32 | msi->address_lo;
 	irq = vgic_its_check_cache(kvm, db, msi->devid, msi->data);
-
 	if (!irq)
-		return -1;
+		return -EWOULDBLOCK;
 
 	raw_spin_lock_irqsave(&irq->irq_lock, flags);
 	irq->pending_latch = true;
-- 
2.26.2
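
For reference, kvm_arch_set_irq_inatomic() as it reads with this patch
applied, reconstructed from the hunk above; a convenience sketch for
readers skimming the diff, not a quote of the committed tree (the
comments are ours, summarising the thread):

int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
			      struct kvm *kvm, int irq_source_id, int level,
			      bool line_status)
{
	/* Only rising edges are ever injected on this fast path */
	if (!level)
		return -EWOULDBLOCK;

	switch (e->type) {
	case KVM_IRQ_ROUTING_MSI: {
		struct kvm_msi msi;

		if (!vgic_has_its(kvm))
			return -EINVAL;

		kvm_populate_msi(e, &msi);
		return vgic_its_inject_cached_translation(kvm, &msi);
	}

	case KVM_IRQ_ROUTING_IRQCHIP:
		/* Injecting SPIs is always possible in atomic context */
		return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);

	default:
		return -EWOULDBLOCK;
	}
}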



* Re: [PATCH] KVM: arm64: Allow in-atomic injection of SPIs
From: Zenghui Yu @ 2020-05-27  7:41 UTC
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm
  Cc: Eric Auger, kernel-team, James Morse, Julien Thierry, Suzuki K Poulose

On 2020/5/27 0:11, Marc Zyngier wrote:
> On a system that uses SPIs to implement MSIs (as would be
> the case on a GICv2 system exposing a GICv2m to its guests),
> we deny the possibility of injecting SPIs on the in-atomic
> fast-path.
> 
> This results in a very large number of context switches
> (roughly equivalent to twice the interrupt rate) on the host,
> and suboptimal performance for the guest (as measured with
> a test workload involving a virtio interface backed by vhost-net).
> Given that GICv2 systems are usually at the low end of the
> performance spectrum, they could do without the aggravation.
> 
> We solved this for GICv3+ITS by having a translation cache. But
> SPIs do not need any extra infrastructure, and can be immediately
> injected into the virtual distributor, as the locking is already
> heavy enough that we don't need to worry about anything.
> 
> This halves the number of context switches for the same workload.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>   arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
>   arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
>   2 files changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c b/arch/arm64/kvm/vgic/vgic-irqfd.c
> index d8cdfea5cc96..11a9f81115ab 100644
> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
> @@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
>   			      struct kvm *kvm, int irq_source_id, int level,
>   			      bool line_status)

... and you may also need to update the comment on top of it to
reflect this change.

/**
  * kvm_arch_set_irq_inatomic: fast-path for irqfd injection
  *
  * Currently only direct MSI injection is supported.
  */


Thanks,
Zenghui

>   {
> -	if (e->type == KVM_IRQ_ROUTING_MSI && vgic_has_its(kvm) && level) {
> +	if (!level)
> +		return -EWOULDBLOCK;
> +
> +	switch (e->type) {
> +	case KVM_IRQ_ROUTING_MSI: {
>   		struct kvm_msi msi;
>   
> +		if (!vgic_has_its(kvm))
> +			return -EINVAL;
> +
>   		kvm_populate_msi(e, &msi);
> -		if (!vgic_its_inject_cached_translation(kvm, &msi))
> -			return 0;
> +		return vgic_its_inject_cached_translation(kvm, &msi);
>   	}
>   
> -	return -EWOULDBLOCK;
> +	case KVM_IRQ_ROUTING_IRQCHIP:
> +		/* Injecting SPIs is always possible in atomic context */
> +		return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);
> +
> +	default:
> +		return -EWOULDBLOCK;
> +	}
>   }
>   
>   int kvm_vgic_setup_default_irq_routing(struct kvm *kvm)
> diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
> index c012a52b19f5..40cbaca81333 100644
> --- a/arch/arm64/kvm/vgic/vgic-its.c
> +++ b/arch/arm64/kvm/vgic/vgic-its.c
> @@ -757,9 +757,8 @@ int vgic_its_inject_cached_translation(struct kvm *kvm, struct kvm_msi *msi)
>   
>   	db = (u64)msi->address_hi << 32 | msi->address_lo;
>   	irq = vgic_its_check_cache(kvm, db, msi->devid, msi->data);
> -
>   	if (!irq)
> -		return -1;
> +		return -EWOULDBLOCK;
>   
>   	raw_spin_lock_irqsave(&irq->irq_lock, flags);
>   	irq->pending_latch = true;
> 


* Re: [PATCH] KVM: arm64: Allow in-atomic injection of SPIs
From: Marc Zyngier @ 2020-05-27  7:55 UTC
  To: Zenghui Yu
  Cc: linux-arm-kernel, kvmarm, kvm, Eric Auger, kernel-team,
	James Morse, Julien Thierry, Suzuki K Poulose

Hi Zenghui,

On 2020-05-27 08:41, Zenghui Yu wrote:
> On 2020/5/27 0:11, Marc Zyngier wrote:
>> On a system that uses SPIs to implement MSIs (as would be
>> the case on a GICv2 system exposing a GICv2m to its guests),
>> we deny the possibility of injecting SPIs on the in-atomic
>> fast-path.
>> 
>> This results in a very large number of context switches
>> (roughly equivalent to twice the interrupt rate) on the host,
>> and suboptimal performance for the guest (as measured with
>> a test workload involving a virtio interface backed by vhost-net).
>> Given that GICv2 systems are usually at the low end of the
>> performance spectrum, they could do without the aggravation.
>> 
>> We solved this for GICv3+ITS by having a translation cache. But
>> SPIs do not need any extra infrastructure, and can be immediately
>> injected into the virtual distributor, as the locking is already
>> heavy enough that we don't need to worry about anything.
>> 
>> This halves the number of context switches for the same workload.
>> 
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>   arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
>>   arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
>>   2 files changed, 17 insertions(+), 6 deletions(-)
>> 
>> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c 
>> b/arch/arm64/kvm/vgic/vgic-irqfd.c
>> index d8cdfea5cc96..11a9f81115ab 100644
>> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
>> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
>> @@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct 
>> kvm_kernel_irq_routing_entry *e,
>>   			      struct kvm *kvm, int irq_source_id, int level,
>>   			      bool line_status)
> 
> ... and you may also need to update the comment on top of it to
> reflect this change.
> 
> /**
>  * kvm_arch_set_irq_inatomic: fast-path for irqfd injection
>  *
>  * Currently only direct MSI injection is supported.
>  */

As far as I can tell, it is still valid (at least from the guest's
perspective). You could in practice use that to deal with level
interrupts, but we only inject the rising edge on this path, never
the falling edge. So effectively, this is limited to edge interrupts,
which is mostly MSIs.

Unless you are thinking of something else which I would have missed?

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
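
A minimal userspace sketch of the path under discussion: route a GSI to
an SPI with KVM_SET_GSI_ROUTING, then kick it through an irqfd. With the
patch, the eventfd write can be handled on the in-atomic fast path
instead of bouncing to the irqfd worker thread. The helper names are
made up for illustration and error handling is elided; the structures
and ioctls are the generic KVM UAPI, not code taken from this thread:

#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Route GSI -> vgic SPI; on the vgic, pin N corresponds to INTID N + 32. */
static int route_gsi_to_spi(int vm_fd, uint32_t gsi, uint32_t spi_pin)
{
	struct {
		struct kvm_irq_routing table;
		struct kvm_irq_routing_entry entry;
	} r;

	memset(&r, 0, sizeof(r));
	r.table.nr = 1;
	r.entry.gsi = gsi;
	r.entry.type = KVM_IRQ_ROUTING_IRQCHIP;
	r.entry.u.irqchip.irqchip = 0;	/* single vgic "irqchip" */
	r.entry.u.irqchip.pin = spi_pin;

	return ioctl(vm_fd, KVM_SET_GSI_ROUTING, &r);
}

/* Attach an eventfd to the GSI; returns the fd used to trigger it. */
static int attach_irqfd(int vm_fd, uint32_t gsi)
{
	struct kvm_irqfd irqfd;
	int efd = eventfd(0, 0);

	memset(&irqfd, 0, sizeof(irqfd));
	irqfd.fd = efd;
	irqfd.gsi = gsi;
	ioctl(vm_fd, KVM_IRQFD, &irqfd);

	return efd;
}

/* Each write injects one rising edge, as per the discussion above. */
static void kick(int efd)
{
	uint64_t one = 1;

	write(efd, &one, sizeof(one));
}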


* Re: [PATCH] KVM: arm64: Allow in-atomic injection of SPIs
From: Zenghui Yu @ 2020-05-27  8:42 UTC
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvm, Eric Auger, kernel-team,
	James Morse, Julien Thierry, Suzuki K Poulose

Hi Marc,

On 2020/5/27 15:55, Marc Zyngier wrote:
> Hi Zenghui,
> 
> On 2020-05-27 08:41, Zenghui Yu wrote:
>> On 2020/5/27 0:11, Marc Zyngier wrote:
>>> On a system that uses SPIs to implement MSIs (as would be
>>> the case on a GICv2 system exposing a GICv2m to its guests),
>>> we deny the possibility of injecting SPIs on the in-atomic
>>> fast-path.
>>>
>>> This results in a very large number of context switches
>>> (roughly equivalent to twice the interrupt rate) on the host,
>>> and suboptimal performance for the guest (as measured with
>>> a test workload involving a virtio interface backed by vhost-net).
>>> Given that GICv2 systems are usually at the low end of the
>>> performance spectrum, they could do without the aggravation.
>>>
>>> We solved this for GICv3+ITS by having a translation cache. But
>>> SPIs do not need any extra infrastructure, and can be immediately
>>> injected into the virtual distributor, as the locking is already
>>> heavy enough that we don't need to worry about anything.
>>>
>>> This halves the number of context switches for the same workload.
>>>
>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>> ---
>>>   arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
>>>   arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
>>>   2 files changed, 17 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c 
>>> b/arch/arm64/kvm/vgic/vgic-irqfd.c
>>> index d8cdfea5cc96..11a9f81115ab 100644
>>> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
>>> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
>>> @@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct 
>>> kvm_kernel_irq_routing_entry *e,
>>>                     struct kvm *kvm, int irq_source_id, int level,
>>>                     bool line_status)
>>
>> ... and you may also need to update the comment on top of it to
>> reflect this change.
>>
>> /**
>>  * kvm_arch_set_irq_inatomic: fast-path for irqfd injection
>>  *
>>  * Currently only direct MSI injection is supported.
>>  */
> 
> As far as I can tell, it is still valid (at least from the guest's
> perspective). You could in practice use that to deal with level
> interrupts, but we only inject the rising edge on this path, never
> the falling edge. So effectively, this is limited to edge interrupts,
> which is mostly MSIs.

Oops... I had wrongly mixed up MSI with the architecture-defined LPI,
and was thinking that we should add something like "direct SPI
injection is also supported now". Sorry.

> 
> Unless you are thinking of something else which I would have missed?

No, please ignore the noise.


Thanks,
Zenghui


* Re: [PATCH] KVM: arm64: Allow in-atomic injection of SPIs
From: Auger Eric @ 2020-06-08 16:58 UTC
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, kernel-team

Hi Marc,

On 5/26/20 6:11 PM, Marc Zyngier wrote:
> On a system that uses SPIs to implement MSIs (as would be
> the case on a GICv2 system exposing a GICv2m to its guests),
> we deny the possibility of injecting SPIs on the in-atomic
> fast-path.
> 
> This results in a very large number of context switches
> (roughly equivalent to twice the interrupt rate) on the host,
> and suboptimal performance for the guest (as measured with
> a test workload involving a virtio interface backed by vhost-net).
> Given that GICv2 systems are usually at the low end of the
> performance spectrum, they could do without the aggravation.
> 
> We solved this for GICv3+ITS by having a translation cache. But
> SPIs do not need any extra infrastructure, and can be immediately
> injected into the virtual distributor, as the locking is already
> heavy enough that we don't need to worry about anything.
> 
> This halves the number of context switches for the same workload.
> 
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
>  arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
>  2 files changed, 17 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c b/arch/arm64/kvm/vgic/vgic-irqfd.c
> index d8cdfea5cc96..11a9f81115ab 100644
> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
There is still a comment above saying
 * Currently only direct MSI injection is supported.
> @@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
>  			      struct kvm *kvm, int irq_source_id, int level,
>  			      bool line_status)
>  {
> -	if (e->type == KVM_IRQ_ROUTING_MSI && vgic_has_its(kvm) && level) {
> +	if (!level)
> +		return -EWOULDBLOCK;
> +
> +	switch (e->type) {
> +	case KVM_IRQ_ROUTING_MSI: {
>  		struct kvm_msi msi;
>  
> +		if (!vgic_has_its(kvm))
> +			return -EINVAL;
Shouldn't we return -EWOULDBLOCK by default?
QEMU does not use that path with GICv2m but in kvm_set_routing_entry() I
don't see any check related to the ITS.
> +
>  		kvm_populate_msi(e, &msi);
> -		if (!vgic_its_inject_cached_translation(kvm, &msi))
> -			return 0;
> +		return vgic_its_inject_cached_translation(kvm, &msi);

>  	}
>  
> -	return -EWOULDBLOCK;
> +	case KVM_IRQ_ROUTING_IRQCHIP:
> +		/* Injecting SPIs is always possible in atomic context */
> +		return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);
What about the mutex_lock(&kvm->lock) called from within
vgic_irqfd_set_irq/kvm_vgic_inject_irq/vgic_lazy_init?
> +
> +	default:
> +		return -EWOULDBLOCK;
> +	}
>  }
>  
>  int kvm_vgic_setup_default_irq_routing(struct kvm *kvm)
> diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
> index c012a52b19f5..40cbaca81333 100644
> --- a/arch/arm64/kvm/vgic/vgic-its.c
> +++ b/arch/arm64/kvm/vgic/vgic-its.c
> @@ -757,9 +757,8 @@ int vgic_its_inject_cached_translation(struct kvm *kvm, struct kvm_msi *msi)
>  
>  	db = (u64)msi->address_hi << 32 | msi->address_lo;
>  	irq = vgic_its_check_cache(kvm, db, msi->devid, msi->data);
> -
>  	if (!irq)
> -		return -1;
> +		return -EWOULDBLOCK;
>  
>  	raw_spin_lock_irqsave(&irq->irq_lock, flags);
>  	irq->pending_latch = true;
> 
Thanks

Eric



* Re: [PATCH] KVM: arm64: Allow in-atomic injection of SPIs
From: Marc Zyngier @ 2020-06-08 17:19 UTC
  To: Auger Eric
  Cc: linux-arm-kernel, kvmarm, kvm, James Morse, Julien Thierry,
	Suzuki K Poulose, kernel-team

Hi Eric,

On 2020-06-08 17:58, Auger Eric wrote:
> Hi Marc,
> 
> On 5/26/20 6:11 PM, Marc Zyngier wrote:
>> On a system that uses SPIs to implement MSIs (as would be
>> the case on a GICv2 system exposing a GICv2m to its guests),
>> we deny the possibility of injecting SPIs on the in-atomic
>> fast-path.
>> 
>> This results in a very large number of context switches
>> (roughly equivalent to twice the interrupt rate) on the host,
>> and suboptimal performance for the guest (as measured with
>> a test workload involving a virtio interface backed by vhost-net).
>> Given that GICv2 systems are usually at the low end of the
>> performance spectrum, they could do without the aggravation.
>> 
>> We solved this for GICv3+ITS by having a translation cache. But
>> SPIs do not need any extra infrastructure, and can be immediately
>> injected into the virtual distributor, as the locking is already
>> heavy enough that we don't need to worry about anything.
>> 
>> This halves the number of context switches for the same workload.
>> 
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>  arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
>>  arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
>>  2 files changed, 17 insertions(+), 6 deletions(-)
>> 
>> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c 
>> b/arch/arm64/kvm/vgic/vgic-irqfd.c
>> index d8cdfea5cc96..11a9f81115ab 100644
>> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
>> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
> There is still a comment above saying
>  * Currently only direct MSI injection is supported.

I believe this comment to be correct. There is no path other
than MSI injection that leads us here. Case in point, we only
ever inject a rising edge through this API, never a falling one.

>> @@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct 
>> kvm_kernel_irq_routing_entry *e,
>>  			      struct kvm *kvm, int irq_source_id, int level,
>>  			      bool line_status)
>>  {
>> -	if (e->type == KVM_IRQ_ROUTING_MSI && vgic_has_its(kvm) && level) {
>> +	if (!level)
>> +		return -EWOULDBLOCK;
>> +
>> +	switch (e->type) {
>> +	case KVM_IRQ_ROUTING_MSI: {
>>  		struct kvm_msi msi;
>> 
>> +		if (!vgic_has_its(kvm))
>> +			return -EINVAL;
> Shouldn't we return -EWOULDBLOCK by default?
> QEMU does not use that path with GICv2m but in kvm_set_routing_entry() 
> I
> don't see any check related to the ITS.

Fair enough. I really don't anticipate anyone to be using
KVM_IRQ_ROUTING_MSI with anything but the ITS, but who knows,
people are crazy! ;-)

>> +
>>  		kvm_populate_msi(e, &msi);
>> -		if (!vgic_its_inject_cached_translation(kvm, &msi))
>> -			return 0;
>> +		return vgic_its_inject_cached_translation(kvm, &msi);
> 
>>  	}
>> 
>> -	return -EWOULDBLOCK;
>> +	case KVM_IRQ_ROUTING_IRQCHIP:
>> +		/* Injecting SPIs is always possible in atomic context */
>> +		return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);
> what about the 	mutex_lock(&kvm->lock) called from within
> vgic_irqfd_set_irq/kvm_vgic_inject_irq/vgic_lazy_init

Holy crap. The lazy GIC init strikes again :-(.
How about this on top of the existing patch:

diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c b/arch/arm64/kvm/vgic/vgic-irqfd.c
index 11a9f81115ab..6e5ca04d5589 100644
--- a/arch/arm64/kvm/vgic/vgic-irqfd.c
+++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
@@ -115,19 +115,23 @@ int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
  		struct kvm_msi msi;

  		if (!vgic_has_its(kvm))
-			return -EINVAL;
+			break;

  		kvm_populate_msi(e, &msi);
  		return vgic_its_inject_cached_translation(kvm, &msi);
  	}

  	case KVM_IRQ_ROUTING_IRQCHIP:
-		/* Injecting SPIs is always possible in atomic context */
+		/*
+		 * Injecting SPIs is always possible in atomic context
+		 * as long as the damn vgic is initialized.
+		 */
+		if (unlikely(!vgic_initialized(kvm)))
+			break;
  		return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);
-
-	default:
-		return -EWOULDBLOCK;
  	}
+
+	return -EWOULDBLOCK;
  }

  int kvm_vgic_setup_default_irq_routing(struct kvm *kvm)


Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
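
For reference, folding this fix-up into the original patch gives the
following shape for the function (reconstructed by applying both diffs
above; a sketch rather than the final committed code):

int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
			      struct kvm *kvm, int irq_source_id, int level,
			      bool line_status)
{
	if (!level)
		return -EWOULDBLOCK;

	switch (e->type) {
	case KVM_IRQ_ROUTING_MSI: {
		struct kvm_msi msi;

		if (!vgic_has_its(kvm))
			break;

		kvm_populate_msi(e, &msi);
		return vgic_its_inject_cached_translation(kvm, &msi);
	}

	case KVM_IRQ_ROUTING_IRQCHIP:
		/*
		 * Injecting SPIs is always possible in atomic context
		 * as long as the damn vgic is initialized.
		 */
		if (unlikely(!vgic_initialized(kvm)))
			break;
		return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);
	}

	return -EWOULDBLOCK;
}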

* Re: [PATCH] KVM: arm64: Allow in-atomic injection of SPIs
  2020-06-08 17:19     ` Marc Zyngier
@ 2020-06-09  7:48       ` Auger Eric
  -1 siblings, 0 replies; 24+ messages in thread
From: Auger Eric @ 2020-06-09  7:48 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: linux-arm-kernel, kvmarm, kvm, James Morse, Julien Thierry,
	Suzuki K Poulose, kernel-team

Hi Marc,

On 6/8/20 7:19 PM, Marc Zyngier wrote:
> Hi Eric,
> 
> On 2020-06-08 17:58, Auger Eric wrote:
>> Hi Marc,
>>
>> On 5/26/20 6:11 PM, Marc Zyngier wrote:
>>> On a system that uses SPIs to implement MSIs (as it would be
>>> the case on a GICv2 system exposing a GICv2m to its guests),
>>> we deny the possibility of injecting SPIs on the in-atomic
>>> fast-path.
>>>
>>> This results in a very large amount of context-switches
>>> (roughly equivalent to twice the interrupt rate) on the host,
>>> and suboptimal performance for the guest (as measured with
>>> a test workload involving a virtio interface backed by vhost-net).
>>> Given that GICv2 systems are usually on the low-end of the spectrum
>>> performance wise, they could do without the aggravation.
>>>
>>> We solved this for GICv3+ITS by having a translation cache. But
>>> SPIs do not need any extra infrastructure, and can be immediately
>>> injected in the virtual distributor as the locking is already
>>> heavy enough that we don't need to worry about anything.
>>>
>>> This halves the number of context switches for the same workload.
>>>
>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>> ---
>>>  arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
>>>  arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
>>>  2 files changed, 17 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c
>>> b/arch/arm64/kvm/vgic/vgic-irqfd.c
>>> index d8cdfea5cc96..11a9f81115ab 100644
>>> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
>>> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
>> There is still a comment above saying
>>  * Currently only direct MSI injection is supported.
> 
> I believe this comment to be correct. There is no path other
> than MSI injection that leads us here. Case in point, we only
> ever inject a rising edge through this API, never a falling one.

Isn't this path entered whenever either of these combos is in use?
KVM_SET_MSI_ROUTING + KVM_IRQFD
KVM_SET_GSI_ROUTING + KVM_IRQFD

I had in mind that direct MSI injection was the KVM_SIGNAL_MSI/
kvm_send_userspace_msi/kvm_set_msi path.
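
As a concrete illustration of the first combo (a minimal userspace
sketch, not taken from QEMU; error handling is omitted and the GSI and
irqchip/pin values are made up):

#include <linux/kvm.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>

/* Route GSI 40 to a pin on the in-kernel irqchip, then attach an irqfd. */
static int setup_spi_irqfd(int vm_fd)
{
	char buf[sizeof(struct kvm_irq_routing) +
		 sizeof(struct kvm_irq_routing_entry)] = {0};
	struct kvm_irq_routing *routing = (void *)buf;
	struct kvm_irqfd irqfd = {0};
	int efd = eventfd(0, 0);

	routing->nr = 1;
	routing->entries[0].gsi = 40;
	routing->entries[0].type = KVM_IRQ_ROUTING_IRQCHIP;
	routing->entries[0].u.irqchip.irqchip = 0;
	routing->entries[0].u.irqchip.pin = 40;
	if (ioctl(vm_fd, KVM_SET_GSI_ROUTING, routing) < 0)
		return -1;

	irqfd.fd = efd;
	irqfd.gsi = 40;
	if (ioctl(vm_fd, KVM_IRQFD, &irqfd) < 0)
		return -1;

	/* Any write to efd now hits kvm_arch_set_irq_inatomic() first. */
	return efd;
}
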
> 
>>> @@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct
>>> kvm_kernel_irq_routing_entry *e,
>>>                    struct kvm *kvm, int irq_source_id, int level,
>>>                    bool line_status)
>>>  {
>>> -    if (e->type == KVM_IRQ_ROUTING_MSI && vgic_has_its(kvm) && level) {
>>> +    if (!level)
>>> +        return -EWOULDBLOCK;
>>> +
>>> +    switch (e->type) {
>>> +    case KVM_IRQ_ROUTING_MSI: {
>>>          struct kvm_msi msi;
>>>
>>> +        if (!vgic_has_its(kvm))
>>> +            return -EINVAL;
>> Shouldn't we return -EWOULDBLOCK by default?
>> QEMU does not use that path with GICv2m but in kvm_set_routing_entry() I
>> don't see any check related to the ITS.
> 
> Fair enough. I really don't anticipate anyone to be using
> KVM_IRQ_ROUTING_MSI with anything but the ITS, but who knows,
> people are crazy! ;-)
> 
>>> +
>>>          kvm_populate_msi(e, &msi);
>>> -        if (!vgic_its_inject_cached_translation(kvm, &msi))
>>> -            return 0;
>>> +        return vgic_its_inject_cached_translation(kvm, &msi);
>>
>>>      }
>>>
>>> -    return -EWOULDBLOCK;
>>> +    case KVM_IRQ_ROUTING_IRQCHIP:
>>> +        /* Injecting SPIs is always possible in atomic context */
>>> +        return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1,
>>> line_status);
>> what about the     mutex_lock(&kvm->lock) called from within
>> vgic_irqfd_set_irq/kvm_vgic_inject_irq/vgic_lazy_init
> 
> Holy crap. The lazy GIC init strikes again :-(.
> How about this on top of the existing patch:
> 
> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c
> b/arch/arm64/kvm/vgic/vgic-irqfd.c
> index 11a9f81115ab..6e5ca04d5589 100644
> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
> @@ -115,19 +115,23 @@ int kvm_arch_set_irq_inatomic(struct
> kvm_kernel_irq_routing_entry *e,
>          struct kvm_msi msi;
> 
>          if (!vgic_has_its(kvm))
> -            return -EINVAL;
> +            break;
> 
>          kvm_populate_msi(e, &msi);
>          return vgic_its_inject_cached_translation(kvm, &msi);
>      }
> 
>      case KVM_IRQ_ROUTING_IRQCHIP:
> -        /* Injecting SPIs is always possible in atomic context */
> +        /*
> +         * Injecting SPIs is always possible in atomic context
> +         * as long as the damn vgic is initialized.
> +         */
> +        if (unlikely(!vgic_initialized(kvm)))
> +            break;
Yes, this should prevent the wait situation. But it may be worth digging
deeper into the call stack. Won't analyzers complain at some point?

Thanks

Eric
>          return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1, line_status);
> -
> -    default:
> -        return -EWOULDBLOCK;
>      }
> +
> +    return -EWOULDBLOCK;
>  }
> 
>  int kvm_vgic_setup_default_irq_routing(struct kvm *kvm)
> 
> 
> Thanks,
> 
>         M.


* Re: [PATCH] KVM: arm64: Allow in-atomic injection of SPIs
  2020-06-09  7:48       ` Auger Eric
@ 2020-06-09  8:21         ` Marc Zyngier
  -1 siblings, 0 replies; 24+ messages in thread
From: Marc Zyngier @ 2020-06-09  8:21 UTC (permalink / raw)
  To: Auger Eric
  Cc: linux-arm-kernel, kvmarm, kvm, James Morse, Julien Thierry,
	Suzuki K Poulose, kernel-team

On 2020-06-09 08:48, Auger Eric wrote:
> Hi Marc,
> 
> On 6/8/20 7:19 PM, Marc Zyngier wrote:
>> Hi Eric,
>> 
>> On 2020-06-08 17:58, Auger Eric wrote:
>>> Hi Marc,
>>> 
>>> On 5/26/20 6:11 PM, Marc Zyngier wrote:
>>>> On a system that uses SPIs to implement MSIs (as it would be
>>>> the case on a GICv2 system exposing a GICv2m to its guests),
>>>> we deny the possibility of injecting SPIs on the in-atomic
>>>> fast-path.
>>>> 
>>>> This results in a very large amount of context-switches
>>>> (roughly equivalent to twice the interrupt rate) on the host,
>>>> and suboptimal performance for the guest (as measured with
>>>> a test workload involving a virtio interface backed by vhost-net).
>>>> Given that GICv2 systems are usually on the low-end of the spectrum
>>>> performance wise, they could do without the aggravation.
>>>> 
>>>> We solved this for GICv3+ITS by having a translation cache. But
>>>> SPIs do not need any extra infrastructure, and can be immediately
>>>> injected in the virtual distributor as the locking is already
>>>> heavy enough that we don't need to worry about anything.
>>>> 
>>>> This halves the number of context switches for the same workload.
>>>> 
>>>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>>>> ---
>>>>  arch/arm64/kvm/vgic/vgic-irqfd.c | 20 ++++++++++++++++----
>>>>  arch/arm64/kvm/vgic/vgic-its.c   |  3 +--
>>>>  2 files changed, 17 insertions(+), 6 deletions(-)
>>>> 
>>>> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c
>>>> b/arch/arm64/kvm/vgic/vgic-irqfd.c
>>>> index d8cdfea5cc96..11a9f81115ab 100644
>>>> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
>>>> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
>>> There is still a comment above saying
>>>  * Currently only direct MSI injection is supported.
>> 
>> I believe this comment to be correct. There is no path other
>> than MSI injection that leads us here. Case in point, we only
>> ever inject a rising edge through this API, never a falling one.
> 
> Isn't this path entered whenever either of these combos is in use?
> KVM_SET_MSI_ROUTING + KVM_IRQFD
> KVM_SET_GSI_ROUTING + KVM_IRQFD

I've always considered these to be MSIs, but maybe I'm narrow-minded... ;-)

> I had in mind that direct MSI injection was the KVM_SIGNAL_MSI/
> kvm_send_userspace_msi/kvm_set_msi path.

Fair enough. Zenghui was also unhappy about this comment, so I'll
remove it. After all, we shouldn't care whether that's an MSI or not.
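
i.e. drop something like this from the function's header comment (a
sketch from memory, I don't have the exact surrounding context at hand):

- * Currently only direct MSI injection is supported.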

>> 
>>>> @@ -107,15 +107,27 @@ int kvm_arch_set_irq_inatomic(struct
>>>> kvm_kernel_irq_routing_entry *e,
>>>>                    struct kvm *kvm, int irq_source_id, int level,
>>>>                    bool line_status)
>>>>  {
>>>> -    if (e->type == KVM_IRQ_ROUTING_MSI && vgic_has_its(kvm) && 
>>>> level) {
>>>> +    if (!level)
>>>> +        return -EWOULDBLOCK;
>>>> +
>>>> +    switch (e->type) {
>>>> +    case KVM_IRQ_ROUTING_MSI: {
>>>>          struct kvm_msi msi;
>>>> 
>>>> +        if (!vgic_has_its(kvm))
>>>> +            return -EINVAL;
>>> Shouldn't we return -EWOULDBLOCK by default?
>>> QEMU does not use that path with GICv2m but in 
>>> kvm_set_routing_entry() I
>>> don't see any check related to the ITS.
>> 
>> Fair enough. I really don't anticipate anyone to be using
>> KVM_IRQ_ROUTING_MSI with anything but the ITS, but who knows,
>> people are crazy! ;-)
>> 
>>>> +
>>>>          kvm_populate_msi(e, &msi);
>>>> -        if (!vgic_its_inject_cached_translation(kvm, &msi))
>>>> -            return 0;
>>>> +        return vgic_its_inject_cached_translation(kvm, &msi);
>>> 
>>>>      }
>>>> 
>>>> -    return -EWOULDBLOCK;
>>>> +    case KVM_IRQ_ROUTING_IRQCHIP:
>>>> +        /* Injecting SPIs is always possible in atomic context */
>>>> +        return vgic_irqfd_set_irq(e, kvm, irq_source_id, 1,
>>>> line_status);
>>> what about the     mutex_lock(&kvm->lock) called from within
>>> vgic_irqfd_set_irq/kvm_vgic_inject_irq/vgic_lazy_init
>> 
>> Holy crap. The lazy GIC init strikes again :-(.
>> How about this on top of the existing patch:
>> 
>> diff --git a/arch/arm64/kvm/vgic/vgic-irqfd.c
>> b/arch/arm64/kvm/vgic/vgic-irqfd.c
>> index 11a9f81115ab..6e5ca04d5589 100644
>> --- a/arch/arm64/kvm/vgic/vgic-irqfd.c
>> +++ b/arch/arm64/kvm/vgic/vgic-irqfd.c
>> @@ -115,19 +115,23 @@ int kvm_arch_set_irq_inatomic(struct
>> kvm_kernel_irq_routing_entry *e,
>>          struct kvm_msi msi;
>> 
>>          if (!vgic_has_its(kvm))
>> -            return -EINVAL;
>> +            break;
>> 
>>          kvm_populate_msi(e, &msi);
>>          return vgic_its_inject_cached_translation(kvm, &msi);
>>      }
>> 
>>      case KVM_IRQ_ROUTING_IRQCHIP:
>> -        /* Injecting SPIs is always possible in atomic context */
>> +        /*
>> +         * Injecting SPIs is always possible in atomic context
>> +         * as long as the damn vgic is initialized.
>> +         */
>> +        if (unlikely(!vgic_initialized(kvm)))
>> +            break;
> Yes, this should prevent the wait situation. But it may be worth digging
> deeper into the call stack. Won't analyzers complain at some point?

I have gone all the way down the call stack, to be sure. The init is
the only point where we take a mutex, and we're under a hard spinlock
right after (all of the injection proper happens with interrupts disabled).
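
To spell out what I checked, the chain is roughly (a hand-written sketch
of my reading, not a verbatim trace):

kvm_arch_set_irq_inatomic()        <- atomic context (irqfd wakeup)
  vgic_irqfd_set_irq()
    kvm_vgic_inject_irq()
      vgic_lazy_init()             <- the only mutex_lock(&kvm->lock),
                                      now short-circuited by the
                                      vgic_initialized() check
      <injection proper>           <- runs under a raw spinlock with
                                      interrupts disabled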

As for things like lockdep, they track the dynamic state, and not
whether certain branches are simply "possible". A code analyser
that wouldn't take control flow into account would be pretty broken! ;-)
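
A contrived example of what that means (illustration only, nothing from
the actual code):

	spin_lock(&a);			/* atomic from here on */
	if (cond)			/* never true on this path */
		mutex_lock(&b);		/* not executed, so lockdep has
					 * nothing to record for it */
	spin_unlock(&a);

lockdep only learns about acquisitions that actually happen at runtime,
so a branch that is never taken cannot generate a splat.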

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...
