* Re: [PATCH V3] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU
       [not found] <1496836016-7053-1-git-send-email-anoob.soman@citrix.com>
@ 2017-06-07 14:34 ` Boris Ostrovsky
  2017-06-13 14:12 ` Juergen Gross
  1 sibling, 0 replies; 3+ messages in thread
From: Boris Ostrovsky @ 2017-06-07 14:34 UTC (permalink / raw)
  To: Anoob Soman, xen-devel, linux-kernel; +Cc: jgross

On 06/07/2017 07:46 AM, Anoob Soman wrote:
> An HVM domain booting generates around 200K (evtchn:qemu-dm xen-dyn)
> interrupts in a short period of time. All these evtchn:qemu-dm interrupts
> are bound to VCPU 0 until irqbalance sees these IRQs and moves them to
> other VCPUs. In one configuration, irqbalance runs every 10 seconds, which
> means irqbalance doesn't get to see this burst of interrupts and doesn't
> re-balance them most of the time, so all evtchn:qemu-dm interrupts end up
> being processed by VCPU0. This causes VCPU0 to spend most of its time
> processing hardirqs and very little time on softirqs. Moreover, if dom0
> kernel PREEMPTION is disabled, VCPU0 never runs the watchdog (process
> context), causing the softlockup detection code to panic.
>
> Binding evtchn:qemu-dm to the next online VCPU spreads hardirq processing
> evenly across different CPUs. Later, irqbalance will try to balance
> evtchn:qemu-dm, if required.
>
> Signed-off-by: Anoob Soman <anoob.soman@citrix.com>

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




* Re: [PATCH V3] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU
       [not found] <1496836016-7053-1-git-send-email-anoob.soman@citrix.com>
  2017-06-07 14:34 ` [PATCH V3] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU Boris Ostrovsky
@ 2017-06-13 14:12 ` Juergen Gross
  1 sibling, 0 replies; 3+ messages in thread
From: Juergen Gross @ 2017-06-13 14:12 UTC (permalink / raw)
  To: Anoob Soman, xen-devel, linux-kernel; +Cc: boris.ostrovsky

On 07/06/17 13:46, Anoob Soman wrote:
> An HVM domain booting generates around 200K (evtchn:qemu-dm xen-dyn)
> interrupts in a short period of time. All these evtchn:qemu-dm interrupts
> are bound to VCPU 0 until irqbalance sees these IRQs and moves them to
> other VCPUs. In one configuration, irqbalance runs every 10 seconds, which
> means irqbalance doesn't get to see this burst of interrupts and doesn't
> re-balance them most of the time, so all evtchn:qemu-dm interrupts end up
> being processed by VCPU0. This causes VCPU0 to spend most of its time
> processing hardirqs and very little time on softirqs. Moreover, if dom0
> kernel PREEMPTION is disabled, VCPU0 never runs the watchdog (process
> context), causing the softlockup detection code to panic.
>
> Binding evtchn:qemu-dm to the next online VCPU spreads hardirq processing
> evenly across different CPUs. Later, irqbalance will try to balance
> evtchn:qemu-dm, if required.
> 
> Signed-off-by: Anoob Soman <anoob.soman@citrix.com>

Committed to xen/tip.git for-linus-4.13


Thanks,

Juergen


* [PATCH V3] xen-evtchn: Bind dyn evtchn:qemu-dm interrupt to next online VCPU
@ 2017-06-07 11:46 Anoob Soman
  0 siblings, 0 replies; 3+ messages in thread
From: Anoob Soman @ 2017-06-07 11:46 UTC (permalink / raw)
  To: xen-devel, linux-kernel; +Cc: jgross, boris.ostrovsky, Anoob Soman

An HVM domain booting generates around 200K (evtchn:qemu-dm xen-dyn)
interrupts in a short period of time. All these evtchn:qemu-dm interrupts
are bound to VCPU 0 until irqbalance sees these IRQs and moves them to
other VCPUs. In one configuration, irqbalance runs every 10 seconds, which
means irqbalance doesn't get to see this burst of interrupts and doesn't
re-balance them most of the time, so all evtchn:qemu-dm interrupts end up
being processed by VCPU0. This causes VCPU0 to spend most of its time
processing hardirqs and very little time on softirqs. Moreover, if dom0
kernel PREEMPTION is disabled, VCPU0 never runs the watchdog (process
context), causing the softlockup detection code to panic.

Binding evtchn:qemu-dm to the next online VCPU spreads hardirq processing
evenly across different CPUs. Later, irqbalance will try to balance
evtchn:qemu-dm, if required.

Signed-off-by: Anoob Soman <anoob.soman@citrix.com>
---

 Changes in v3:
  - Made bind_last_selected_cpu global.
  - Call xen_rebind_evtchn_to_cpu directly from set_affinity_irq, instead of
    an indirection.

 Changes in v2:
  - Moved bind_last_selected_cpu inside evtchn_bind_interdom_next_vcpu.
  - raw_spin_unlock_irqrestore(&desc->lock) done after vcpu rebind.

 drivers/xen/events/events_base.c |  6 +++---
 drivers/xen/evtchn.c             | 34 +++++++++++++++++++++++++++++++++-
 include/xen/events.h             |  1 +
 3 files changed, 37 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index b52852f..813f1e8 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1303,10 +1303,9 @@ void rebind_evtchn_irq(int evtchn, int irq)
 }
 
 /* Rebind an evtchn so that it gets delivered to a specific cpu */
-static int rebind_irq_to_cpu(unsigned irq, unsigned tcpu)
+int xen_rebind_evtchn_to_cpu(int evtchn, unsigned tcpu)
 {
 	struct evtchn_bind_vcpu bind_vcpu;
-	int evtchn = evtchn_from_irq(irq);
 	int masked;
 
 	if (!VALID_EVTCHN(evtchn))
@@ -1338,13 +1337,14 @@ static int rebind_irq_to_cpu(unsigned irq, unsigned tcpu)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(xen_rebind_evtchn_to_cpu);
 
 static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 			    bool force)
 {
 	unsigned tcpu = cpumask_first_and(dest, cpu_online_mask);
 
-	return rebind_irq_to_cpu(data->irq, tcpu);
+	return xen_rebind_evtchn_to_cpu(evtchn_from_irq(data->irq), tcpu);
 }
 
 static void enable_dynirq(struct irq_data *data)
diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index 10f1ef5..9729a64 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -421,6 +421,36 @@ static void evtchn_unbind_from_user(struct per_user_data *u,
 	del_evtchn(u, evtchn);
 }
 
+static DEFINE_PER_CPU(int, bind_last_selected_cpu);
+
+static void evtchn_bind_interdom_next_vcpu(int evtchn)
+{
+	unsigned int selected_cpu, irq;
+	struct irq_desc *desc;
+	unsigned long flags;
+
+	irq = irq_from_evtchn(evtchn);
+	desc = irq_to_desc(irq);
+
+	if (!desc)
+		return;
+
+	raw_spin_lock_irqsave(&desc->lock, flags);
+	selected_cpu = this_cpu_read(bind_last_selected_cpu);
+	selected_cpu = cpumask_next_and(selected_cpu,
+			desc->irq_common_data.affinity, cpu_online_mask);
+
+	if (unlikely(selected_cpu >= nr_cpu_ids))
+		selected_cpu = cpumask_first_and(desc->irq_common_data.affinity,
+				cpu_online_mask);
+
+	this_cpu_write(bind_last_selected_cpu, selected_cpu);
+
+	/* unmask expects irqs to be disabled */
+	xen_rebind_evtchn_to_cpu(evtchn, selected_cpu);
+	raw_spin_unlock_irqrestore(&desc->lock, flags);
+}
+
 static long evtchn_ioctl(struct file *file,
 			 unsigned int cmd, unsigned long arg)
 {
@@ -478,8 +508,10 @@ static long evtchn_ioctl(struct file *file,
 			break;
 
 		rc = evtchn_bind_to_user(u, bind_interdomain.local_port);
-		if (rc == 0)
+		if (rc == 0) {
 			rc = bind_interdomain.local_port;
+			evtchn_bind_interdom_next_vcpu(rc);
+		}
 		break;
 	}
 
diff --git a/include/xen/events.h b/include/xen/events.h
index 88da2ab..f442ca5 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -58,6 +58,7 @@ int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
 
 void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector);
 void rebind_evtchn_irq(int evtchn, int irq);
+int xen_rebind_evtchn_to_cpu(int evtchn, unsigned tcpu);
 
 static inline void notify_remote_via_evtchn(int port)
 {
-- 
1.8.3.1
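
As a reading aid, the heart of the change above is the round-robin pick of
the next online VCPU in evtchn_bind_interdom_next_vcpu(). The fragment below
is a simplified sketch of that selection logic only: the helper name
pick_next_online_cpu() is invented for illustration, and the irq_desc locking
and the call to xen_rebind_evtchn_to_cpu() from the real patch are omitted.

#include <linux/cpumask.h>
#include <linux/percpu.h>

/* Per-CPU cursor: the CPU this caller handed out last time. */
static DEFINE_PER_CPU(int, bind_last_selected_cpu);

/*
 * Illustrative helper (not part of the patch): advance through the IRQ's
 * affinity mask restricted to online CPUs, starting after the previous
 * pick and wrapping to the first eligible CPU when the end is reached.
 */
static unsigned int pick_next_online_cpu(const struct cpumask *affinity)
{
	unsigned int cpu = this_cpu_read(bind_last_selected_cpu);

	cpu = cpumask_next_and(cpu, affinity, cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		cpu = cpumask_first_and(affinity, cpu_online_mask);

	this_cpu_write(bind_last_selected_cpu, cpu);
	return cpu;
}

In the patch itself the selected CPU is handed to xen_rebind_evtchn_to_cpu()
while desc->lock is held with interrupts disabled, which is what the
"unmask expects irqs to be disabled" comment in the diff refers to.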

