From: Jan Beulich <JBeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Wei Liu" <wl@xen.org>, "Roger Pau Monné" <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v4 07/13] x86/IRQ: target online CPUs when binding guest IRQ
Date: Tue, 16 Jul 2019 07:41:37 +0000	[thread overview]
Message-ID: <d60f7c11-457f-798e-7a4f-b9164439f565@suse.com> (raw)
In-Reply-To: <5cda711a-b417-76e9-d113-ea838463f225@suse.com>

fixup_irqs() skips interrupts without action. Hence such interrupts can
retain an affinity mask naming only offline CPUs. With "noirqbalance" in
effect, pirq_guest_bind() so far would have left them alone, resulting
in a non-working interrupt.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v3: New.
---
I've not observed this problem in practice - the change is just the
result of code inspection after having noticed action-less IRQs in the
'i' debug key output pointing at all parked/offline CPUs.
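
To make the failure mode concrete: below is a minimal standalone sketch
(toy types and names, not the actual fixup_irqs() code) of how skipping
action-less descriptors leaves a stale affinity behind when a CPU is
parked/offlined:

  /* Toy model of the skip described above; illustrative only. */
  #include <stdio.h>

  struct toy_desc {
      void *action;            /* NULL: no handler bound yet */
      unsigned long affinity;  /* bitmask of target CPUs */
  };

  static void toy_fixup_irqs(struct toy_desc *descs, int n,
                             unsigned long online)
  {
      for ( int i = 0; i < n; i++ )
      {
          if ( !descs[i].action )
              continue;              /* skipped: stale affinity survives */
          descs[i].affinity &= online;
      }
  }

  int main(void)
  {
      struct toy_desc d = { NULL, 1UL << 3 };  /* targets CPU 3 only */

      toy_fixup_irqs(&d, 1, 1UL << 0);         /* CPU 3 goes offline */
      printf("affinity %#lx\n", d.affinity);   /* still 0x8: offline only */
      return 0;
  }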

--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1703,9 +1703,27 @@ int pirq_guest_bind(struct vcpu *v, stru
  
          desc->status |= IRQ_GUEST;
  
-        /* Attempt to bind the interrupt target to the correct CPU. */
-        if ( !opt_noirqbalance && (desc->handler->set_affinity != NULL) )
-            desc->handler->set_affinity(desc, cpumask_of(v->processor));
+        /*
+         * Attempt to bind the interrupt target to the correct (or at least
+         * some online) CPU.
+         */
+        if ( desc->handler->set_affinity )
+        {
+            const cpumask_t *affinity = NULL;
+
+            if ( !opt_noirqbalance )
+                affinity = cpumask_of(v->processor);
+            else if ( !cpumask_intersects(desc->affinity, &cpu_online_map) )
+            {
+                cpumask_setall(desc->affinity);
+                affinity = &cpumask_all;
+            }
+            else if ( !cpumask_intersects(desc->arch.cpu_mask,
+                                          &cpu_online_map) )
+                affinity = desc->affinity;
+            if ( affinity )
+                desc->handler->set_affinity(desc, affinity);
+        }
  
          desc->status &= ~IRQ_DISABLED;
          desc->handler->startup(desc);
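
For reference, the selection logic added above can be modelled in
isolation. The following standalone sketch uses plain unsigned long
bitmasks in place of cpumask_t, and choose_affinity() is a hypothetical
helper, not Xen code:

  /* Model of the affinity selection in the hunk above; 0 means
   * "leave the interrupt alone" (no nonzero mask is ever empty here). */
  #include <stdio.h>

  #define MASK_ALL (~0UL)

  static unsigned long choose_affinity(int noirqbalance,
                                       unsigned long target,
                                       unsigned long *affinity,
                                       unsigned long cpu_mask,
                                       unsigned long online)
  {
      if ( !noirqbalance )
          return target;               /* bind to the vCPU's processor */
      if ( !(*affinity & online) )     /* affinity has no online CPU */
      {
          *affinity = MASK_ALL;        /* widen the requested affinity */
          return MASK_ALL;
      }
      if ( !(cpu_mask & online) )      /* in-use mask is stale */
          return *affinity;            /* re-derive from affinity */
      return 0;                        /* nothing to fix */
  }

  int main(void)
  {
      unsigned long affinity = 1UL << 3;   /* CPU 3 only ... */
      unsigned long online = 1UL << 0;     /* ... but only CPU 0 online */
      unsigned long m;

      m = choose_affinity(1 /* noirqbalance */, 1UL << 0,
                          &affinity, 1UL << 3, online);
      printf("set_affinity mask: %#lx\n", m);  /* all-ones: any CPU */
      return 0;
  }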
