xen-devel.lists.xenproject.org archive mirror
* [Xen-devel] [PATCH] x86/vtd: Fix S3 resume following c/s 650c31d3af
@ 2019-08-12 17:17 Andrew Cooper
  2019-08-23  2:54 ` Tian, Kevin
  0 siblings, 1 reply; 2+ messages in thread
From: Andrew Cooper @ 2019-08-12 17:17 UTC (permalink / raw)
  To: Xen-devel
  Cc: Kevin Tian, Jan Beulich, Wei Liu, Andrew Cooper, Jun Nakajima,
	Roger Pau Monné

c/s 650c31d3af "x86/IRQ: fix locking around vector management" adjusted the
locking in adjust_irq_affinity().

The S3 resume path reaches this function via iommu_resume() before
interrupts have been re-enabled, at which point spin_lock_irq() fails
its ASSERT(local_irq_is_enabled()), but with no working console to
report the failure.

Use spin_lock_irqsave() instead to cope with interrupts already being
disabled.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>

I'm fairly confident that the AMD side of things is fine, because
enable_iommu() is encompassed by a spin_lock_irqsave() block.
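
For reference, a minimal sketch of why this path trips the assertion and
why the irqsave variant copes (paraphrased; the exact macro bodies live
in xen/include/xen/spinlock.h and may differ slightly):

/*
 * Sketch only: assumed shape of the locking primitive involved.
 * spin_lock_irq() insists that interrupts are enabled on entry, so the
 * matching spin_unlock_irq() can unconditionally re-enable them.
 */
#define spin_lock_irq(l)                    \
    do {                                    \
        ASSERT(local_irq_is_enabled());     \
        local_irq_disable();                \
        spin_lock(l);                       \
    } while ( 0 )

/*
 * During S3 resume, iommu_resume() (and hence adjust_irq_affinity())
 * runs before interrupts are re-enabled, so the ASSERT above fires.
 * spin_lock_irqsave() instead records the caller's interrupt state, and
 * spin_unlock_irqrestore() puts back exactly that state, so the pair is
 * safe with interrupts already disabled:
 */
    unsigned long flags;

    spin_lock_irqsave(&desc->lock, flags);      /* save state, disable IRQs */
    dma_msi_set_affinity(desc, cpumask);
    spin_unlock_irqrestore(&desc->lock, flags); /* restore saved state */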
---
 xen/drivers/passthrough/vtd/iommu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5d72270c5b..defa74fae3 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2135,15 +2135,16 @@ static void adjust_irq_affinity(struct acpi_drhd_unit *drhd)
                              : NUMA_NO_NODE;
     const cpumask_t *cpumask = NULL;
     struct irq_desc *desc;
+    unsigned long flags;
 
     if ( node < MAX_NUMNODES && node_online(node) &&
          cpumask_intersects(&node_to_cpumask(node), &cpu_online_map) )
         cpumask = &node_to_cpumask(node);
 
     desc = irq_to_desc(drhd->iommu->msi.irq);
-    spin_lock_irq(&desc->lock);
+    spin_lock_irqsave(&desc->lock, flags);
     dma_msi_set_affinity(desc, cpumask);
-    spin_unlock_irq(&desc->lock);
+    spin_unlock_irqrestore(&desc->lock, flags);
 }
 
 static int adjust_vtd_irq_affinities(void)
-- 
2.11.0




* Re: [Xen-devel] [PATCH] x86/vtd: Fix S3 resume following c/s 650c31d3af
  2019-08-12 17:17 [Xen-devel] [PATCH] x86/vtd: Fix S3 resume following c/s 650c31d3af Andrew Cooper
@ 2019-08-23  2:54 ` Tian, Kevin
  0 siblings, 0 replies; 2+ messages in thread
From: Tian, Kevin @ 2019-08-23  2:54 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Nakajima, Jun, Wei Liu, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Tuesday, August 13, 2019 1:17 AM
> 
> c/s 650c31d3af "x86/IRQ: fix locking around vector management" adjusted
> the locking in adjust_irq_affinity().
> 
> The S3 resume path reaches this function via iommu_resume() before
> interrupts have been re-enabled, at which point spin_lock_irq() fails
> its ASSERT(local_irq_is_enabled()), but with no working console to
> report the failure.
> 
> Use spin_lock_irqsave() instead to cope with interrupts already being
> disabled.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

