* [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue
@ 2019-07-16 21:38 ` Dmitry Safonov via iommu
  0 siblings, 0 replies; 14+ messages in thread
From: Dmitry Safonov @ 2019-07-16 21:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Safonov, Dmitry Safonov, David Woodhouse, Joerg Roedel,
	Lu Baolu, iommu, stable

The Intel VT-d driver was reworked to use the common deferred flushing
implementation. Previously there was one global per-CPU flush queue;
now there is one per domain.

Before deferring a flush, the queue should be allocated and initialized.

Currently only domains of type IOMMU_DOMAIN_DMA initialize their flush
queue. It is probably worth initializing it for static or unmanaged
domains too, but that may be arguable - I'm leaving it to the iommu folks.

Prevent queuing an iova flush if the domain doesn't have a queue.
The defensive check seems worth keeping even if the queue were
initialized for all kinds of domains, and it is easily backportable.

On the 4.19.43 stable kernel this has a user-visible effect: previously
there were crashes for devices in the si domain, on SATA devices:

 BUG: spinlock bad magic on CPU#6, swapper/0/1
  lock: 0xffff88844f582008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
 CPU: 6 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #1
 Call Trace:
  <IRQ>
  dump_stack+0x61/0x7e
  spin_bug+0x9d/0xa3
  do_raw_spin_lock+0x22/0x8e
  _raw_spin_lock_irqsave+0x32/0x3a
  queue_iova+0x45/0x115
  intel_unmap+0x107/0x113
  intel_unmap_sg+0x6b/0x76
  __ata_qc_complete+0x7f/0x103
  ata_qc_complete+0x9b/0x26a
  ata_qc_complete_multiple+0xd0/0xe3
  ahci_handle_port_interrupt+0x3ee/0x48a
  ahci_handle_port_intr+0x73/0xa9
  ahci_single_level_irq_intr+0x40/0x60
  __handle_irq_event_percpu+0x7f/0x19a
  handle_irq_event_percpu+0x32/0x72
  handle_irq_event+0x38/0x56
  handle_edge_irq+0x102/0x121
  handle_irq+0x147/0x15c
  do_IRQ+0x66/0xf2
  common_interrupt+0xf/0xf
 RIP: 0010:__do_softirq+0x8c/0x2df

The same happens for USB devices that use ehci-pci:
 BUG: spinlock bad magic on CPU#0, swapper/0/1
  lock: 0xffff88844f402008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
 CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #4
 Call Trace:
  <IRQ>
  dump_stack+0x61/0x7e
  spin_bug+0x9d/0xa3
  do_raw_spin_lock+0x22/0x8e
  _raw_spin_lock_irqsave+0x32/0x3a
  queue_iova+0x77/0x145
  intel_unmap+0x107/0x113
  intel_unmap_page+0xe/0x10
  usb_hcd_unmap_urb_setup_for_dma+0x53/0x9d
  usb_hcd_unmap_urb_for_dma+0x17/0x100
  unmap_urb_for_dma+0x22/0x24
  __usb_hcd_giveback_urb+0x51/0xc3
  usb_giveback_urb_bh+0x97/0xde
  tasklet_action_common.isra.4+0x5f/0xa1
  tasklet_action+0x2d/0x30
  __do_softirq+0x138/0x2df
  irq_exit+0x7d/0x8b
  smp_apic_timer_interrupt+0x10f/0x151
  apic_timer_interrupt+0xf/0x20
  </IRQ>
 RIP: 0010:_raw_spin_unlock_irqrestore+0x17/0x39

Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: <stable@vger.kernel.org> # 4.14+
Fixes: 13cf01744608 ("iommu/vt-d: Make use of iova deferred flushing")
Signed-off-by: Dmitry Safonov <dima@arista.com>
---
 drivers/iommu/intel-iommu.c |  3 ++-
 drivers/iommu/iova.c        | 18 ++++++++++++++----
 include/linux/iova.h        |  6 ++++++
 3 files changed, 22 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index ac4172c02244..6d1510284d21 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -3564,7 +3564,8 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
 
 	freelist = domain_unmap(domain, start_pfn, last_pfn);
 
-	if (intel_iommu_strict || (pdev && pdev->untrusted)) {
+	if (intel_iommu_strict || (pdev && pdev->untrusted) ||
+			!has_iova_flush_queue(&domain->iovad)) {
 		iommu_flush_iotlb_psi(iommu, domain, start_pfn,
 				      nrpages, !freelist, 0);
 		/* free iova */
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index d499b2621239..8413ae54904a 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -54,9 +54,14 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 }
 EXPORT_SYMBOL_GPL(init_iova_domain);
 
+bool has_iova_flush_queue(struct iova_domain *iovad)
+{
+	return !!iovad->fq;
+}
+
 static void free_iova_flush_queue(struct iova_domain *iovad)
 {
-	if (!iovad->fq)
+	if (!has_iova_flush_queue(iovad))
 		return;
 
 	if (timer_pending(&iovad->fq_timer))
@@ -74,13 +79,14 @@ static void free_iova_flush_queue(struct iova_domain *iovad)
 int init_iova_flush_queue(struct iova_domain *iovad,
 			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor)
 {
+	struct iova_fq __percpu *queue;
 	int cpu;
 
 	atomic64_set(&iovad->fq_flush_start_cnt,  0);
 	atomic64_set(&iovad->fq_flush_finish_cnt, 0);
 
-	iovad->fq = alloc_percpu(struct iova_fq);
-	if (!iovad->fq)
+	queue = alloc_percpu(struct iova_fq);
+	if (!queue)
 		return -ENOMEM;
 
 	iovad->flush_cb   = flush_cb;
@@ -89,13 +95,17 @@ int init_iova_flush_queue(struct iova_domain *iovad,
 	for_each_possible_cpu(cpu) {
 		struct iova_fq *fq;
 
-		fq = per_cpu_ptr(iovad->fq, cpu);
+		fq = per_cpu_ptr(queue, cpu);
 		fq->head = 0;
 		fq->tail = 0;
 
 		spin_lock_init(&fq->lock);
 	}
 
+	smp_wmb();
+
+	iovad->fq = queue;
+
 	timer_setup(&iovad->fq_timer, fq_flush_timeout, 0);
 	atomic_set(&iovad->fq_timer_on, 0);
 
diff --git a/include/linux/iova.h b/include/linux/iova.h
index 781b96ac706f..cd0f1de901a8 100644
--- a/include/linux/iova.h
+++ b/include/linux/iova.h
@@ -155,6 +155,7 @@ struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
 void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
 void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
 	unsigned long start_pfn);
+bool has_iova_flush_queue(struct iova_domain *iovad);
 int init_iova_flush_queue(struct iova_domain *iovad,
 			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor);
 struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
@@ -235,6 +236,11 @@ static inline void init_iova_domain(struct iova_domain *iovad,
 {
 }
 
+bool has_iova_flush_queue(struct iova_domain *iovad)
+{
+	return false;
+}
+
 static inline int init_iova_flush_queue(struct iova_domain *iovad,
 					iova_flush_cb flush_cb,
 					iova_entry_dtor entry_dtor)
-- 
2.22.0



* [PATCH 2/2] iommu/vt-d: Check if domain->pgd was allocated
  2019-07-16 21:38 ` Dmitry Safonov via iommu
@ 2019-07-16 21:38   ` Dmitry Safonov via iommu
  -1 siblings, 0 replies; 14+ messages in thread
From: Dmitry Safonov @ 2019-07-16 21:38 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dmitry Safonov, Dmitry Safonov, David Woodhouse, Joerg Roedel,
	Lu Baolu, iommu

There are a couple of places where domain_exit() is called on
domain_init() failure, while currently domain_init() can fail only if
alloc_pgtable_page() has failed.

Make domain_exit() check whether domain->pgd is present before calling
domain_unmap(), as otherwise it would theoretically crash while clearing
pte entries in dma_pte_clear_level().

Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Lu Baolu <baolu.lu@linux.intel.com>
Cc: iommu@lists.linux-foundation.org
Signed-off-by: Dmitry Safonov <dima@arista.com>
---
 drivers/iommu/intel-iommu.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 6d1510284d21..698cc40355ef 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -1835,7 +1835,6 @@ static inline int guestwidth_to_adjustwidth(int gaw)
 
 static void domain_exit(struct dmar_domain *domain)
 {
-	struct page *freelist;
 
 	/* Remove associated devices and clear attached or cached domains */
 	domain_remove_dev_info(domain);
@@ -1843,9 +1842,12 @@ static void domain_exit(struct dmar_domain *domain)
 	/* destroy iovas */
 	put_iova_domain(&domain->iovad);
 
-	freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw));
+	if (domain->pgd) {
+		struct page *freelist;
 
-	dma_free_pagelist(freelist);
+		freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw));
+		dma_free_pagelist(freelist);
+	}
 
 	free_domain_mem(domain);
 }
-- 
2.22.0



* Re: [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue
  2019-07-16 21:38 ` Dmitry Safonov via iommu
@ 2019-07-16 23:57 ` Sasha Levin
  -1 siblings, 0 replies; 14+ messages in thread
From: Sasha Levin @ 2019-07-16 23:57 UTC (permalink / raw)
  To: Sasha Levin, Dmitry Safonov, linux-kernel
  Cc: Dmitry Safonov, stable, iommu, David Woodhouse

Hi,

[This is an automated email]

This commit has been processed because it contains a "Fixes:" tag,
fixing commit: 13cf01744608 iommu/vt-d: Make use of iova deferred flushing.

The bot has tested the following trees: v5.2.1, v5.1.18, v4.19.59, v4.14.133.

v5.2.1: Build OK!
v5.1.18: Build OK!
v4.19.59: Failed to apply! Possible dependencies:
    02b4da5f84d1 ("intel-iommu: mark intel_dma_ops static")
    0bbeb01a4faf ("iommu/vt-d: Manage scalalble mode PASID tables")
    524a669bdd5f ("iommu/vt-d: remove the mapping_error dma_map_ops method")
    932a6523ce39 ("iommu/vt-d: Use dev_printk() when possible")
    964f2311a686 ("iommu/intel: small map_page cleanup")
    ef848b7e5a6a ("iommu/vt-d: Setup pasid entry for RID2PASID support")
    f7b0c4ce8cb3 ("iommu/vt-d: Flush IOTLB for untrusted device in time")

v4.14.133: Failed to apply! Possible dependencies:
    0bbeb01a4faf ("iommu/vt-d: Manage scalalble mode PASID tables")
    2e2e35d51279 ("iommu/vt-d: Missing checks for pasid tables if allocation fails")
    2f13eb7c580f ("iommu/vt-d: Enable 5-level paging mode in the PASID entry")
    3e781fcafedb ("iommu/vt-d: Remove unnecessary WARN_ON()")
    4774cc524570 ("iommu/vt-d: Apply per pci device pasid table in SVA")
    4fa064b26c2e ("iommu/vt-d: Clear pasid table entry when memory unbound")
    524a669bdd5f ("iommu/vt-d: remove the mapping_error dma_map_ops method")
    562831747f62 ("iommu/vt-d: Global PASID name space")
    7ec916f82c48 ("Revert "iommu/intel-iommu: Enable CONFIG_DMA_DIRECT_OPS=y and clean up intel_{alloc,free}_coherent()"")
    85319dcc8955 ("iommu/vt-d: Add for_each_device_domain() helper")
    932a6523ce39 ("iommu/vt-d: Use dev_printk() when possible")
    964f2311a686 ("iommu/intel: small map_page cleanup")
    971401015d14 ("iommu/vt-d: Use real PASID for flush in caching mode")
    9ddbfb42138d ("iommu/vt-d: Move device_domain_info to header")
    a7fc93fed94b ("iommu/vt-d: Allocate and free pasid table")
    af39507305fb ("iommu/vt-d: Apply global PASID in SVA")
    be9e6598aeb0 ("iommu/vt-d: Handle memory shortage on pasid table allocation")
    cc580e41260d ("iommu/vt-d: Per PCI device pasid table interfaces")
    d657c5c73ca9 ("iommu/intel-iommu: Enable CONFIG_DMA_DIRECT_OPS=y and clean up intel_{alloc,free}_coherent()")
    ef848b7e5a6a ("iommu/vt-d: Setup pasid entry for RID2PASID support")
    f7b0c4ce8cb3 ("iommu/vt-d: Flush IOTLB for untrusted device in time")


NOTE: The patch will not be queued to stable trees until it is upstream.

How should we proceed with this patch?

--
Thanks,
Sasha
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

* Re: [PATCH 2/2] iommu/vt-d: Check if domain->pgd was allocated
  2019-07-16 21:38   ` Dmitry Safonov via iommu
@ 2019-07-19  9:15     ` Lu Baolu
  -1 siblings, 0 replies; 14+ messages in thread
From: Lu Baolu @ 2019-07-19  9:15 UTC (permalink / raw)
  To: Dmitry Safonov, linux-kernel
  Cc: baolu.lu, Dmitry Safonov, David Woodhouse, Joerg Roedel, iommu

Hi,

On 7/17/19 5:38 AM, Dmitry Safonov wrote:
> There is a couple of places where on domain_init() failure domain_exit()
> is called. While currently domain_init() can fail only if
> alloc_pgtable_page() has failed.
> 
> Make domain_exit() check if domain->pgd present, before calling
> domain_unmap(), as it theoretically should crash on clearing pte entries
> in dma_pte_clear_level().
> 
> Cc: David Woodhouse <dwmw2@infradead.org>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Lu Baolu <baolu.lu@linux.intel.com>

This looks good to me. Thank you!

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>

Best regards,
Baolu

> Cc: iommu@lists.linux-foundation.org
> Signed-off-by: Dmitry Safonov <dima@arista.com>
> ---
>   drivers/iommu/intel-iommu.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index 6d1510284d21..698cc40355ef 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -1835,7 +1835,6 @@ static inline int guestwidth_to_adjustwidth(int gaw)
>   
>   static void domain_exit(struct dmar_domain *domain)
>   {
> -	struct page *freelist;
>   
>   	/* Remove associated devices and clear attached or cached domains */
>   	domain_remove_dev_info(domain);
> @@ -1843,9 +1842,12 @@ static void domain_exit(struct dmar_domain *domain)
>   	/* destroy iovas */
>   	put_iova_domain(&domain->iovad);
>   
> -	freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw));
> +	if (domain->pgd) {
> +		struct page *freelist;
>   
> -	dma_free_pagelist(freelist);
> +		freelist = domain_unmap(domain, 0, DOMAIN_MAX_PFN(domain->gaw));
> +		dma_free_pagelist(freelist);
> +	}
>   
>   	free_domain_mem(domain);
>   }
> 


* Re: [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue
  2019-07-16 21:38 ` Dmitry Safonov via iommu
@ 2019-07-19  9:26   ` Lu Baolu
  -1 siblings, 0 replies; 14+ messages in thread
From: Lu Baolu @ 2019-07-19  9:26 UTC (permalink / raw)
  To: Dmitry Safonov, linux-kernel
  Cc: baolu.lu, Dmitry Safonov, David Woodhouse, Joerg Roedel, iommu, stable

Hi,

On 7/17/19 5:38 AM, Dmitry Safonov wrote:
> Intel VT-d driver was reworked to use common deferred flushing
> implementation. Previously there was one global per-cpu flush queue,
> afterwards - one per domain.
> 
> Before deferring a flush, the queue should be allocated and initialized.
> 
> Currently only domains with IOMMU_DOMAIN_DMA type initialize their flush
> queue. It's probably worth to init it for static or unmanaged domains
> too, but it may be arguable - I'm leaving it to iommu folks.

We will submit per-device dma ops soon. Then we won't need to call
intel_unmap() for the identity (static) domain. For unmanaged domains,
map/unmap happens only during VM startup/shutdown, so I am not sure
whether it's worth a flush queue.

This fix looks good to me anyway. We should always avoid deferring a
flush if there's no flush queue there.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>

Best regards,
Baolu

> 
> Prevent queuing an iova flush if the domain doesn't have a queue.
> The defensive check seems to be worth to keep even if queue would be
> initialized for all kinds of domains. And is easy backportable.
> 
> On 4.19.43 stable kernel it has a user-visible effect: previously for
> devices in si domain there were crashes, on sata devices:
> 
>   BUG: spinlock bad magic on CPU#6, swapper/0/1
>    lock: 0xffff88844f582008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
>   CPU: 6 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #1
>   Call Trace:
>    <IRQ>
>    dump_stack+0x61/0x7e
>    spin_bug+0x9d/0xa3
>    do_raw_spin_lock+0x22/0x8e
>    _raw_spin_lock_irqsave+0x32/0x3a
>    queue_iova+0x45/0x115
>    intel_unmap+0x107/0x113
>    intel_unmap_sg+0x6b/0x76
>    __ata_qc_complete+0x7f/0x103
>    ata_qc_complete+0x9b/0x26a
>    ata_qc_complete_multiple+0xd0/0xe3
>    ahci_handle_port_interrupt+0x3ee/0x48a
>    ahci_handle_port_intr+0x73/0xa9
>    ahci_single_level_irq_intr+0x40/0x60
>    __handle_irq_event_percpu+0x7f/0x19a
>    handle_irq_event_percpu+0x32/0x72
>    handle_irq_event+0x38/0x56
>    handle_edge_irq+0x102/0x121
>    handle_irq+0x147/0x15c
>    do_IRQ+0x66/0xf2
>    common_interrupt+0xf/0xf
>   RIP: 0010:__do_softirq+0x8c/0x2df
> 
> The same for usb devices that use ehci-pci:
>   BUG: spinlock bad magic on CPU#0, swapper/0/1
>    lock: 0xffff88844f402008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
>   CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #4
>   Call Trace:
>    <IRQ>
>    dump_stack+0x61/0x7e
>    spin_bug+0x9d/0xa3
>    do_raw_spin_lock+0x22/0x8e
>    _raw_spin_lock_irqsave+0x32/0x3a
>    queue_iova+0x77/0x145
>    intel_unmap+0x107/0x113
>    intel_unmap_page+0xe/0x10
>    usb_hcd_unmap_urb_setup_for_dma+0x53/0x9d
>    usb_hcd_unmap_urb_for_dma+0x17/0x100
>    unmap_urb_for_dma+0x22/0x24
>    __usb_hcd_giveback_urb+0x51/0xc3
>    usb_giveback_urb_bh+0x97/0xde
>    tasklet_action_common.isra.4+0x5f/0xa1
>    tasklet_action+0x2d/0x30
>    __do_softirq+0x138/0x2df
>    irq_exit+0x7d/0x8b
>    smp_apic_timer_interrupt+0x10f/0x151
>    apic_timer_interrupt+0xf/0x20
>    </IRQ>
>   RIP: 0010:_raw_spin_unlock_irqrestore+0x17/0x39
> 
> Cc: David Woodhouse <dwmw2@infradead.org>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Lu Baolu <baolu.lu@linux.intel.com>
> Cc: iommu@lists.linux-foundation.org
> Cc: <stable@vger.kernel.org> # 4.14+
> Fixes: 13cf01744608 ("iommu/vt-d: Make use of iova deferred flushing")
> Signed-off-by: Dmitry Safonov <dima@arista.com>
> ---
>   drivers/iommu/intel-iommu.c |  3 ++-
>   drivers/iommu/iova.c        | 18 ++++++++++++++----
>   include/linux/iova.h        |  6 ++++++
>   3 files changed, 22 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index ac4172c02244..6d1510284d21 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -3564,7 +3564,8 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
>   
>   	freelist = domain_unmap(domain, start_pfn, last_pfn);
>   
> -	if (intel_iommu_strict || (pdev && pdev->untrusted)) {
> +	if (intel_iommu_strict || (pdev && pdev->untrusted) ||
> +			!has_iova_flush_queue(&domain->iovad)) {
>   		iommu_flush_iotlb_psi(iommu, domain, start_pfn,
>   				      nrpages, !freelist, 0);
>   		/* free iova */
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index d499b2621239..8413ae54904a 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -54,9 +54,14 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>   }
>   EXPORT_SYMBOL_GPL(init_iova_domain);
>   
> +bool has_iova_flush_queue(struct iova_domain *iovad)
> +{
> +	return !!iovad->fq;
> +}
> +
>   static void free_iova_flush_queue(struct iova_domain *iovad)
>   {
> -	if (!iovad->fq)
> +	if (!has_iova_flush_queue(iovad))
>   		return;
>   
>   	if (timer_pending(&iovad->fq_timer))
> @@ -74,13 +79,14 @@ static void free_iova_flush_queue(struct iova_domain *iovad)
>   int init_iova_flush_queue(struct iova_domain *iovad,
>   			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor)
>   {
> +	struct iova_fq __percpu *queue;
>   	int cpu;
>   
>   	atomic64_set(&iovad->fq_flush_start_cnt,  0);
>   	atomic64_set(&iovad->fq_flush_finish_cnt, 0);
>   
> -	iovad->fq = alloc_percpu(struct iova_fq);
> -	if (!iovad->fq)
> +	queue = alloc_percpu(struct iova_fq);
> +	if (!queue)
>   		return -ENOMEM;
>   
>   	iovad->flush_cb   = flush_cb;
> @@ -89,13 +95,17 @@ int init_iova_flush_queue(struct iova_domain *iovad,
>   	for_each_possible_cpu(cpu) {
>   		struct iova_fq *fq;
>   
> -		fq = per_cpu_ptr(iovad->fq, cpu);
> +		fq = per_cpu_ptr(queue, cpu);
>   		fq->head = 0;
>   		fq->tail = 0;
>   
>   		spin_lock_init(&fq->lock);
>   	}
>   
> +	smp_wmb();
> +
> +	iovad->fq = queue;
> +
>   	timer_setup(&iovad->fq_timer, fq_flush_timeout, 0);
>   	atomic_set(&iovad->fq_timer_on, 0);
>   
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index 781b96ac706f..cd0f1de901a8 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -155,6 +155,7 @@ struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
>   void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
>   void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>   	unsigned long start_pfn);
> +bool has_iova_flush_queue(struct iova_domain *iovad);
>   int init_iova_flush_queue(struct iova_domain *iovad,
>   			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor);
>   struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
> @@ -235,6 +236,11 @@ static inline void init_iova_domain(struct iova_domain *iovad,
>   {
>   }
>   
> +bool has_iova_flush_queue(struct iova_domain *iovad)
> +{
> +	return false;
> +}
> +
>   static inline int init_iova_flush_queue(struct iova_domain *iovad,
>   					iova_flush_cb flush_cb,
>   					iova_entry_dtor entry_dtor)
> 

* Re: [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue
@ 2019-07-19  9:26   ` Lu Baolu
  0 siblings, 0 replies; 14+ messages in thread
From: Lu Baolu @ 2019-07-19  9:26 UTC (permalink / raw)
  To: Dmitry Safonov, linux-kernel
  Cc: Dmitry Safonov, stable, iommu, David Woodhouse

Hi,

On 7/17/19 5:38 AM, Dmitry Safonov wrote:
> Intel VT-d driver was reworked to use common deferred flushing
> implementation. Previously there was one global per-cpu flush queue,
> afterwards - one per domain.
> 
> Before deferring a flush, the queue should be allocated and initialized.
> 
> Currently only domains of the IOMMU_DOMAIN_DMA type initialize their flush
> queue. It's probably worth initializing it for static or unmanaged domains
> too, but that may be arguable - I'm leaving it to the iommu folks.

We will submit per-device dma ops soon; then we won't need to call
intel_unmap() for the identity (static) domain. For unmanaged domains,
map/unmap happens only during VM startup/shutdown, so I am not sure
whether a flush queue is worth it there.

This fix looks good to me anyway. We should always avoid deferring a
flush if there's no flush queue there.
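The guard the patch adds to intel_unmap() boils down to: flush the IOTLB
synchronously whenever strict mode is on or no flush queue exists, and
only otherwise defer via queue_iova(). A minimal userspace sketch of that
decision (mock types; the real kernel structures and flush paths differ):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal userspace mock of the guard; the real struct iova_domain and
 * the IOTLB flush machinery are far richer than this. */
struct iova_domain {
	void *fq;	/* NULL until init_iova_flush_queue() succeeds */
};

static bool has_iova_flush_queue(struct iova_domain *iovad)
{
	return iovad->fq != NULL;
}

/* true: flush synchronously and free the iova immediately;
 * false: it is safe to defer the flush via queue_iova(). */
static bool must_flush_synchronously(struct iova_domain *iovad, bool strict)
{
	return strict || !has_iova_flush_queue(iovad);
}
```

Without the has_iova_flush_queue() term, an unmap on a domain whose queue
was never allocated would take the deferred path and lock an uninitialized
per-cpu spinlock - exactly the "spinlock bad magic" splat in the report.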

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>

Best regards,
Baolu

> 
> Prevent queuing an iova flush if the domain doesn't have a queue.
> The defensive check seems worth keeping even if the queue were
> initialized for all kinds of domains, and it is easily backportable.
> 
> On 4.19.43 stable kernel it has a user-visible effect: previously for
> devices in si domain there were crashes, on sata devices:
> 
>   BUG: spinlock bad magic on CPU#6, swapper/0/1
>    lock: 0xffff88844f582008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
>   CPU: 6 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #1
>   Call Trace:
>    <IRQ>
>    dump_stack+0x61/0x7e
>    spin_bug+0x9d/0xa3
>    do_raw_spin_lock+0x22/0x8e
>    _raw_spin_lock_irqsave+0x32/0x3a
>    queue_iova+0x45/0x115
>    intel_unmap+0x107/0x113
>    intel_unmap_sg+0x6b/0x76
>    __ata_qc_complete+0x7f/0x103
>    ata_qc_complete+0x9b/0x26a
>    ata_qc_complete_multiple+0xd0/0xe3
>    ahci_handle_port_interrupt+0x3ee/0x48a
>    ahci_handle_port_intr+0x73/0xa9
>    ahci_single_level_irq_intr+0x40/0x60
>    __handle_irq_event_percpu+0x7f/0x19a
>    handle_irq_event_percpu+0x32/0x72
>    handle_irq_event+0x38/0x56
>    handle_edge_irq+0x102/0x121
>    handle_irq+0x147/0x15c
>    do_IRQ+0x66/0xf2
>    common_interrupt+0xf/0xf
>   RIP: 0010:__do_softirq+0x8c/0x2df
> 
> The same for usb devices that use ehci-pci:
>   BUG: spinlock bad magic on CPU#0, swapper/0/1
>    lock: 0xffff88844f402008, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
>   CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.19.43 #4
>   Call Trace:
>    <IRQ>
>    dump_stack+0x61/0x7e
>    spin_bug+0x9d/0xa3
>    do_raw_spin_lock+0x22/0x8e
>    _raw_spin_lock_irqsave+0x32/0x3a
>    queue_iova+0x77/0x145
>    intel_unmap+0x107/0x113
>    intel_unmap_page+0xe/0x10
>    usb_hcd_unmap_urb_setup_for_dma+0x53/0x9d
>    usb_hcd_unmap_urb_for_dma+0x17/0x100
>    unmap_urb_for_dma+0x22/0x24
>    __usb_hcd_giveback_urb+0x51/0xc3
>    usb_giveback_urb_bh+0x97/0xde
>    tasklet_action_common.isra.4+0x5f/0xa1
>    tasklet_action+0x2d/0x30
>    __do_softirq+0x138/0x2df
>    irq_exit+0x7d/0x8b
>    smp_apic_timer_interrupt+0x10f/0x151
>    apic_timer_interrupt+0xf/0x20
>    </IRQ>
>   RIP: 0010:_raw_spin_unlock_irqrestore+0x17/0x39
> 
> Cc: David Woodhouse <dwmw2@infradead.org>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: Lu Baolu <baolu.lu@linux.intel.com>
> Cc: iommu@lists.linux-foundation.org
> Cc: <stable@vger.kernel.org> # 4.14+
> Fixes: 13cf01744608 ("iommu/vt-d: Make use of iova deferred flushing")
> Signed-off-by: Dmitry Safonov <dima@arista.com>
> ---
>   drivers/iommu/intel-iommu.c |  3 ++-
>   drivers/iommu/iova.c        | 18 ++++++++++++++----
>   include/linux/iova.h        |  6 ++++++
>   3 files changed, 22 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index ac4172c02244..6d1510284d21 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -3564,7 +3564,8 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
>   
>   	freelist = domain_unmap(domain, start_pfn, last_pfn);
>   
> -	if (intel_iommu_strict || (pdev && pdev->untrusted)) {
> +	if (intel_iommu_strict || (pdev && pdev->untrusted) ||
> +			!has_iova_flush_queue(&domain->iovad)) {
>   		iommu_flush_iotlb_psi(iommu, domain, start_pfn,
>   				      nrpages, !freelist, 0);
>   		/* free iova */
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index d499b2621239..8413ae54904a 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -54,9 +54,14 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>   }
>   EXPORT_SYMBOL_GPL(init_iova_domain);
>   
> +bool has_iova_flush_queue(struct iova_domain *iovad)
> +{
> +	return !!iovad->fq;
> +}
> +
>   static void free_iova_flush_queue(struct iova_domain *iovad)
>   {
> -	if (!iovad->fq)
> +	if (!has_iova_flush_queue(iovad))
>   		return;
>   
>   	if (timer_pending(&iovad->fq_timer))
> @@ -74,13 +79,14 @@ static void free_iova_flush_queue(struct iova_domain *iovad)
>   int init_iova_flush_queue(struct iova_domain *iovad,
>   			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor)
>   {
> +	struct iova_fq __percpu *queue;
>   	int cpu;
>   
>   	atomic64_set(&iovad->fq_flush_start_cnt,  0);
>   	atomic64_set(&iovad->fq_flush_finish_cnt, 0);
>   
> -	iovad->fq = alloc_percpu(struct iova_fq);
> -	if (!iovad->fq)
> +	queue = alloc_percpu(struct iova_fq);
> +	if (!queue)
>   		return -ENOMEM;
>   
>   	iovad->flush_cb   = flush_cb;
> @@ -89,13 +95,17 @@ int init_iova_flush_queue(struct iova_domain *iovad,
>   	for_each_possible_cpu(cpu) {
>   		struct iova_fq *fq;
>   
> -		fq = per_cpu_ptr(iovad->fq, cpu);
> +		fq = per_cpu_ptr(queue, cpu);
>   		fq->head = 0;
>   		fq->tail = 0;
>   
>   		spin_lock_init(&fq->lock);
>   	}
>   
> +	smp_wmb();
> +
> +	iovad->fq = queue;
> +
>   	timer_setup(&iovad->fq_timer, fq_flush_timeout, 0);
>   	atomic_set(&iovad->fq_timer_on, 0);
>   
> diff --git a/include/linux/iova.h b/include/linux/iova.h
> index 781b96ac706f..cd0f1de901a8 100644
> --- a/include/linux/iova.h
> +++ b/include/linux/iova.h
> @@ -155,6 +155,7 @@ struct iova *reserve_iova(struct iova_domain *iovad, unsigned long pfn_lo,
>   void copy_reserved_iova(struct iova_domain *from, struct iova_domain *to);
>   void init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>   	unsigned long start_pfn);
> +bool has_iova_flush_queue(struct iova_domain *iovad);
>   int init_iova_flush_queue(struct iova_domain *iovad,
>   			  iova_flush_cb flush_cb, iova_entry_dtor entry_dtor);
>   struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn);
> @@ -235,6 +236,11 @@ static inline void init_iova_domain(struct iova_domain *iovad,
>   {
>   }
>   
> +bool has_iova_flush_queue(struct iova_domain *iovad)
> +{
> +	return false;
> +}
> +
>   static inline int init_iova_flush_queue(struct iova_domain *iovad,
>   					iova_flush_cb flush_cb,
>   					iova_entry_dtor entry_dtor)
> 
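The iova.c hunk above also fixes an ordering hazard: iovad->fq may only
become non-NULL after every per-cpu queue entry is initialized, hence the
smp_wmb() before the assignment. The same init-then-publish pattern can be
sketched in userspace C11, with a release store standing in for the
kernel's barrier (names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

/* Userspace sketch of the patch's ordering fix: the queue pointer is
 * published only after its contents are fully initialized. */
struct fq { int head, tail; };
struct domain { _Atomic(struct fq *) fq; };

static int init_flush_queue(struct domain *d)
{
	struct fq *queue = malloc(sizeof(*queue));

	if (!queue)
		return -1;
	queue->head = 0;	/* fully initialize the queue first... */
	queue->tail = 0;
	/* ...then publish it: a reader that observes fq != NULL is
	 * guaranteed to also observe the initialized contents. */
	atomic_store_explicit(&d->fq, queue, memory_order_release);
	return 0;
}

static bool has_flush_queue(struct domain *d)
{
	return atomic_load_explicit(&d->fq, memory_order_acquire) != NULL;
}
```

Building into a local `queue` variable also means a failed alloc_percpu()
leaves iovad->fq untouched, so has_iova_flush_queue() stays false.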
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


* Re: [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue
  2019-07-16 21:38 ` Dmitry Safonov via iommu
@ 2019-07-22 15:44   ` Joerg Roedel
  -1 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2019-07-22 15:44 UTC (permalink / raw)
  To: Dmitry Safonov
  Cc: linux-kernel, Dmitry Safonov, David Woodhouse, Lu Baolu, iommu, stable

On Tue, Jul 16, 2019 at 10:38:05PM +0100, Dmitry Safonov wrote:
>  BUG: spinlock bad magic on CPU#6, swapper/0/1

Applied both patches to iommu/fixes.



* Re: [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue
  2019-07-16 21:38 ` Dmitry Safonov via iommu
@ 2019-07-23  8:17   ` Joerg Roedel
  -1 siblings, 0 replies; 14+ messages in thread
From: Joerg Roedel @ 2019-07-23  8:17 UTC (permalink / raw)
  To: Dmitry Safonov
  Cc: linux-kernel, Dmitry Safonov, David Woodhouse, Lu Baolu, iommu, stable

On Tue, Jul 16, 2019 at 10:38:05PM +0100, Dmitry Safonov wrote:

> @@ -235,6 +236,11 @@ static inline void init_iova_domain(struct iova_domain *iovad,
>  {
>  }
>  
> +bool has_iova_flush_queue(struct iova_domain *iovad)
> +{
> +	return false;
> +}
> +

This needs to be 'static inline', I queued a patch on-top of my fixes
branch.

Regards,

	Joerg
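The follow-up fix Joerg refers to presumably turns the !CONFIG_IOMMU_IOVA
header stub into a static inline, so each translation unit gets a local
no-op copy instead of a multiply-defined external symbol. The corrected
stub would look along these lines (sketch, not the queued patch itself):

```c
#include <assert.h>
#include <stdbool.h>

/* Corrected header stub: 'static inline' keeps the no-op fallback local
 * to each file that includes the header, avoiding multiple-definition
 * link errors when CONFIG_IOMMU_IOVA is unset. */
struct iova_domain;

static inline bool has_iova_flush_queue(struct iova_domain *iovad)
{
	return false;
}
```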



* Re: [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue
  2019-07-23  8:17   ` Joerg Roedel
  (?)
@ 2019-07-23  8:49   ` Dmitry Safonov via iommu
  -1 siblings, 0 replies; 14+ messages in thread
From: Dmitry Safonov via iommu @ 2019-07-23  8:49 UTC (permalink / raw)
  To: Joerg Roedel; +Cc: Dmitry Safonov, LKML, stable, iommu, David Woodhouse



On Tue 23 Jul 2019, 9:17 a.m. Joerg Roedel, <joro@8bytes.org> wrote:

> On Tue, Jul 16, 2019 at 10:38:05PM +0100, Dmitry Safonov wrote:
>
> > @@ -235,6 +236,11 @@ static inline void init_iova_domain(struct
> iova_domain *iovad,
> >  {
> >  }
> >
> > +bool has_iova_flush_queue(struct iova_domain *iovad)
> > +{
> > +     return false;
> > +}
> > +
>
> This needs to be 'static inline', I queued a patch on-top of my fixes
> branch.


Ugh, copy-n-paste declaration.

Thanks much,
Dmitry




end of thread, other threads:[~2019-07-23  8:49 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-16 21:38 [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue Dmitry Safonov
2019-07-16 21:38 ` Dmitry Safonov via iommu
2019-07-16 21:38 ` [PATCH 2/2] iommu/vt-d: Check if domain->pgd was allocated Dmitry Safonov
2019-07-16 21:38   ` Dmitry Safonov via iommu
2019-07-19  9:15   ` Lu Baolu
2019-07-19  9:15     ` Lu Baolu
2019-07-16 23:57 ` [PATCH 1/2] iommu/vt-d: Don't queue_iova() if there is no flush queue Sasha Levin
2019-07-19  9:26 ` Lu Baolu
2019-07-19  9:26   ` Lu Baolu
2019-07-22 15:44 ` Joerg Roedel
2019-07-22 15:44   ` Joerg Roedel
2019-07-23  8:17 ` Joerg Roedel
2019-07-23  8:17   ` Joerg Roedel
2019-07-23  8:49   ` Dmitry Safonov via iommu
