qemu-devel.nongnu.org archive mirror
* [RFC PATCH v2 0/2] Attempt to fix DMA reentrancy issues
@ 2021-12-17 14:37 Alexander Bulekov
  2021-12-17 14:37 ` [RFC PATCH v2 1/2] memory: fix dma-reentrancy issues at the MMIO level Alexander Bulekov
  2021-12-17 14:37 ` [RFC PATCH v2 2/2] memory: set engaged_in_io when a device calls DMA APIs Alexander Bulekov
  0 siblings, 2 replies; 3+ messages in thread
From: Alexander Bulekov @ 2021-12-17 14:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Laurent Vivier, Peter Maydell, Mauro Matteo Cascella,
	Darren Kenny, David Hildenbrand, Jason Wang, Bin Meng, Li Qiang,
	Qiuhao Li, Peter Xu, Alexander Bulekov, Bandan Das,
	Gerd Hoffmann, Stefan Hajnoczi, Paolo Bonzini, Thomas Huth,
	Edgar E . Iglesias, Philippe Mathieu-Daudé

Here's my shot at fixing dma-reentrancy issues. This series adds a flag to
the DeviceState, which is set/checked when we call an accessor
associated with the device's IO MRs.

The problem, in short, as I understand it: for the vast majority of
cases, we want to prevent a device from accessing its own PIO/MMIO
regions over DMA.

V2: Try to fix reentrancies initiated by DMA accesses in BHs

Cc: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: Mauro Matteo Cascella <mcascell@redhat.com>
Cc: Qiuhao Li <Qiuhao.Li@outlook.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Li Qiang <liq3ea@gmail.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Laurent Vivier <lvivier@redhat.com>
Cc: Bandan Das <bsd@redhat.com>
Cc: Edgar E. Iglesias <edgar.iglesias@gmail.com>
Cc: Darren Kenny <darren.kenny@oracle.com>
Cc: Bin Meng <bin.meng@windriver.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>


Alexander Bulekov (2):
  memory: fix dma-reentrancy issues at the MMIO level
  memory: set engaged_in_io when a device calls DMA APIs

 include/hw/pci/pci.h   |  6 +++++-
 include/hw/qdev-core.h |  1 +
 softmmu/dma-helpers.c  |  2 ++
 softmmu/memory.c       | 15 +++++++++++++++
 softmmu/trace-events   |  1 +
 5 files changed, 24 insertions(+), 1 deletion(-)

-- 
2.33.0




* [RFC PATCH v2 1/2] memory: fix dma-reentrancy issues at the MMIO level
  2021-12-17 14:37 [RFC PATCH v2 0/2] Attempt to fix DMA reentrancy issues Alexander Bulekov
@ 2021-12-17 14:37 ` Alexander Bulekov
  2021-12-17 14:37 ` [RFC PATCH v2 2/2] memory: set engaged_in_io when a device calls DMA APIs Alexander Bulekov
  1 sibling, 0 replies; 3+ messages in thread
From: Alexander Bulekov @ 2021-12-17 14:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: Eduardo Habkost, Daniel P. Berrangé,
	David Hildenbrand, Peter Xu, Alexander Bulekov, Paolo Bonzini,
	Philippe Mathieu-Daudé

Add a flag to the DeviceState, which is set/checked when we call an
accessor associated with the device's IO MRs.  This should prevent a
device's MMIO handler from using DMA APIs to access its own MMIO
regions.

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 include/hw/qdev-core.h |  1 +
 softmmu/memory.c       | 15 +++++++++++++++
 softmmu/trace-events   |  1 +
 3 files changed, 17 insertions(+)

diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
index 20d3066595..7b93b017c5 100644
--- a/include/hw/qdev-core.h
+++ b/include/hw/qdev-core.h
@@ -193,6 +193,7 @@ struct DeviceState {
     int instance_id_alias;
     int alias_required_for_version;
     ResettableState reset;
+    int engaged_in_io;
 };
 
 struct DeviceListener {
diff --git a/softmmu/memory.c b/softmmu/memory.c
index 7340e19ff5..229c63a68d 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -532,6 +532,7 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
     uint64_t access_mask;
     unsigned access_size;
     unsigned i;
+    DeviceState *dev = NULL;
     MemTxResult r = MEMTX_OK;
 
     if (!access_size_min) {
@@ -541,6 +542,17 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
         access_size_max = 4;
     }
 
+    /* Do not allow more than one simultaneous access to a device's IO regions */
+    if (mr->owner &&
+        !mr->ram_device && !mr->ram && !mr->rom_device && !mr->readonly) {
+        dev = (DeviceState *) object_dynamic_cast(mr->owner, TYPE_DEVICE);
+        if (dev && dev->engaged_in_io) {
+            trace_memory_region_reentrant_io(get_cpu_index(), mr, addr, size);
+            return MEMTX_ERROR;
+        }
+        if (dev) dev->engaged_in_io = true;
+    }
+
     /* FIXME: support unaligned access? */
     access_size = MAX(MIN(size, access_size_max), access_size_min);
     access_mask = MAKE_64BIT_MASK(0, access_size * 8);
@@ -555,6 +567,9 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
                         access_mask, attrs);
         }
     }
+    if (dev) {
+        dev->engaged_in_io = false;
+    }
     return r;
 }
 
diff --git a/softmmu/trace-events b/softmmu/trace-events
index 9c88887b3c..d7228316db 100644
--- a/softmmu/trace-events
+++ b/softmmu/trace-events
@@ -13,6 +13,7 @@ memory_region_ops_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, u
 memory_region_ops_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size, const char *name) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u name '%s'"
 memory_region_subpage_read(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_subpage_write(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
+memory_region_reentrant_io(int cpu_index, void *mr, uint64_t offset, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" size %u"
 memory_region_ram_device_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_ram_device_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_sync_dirty(const char *mr, const char *listener, int global) "mr '%s' listener '%s' synced (global=%d)"
-- 
2.33.0




* [RFC PATCH v2 2/2] memory: set engaged_in_io when a device calls DMA APIs
  2021-12-17 14:37 [RFC PATCH v2 0/2] Attempt to fix DMA reentrancy issues Alexander Bulekov
  2021-12-17 14:37 ` [RFC PATCH v2 1/2] memory: fix dma-reentrancy issues at the MMIO level Alexander Bulekov
@ 2021-12-17 14:37 ` Alexander Bulekov
  1 sibling, 0 replies; 3+ messages in thread
From: Alexander Bulekov @ 2021-12-17 14:37 UTC (permalink / raw)
  To: qemu-devel
  Cc: David Hildenbrand, Michael S. Tsirkin, Peter Xu,
	Alexander Bulekov, Paolo Bonzini, Philippe Mathieu-Daudé

DMA reentrancy problems can occur in BHs:
dev_mmio->schedule_bh
dev_bh->dma_write->dev_mmio

This patch attempts to address this scenario by marking the device as
engaged_in_io when it calls into the PCI and SGList DMA APIs.

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 include/hw/pci/pci.h  | 6 +++++-
 softmmu/dma-helpers.c | 2 ++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/include/hw/pci/pci.h b/include/hw/pci/pci.h
index e7cdf2d5ec..8420984b23 100644
--- a/include/hw/pci/pci.h
+++ b/include/hw/pci/pci.h
@@ -808,7 +808,11 @@ static inline MemTxResult pci_dma_rw(PCIDevice *dev, dma_addr_t addr,
                                      void *buf, dma_addr_t len,
                                      DMADirection dir)
 {
-    return dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
+    MemTxResult result;
+    dev->qdev.engaged_in_io = true;
+    result = dma_memory_rw(pci_get_address_space(dev), addr, buf, len, dir);
+    dev->qdev.engaged_in_io = false;
+    return result;
 }
 
 /**
diff --git a/softmmu/dma-helpers.c b/softmmu/dma-helpers.c
index 7d766a5e89..dd27ba4def 100644
--- a/softmmu/dma-helpers.c
+++ b/softmmu/dma-helpers.c
@@ -303,6 +303,7 @@ static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg,
     resid = sg->size;
     sg_cur_index = 0;
     len = MIN(len, resid);
+    sg->dev->engaged_in_io = true;
     while (len > 0) {
         ScatterGatherEntry entry = sg->sg[sg_cur_index++];
         int32_t xfer = MIN(len, entry.len);
@@ -311,6 +312,7 @@ static uint64_t dma_buf_rw(uint8_t *ptr, int32_t len, QEMUSGList *sg,
         len -= xfer;
         resid -= xfer;
     }
+    sg->dev->engaged_in_io = false;
 
     return resid;
 }
-- 
2.33.0



