* [PATCH v3 0/7] memory: prevent dma-reentrancy issues
@ 2022-10-28 19:16 Alexander Bulekov
  2022-10-28 19:16 ` [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device Alexander Bulekov
                   ` (8 more replies)
  0 siblings, 9 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-10-28 19:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexander Bulekov, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

These patches aim to solve two types of DMA-reentrancy issues:

1.) mmio -> dma -> mmio case
To solve this, we track whether the device is engaged in io by
checking/setting a flag within APIs used for MMIO access.

2.) bh -> dma write -> mmio case
This case is trickier, since we don't have a generic way to associate a
bh with the underlying Device/DeviceState. Thus, this version introduces
a change to QEMU's DMA APIs to associate each request with the
originating DeviceState. In total, the affected APIs are used in
approximately 250 locations:

dma_memory_valid (1 usage)
dma_memory_rw (~5 uses)
dma_memory_read (~92 uses)
dma_memory_write (~71 uses)
dma_memory_set (~4 uses)
dma_memory_map (~18 uses)
dma_memory_unmap (~21 uses)
{ld,st}_{le,be}_{uw,l,q}_dma (~10 uses)
ldub_dma (does not appear to be used anywhere)
stb_dma (1 usage)
dma_buf_read (~18 uses)
dma_buf_write (~7 uses)

It is not trivial to mechanically replace all of the invocations:
For many cases, this will be as simple as adding DEVICE(s) to the
arguments, but there are locations where the code will need to be
slightly changed. As such, for now I added "_guarded" versions of most
of the APIs which can be used until all of the invocations are fixed.
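
For illustration, a typical call-site conversion looks roughly like this
(a sketch modelled on the sdhci changes later in the series; "s" is the
device's state struct, and addr/buf/len are placeholders):

    /* before: the DMA layer has no idea which device initiated the access */
    dma_memory_read(s->dma_as, addr, buf, len, MEMTXATTRS_UNSPECIFIED);

    /* after: pass the initiating DeviceState so its MemReentrancyGuard
     * is held for the duration of the access */
    dma_memory_read_guarded(DEVICE(s), s->dma_as, addr, buf, len,
                            MEMTXATTRS_UNSPECIFIED);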

The end goal is to go through all of hw/ and make the required changes
(I will need help with this). Once that is done, the "_guarded" APIs can
take the place of the standard DMA APIs and we can mechanically remove
the "_guarded" suffix from all invocations.

These changes do not address devices that bypass the DMA APIs and call
directly into the address_space_* APIs. This occurs somewhat commonly, and
prevents me from fixing issues in Virtio devices, such as:
https://gitlab.com/qemu-project/qemu/-/issues/827
I'm not sure what approach we should take for these cases - maybe they
should be switched to DMA APIs (or the DMA API expanded).
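
For illustration, the kind of direct call I mean looks like this (sketch;
as/addr/buf/len are placeholders):

    /* goes straight to the address_space layer: no DeviceState is
     * associated with the access, so no reentrancy guard is taken */
    address_space_read(as, addr, MEMTXATTRS_UNSPECIFIED, buf, len);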

v2 -> v3: Bite the bullet and modify the DMA APIs, rather than
    attempting to guess DeviceStates in BHs.

Alexander Bulekov (7):
  memory: associate DMA accesses with the initiator Device
  dma-helpers: switch to guarded DMA accesses
  ahci: switch to guarded DMA accesses
  sdhci: switch to guarded DMA accesses
  ehci: switch to guarded DMA accesses
  xhci: switch to guarded DMA accesses
  usb/libhw: switch to guarded DMA accesses

 hw/ide/ahci.c          | 16 +++++++++-------
 hw/sd/sdhci.c          | 43 ++++++++++++++++++++++--------------------
 hw/usb/hcd-ehci.c      |  8 ++++----
 hw/usb/hcd-xhci.c      | 24 +++++++++++------------
 hw/usb/libhw.c         |  4 ++--
 include/hw/qdev-core.h |  2 ++
 include/sysemu/dma.h   | 41 ++++++++++++++++++++++++++++++++++++++++
 softmmu/dma-helpers.c  | 15 ++++++++-------
 softmmu/memory.c       | 15 +++++++++++++++
 softmmu/trace-events   |  1 +
 10 files changed, 117 insertions(+), 52 deletions(-)

-- 
2.27.0




* [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device
  2022-10-28 19:16 [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
@ 2022-10-28 19:16 ` Alexander Bulekov
  2022-11-14 20:08   ` Stefan Hajnoczi
                     ` (2 more replies)
  2022-10-28 19:16 ` [PATCH v3 2/7] dma-helpers: switch to guarded DMA accesses Alexander Bulekov
                   ` (7 subsequent siblings)
  8 siblings, 3 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-10-28 19:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexander Bulekov, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

Add transitional DMA APIs which associate accesses with the device
initiating them. The modified APIs maintain a "MemReentrancyGuard" in
the DeviceState, which is used to prevent DMA re-entrancy issues.
The MemReentrancyGuard is set/checked when entering IO handlers and when
initiating a DMA access.

1.) mmio -> dma -> mmio case
2.) bh -> dma write -> mmio case

These issues have led to problems such as stack-exhaustion and
use-after-frees.

Summary of the problem from Peter Maydell:
https://lore.kernel.org/qemu-devel/CAFEAcA_23vc7hE3iaM-JVA6W38LK4hJoWae5KcknhPRD5fPBZA@mail.gmail.com

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 include/hw/qdev-core.h |  2 ++
 include/sysemu/dma.h   | 41 +++++++++++++++++++++++++++++++++++++++++
 softmmu/memory.c       | 15 +++++++++++++++
 softmmu/trace-events   |  1 +
 4 files changed, 59 insertions(+)

diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
index 785dd5a56e..ab78d211af 100644
--- a/include/hw/qdev-core.h
+++ b/include/hw/qdev-core.h
@@ -8,6 +8,7 @@
 #include "qom/object.h"
 #include "hw/hotplug.h"
 #include "hw/resettable.h"
+#include "sysemu/dma.h"
 
 enum {
     DEV_NVECTORS_UNSPECIFIED = -1,
@@ -194,6 +195,7 @@ struct DeviceState {
     int alias_required_for_version;
     ResettableState reset;
     GSList *unplug_blockers;
+    MemReentrancyGuard mem_reentrancy_guard;
 };
 
 struct DeviceListener {
diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
index a1ac5bc1b5..879b666bbb 100644
--- a/include/sysemu/dma.h
+++ b/include/sysemu/dma.h
@@ -15,6 +15,10 @@
 #include "block/block.h"
 #include "block/accounting.h"
 
+typedef struct {
+    bool engaged_in_io;
+} MemReentrancyGuard;
+
 typedef enum {
     DMA_DIRECTION_TO_DEVICE = 0,
     DMA_DIRECTION_FROM_DEVICE = 1,
@@ -321,4 +325,41 @@ void dma_acct_start(BlockBackend *blk, BlockAcctCookie *cookie,
 uint64_t dma_aligned_pow2_mask(uint64_t start, uint64_t end,
                                int max_addr_bits);
 
+#define REENTRANCY_GUARD(func, ret_type, dev, ...) \
+    ({\
+     ret_type retval;\
+     MemReentrancyGuard prior_guard_state = dev->mem_reentrancy_guard;\
+     dev->mem_reentrancy_guard.engaged_in_io = 1;\
+     retval = func(__VA_ARGS__);\
+     dev->mem_reentrancy_guard = prior_guard_state;\
+     retval;\
+     })
+#define REENTRANCY_GUARD_NORET(func, dev, ...) \
+    ({\
+     MemReentrancyGuard prior_guard_state = dev->mem_reentrancy_guard;\
+     dev->mem_reentrancy_guard.engaged_in_io = 1;\
+     func(__VA_ARGS__);\
+     dev->mem_reentrancy_guard = prior_guard_state;\
+     })
+#define dma_memory_rw_guarded(dev, ...) \
+    REENTRANCY_GUARD(dma_memory_rw, MemTxResult, dev, __VA_ARGS__)
+#define dma_memory_read_guarded(dev, ...) \
+    REENTRANCY_GUARD(dma_memory_read, MemTxResult, dev, __VA_ARGS__)
+#define dma_memory_write_guarded(dev, ...) \
+    REENTRANCY_GUARD(dma_memory_write, MemTxResult, dev, __VA_ARGS__)
+#define dma_memory_set_guarded(dev, ...) \
+    REENTRANCY_GUARD(dma_memory_set, MemTxResult, dev, __VA_ARGS__)
+#define dma_memory_map_guarded(dev, ...) \
+    REENTRANCY_GUARD(dma_memory_map, void*, dev, __VA_ARGS__)
+#define dma_memory_unmap_guarded(dev, ...) \
+    REENTRANCY_GUARD_NORET(dma_memory_unmap, dev, __VA_ARGS__)
+#define ldub_dma_guarded(dev, ...) \
+    REENTRANCY_GUARD(ldub_dma, MemTxResult, dev, __VA_ARGS__)
+#define stb_dma_guarded(dev, ...) \
+    REENTRANCY_GUARD(stb_dma, MemTxResult, dev, __VA_ARGS__)
+#define dma_buf_read_guarded(dev, ...) \
+    REENTRANCY_GUARD(dma_buf_read, MemTxResult, dev, __VA_ARGS__)
+#define dma_buf_write_guarded(dev, ...) \
+    REENTRANCY_GUARD(dma_buf_write, MemTxResult, dev, __VA_ARGS__)
+
 #endif
diff --git a/softmmu/memory.c b/softmmu/memory.c
index 7ba2048836..c44dc75149 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -532,6 +532,7 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
     uint64_t access_mask;
     unsigned access_size;
     unsigned i;
+    DeviceState *dev = NULL;
     MemTxResult r = MEMTX_OK;
 
     if (!access_size_min) {
@@ -541,6 +542,17 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
         access_size_max = 4;
     }
 
+    /* Do not allow more than one simultaneous access to a device's IO Regions */
+    if (mr->owner &&
+            !mr->ram_device && !mr->ram && !mr->rom_device && !mr->readonly) {
+        dev = (DeviceState *) object_dynamic_cast(mr->owner, TYPE_DEVICE);
+        if (dev->mem_reentrancy_guard.engaged_in_io) {
+            trace_memory_region_reentrant_io(get_cpu_index(), mr, addr, size);
+            return MEMTX_ERROR;
+        }
+        dev->mem_reentrancy_guard.engaged_in_io = true;
+    }
+
     /* FIXME: support unaligned access? */
     access_size = MAX(MIN(size, access_size_max), access_size_min);
     access_mask = MAKE_64BIT_MASK(0, access_size * 8);
@@ -555,6 +567,9 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
                         access_mask, attrs);
         }
     }
+    if (dev) {
+        dev->mem_reentrancy_guard.engaged_in_io = false;
+    }
     return r;
 }
 
diff --git a/softmmu/trace-events b/softmmu/trace-events
index 22606dc27b..62d04ea9a7 100644
--- a/softmmu/trace-events
+++ b/softmmu/trace-events
@@ -13,6 +13,7 @@ memory_region_ops_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, u
 memory_region_ops_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size, const char *name) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u name '%s'"
 memory_region_subpage_read(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_subpage_write(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
+memory_region_reentrant_io(int cpu_index, void *mr, uint64_t offset, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" size %u"
 memory_region_ram_device_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_ram_device_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
 memory_region_sync_dirty(const char *mr, const char *listener, int global) "mr '%s' listener '%s' synced (global=%d)"
-- 
2.27.0




* [PATCH v3 2/7] dma-helpers: switch to guarded DMA accesses
  2022-10-28 19:16 [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
  2022-10-28 19:16 ` [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device Alexander Bulekov
@ 2022-10-28 19:16 ` Alexander Bulekov
  2022-10-28 19:16 ` [PATCH v3 3/7] ahci: switch to guarded DMA accesses Alexander Bulekov
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-10-28 19:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexander Bulekov, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 softmmu/dma-helpers.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/softmmu/dma-helpers.c b/softmmu/dma-helpers.c
index 7820fec54c..ba2ad23324 100644
--- a/softmmu/dma-helpers.c
+++ b/softmmu/dma-helpers.c
@@ -90,9 +90,9 @@ static void dma_blk_unmap(DMAAIOCB *dbs)
     int i;
 
     for (i = 0; i < dbs->iov.niov; ++i) {
-        dma_memory_unmap(dbs->sg->as, dbs->iov.iov[i].iov_base,
-                         dbs->iov.iov[i].iov_len, dbs->dir,
-                         dbs->iov.iov[i].iov_len);
+        dma_memory_unmap_guarded(dbs->sg->dev, dbs->sg->as,
+                dbs->iov.iov[i].iov_base, dbs->iov.iov[i].iov_len, dbs->dir,
+                dbs->iov.iov[i].iov_len);
     }
     qemu_iovec_reset(&dbs->iov);
 }
@@ -130,8 +130,8 @@ static void dma_blk_cb(void *opaque, int ret)
     while (dbs->sg_cur_index < dbs->sg->nsg) {
         cur_addr = dbs->sg->sg[dbs->sg_cur_index].base + dbs->sg_cur_byte;
         cur_len = dbs->sg->sg[dbs->sg_cur_index].len - dbs->sg_cur_byte;
-        mem = dma_memory_map(dbs->sg->as, cur_addr, &cur_len, dbs->dir,
-                             MEMTXATTRS_UNSPECIFIED);
+        mem = dma_memory_map_guarded(dbs->sg->dev, dbs->sg->as, cur_addr,
+                &cur_len, dbs->dir, MEMTXATTRS_UNSPECIFIED);
         /*
          * Make reads deterministic in icount mode. Windows sometimes issues
          * disk read requests with overlapping SGs. It leads
@@ -145,7 +145,7 @@ static void dma_blk_cb(void *opaque, int ret)
                 if (ranges_overlap((intptr_t)dbs->iov.iov[i].iov_base,
                                    dbs->iov.iov[i].iov_len, (intptr_t)mem,
                                    cur_len)) {
-                    dma_memory_unmap(dbs->sg->as, mem, cur_len,
+                    dma_memory_unmap_guarded(dbs->sg->dev, dbs->sg->as, mem, cur_len,
                                      dbs->dir, cur_len);
                     mem = NULL;
                     break;
@@ -296,7 +296,8 @@ static MemTxResult dma_buf_rw(void *buf, dma_addr_t len, dma_addr_t *residual,
     while (len > 0) {
         ScatterGatherEntry entry = sg->sg[sg_cur_index++];
         dma_addr_t xfer = MIN(len, entry.len);
-        res |= dma_memory_rw(sg->as, entry.base, ptr, xfer, dir, attrs);
+        res |= dma_memory_rw_guarded(sg->dev, sg->as, entry.base, ptr, xfer,
+                                     dir, attrs);
         ptr += xfer;
         len -= xfer;
         xresidual -= xfer;
-- 
2.27.0




* [PATCH v3 3/7] ahci: switch to guarded DMA accesses
  2022-10-28 19:16 [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
  2022-10-28 19:16 ` [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device Alexander Bulekov
  2022-10-28 19:16 ` [PATCH v3 2/7] dma-helpers: switch to guarded DMA accesses Alexander Bulekov
@ 2022-10-28 19:16 ` Alexander Bulekov
  2022-10-28 19:16 ` [PATCH v3 4/7] sdhci: switch to guarded DMA accesses Alexander Bulekov
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-10-28 19:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexander Bulekov, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/62
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 hw/ide/ahci.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/hw/ide/ahci.c b/hw/ide/ahci.c
index 7ce001cacd..ffa817eebe 100644
--- a/hw/ide/ahci.c
+++ b/hw/ide/ahci.c
@@ -240,19 +240,21 @@ static void ahci_trigger_irq(AHCIState *s, AHCIDevice *d,
     ahci_check_irq(s);
 }
 
-static void map_page(AddressSpace *as, uint8_t **ptr, uint64_t addr,
+static void map_page(AHCIDevice *ad, uint8_t **ptr, uint64_t addr,
                      uint32_t wanted)
 {
     hwaddr len = wanted;
 
     if (*ptr) {
-        dma_memory_unmap(as, *ptr, len, DMA_DIRECTION_FROM_DEVICE, len);
+        dma_memory_unmap_guarded(DEVICE(ad), ad->hba->as,
+                *ptr, len, DMA_DIRECTION_FROM_DEVICE, len);
     }
 
-    *ptr = dma_memory_map(as, addr, &len, DMA_DIRECTION_FROM_DEVICE,
-                          MEMTXATTRS_UNSPECIFIED);
+    *ptr = dma_memory_map_guarded(DEVICE(ad), ad->hba->as, addr, &len,
+                DMA_DIRECTION_FROM_DEVICE, MEMTXATTRS_UNSPECIFIED);
     if (len < wanted && *ptr) {
-        dma_memory_unmap(as, *ptr, len, DMA_DIRECTION_FROM_DEVICE, len);
+        dma_memory_unmap_guarded(DEVICE(ad), ad->hba->as, *ptr, len,
+                DMA_DIRECTION_FROM_DEVICE, len);
         *ptr = NULL;
     }
 }
@@ -720,7 +722,7 @@ static char *ahci_pretty_buffer_fis(const uint8_t *fis, int cmd_len)
 static bool ahci_map_fis_address(AHCIDevice *ad)
 {
     AHCIPortRegs *pr = &ad->port_regs;
-    map_page(ad->hba->as, &ad->res_fis,
+    map_page(ad, &ad->res_fis,
              ((uint64_t)pr->fis_addr_hi << 32) | pr->fis_addr, 256);
     if (ad->res_fis != NULL) {
         pr->cmd |= PORT_CMD_FIS_ON;
@@ -747,7 +749,7 @@ static bool ahci_map_clb_address(AHCIDevice *ad)
 {
     AHCIPortRegs *pr = &ad->port_regs;
     ad->cur_cmd = NULL;
-    map_page(ad->hba->as, &ad->lst,
+    map_page(ad, &ad->lst,
              ((uint64_t)pr->lst_addr_hi << 32) | pr->lst_addr, 1024);
     if (ad->lst != NULL) {
         pr->cmd |= PORT_CMD_LIST_ON;
-- 
2.27.0




* [PATCH v3 4/7] sdhci: switch to guarded DMA accesses
  2022-10-28 19:16 [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
                   ` (2 preceding siblings ...)
  2022-10-28 19:16 ` [PATCH v3 3/7] ahci: switch to guarded DMA accesses Alexander Bulekov
@ 2022-10-28 19:16 ` Alexander Bulekov
  2022-10-28 19:16 ` [PATCH v3 5/7] ehci: " Alexander Bulekov
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-10-28 19:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexander Bulekov, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1282
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 hw/sd/sdhci.c | 43 +++++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 20 deletions(-)

diff --git a/hw/sd/sdhci.c b/hw/sd/sdhci.c
index 0e5e988927..0ebbc13862 100644
--- a/hw/sd/sdhci.c
+++ b/hw/sd/sdhci.c
@@ -616,8 +616,9 @@ static void sdhci_sdma_transfer_multi_blocks(SDHCIState *s)
                     s->blkcnt--;
                 }
             }
-            dma_memory_write(s->dma_as, s->sdmasysad, &s->fifo_buffer[begin],
-                             s->data_count - begin, MEMTXATTRS_UNSPECIFIED);
+            dma_memory_write_guarded(DEVICE(s), s->dma_as, s->sdmasysad,
+                    &s->fifo_buffer[begin], s->data_count - begin,
+                    MEMTXATTRS_UNSPECIFIED);
             s->sdmasysad += s->data_count - begin;
             if (s->data_count == block_size) {
                 s->data_count = 0;
@@ -637,8 +638,9 @@ static void sdhci_sdma_transfer_multi_blocks(SDHCIState *s)
                 s->data_count = block_size;
                 boundary_count -= block_size - begin;
             }
-            dma_memory_read(s->dma_as, s->sdmasysad, &s->fifo_buffer[begin],
-                            s->data_count - begin, MEMTXATTRS_UNSPECIFIED);
+            dma_memory_read_guarded(DEVICE(s), s->dma_as, s->sdmasysad,
+                    &s->fifo_buffer[begin], s->data_count - begin,
+                    MEMTXATTRS_UNSPECIFIED);
             s->sdmasysad += s->data_count - begin;
             if (s->data_count == block_size) {
                 sdbus_write_data(&s->sdbus, s->fifo_buffer, block_size);
@@ -670,11 +672,11 @@ static void sdhci_sdma_transfer_single_block(SDHCIState *s)
 
     if (s->trnmod & SDHC_TRNS_READ) {
         sdbus_read_data(&s->sdbus, s->fifo_buffer, datacnt);
-        dma_memory_write(s->dma_as, s->sdmasysad, s->fifo_buffer, datacnt,
-                         MEMTXATTRS_UNSPECIFIED);
+        dma_memory_write_guarded(DEVICE(s), s->dma_as, s->sdmasysad,
+                s->fifo_buffer, datacnt, MEMTXATTRS_UNSPECIFIED);
     } else {
-        dma_memory_read(s->dma_as, s->sdmasysad, s->fifo_buffer, datacnt,
-                        MEMTXATTRS_UNSPECIFIED);
+        dma_memory_read_guarded(DEVICE(s), s->dma_as, s->sdmasysad,
+                s->fifo_buffer, datacnt, MEMTXATTRS_UNSPECIFIED);
         sdbus_write_data(&s->sdbus, s->fifo_buffer, datacnt);
     }
     s->blkcnt--;
@@ -696,8 +698,8 @@ static void get_adma_description(SDHCIState *s, ADMADescr *dscr)
     hwaddr entry_addr = (hwaddr)s->admasysaddr;
     switch (SDHC_DMA_TYPE(s->hostctl1)) {
     case SDHC_CTRL_ADMA2_32:
-        dma_memory_read(s->dma_as, entry_addr, &adma2, sizeof(adma2),
-                        MEMTXATTRS_UNSPECIFIED);
+        dma_memory_read_guarded(DEVICE(s), s->dma_as, entry_addr, &adma2,
+                sizeof(adma2), MEMTXATTRS_UNSPECIFIED);
         adma2 = le64_to_cpu(adma2);
         /* The spec does not specify endianness of descriptor table.
          * We currently assume that it is LE.
@@ -708,8 +710,8 @@ static void get_adma_description(SDHCIState *s, ADMADescr *dscr)
         dscr->incr = 8;
         break;
     case SDHC_CTRL_ADMA1_32:
-        dma_memory_read(s->dma_as, entry_addr, &adma1, sizeof(adma1),
-                        MEMTXATTRS_UNSPECIFIED);
+        dma_memory_read_guarded(DEVICE(s), s->dma_as, entry_addr, &adma1,
+                sizeof(adma1), MEMTXATTRS_UNSPECIFIED);
         adma1 = le32_to_cpu(adma1);
         dscr->addr = (hwaddr)(adma1 & 0xFFFFF000);
         dscr->attr = (uint8_t)extract32(adma1, 0, 7);
@@ -721,13 +723,13 @@ static void get_adma_description(SDHCIState *s, ADMADescr *dscr)
         }
         break;
     case SDHC_CTRL_ADMA2_64:
-        dma_memory_read(s->dma_as, entry_addr, &dscr->attr, 1,
-                        MEMTXATTRS_UNSPECIFIED);
-        dma_memory_read(s->dma_as, entry_addr + 2, &dscr->length, 2,
-                        MEMTXATTRS_UNSPECIFIED);
+        dma_memory_read_guarded(DEVICE(s), s->dma_as, entry_addr, &dscr->attr,
+                1, MEMTXATTRS_UNSPECIFIED);
+        dma_memory_read_guarded(DEVICE(s), s->dma_as, entry_addr + 2,
+                &dscr->length, 2, MEMTXATTRS_UNSPECIFIED);
         dscr->length = le16_to_cpu(dscr->length);
-        dma_memory_read(s->dma_as, entry_addr + 4, &dscr->addr, 8,
-                        MEMTXATTRS_UNSPECIFIED);
+        dma_memory_read_guarded(DEVICE(s), s->dma_as, entry_addr + 4,
+                &dscr->addr, 8, MEMTXATTRS_UNSPECIFIED);
         dscr->addr = le64_to_cpu(dscr->addr);
         dscr->attr &= (uint8_t) ~0xC0;
         dscr->incr = 12;
@@ -792,7 +794,7 @@ static void sdhci_do_adma(SDHCIState *s)
                         s->data_count = block_size;
                         length -= block_size - begin;
                     }
-                    res = dma_memory_write(s->dma_as, dscr.addr,
+                    res = dma_memory_write_guarded(DEVICE(s), s->dma_as, dscr.addr,
                                            &s->fifo_buffer[begin],
                                            s->data_count - begin,
                                            attrs);
@@ -821,7 +823,8 @@ static void sdhci_do_adma(SDHCIState *s)
                         s->data_count = block_size;
                         length -= block_size - begin;
                     }
-                    res = dma_memory_read(s->dma_as, dscr.addr,
+                    res = dma_memory_read_guarded(DEVICE(s), s->dma_as,
+                                          dscr.addr,
                                           &s->fifo_buffer[begin],
                                           s->data_count - begin,
                                           attrs);
-- 
2.27.0




* [PATCH v3 5/7] ehci: switch to guarded DMA accesses
  2022-10-28 19:16 [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
                   ` (3 preceding siblings ...)
  2022-10-28 19:16 ` [PATCH v3 4/7] sdhci: switch to guarded DMA accesses Alexander Bulekov
@ 2022-10-28 19:16 ` Alexander Bulekov
  2022-10-28 19:16 ` [PATCH v3 6/7] xhci: " Alexander Bulekov
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-10-28 19:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexander Bulekov, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 hw/usb/hcd-ehci.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index d4da8dcb8d..b93f4d44c1 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -383,8 +383,8 @@ static inline int get_dwords(EHCIState *ehci, uint32_t addr,
     }
 
     for (i = 0; i < num; i++, buf++, addr += sizeof(*buf)) {
-        dma_memory_read(ehci->as, addr, buf, sizeof(*buf),
-                        MEMTXATTRS_UNSPECIFIED);
+        dma_memory_read_guarded(DEVICE(ehci), ehci->as, addr, buf,
+                sizeof(*buf), MEMTXATTRS_UNSPECIFIED);
         *buf = le32_to_cpu(*buf);
     }
 
@@ -406,8 +406,8 @@ static inline int put_dwords(EHCIState *ehci, uint32_t addr,
 
     for (i = 0; i < num; i++, buf++, addr += sizeof(*buf)) {
         uint32_t tmp = cpu_to_le32(*buf);
-        dma_memory_write(ehci->as, addr, &tmp, sizeof(tmp),
-                         MEMTXATTRS_UNSPECIFIED);
+        dma_memory_write_guarded(DEVICE(ehci), ehci->as, addr, &tmp,
+                sizeof(tmp), MEMTXATTRS_UNSPECIFIED);
     }
 
     return num;
-- 
2.27.0




* [PATCH v3 6/7] xhci: switch to guarded DMA accesses
  2022-10-28 19:16 [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
                   ` (4 preceding siblings ...)
  2022-10-28 19:16 ` [PATCH v3 5/7] ehci: " Alexander Bulekov
@ 2022-10-28 19:16 ` Alexander Bulekov
  2022-10-28 19:16 ` [PATCH v3 7/7] usb/libhw: " Alexander Bulekov
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-10-28 19:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexander Bulekov, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 hw/usb/hcd-xhci.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/hw/usb/hcd-xhci.c b/hw/usb/hcd-xhci.c
index 8299f35e66..2621dde7ea 100644
--- a/hw/usb/hcd-xhci.c
+++ b/hw/usb/hcd-xhci.c
@@ -494,7 +494,7 @@ static inline void xhci_dma_read_u32s(XHCIState *xhci, dma_addr_t addr,
 
     assert((len % sizeof(uint32_t)) == 0);
 
-    if (dma_memory_read(xhci->as, addr, buf, len,
+    if (dma_memory_read_guarded(DEVICE(xhci), xhci->as, addr, buf, len,
                         MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
         qemu_log_mask(LOG_GUEST_ERROR, "%s: DMA memory access failed!\n",
                       __func__);
@@ -521,7 +521,7 @@ static inline void xhci_dma_write_u32s(XHCIState *xhci, dma_addr_t addr,
     for (i = 0; i < n; i++) {
         tmp[i] = cpu_to_le32(buf[i]);
     }
-    if (dma_memory_write(xhci->as, addr, tmp, len,
+    if (dma_memory_write_guarded(DEVICE(xhci), xhci->as, addr, tmp, len,
                          MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
         qemu_log_mask(LOG_GUEST_ERROR, "%s: DMA memory access failed!\n",
                       __func__);
@@ -632,8 +632,8 @@ static void xhci_write_event(XHCIState *xhci, XHCIEvent *event, int v)
                                ev_trb.status, ev_trb.control);
 
     addr = intr->er_start + TRB_SIZE*intr->er_ep_idx;
-    if (dma_memory_write(xhci->as, addr, &ev_trb, TRB_SIZE,
-                         MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
+    if (dma_memory_write_guarded(DEVICE(xhci), xhci->as, addr, &ev_trb,
+                TRB_SIZE, MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
         qemu_log_mask(LOG_GUEST_ERROR, "%s: DMA memory access failed!\n",
                       __func__);
         xhci_die(xhci);
@@ -698,8 +698,8 @@ static TRBType xhci_ring_fetch(XHCIState *xhci, XHCIRing *ring, XHCITRB *trb,
 
     while (1) {
         TRBType type;
-        if (dma_memory_read(xhci->as, ring->dequeue, trb, TRB_SIZE,
-                            MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
+        if (dma_memory_read_guarded(DEVICE(xhci), xhci->as, ring->dequeue, trb,
+                    TRB_SIZE, MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
             qemu_log_mask(LOG_GUEST_ERROR, "%s: DMA memory access failed!\n",
                           __func__);
             return 0;
@@ -750,8 +750,8 @@ static int xhci_ring_chain_length(XHCIState *xhci, const XHCIRing *ring)
 
     do {
         TRBType type;
-        if (dma_memory_read(xhci->as, dequeue, &trb, TRB_SIZE,
-                        MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
+        if (dma_memory_read_guarded(DEVICE(xhci), xhci->as, dequeue, &trb,
+                    TRB_SIZE, MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
             qemu_log_mask(LOG_GUEST_ERROR, "%s: DMA memory access failed!\n",
                           __func__);
             return -1;
@@ -820,8 +820,8 @@ static void xhci_er_reset(XHCIState *xhci, int v)
         xhci_die(xhci);
         return;
     }
-    if (dma_memory_read(xhci->as, erstba, &seg, sizeof(seg),
-                    MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
+    if (dma_memory_read_guarded(DEVICE(xhci), xhci->as, erstba, &seg,
+                sizeof(seg), MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
         qemu_log_mask(LOG_GUEST_ERROR, "%s: DMA memory access failed!\n",
                       __func__);
         xhci_die(xhci);
@@ -2445,8 +2445,8 @@ static TRBCCode xhci_get_port_bandwidth(XHCIState *xhci, uint64_t pctx)
     /* TODO: actually implement real values here */
     bw_ctx[0] = 0;
     memset(&bw_ctx[1], 80, xhci->numports); /* 80% */
-    if (dma_memory_write(xhci->as, ctx, bw_ctx, sizeof(bw_ctx),
-                     MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
+    if (dma_memory_write_guarded(DEVICE(xhci), xhci->as, ctx, bw_ctx,
+                sizeof(bw_ctx), MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
         qemu_log_mask(LOG_GUEST_ERROR, "%s: DMA memory write failed!\n",
                       __func__);
         return CC_TRB_ERROR;
-- 
2.27.0




* [PATCH v3 7/7] usb/libhw: switch to guarded DMA accesses
  2022-10-28 19:16 [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
                   ` (5 preceding siblings ...)
  2022-10-28 19:16 ` [PATCH v3 6/7] xhci: " Alexander Bulekov
@ 2022-10-28 19:16 ` Alexander Bulekov
  2022-11-07 17:09 ` [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
  2022-11-10 20:50 ` Stefan Hajnoczi
  8 siblings, 0 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-10-28 19:16 UTC (permalink / raw)
  To: qemu-devel
  Cc: Alexander Bulekov, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

Fixes: https://gitlab.com/qemu-project/qemu/-/issues/541
Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
---
 hw/usb/libhw.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/usb/libhw.c b/hw/usb/libhw.c
index f350eae443..a15e97f76d 100644
--- a/hw/usb/libhw.c
+++ b/hw/usb/libhw.c
@@ -36,7 +36,7 @@ int usb_packet_map(USBPacket *p, QEMUSGList *sgl)
 
         while (len) {
             dma_addr_t xlen = len;
-            mem = dma_memory_map(sgl->as, base, &xlen, dir,
+            mem = dma_memory_map_guarded(sgl->dev, sgl->as, base, &xlen, dir,
                                  MEMTXATTRS_UNSPECIFIED);
             if (!mem) {
                 goto err;
@@ -63,7 +63,7 @@ void usb_packet_unmap(USBPacket *p, QEMUSGList *sgl)
     int i;
 
     for (i = 0; i < p->iov.niov; i++) {
-        dma_memory_unmap(sgl->as, p->iov.iov[i].iov_base,
+        dma_memory_unmap_guarded(sgl->dev, sgl->as, p->iov.iov[i].iov_base,
                          p->iov.iov[i].iov_len, dir,
                          p->iov.iov[i].iov_len);
     }
-- 
2.27.0




* Re: [PATCH v3 0/7] memory: prevent dma-reentrancy issues
  2022-10-28 19:16 [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
                   ` (6 preceding siblings ...)
  2022-10-28 19:16 ` [PATCH v3 7/7] usb/libhw: " Alexander Bulekov
@ 2022-11-07 17:09 ` Alexander Bulekov
  2022-11-10 20:50 ` Stefan Hajnoczi
  8 siblings, 0 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-11-07 17:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

On 221028 1516, Alexander Bulekov wrote:
> These patches aim to solve two types of DMA-reentrancy issues:
> [...]

ping



* Re: [PATCH v3 0/7] memory: prevent dma-reentrancy issues
  2022-10-28 19:16 [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
                   ` (7 preceding siblings ...)
  2022-11-07 17:09 ` [PATCH v3 0/7] memory: prevent dma-reentrancy issues Alexander Bulekov
@ 2022-11-10 20:50 ` Stefan Hajnoczi
  2022-11-10 20:53   ` Michael S. Tsirkin
                     ` (2 more replies)
  8 siblings, 3 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2022-11-10 20:50 UTC (permalink / raw)
  To: Alexander Bulekov, Peter Maydell, Richard Henderson
  Cc: qemu-devel, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

Preventing this class of bugs is important but QEMU is currently
frozen for the 7.2 release. I'm a little concerned about regressions
in a patch series that changes core device emulation code.

I'll review the series on Monday and if anyone has strong opinions on
whether to merge this into 7.2, please say so. My thoughts are that
this should be merged in the 7.3 release cycle so there's time to work
out any issues.

Stefan



* Re: [PATCH v3 0/7] memory: prevent dma-reentrancy issues
  2022-11-10 20:50 ` Stefan Hajnoczi
@ 2022-11-10 20:53   ` Michael S. Tsirkin
  2022-11-10 22:50   ` Peter Maydell
  2022-11-15 11:28   ` Philippe Mathieu-Daudé
  2 siblings, 0 replies; 18+ messages in thread
From: Michael S. Tsirkin @ 2022-11-10 20:53 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Alexander Bulekov, Peter Maydell, Richard Henderson, qemu-devel,
	Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

On Thu, Nov 10, 2022 at 03:50:51PM -0500, Stefan Hajnoczi wrote:
> Preventing this class of bugs is important but QEMU is currently
> frozen for the 7.2 release. I'm a little concerned about regressions
> in a patch series that changes core device emulation code.
> 
> I'll review the series on Monday and if anyone has strong opinions on
> whether to merge this into 7.2, please say so. My thoughts are that
> this should be merged in the 7.3 release cycle so there's time to work
> out any issues.
> 
> Stefan

Stefan, what you say here makes total sense to me.
I haven't looked at the series yet either.

-- 
MST




* Re: [PATCH v3 0/7] memory: prevent dma-reentrancy issues
  2022-11-10 20:50 ` Stefan Hajnoczi
  2022-11-10 20:53   ` Michael S. Tsirkin
@ 2022-11-10 22:50   ` Peter Maydell
  2022-11-15 11:28   ` Philippe Mathieu-Daudé
  2 siblings, 0 replies; 18+ messages in thread
From: Peter Maydell @ 2022-11-10 22:50 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Alexander Bulekov, Richard Henderson, qemu-devel,
	Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

On Thu, 10 Nov 2022 at 20:51, Stefan Hajnoczi <stefanha@gmail.com> wrote:
>
> Preventing this class of bugs is important but QEMU is currently
> frozen for the 7.2 release. I'm a little concerned about regressions
> in a patch series that changes core device emulation code.
>
> I'll review the series on Monday and if anyone has strong opinions on
> whether to merge this into 7.2, please say so. My thoughts are that
> this should be merged in the 7.3 release cycle so there's time to work
> out any issues.

Yeah, we've lived with this class of issues for many releases
now; I would favour landing any solution early in the 8.0
cycle so we can make sure we've worked out any problems well
before release.

thanks
-- PMM



* Re: [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device
  2022-10-28 19:16 ` [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device Alexander Bulekov
@ 2022-11-14 20:08   ` Stefan Hajnoczi
  2022-11-14 20:31   ` Stefan Hajnoczi
  2022-11-15 16:19   ` Peter Xu
  2 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2022-11-14 20:08 UTC (permalink / raw)
  To: Alexander Bulekov
  Cc: qemu-devel, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

On Fri, 28 Oct 2022 at 15:19, Alexander Bulekov <alxndr@bu.edu> wrote:
>
> Add transitional DMA APIs which associate accesses with the device
> initiating them. The modified APIs maintain a "MemReentrancyGuard" in
> the DeviceState, which is used to prevent DMA re-entrancy issues.
> The MemReentrancyGuard is set/checked when entering IO handlers and when
> initiating a DMA access.
>
> 1.) mmio -> dma -> mmio case
> 2.) bh -> dma write -> mmio case
>
> These issues have led to problems such as stack-exhaustion and
> use-after-frees.
>
> Summary of the problem from Peter Maydell:
> https://lore.kernel.org/qemu-devel/CAFEAcA_23vc7hE3iaM-JVA6W38LK4hJoWae5KcknhPRD5fPBZA@mail.gmail.com
>
> Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
> ---
>  include/hw/qdev-core.h |  2 ++
>  include/sysemu/dma.h   | 41 +++++++++++++++++++++++++++++++++++++++++
>  softmmu/memory.c       | 15 +++++++++++++++
>  softmmu/trace-events   |  1 +
>  4 files changed, 59 insertions(+)
>
> diff --git a/include/hw/qdev-core.h b/include/hw/qdev-core.h
> index 785dd5a56e..ab78d211af 100644
> --- a/include/hw/qdev-core.h
> +++ b/include/hw/qdev-core.h
> @@ -8,6 +8,7 @@
>  #include "qom/object.h"
>  #include "hw/hotplug.h"
>  #include "hw/resettable.h"
> +#include "sysemu/dma.h"
>
>  enum {
>      DEV_NVECTORS_UNSPECIFIED = -1,
> @@ -194,6 +195,7 @@ struct DeviceState {
>      int alias_required_for_version;
>      ResettableState reset;
>      GSList *unplug_blockers;
> +    MemReentrancyGuard mem_reentrancy_guard;
>  };
>
>  struct DeviceListener {
> diff --git a/include/sysemu/dma.h b/include/sysemu/dma.h
> index a1ac5bc1b5..879b666bbb 100644
> --- a/include/sysemu/dma.h
> +++ b/include/sysemu/dma.h
> @@ -15,6 +15,10 @@
>  #include "block/block.h"
>  #include "block/accounting.h"
>
> +typedef struct {
> +    bool engaged_in_io;
> +} MemReentrancyGuard;

Please add a doc comment that explains the purpose of MemReentrancyGuard.
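
Something along these lines, for example (just a sketch, wording up to you):

  /*
   * MemReentrancyGuard: track that the owning device is currently
   * handling an MMIO access or a guarded DMA access, so that re-entrant
   * accesses into the same device can be detected and rejected.
   */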

> +
>  typedef enum {
>      DMA_DIRECTION_TO_DEVICE = 0,
>      DMA_DIRECTION_FROM_DEVICE = 1,
> @@ -321,4 +325,41 @@ void dma_acct_start(BlockBackend *blk, BlockAcctCookie *cookie,
>  uint64_t dma_aligned_pow2_mask(uint64_t start, uint64_t end,
>                                 int max_addr_bits);
>
> +#define REENTRANCY_GUARD(func, ret_type, dev, ...) \
> +    ({\
> +     ret_type retval;\
> +     MemReentrancyGuard prior_guard_state = dev->mem_reentrancy_guard;\
> +     dev->mem_reentrancy_guard.engaged_in_io = 1;\

Please use true/false for bool constants. That way it's obvious to the
reader that this is a bool and not an int.

> +     retval = func(__VA_ARGS__);\
> +     dev->mem_reentrancy_guard = prior_guard_state;\
> +     retval;\
> +     })

I'm trying to understand the purpose of this macro. It restores the
previous state of mem_reentrancy_guard, implying that this is
sometimes called when the guard is already true (i.e. from
MemoryRegion callbacks). It can also be called in the BH case and I
think that's why mem_reentrancy_guard is set to true here. Using BHs
to avoid deep stacks and re-entrancy is a valid technique though, and
this macro seems to be designed to prevent it. Can you explain a bit
more about how this is supposed to be used?

If this macro is a public API that other parts of QEMU will use, then
the following approach is more consistent with how the lock guard
macros work:

  REENTRANCY_GUARD(dev) {
      retval = func(1, 2, 3);
  }

It's also more readable than:

  REENTRANCY_GUARD(func, int, dev, 1, 2, 3);

?
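
An untested sketch of one way to implement that scoped form, modelled on
the lock guard macros:

  #define REENTRANCY_GUARD(dev) \
      for (MemReentrancyGuard rg_prior = (dev)->mem_reentrancy_guard, \
               *rg_once = ((dev)->mem_reentrancy_guard.engaged_in_io = true, \
                           &rg_prior); \
           rg_once; \
           (dev)->mem_reentrancy_guard = rg_prior, rg_once = NULL)

(Naive version: a break or return inside the body would skip the restore;
the lock guard macros avoid that with g_autoptr cleanup.)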

> +#define REENTRANCY_GUARD_NORET(func, dev, ...) \
> +    ({\
> +     MemReentrancyGuard prior_guard_state = dev->mem_reentrancy_guard;\
> +     dev->mem_reentrancy_guard.engaged_in_io = 1;\
> +     func(__VA_ARGS__);\
> +     dev->mem_reentrancy_guard = prior_guard_state;\
> +     })
> +#define dma_memory_rw_guarded(dev, ...) \
> +    REENTRANCY_GUARD(dma_memory_rw, MemTxResult, dev, __VA_ARGS__)
> +#define dma_memory_read_guarded(dev, ...) \
> +    REENTRANCY_GUARD(dma_memory_read, MemTxResult, dev, __VA_ARGS__)
> +#define dma_memory_write_guarded(dev, ...) \
> +    REENTRANCY_GUARD(dma_memory_write, MemTxResult, dev, __VA_ARGS__)
> +#define dma_memory_set_guarded(dev, ...) \
> +    REENTRANCY_GUARD(dma_memory_set, MemTxResult, dev, __VA_ARGS__)
> +#define dma_memory_map_guarded(dev, ...) \
> +    REENTRANCY_GUARD(dma_memory_map, void*, dev, __VA_ARGS__)
> +#define dma_memory_unmap_guarded(dev, ...) \
> +    REENTRANCY_GUARD_NORET(dma_memory_unmap, dev, __VA_ARGS__)
> +#define ldub_dma_guarded(dev, ...) \
> +    REENTRANCY_GUARD(ldub_dma, MemTxResult, dev, __VA_ARGS__)
> +#define stb_dma_guarded(dev, ...) \
> +    REENTRANCY_GUARD(stb_dma, MemTxResult, dev, __VA_ARGS__)
> +#define dma_buf_read_guarded(dev, ...) \
> +    REENTRANCY_GUARD(dma_buf_read, MemTxResult, dev, __VA_ARGS__)
> +#define dma_buf_write_guarded(dev, ...) \
> +    REENTRANCY_GUARD(dma_buf_write, MemTxResult, dev, __VA_ARGS__)
> +
>  #endif
> diff --git a/softmmu/memory.c b/softmmu/memory.c
> index 7ba2048836..c44dc75149 100644
> --- a/softmmu/memory.c
> +++ b/softmmu/memory.c
> @@ -532,6 +532,7 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
>      uint64_t access_mask;
>      unsigned access_size;
>      unsigned i;
> +    DeviceState *dev = NULL;
>      MemTxResult r = MEMTX_OK;
>
>      if (!access_size_min) {
> @@ -541,6 +542,17 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
>          access_size_max = 4;
>      }
>
> +    /* Do not allow more than one simultaneous access to a device's IO Regions */
> +    if (mr->owner &&
> +            !mr->ram_device && !mr->ram && !mr->rom_device && !mr->readonly) {

Why are readonly MemoryRegions exempt?

> +        dev = (DeviceState *) object_dynamic_cast(mr->owner, TYPE_DEVICE);
> +        if (dev->mem_reentrancy_guard.engaged_in_io) {
> +            trace_memory_region_reentrant_io(get_cpu_index(), mr, addr, size);
> +            return MEMTX_ERROR;
> +        }
> +        dev->mem_reentrancy_guard.engaged_in_io = true;
> +    }
> +
>      /* FIXME: support unaligned access? */
>      access_size = MAX(MIN(size, access_size_max), access_size_min);
>      access_mask = MAKE_64BIT_MASK(0, access_size * 8);
> @@ -555,6 +567,9 @@ static MemTxResult access_with_adjusted_size(hwaddr addr,
>                          access_mask, attrs);
>          }
>      }
> +    if (dev) {
> +        dev->mem_reentrancy_guard.engaged_in_io = false;
> +    }
>      return r;
>  }
>
> diff --git a/softmmu/trace-events b/softmmu/trace-events
> index 22606dc27b..62d04ea9a7 100644
> --- a/softmmu/trace-events
> +++ b/softmmu/trace-events
> @@ -13,6 +13,7 @@ memory_region_ops_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, u
>  memory_region_ops_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size, const char *name) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u name '%s'"
>  memory_region_subpage_read(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
>  memory_region_subpage_write(int cpu_index, void *mr, uint64_t offset, uint64_t value, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" value 0x%"PRIx64" size %u"
> +memory_region_reentrant_io(int cpu_index, void *mr, uint64_t offset, unsigned size) "cpu %d mr %p offset 0x%"PRIx64" size %u"
>  memory_region_ram_device_read(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
>  memory_region_ram_device_write(int cpu_index, void *mr, uint64_t addr, uint64_t value, unsigned size) "cpu %d mr %p addr 0x%"PRIx64" value 0x%"PRIx64" size %u"
>  memory_region_sync_dirty(const char *mr, const char *listener, int global) "mr '%s' listener '%s' synced (global=%d)"
> --
> 2.27.0
>
>



* Re: [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device
  2022-10-28 19:16 ` [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device Alexander Bulekov
  2022-11-14 20:08   ` Stefan Hajnoczi
@ 2022-11-14 20:31   ` Stefan Hajnoczi
  2022-11-15 16:19   ` Peter Xu
  2 siblings, 0 replies; 18+ messages in thread
From: Stefan Hajnoczi @ 2022-11-14 20:31 UTC (permalink / raw)
  To: Alexander Bulekov
  Cc: qemu-devel, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Peter Xu, Jason Wang,
	David Hildenbrand, Gerd Hoffmann, Li Qiang, Thomas Huth,
	Laurent Vivier, Bandan Das, Edgar E . Iglesias, Darren Kenny,
	Bin Meng, Paolo Bonzini, Michael S . Tsirkin, Marcel Apfelbaum,
	Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

The _guarded() calls are required in BHs, timers, fd read/write
callbacks, etc., because we're no longer in the memory region dispatch
code with the reentrancy guard set. It's not clear to me whether the
_guarded() calls are actually required in most of these patches,
though. Do you plan to convert every DMA API call to a _guarded() call
in the future?

I'm asking because coming up with an API that doesn't require these
code changes will reduce code churn and make existing code safe.

Does it make sense to separate the DMA API and the reentrancy guard
API? That way the reentrancy guard can be put in place once in any BH,
timer, etc callback and then the existing DMA APIs are used within
those callbacks without new _guarded() APIs.

This approach also reduces the number of times that the guard is
toggled. The current approach is fine-grained (per DMA API call) so
the guard needs to be toggled all the time, e.g. in DMA sglist loops.
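
Concretely, I'm thinking of something like this (rough sketch;
MyDeviceState/my_device_bh are made-up names):

  static void my_device_bh(void *opaque)
  {
      MyDeviceState *s = opaque;
      DeviceState *dev = DEVICE(s);

      /* take the guard once for the whole BH rather than per DMA call */
      dev->mem_reentrancy_guard.engaged_in_io = true;
      /* ... existing dma_memory_read()/dma_memory_write() calls ... */
      dev->mem_reentrancy_guard.engaged_in_io = false;
  }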

If we want the compiler to prevent DMA API calls without a reentrancy
guard, then AddressSpace pointers can be hidden behind an API that
sets the guard. This ensures that you cannot access an address space
unless you have a reentrancy guard.

Stefan



* Re: [PATCH v3 0/7] memory: prevent dma-reentrancy issues
  2022-11-10 20:50 ` Stefan Hajnoczi
  2022-11-10 20:53   ` Michael S. Tsirkin
  2022-11-10 22:50   ` Peter Maydell
@ 2022-11-15 11:28   ` Philippe Mathieu-Daudé
  2 siblings, 0 replies; 18+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-11-15 11:28 UTC (permalink / raw)
  To: Stefan Hajnoczi, Alexander Bulekov, Peter Maydell,
	Richard Henderson, Alex Bennée
  Cc: qemu-devel, Mauro Matteo Cascella, Qiuhao Li, Peter Xu,
	Jason Wang, David Hildenbrand, Gerd Hoffmann, Li Qiang,
	Thomas Huth, Laurent Vivier, Bandan Das, Edgar E . Iglesias,
	Darren Kenny, Bin Meng, Paolo Bonzini, Michael S . Tsirkin,
	Marcel Apfelbaum, Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

On 10/11/22 21:50, Stefan Hajnoczi wrote:
> Preventing this class of bugs is important but QEMU is currently
> frozen for the 7.2 release. I'm a little concerned about regressions
> in a patch series that changes core device emulation code.

I'm waiting for Alex's MemTxRequesterType field addition in
MemTxAttrs [1] to land before reworking my previous approach, which
used flatview_access_allowed() instead of access_with_adjusted_size()
[2]. I haven't looked at this series in detail, but since the
permission check here is done at the Memory API layer, I might have
missed something in my previous attempt (which worked at the FlatView
layer).

[1] 
https://lore.kernel.org/qemu-devel/20221111182535.64844-2-alex.bennee@linaro.org/
[2] 
https://lore.kernel.org/qemu-devel/20211215182421.418374-4-philmd@redhat.com/

> I'll review the series on Monday and if anyone has strong opinions on
> whether to merge this into 7.2, please say so. My thoughts are that
> this should be merged in the 7.3 release cycle so there's time to work
> out any issues.
> 
> Stefan




* Re: [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device
  2022-10-28 19:16 ` [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device Alexander Bulekov
  2022-11-14 20:08   ` Stefan Hajnoczi
  2022-11-14 20:31   ` Stefan Hajnoczi
@ 2022-11-15 16:19   ` Peter Xu
  2022-11-15 16:49     ` Peter Maydell
  2022-11-15 17:44     ` Alexander Bulekov
  2 siblings, 2 replies; 18+ messages in thread
From: Peter Xu @ 2022-11-15 16:19 UTC (permalink / raw)
  To: Alexander Bulekov
  Cc: qemu-devel, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Jason Wang, David Hildenbrand,
	Gerd Hoffmann, Li Qiang, Thomas Huth, Laurent Vivier, Bandan Das,
	Edgar E . Iglesias, Darren Kenny, Bin Meng, Paolo Bonzini,
	Michael S . Tsirkin, Marcel Apfelbaum, Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

On Fri, Oct 28, 2022 at 03:16:42PM -0400, Alexander Bulekov wrote:
> +    /* Do not allow more than one simultaneous access to a device's IO Regions */
> +    if (mr->owner &&
> +            !mr->ram_device && !mr->ram && !mr->rom_device && !mr->readonly) {
> +        dev = (DeviceState *) object_dynamic_cast(mr->owner, TYPE_DEVICE);
> +        if (dev->mem_reentrancy_guard.engaged_in_io) {

Do we need to check that dev is non-NULL?  Fundamentally it's about whether
the owner can be something other than a DeviceState. I believe it normally
is one, but I can't tell for sure; at least from the memory region API it
can be any Object*.

> +            trace_memory_region_reentrant_io(get_cpu_index(), mr, addr, size);
> +            return MEMTX_ERROR;
> +        }
> +        dev->mem_reentrancy_guard.engaged_in_io = true;
> +    }

-- 
Peter Xu




* Re: [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device
  2022-11-15 16:19   ` Peter Xu
@ 2022-11-15 16:49     ` Peter Maydell
  2022-11-15 17:44     ` Alexander Bulekov
  1 sibling, 0 replies; 18+ messages in thread
From: Peter Maydell @ 2022-11-15 16:49 UTC (permalink / raw)
  To: Peter Xu
  Cc: Alexander Bulekov, qemu-devel, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Jason Wang, David Hildenbrand,
	Gerd Hoffmann, Li Qiang, Thomas Huth, Laurent Vivier, Bandan Das,
	Edgar E . Iglesias, Darren Kenny, Bin Meng, Paolo Bonzini,
	Michael S . Tsirkin, Marcel Apfelbaum, Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

On Tue, 15 Nov 2022 at 16:20, Peter Xu <peterx@redhat.com> wrote:
>
> On Fri, Oct 28, 2022 at 03:16:42PM -0400, Alexander Bulekov wrote:
> > +    /* Do not allow more than one simultaneous access to a device's IO Regions */
> > +    if (mr->owner &&
> > +            !mr->ram_device && !mr->ram && !mr->rom_device && !mr->readonly) {
> > +        dev = (DeviceState *) object_dynamic_cast(mr->owner, TYPE_DEVICE);
> > +        if (dev->mem_reentrancy_guard.engaged_in_io) {
>
> Do we need to check that dev is non-NULL?  Fundamentally it's about whether
> the owner can be something other than a DeviceState. I believe it normally
> is one, but I can't tell for sure; at least from the memory region API it
> can be any Object*.

There is at least one MemoryRegion in the tree whose owner is not
a DeviceState: hw/arm/virt.c does:

        memory_region_init(secure_sysmem, OBJECT(machine), "secure-memory",
                           UINT64_MAX);

and MachineState inherits directly from Object, not via DeviceState.

More generally, when doing a QOM cast, either:
 (1) you know that the object must be of the right type, in which
     case you should use the cast macro (which will assert for you), or
 (2) the object may not be of the right type, in which case you
     use object_dynamic_cast() and check whether it returned NULL

The combination of object_dynamic_cast() and no NULL check is, I
think, usually a bug.
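
In this code that would mean one of (sketch):

  /* (1) assert that the owner is a DeviceState: */
  dev = DEVICE(mr->owner);

  /* (2) tolerate non-DeviceState owners by checking for NULL: */
  dev = (DeviceState *)object_dynamic_cast(mr->owner, TYPE_DEVICE);
  if (!dev) {
      /* owner is not a device; skip the reentrancy check */
  }

and given the virt.c example above, only (2) is right here.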

-- PMM



* Re: [PATCH v3 1/7] memory: associate DMA accesses with the initiator Device
  2022-11-15 16:19   ` Peter Xu
  2022-11-15 16:49     ` Peter Maydell
@ 2022-11-15 17:44     ` Alexander Bulekov
  1 sibling, 0 replies; 18+ messages in thread
From: Alexander Bulekov @ 2022-11-15 17:44 UTC (permalink / raw)
  To: Peter Xu
  Cc: qemu-devel, Philippe Mathieu-Daudé,
	Mauro Matteo Cascella, Qiuhao Li, Jason Wang, David Hildenbrand,
	Gerd Hoffmann, Li Qiang, Thomas Huth, Laurent Vivier, Bandan Das,
	Edgar E . Iglesias, Darren Kenny, Bin Meng, Paolo Bonzini,
	Michael S . Tsirkin, Marcel Apfelbaum, Daniel P . Berrangé,
	Eduardo Habkost, Jon Maloy, Siqi Chen

On 221115 1119, Peter Xu wrote:
> On Fri, Oct 28, 2022 at 03:16:42PM -0400, Alexander Bulekov wrote:
> > +    /* Do not allow more than one simultaneous access to a device's IO Regions */
> > +    if (mr->owner &&
> > +            !mr->ram_device && !mr->ram && !mr->rom_device && !mr->readonly) {
> > +        dev = (DeviceState *) object_dynamic_cast(mr->owner, TYPE_DEVICE);
> > +        if (dev->mem_reentrancy_guard.engaged_in_io) {
> 
> Do we need to check that dev is non-NULL?  Fundamentally it's about whether
> the owner can be something other than a DeviceState. I believe it normally
> is one, but I can't tell for sure; at least from the memory region API it
> can be any Object*.
> 

I'll add a NULL-check
Thanks

