* [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses
@ 2021-05-20 11:09 Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 01/12] exec/memory_ldst: Use correct type sizes Philippe Mathieu-Daudé
                   ` (12 more replies)
  0 siblings, 13 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

Hi,

This series is an experiment after chatting with Stefan and
receiving review from Peter / Richard on an orthogonal series aiming
to handle unaligned pointers (atomically):
https://www.mail-archive.com/qemu-devel@nongnu.org/msg808954.html

Here we don't aim to fix non-atomic accesses; however, we are
interested in catching malicious unaligned accesses from guests.

For that we introduce the MemTxAttrs::aligned field, which allows
accessors to return MEMTX_UNALIGNED_ERROR early, instead of
attempting the unaligned access, which can potentially trigger a
SIGBUS and abort the process.

To be able to pass/return alignment information we modify the
cached memory load/store API, then add the
virtio_ld/st*_phys_cached_with_attrs() handlers (only lduw is
implemented for this experiment).

Finally we modify vring_avail_flags() to report whether the guest
requested an illegal (unaligned) memory access.

Note: the current virtio_ld/st*_phys_cached_with_attrs() API returns
the value and takes the MemTxResult as an argument, so I chose to
return -1 on error (marked with a /* XXX */ comment). We should
switch to an API which returns a MemTxResult and takes a pointer to
the accessed value as an argument; that way we don't have to return
a meaningless value.

But this is beyond the scope of this experiment; here we want to
emphasize the introduction of the MemTxAttrs::aligned field and the
MEMTX_UNALIGNED_ERROR return value.
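To make the intended flow concrete, here is a rough caller-side sketch.
The types below are simplified stand-ins (the real MemTxAttrs,
MemTxResult and accessors live in QEMU's include/exec/memattrs.h and
memory_ldst.c.inc), and lduw_with_attrs is a hypothetical helper that
only mirrors the convention described above:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for QEMU's real types (include/exec/memattrs.h). */
typedef uint64_t hwaddr;
typedef uint32_t MemTxResult;
#define MEMTX_OK                0
#define MEMTX_UNALIGNED_ERROR   (1U << 2)

typedef struct MemTxAttrs {
    unsigned int aligned:1;
} MemTxAttrs;

static const uint8_t demo_ram[4] = { 0x34, 0x12, 0x78, 0x56 };

/*
 * Hypothetical accessor following the series' convention: the value is
 * returned directly and the transaction status goes through *result.
 */
static uint16_t lduw_with_attrs(const uint8_t *ram, hwaddr addr,
                                MemTxAttrs attrs, MemTxResult *result)
{
    if (attrs.aligned && addr % sizeof(uint16_t) != 0) {
        if (result) {
            *result = MEMTX_UNALIGNED_ERROR;
        }
        return (uint16_t)-1; /* XXX: meaningless value */
    }
    if (result) {
        *result = MEMTX_OK;
    }
    /* assemble the little-endian 16-bit word byte by byte */
    return (uint16_t)(ram[addr] | (uint16_t)ram[addr + 1] << 8);
}
```

A device model that wants to reject misaligned guest requests would set
attrs.aligned = 1 and bail out when *result comes back as
MEMTX_UNALIGNED_ERROR.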

Regards,

Phil.

Philippe Mathieu-Daudé (12):
  exec/memory_ldst: Use correct type sizes
  exec/memattrs: Add attribute/error for address alignment
  exec/memory_ldst: Return MEMTX_UNALIGNED_ERROR for unaligned addresses
  exec/memory_ldst_cached: Sort declarations
  exec/memory_ldst_cached: Use correct type size
  exec/memory_ldst_cached: Set MemTxResult on success
  exec/memory_ldst_cached: Document aligned addresses are expected
  exec/memory_ldst_cached: Check address alignment if requested
  hw/virtio: Use correct type sizes
  hw/virtio: Extract virtio_lduw_phys_cached_with_attrs()
  hw/virtio: Have vring_avail_flags() return a boolean value
  hw/virtio: Display error if vring flag field is not aligned

 include/exec/memattrs.h               |   3 +
 include/hw/virtio/virtio-access.h     |  39 +++++++--
 include/exec/memory_ldst.h.inc        |  16 ++--
 include/exec/memory_ldst_cached.h.inc | 114 ++++++++++++++++++++------
 hw/virtio/virtio.c                    |  22 ++++-
 memory_ldst.c.inc                     |  69 +++++++++++++---
 6 files changed, 211 insertions(+), 52 deletions(-)

-- 
2.26.3




^ permalink raw reply	[flat|nested] 16+ messages in thread

* [RFC PATCH 01/12] exec/memory_ldst: Use correct type sizes
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 02/12] exec/memattrs: Add attribute/error for address alignment Philippe Mathieu-Daudé
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

Use uint8_t for (unsigned) byte, and uint16_t for (unsigned)
16-bit word.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/exec/memory_ldst.h.inc | 16 ++++++++--------
 memory_ldst.c.inc              | 20 ++++++++++----------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/include/exec/memory_ldst.h.inc b/include/exec/memory_ldst.h.inc
index 46e6c220d35..7c3a641f7ed 100644
--- a/include/exec/memory_ldst.h.inc
+++ b/include/exec/memory_ldst.h.inc
@@ -20,7 +20,7 @@
  */
 
 #ifdef TARGET_ENDIANNESS
-extern uint32_t glue(address_space_lduw, SUFFIX)(ARG1_DECL,
+extern uint16_t glue(address_space_lduw, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result);
 extern uint32_t glue(address_space_ldl, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result);
@@ -29,17 +29,17 @@ extern uint64_t glue(address_space_ldq, SUFFIX)(ARG1_DECL,
 extern void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result);
 extern void glue(address_space_stw, SUFFIX)(ARG1_DECL,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result);
+    hwaddr addr, uint16_t val, MemTxAttrs attrs, MemTxResult *result);
 extern void glue(address_space_stl, SUFFIX)(ARG1_DECL,
     hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result);
 extern void glue(address_space_stq, SUFFIX)(ARG1_DECL,
     hwaddr addr, uint64_t val, MemTxAttrs attrs, MemTxResult *result);
 #else
-extern uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
+extern uint8_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result);
-extern uint32_t glue(address_space_lduw_le, SUFFIX)(ARG1_DECL,
+extern uint16_t glue(address_space_lduw_le, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result);
-extern uint32_t glue(address_space_lduw_be, SUFFIX)(ARG1_DECL,
+extern uint16_t glue(address_space_lduw_be, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result);
 extern uint32_t glue(address_space_ldl_le, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result);
@@ -50,11 +50,11 @@ extern uint64_t glue(address_space_ldq_le, SUFFIX)(ARG1_DECL,
 extern uint64_t glue(address_space_ldq_be, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result);
 extern void glue(address_space_stb, SUFFIX)(ARG1_DECL,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result);
+    hwaddr addr, uint8_t val, MemTxAttrs attrs, MemTxResult *result);
 extern void glue(address_space_stw_le, SUFFIX)(ARG1_DECL,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result);
+    hwaddr addr, uint16_t val, MemTxAttrs attrs, MemTxResult *result);
 extern void glue(address_space_stw_be, SUFFIX)(ARG1_DECL,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result);
+    hwaddr addr, uint16_t val, MemTxAttrs attrs, MemTxResult *result);
 extern void glue(address_space_stl_le, SUFFIX)(ARG1_DECL,
     hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result);
 extern void glue(address_space_stl_be, SUFFIX)(ARG1_DECL,
diff --git a/memory_ldst.c.inc b/memory_ldst.c.inc
index b56e9619674..84b868f2946 100644
--- a/memory_ldst.c.inc
+++ b/memory_ldst.c.inc
@@ -157,7 +157,7 @@ uint64_t glue(address_space_ldq_be, SUFFIX)(ARG1_DECL,
                                                     DEVICE_BIG_ENDIAN);
 }
 
-uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
+uint8_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
 {
     uint8_t *ptr;
@@ -193,7 +193,7 @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
 }
 
 /* warning: addr must be aligned */
-static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
+static inline uint16_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result,
     enum device_endian endian)
 {
@@ -240,21 +240,21 @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
     return val;
 }
 
-uint32_t glue(address_space_lduw, SUFFIX)(ARG1_DECL,
+uint16_t glue(address_space_lduw, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
 {
     return glue(address_space_lduw_internal, SUFFIX)(ARG1, addr, attrs, result,
                                                      DEVICE_NATIVE_ENDIAN);
 }
 
-uint32_t glue(address_space_lduw_le, SUFFIX)(ARG1_DECL,
+uint16_t glue(address_space_lduw_le, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
 {
     return glue(address_space_lduw_internal, SUFFIX)(ARG1, addr, attrs, result,
                                                      DEVICE_LITTLE_ENDIAN);
 }
 
-uint32_t glue(address_space_lduw_be, SUFFIX)(ARG1_DECL,
+uint16_t glue(address_space_lduw_be, SUFFIX)(ARG1_DECL,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
 {
     return glue(address_space_lduw_internal, SUFFIX)(ARG1, addr, attrs, result,
@@ -366,7 +366,7 @@ void glue(address_space_stl_be, SUFFIX)(ARG1_DECL,
 }
 
 void glue(address_space_stb, SUFFIX)(ARG1_DECL,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
+    hwaddr addr, uint8_t val, MemTxAttrs attrs, MemTxResult *result)
 {
     uint8_t *ptr;
     MemoryRegion *mr;
@@ -398,7 +398,7 @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
 
 /* warning: addr must be aligned */
 static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs,
+    hwaddr addr, uint16_t val, MemTxAttrs attrs,
     MemTxResult *result, enum device_endian endian)
 {
     uint8_t *ptr;
@@ -441,21 +441,21 @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
 }
 
 void glue(address_space_stw, SUFFIX)(ARG1_DECL,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
+    hwaddr addr, uint16_t val, MemTxAttrs attrs, MemTxResult *result)
 {
     glue(address_space_stw_internal, SUFFIX)(ARG1, addr, val, attrs, result,
                                              DEVICE_NATIVE_ENDIAN);
 }
 
 void glue(address_space_stw_le, SUFFIX)(ARG1_DECL,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
+    hwaddr addr, uint16_t val, MemTxAttrs attrs, MemTxResult *result)
 {
     glue(address_space_stw_internal, SUFFIX)(ARG1, addr, val, attrs, result,
                                              DEVICE_LITTLE_ENDIAN);
 }
 
 void glue(address_space_stw_be, SUFFIX)(ARG1_DECL,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
+    hwaddr addr, uint16_t val, MemTxAttrs attrs, MemTxResult *result)
 {
     glue(address_space_stw_internal, SUFFIX)(ARG1, addr, val, attrs, result,
                                DEVICE_BIG_ENDIAN);
-- 
2.26.3




* [RFC PATCH 02/12] exec/memattrs: Add attribute/error for address alignment
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 01/12] exec/memory_ldst: Use correct type sizes Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-06-02 12:39   ` Stefan Hajnoczi
  2021-05-20 11:09 ` [RFC PATCH 03/12] exec/memory_ldst: Return MEMTX_UNALIGNED_ERROR for unaligned addresses Philippe Mathieu-Daudé
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

A bus master might set the 'aligned' attribute to enforce that a
transaction uses an aligned address. If the address is not aligned,
the accessor returns MEMTX_UNALIGNED_ERROR.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/exec/memattrs.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
index 95f2d20d55b..6fe59194e35 100644
--- a/include/exec/memattrs.h
+++ b/include/exec/memattrs.h
@@ -39,6 +39,8 @@ typedef struct MemTxAttrs {
     unsigned int requester_id:16;
     /* Invert endianness for this page */
     unsigned int byte_swap:1;
+    /* Memory access must be aligned */
+    unsigned int aligned:1;
     /*
      * The following are target-specific page-table bits.  These are not
      * related to actual memory transactions at all.  However, this structure
@@ -66,6 +68,7 @@ typedef struct MemTxAttrs {
 #define MEMTX_OK 0
 #define MEMTX_ERROR             (1U << 0) /* device returned an error */
 #define MEMTX_DECODE_ERROR      (1U << 1) /* nothing at that address */
+#define MEMTX_UNALIGNED_ERROR   (1U << 2) /* address is not aligned */
 typedef uint32_t MemTxResult;
 
 #endif
-- 
2.26.3




* [RFC PATCH 03/12] exec/memory_ldst: Return MEMTX_UNALIGNED_ERROR for unaligned addresses
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 01/12] exec/memory_ldst: Use correct type sizes Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 02/12] exec/memattrs: Add attribute/error for address alignment Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 04/12] exec/memory_ldst_cached: Sort declarations Philippe Mathieu-Daudé
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

All the address_space internal handlers carry the /* warning: addr
must be aligned */ comment, so we don't expect any caller to pass
unaligned addresses.

Now that we have added the MemTxAttrs.aligned attribute, callers
might want to pass unaligned addresses. In case they do, be ready
and return MEMTX_UNALIGNED_ERROR.
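The guard takes the same shape at the top of every handler; below is a
simplified sketch with stand-in types, where a plain little-endian byte
store stands in for the real TRANSLATE/fast-path machinery:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins; the real definitions are in QEMU's headers. */
typedef uint64_t hwaddr;
typedef uint32_t MemTxResult;
#define MEMTX_OK              0
#define MEMTX_UNALIGNED_ERROR (1U << 2)
#define QEMU_IS_ALIGNED(n, m) (((n) % (m)) == 0)

/*
 * Shape of the guard added to each internal handler: bail out before
 * translating the address, so an unaligned guest address never reaches
 * the direct-memory fast path.
 */
static void stw_internal_sketch(uint8_t *ram, hwaddr addr, uint16_t val,
                                MemTxResult *result)
{
    if (!QEMU_IS_ALIGNED(addr, sizeof(uint16_t))) {
        if (result) {
            *result = MEMTX_UNALIGNED_ERROR;
        }
        return; /* nothing is written on error */
    }
    ram[addr] = val & 0xff;      /* little-endian store */
    ram[addr + 1] = val >> 8;
    if (result) {
        *result = MEMTX_OK;
    }
}
```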

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 memory_ldst.c.inc | 49 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 49 insertions(+)

diff --git a/memory_ldst.c.inc b/memory_ldst.c.inc
index 84b868f2946..efeb545479e 100644
--- a/memory_ldst.c.inc
+++ b/memory_ldst.c.inc
@@ -32,6 +32,13 @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
     MemTxResult r;
     bool release_lock = false;
 
+    if (unlikely(!QEMU_IS_ALIGNED(addr, sizeof(uint32_t)))) {
+        if (result) {
+            *result = MEMTX_UNALIGNED_ERROR;
+        }
+        return (uint32_t)-1; /* XXX */
+    }
+
     RCU_READ_LOCK();
     mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 4 || !memory_access_is_direct(mr, false)) {
@@ -101,6 +108,13 @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
     MemTxResult r;
     bool release_lock = false;
 
+    if (unlikely(!QEMU_IS_ALIGNED(addr, sizeof(uint64_t)))) {
+        if (result) {
+            *result = MEMTX_UNALIGNED_ERROR;
+        }
+        return (uint64_t)-1; /* XXX */
+    }
+
     RCU_READ_LOCK();
     mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 8 || !memory_access_is_direct(mr, false)) {
@@ -205,6 +219,13 @@ static inline uint16_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
     MemTxResult r;
     bool release_lock = false;
 
+    if (unlikely(!QEMU_IS_ALIGNED(addr, sizeof(uint16_t)))) {
+        if (result) {
+            *result = MEMTX_UNALIGNED_ERROR;
+        }
+        return (uint16_t)-1; /* XXX */
+    }
+
     RCU_READ_LOCK();
     mr = TRANSLATE(addr, &addr1, &l, false, attrs);
     if (l < 2 || !memory_access_is_direct(mr, false)) {
@@ -275,6 +296,13 @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     uint8_t dirty_log_mask;
     bool release_lock = false;
 
+    if (unlikely(!QEMU_IS_ALIGNED(addr, sizeof(uint32_t)))) {
+        if (result) {
+            *result = MEMTX_UNALIGNED_ERROR;
+        }
+        return;
+    }
+
     RCU_READ_LOCK();
     mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 4 || !memory_access_is_direct(mr, true)) {
@@ -312,6 +340,13 @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
     MemTxResult r;
     bool release_lock = false;
 
+    if (unlikely(!QEMU_IS_ALIGNED(addr, sizeof(uint32_t)))) {
+        if (result) {
+            *result = MEMTX_UNALIGNED_ERROR;
+        }
+        return;
+    }
+
     RCU_READ_LOCK();
     mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 4 || !memory_access_is_direct(mr, true)) {
@@ -408,6 +443,13 @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
     MemTxResult r;
     bool release_lock = false;
 
+    if (unlikely(!QEMU_IS_ALIGNED(addr, sizeof(uint16_t)))) {
+        if (result) {
+            *result = MEMTX_UNALIGNED_ERROR;
+        }
+        return;
+    }
+
     RCU_READ_LOCK();
     mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 2 || !memory_access_is_direct(mr, true)) {
@@ -472,6 +514,13 @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
     MemTxResult r;
     bool release_lock = false;
 
+    if (unlikely(!QEMU_IS_ALIGNED(addr, sizeof(uint64_t)))) {
+        if (result) {
+            *result = MEMTX_UNALIGNED_ERROR;
+        }
+        return;
+    }
+
     RCU_READ_LOCK();
     mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (l < 8 || !memory_access_is_direct(mr, true)) {
-- 
2.26.3




* [RFC PATCH 04/12] exec/memory_ldst_cached: Sort declarations
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (2 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 03/12] exec/memory_ldst: Return MEMTX_UNALIGNED_ERROR for unaligned addresses Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 05/12] exec/memory_ldst_cached: Use correct type size Philippe Mathieu-Daudé
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

To ease review of the file, sort the declarations by access size
(16, 32, 64 bits). Simple code movement, no logical change.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/exec/memory_ldst_cached.h.inc | 46 +++++++++++++--------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/include/exec/memory_ldst_cached.h.inc b/include/exec/memory_ldst_cached.h.inc
index 7bc8790d346..c33449d0cd5 100644
--- a/include/exec/memory_ldst_cached.h.inc
+++ b/include/exec/memory_ldst_cached.h.inc
@@ -24,6 +24,18 @@
 #define LD_P(size) \
     glue(glue(ld, size), glue(ENDIANNESS, _p))
 
+static inline uint32_t ADDRESS_SPACE_LD_CACHED(uw)(MemoryRegionCache *cache,
+    hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
+{
+    assert(addr < cache->len && 2 <= cache->len - addr);
+    fuzz_dma_read_cb(cache->xlat + addr, 2, cache->mrs.mr);
+    if (likely(cache->ptr)) {
+        return LD_P(uw)(cache->ptr + addr);
+    } else {
+        return ADDRESS_SPACE_LD_CACHED_SLOW(uw)(cache, addr, attrs, result);
+    }
+}
+
 static inline uint32_t ADDRESS_SPACE_LD_CACHED(l)(MemoryRegionCache *cache,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
 {
@@ -48,18 +60,6 @@ static inline uint64_t ADDRESS_SPACE_LD_CACHED(q)(MemoryRegionCache *cache,
     }
 }
 
-static inline uint32_t ADDRESS_SPACE_LD_CACHED(uw)(MemoryRegionCache *cache,
-    hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
-{
-    assert(addr < cache->len && 2 <= cache->len - addr);
-    fuzz_dma_read_cb(cache->xlat + addr, 2, cache->mrs.mr);
-    if (likely(cache->ptr)) {
-        return LD_P(uw)(cache->ptr + addr);
-    } else {
-        return ADDRESS_SPACE_LD_CACHED_SLOW(uw)(cache, addr, attrs, result);
-    }
-}
-
 #undef ADDRESS_SPACE_LD_CACHED
 #undef ADDRESS_SPACE_LD_CACHED_SLOW
 #undef LD_P
@@ -71,17 +71,6 @@ static inline uint32_t ADDRESS_SPACE_LD_CACHED(uw)(MemoryRegionCache *cache,
 #define ST_P(size) \
     glue(glue(st, size), glue(ENDIANNESS, _p))
 
-static inline void ADDRESS_SPACE_ST_CACHED(l)(MemoryRegionCache *cache,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
-{
-    assert(addr < cache->len && 4 <= cache->len - addr);
-    if (likely(cache->ptr)) {
-        ST_P(l)(cache->ptr + addr, val);
-    } else {
-        ADDRESS_SPACE_ST_CACHED_SLOW(l)(cache, addr, val, attrs, result);
-    }
-}
-
 static inline void ADDRESS_SPACE_ST_CACHED(w)(MemoryRegionCache *cache,
     hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
 {
@@ -93,6 +82,17 @@ static inline void ADDRESS_SPACE_ST_CACHED(w)(MemoryRegionCache *cache,
     }
 }
 
+static inline void ADDRESS_SPACE_ST_CACHED(l)(MemoryRegionCache *cache,
+    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
+{
+    assert(addr < cache->len && 4 <= cache->len - addr);
+    if (likely(cache->ptr)) {
+        ST_P(l)(cache->ptr + addr, val);
+    } else {
+        ADDRESS_SPACE_ST_CACHED_SLOW(l)(cache, addr, val, attrs, result);
+    }
+}
+
 static inline void ADDRESS_SPACE_ST_CACHED(q)(MemoryRegionCache *cache,
     hwaddr addr, uint64_t val, MemTxAttrs attrs, MemTxResult *result)
 {
-- 
2.26.3




* [RFC PATCH 05/12] exec/memory_ldst_cached: Use correct type size
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (3 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 04/12] exec/memory_ldst_cached: Sort declarations Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 06/12] exec/memory_ldst_cached: Set MemTxResult on success Philippe Mathieu-Daudé
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

Use uint16_t for (unsigned) 16-bit word.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/exec/memory_ldst_cached.h.inc | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/exec/memory_ldst_cached.h.inc b/include/exec/memory_ldst_cached.h.inc
index c33449d0cd5..d7834f852c4 100644
--- a/include/exec/memory_ldst_cached.h.inc
+++ b/include/exec/memory_ldst_cached.h.inc
@@ -24,7 +24,7 @@
 #define LD_P(size) \
     glue(glue(ld, size), glue(ENDIANNESS, _p))
 
-static inline uint32_t ADDRESS_SPACE_LD_CACHED(uw)(MemoryRegionCache *cache,
+static inline uint16_t ADDRESS_SPACE_LD_CACHED(uw)(MemoryRegionCache *cache,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
 {
     assert(addr < cache->len && 2 <= cache->len - addr);
@@ -72,7 +72,7 @@ static inline uint64_t ADDRESS_SPACE_LD_CACHED(q)(MemoryRegionCache *cache,
     glue(glue(st, size), glue(ENDIANNESS, _p))
 
 static inline void ADDRESS_SPACE_ST_CACHED(w)(MemoryRegionCache *cache,
-    hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
+    hwaddr addr, uint16_t val, MemTxAttrs attrs, MemTxResult *result)
 {
     assert(addr < cache->len && 2 <= cache->len - addr);
     if (likely(cache->ptr)) {
-- 
2.26.3




* [RFC PATCH 06/12] exec/memory_ldst_cached: Set MemTxResult on success
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (4 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 05/12] exec/memory_ldst_cached: Use correct type size Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 07/12] exec/memory_ldst_cached: Document aligned addresses are expected Philippe Mathieu-Daudé
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

If the caller passed a MemTxResult argument, we must fill it with
the transaction result. We already do so when no cache is present;
complete the other case (which always succeeds).
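The effect is easiest to see in a reduced model of the fast/slow
dispatch (stand-in types; fast_ptr plays the role of cache->ptr, and
the slow path is modeled by a precomputed value/status pair):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t MemTxResult;   /* stand-in for the QEMU type */
#define MEMTX_OK    0
#define MEMTX_ERROR (1U << 0)

/*
 * Reduced model of the fixed dispatch: whichever path runs, *result
 * ends up holding a valid status, so callers may rely on it.
 */
static uint32_t ld_cached_sketch(const uint32_t *fast_ptr, uint32_t slow_val,
                                 MemTxResult slow_status, MemTxResult *result)
{
    if (fast_ptr) {             /* cache hit: direct host pointer */
        if (result) {
            *result = MEMTX_OK; /* the assignment this patch adds */
        }
        return *fast_ptr;
    }
    /* cache miss: the slow path fills in the status itself */
    if (result) {
        *result = slow_status;
    }
    return slow_val;
}
```

Before the fix, the fast path left *result untouched, so a caller could
read back a stale status from an earlier transaction.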

Fixes: 48564041a73 ("exec: reintroduce MemoryRegion caching")
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/exec/memory_ldst_cached.h.inc | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/exec/memory_ldst_cached.h.inc b/include/exec/memory_ldst_cached.h.inc
index d7834f852c4..a8f146251d4 100644
--- a/include/exec/memory_ldst_cached.h.inc
+++ b/include/exec/memory_ldst_cached.h.inc
@@ -30,6 +30,9 @@ static inline uint16_t ADDRESS_SPACE_LD_CACHED(uw)(MemoryRegionCache *cache,
     assert(addr < cache->len && 2 <= cache->len - addr);
     fuzz_dma_read_cb(cache->xlat + addr, 2, cache->mrs.mr);
     if (likely(cache->ptr)) {
+        if (result) {
+            *result = MEMTX_OK;
+        }
         return LD_P(uw)(cache->ptr + addr);
     } else {
         return ADDRESS_SPACE_LD_CACHED_SLOW(uw)(cache, addr, attrs, result);
@@ -42,6 +45,9 @@ static inline uint32_t ADDRESS_SPACE_LD_CACHED(l)(MemoryRegionCache *cache,
     assert(addr < cache->len && 4 <= cache->len - addr);
     fuzz_dma_read_cb(cache->xlat + addr, 4, cache->mrs.mr);
     if (likely(cache->ptr)) {
+        if (result) {
+            *result = MEMTX_OK;
+        }
         return LD_P(l)(cache->ptr + addr);
     } else {
         return ADDRESS_SPACE_LD_CACHED_SLOW(l)(cache, addr, attrs, result);
@@ -54,6 +60,9 @@ static inline uint64_t ADDRESS_SPACE_LD_CACHED(q)(MemoryRegionCache *cache,
     assert(addr < cache->len && 8 <= cache->len - addr);
     fuzz_dma_read_cb(cache->xlat + addr, 8, cache->mrs.mr);
     if (likely(cache->ptr)) {
+        if (result) {
+            *result = MEMTX_OK;
+        }
         return LD_P(q)(cache->ptr + addr);
     } else {
         return ADDRESS_SPACE_LD_CACHED_SLOW(q)(cache, addr, attrs, result);
@@ -76,6 +85,9 @@ static inline void ADDRESS_SPACE_ST_CACHED(w)(MemoryRegionCache *cache,
 {
     assert(addr < cache->len && 2 <= cache->len - addr);
     if (likely(cache->ptr)) {
+        if (result) {
+            *result = MEMTX_OK;
+        }
         ST_P(w)(cache->ptr + addr, val);
     } else {
         ADDRESS_SPACE_ST_CACHED_SLOW(w)(cache, addr, val, attrs, result);
@@ -87,6 +99,9 @@ static inline void ADDRESS_SPACE_ST_CACHED(l)(MemoryRegionCache *cache,
 {
     assert(addr < cache->len && 4 <= cache->len - addr);
     if (likely(cache->ptr)) {
+        if (result) {
+            *result = MEMTX_OK;
+        }
         ST_P(l)(cache->ptr + addr, val);
     } else {
         ADDRESS_SPACE_ST_CACHED_SLOW(l)(cache, addr, val, attrs, result);
@@ -98,6 +113,9 @@ static inline void ADDRESS_SPACE_ST_CACHED(q)(MemoryRegionCache *cache,
 {
     assert(addr < cache->len && 8 <= cache->len - addr);
     if (likely(cache->ptr)) {
+        if (result) {
+            *result = MEMTX_OK;
+        }
         ST_P(q)(cache->ptr + addr, val);
     } else {
         ADDRESS_SPACE_ST_CACHED_SLOW(q)(cache, addr, val, attrs, result);
-- 
2.26.3




* [RFC PATCH 07/12] exec/memory_ldst_cached: Document aligned addresses are expected
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (5 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 06/12] exec/memory_ldst_cached: Set MemTxResult on success Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 08/12] exec/memory_ldst_cached: Check address alignment if requested Philippe Mathieu-Daudé
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/exec/memory_ldst_cached.h.inc | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/include/exec/memory_ldst_cached.h.inc b/include/exec/memory_ldst_cached.h.inc
index a8f146251d4..515beb48f47 100644
--- a/include/exec/memory_ldst_cached.h.inc
+++ b/include/exec/memory_ldst_cached.h.inc
@@ -24,6 +24,7 @@
 #define LD_P(size) \
     glue(glue(ld, size), glue(ENDIANNESS, _p))
 
+/* warning: addr must be aligned */
 static inline uint16_t ADDRESS_SPACE_LD_CACHED(uw)(MemoryRegionCache *cache,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
 {
@@ -39,6 +40,7 @@ static inline uint16_t ADDRESS_SPACE_LD_CACHED(uw)(MemoryRegionCache *cache,
     }
 }
 
+/* warning: addr must be aligned */
 static inline uint32_t ADDRESS_SPACE_LD_CACHED(l)(MemoryRegionCache *cache,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
 {
@@ -54,6 +56,7 @@ static inline uint32_t ADDRESS_SPACE_LD_CACHED(l)(MemoryRegionCache *cache,
     }
 }
 
+/* warning: addr must be aligned */
 static inline uint64_t ADDRESS_SPACE_LD_CACHED(q)(MemoryRegionCache *cache,
     hwaddr addr, MemTxAttrs attrs, MemTxResult *result)
 {
@@ -80,6 +83,7 @@ static inline uint64_t ADDRESS_SPACE_LD_CACHED(q)(MemoryRegionCache *cache,
 #define ST_P(size) \
     glue(glue(st, size), glue(ENDIANNESS, _p))
 
+/* warning: addr must be aligned */
 static inline void ADDRESS_SPACE_ST_CACHED(w)(MemoryRegionCache *cache,
     hwaddr addr, uint16_t val, MemTxAttrs attrs, MemTxResult *result)
 {
@@ -94,6 +98,7 @@ static inline void ADDRESS_SPACE_ST_CACHED(w)(MemoryRegionCache *cache,
     }
 }
 
+/* warning: addr must be aligned */
 static inline void ADDRESS_SPACE_ST_CACHED(l)(MemoryRegionCache *cache,
     hwaddr addr, uint32_t val, MemTxAttrs attrs, MemTxResult *result)
 {
@@ -108,6 +113,7 @@ static inline void ADDRESS_SPACE_ST_CACHED(l)(MemoryRegionCache *cache,
     }
 }
 
+/* warning: addr must be aligned */
 static inline void ADDRESS_SPACE_ST_CACHED(q)(MemoryRegionCache *cache,
     hwaddr addr, uint64_t val, MemTxAttrs attrs, MemTxResult *result)
 {
-- 
2.26.3




* [RFC PATCH 08/12] exec/memory_ldst_cached: Check address alignment if requested
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (6 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 07/12] exec/memory_ldst_cached: Document aligned addresses are expected Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 09/12] hw/virtio: Use correct type sizes Philippe Mathieu-Daudé
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

If the caller requires strict alignment, check that the address
satisfies it before doing the transaction. If it does not, return
MEMTX_UNALIGNED_ERROR.
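As a reduced sketch of the gated check with stand-in types: it only
runs when the bus master asked for strict alignment, and it tests the
host pointer that the cached fast path would dereference (ptr plays
the role of cache->ptr + addr):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t MemTxResult;   /* stand-in for the QEMU type */
#define MEMTX_OK              0
#define MEMTX_UNALIGNED_ERROR (1U << 2)
#define QEMU_PTR_IS_ALIGNED(p, n) (((uintptr_t)(p) % (n)) == 0)

typedef struct MemTxAttrs {
    unsigned int aligned:1;
} MemTxAttrs;

/* 16-bit-aligned buffer, so demo_bytes + 1 is guaranteed unaligned. */
static const _Alignas(2) uint8_t demo_bytes[4] = { 0xcd, 0xab, 0x01, 0x02 };

static uint16_t lduw_cached_sketch(const uint8_t *ptr, MemTxAttrs attrs,
                                   MemTxResult *result)
{
    /* the gated check: only enforced when attrs.aligned is set */
    if (attrs.aligned && !QEMU_PTR_IS_ALIGNED(ptr, sizeof(uint16_t))) {
        if (result) {
            *result = MEMTX_UNALIGNED_ERROR;
        }
        return (uint16_t)-1; /* XXX: meaningless value */
    }
    if (result) {
        *result = MEMTX_OK;
    }
    /* assemble the little-endian 16-bit word byte by byte */
    return (uint16_t)(ptr[0] | (uint16_t)ptr[1] << 8);
}
```

Without attrs.aligned set, the access proceeds as before, so existing
callers are unaffected.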

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/exec/memory_ldst_cached.h.inc | 42 +++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)

diff --git a/include/exec/memory_ldst_cached.h.inc b/include/exec/memory_ldst_cached.h.inc
index 515beb48f47..311a9759a22 100644
--- a/include/exec/memory_ldst_cached.h.inc
+++ b/include/exec/memory_ldst_cached.h.inc
@@ -31,6 +31,13 @@ static inline uint16_t ADDRESS_SPACE_LD_CACHED(uw)(MemoryRegionCache *cache,
     assert(addr < cache->len && 2 <= cache->len - addr);
     fuzz_dma_read_cb(cache->xlat + addr, 2, cache->mrs.mr);
     if (likely(cache->ptr)) {
+        if (attrs.aligned && unlikely(!QEMU_PTR_IS_ALIGNED(cache->ptr,
+                                                           sizeof(uint16_t)))) {
+            if (result) {
+                *result = MEMTX_UNALIGNED_ERROR;
+            }
+            return (uint16_t)-1; /* XXX */
+        }
         if (result) {
             *result = MEMTX_OK;
         }
@@ -47,6 +54,13 @@ static inline uint32_t ADDRESS_SPACE_LD_CACHED(l)(MemoryRegionCache *cache,
     assert(addr < cache->len && 4 <= cache->len - addr);
     fuzz_dma_read_cb(cache->xlat + addr, 4, cache->mrs.mr);
     if (likely(cache->ptr)) {
+        if (attrs.aligned && unlikely(!QEMU_PTR_IS_ALIGNED(cache->ptr,
+                                                           sizeof(uint32_t)))) {
+            if (result) {
+                *result = MEMTX_UNALIGNED_ERROR;
+            }
+            return (uint32_t)-1; /* XXX */
+        }
         if (result) {
             *result = MEMTX_OK;
         }
@@ -63,6 +77,13 @@ static inline uint64_t ADDRESS_SPACE_LD_CACHED(q)(MemoryRegionCache *cache,
     assert(addr < cache->len && 8 <= cache->len - addr);
     fuzz_dma_read_cb(cache->xlat + addr, 8, cache->mrs.mr);
     if (likely(cache->ptr)) {
+        if (attrs.aligned && unlikely(!QEMU_PTR_IS_ALIGNED(cache->ptr,
+                                                           sizeof(uint64_t)))) {
+            if (result) {
+                *result = MEMTX_UNALIGNED_ERROR;
+            }
+            return (uint64_t)-1; /* XXX */
+        }
         if (result) {
             *result = MEMTX_OK;
         }
@@ -89,6 +110,13 @@ static inline void ADDRESS_SPACE_ST_CACHED(w)(MemoryRegionCache *cache,
 {
     assert(addr < cache->len && 2 <= cache->len - addr);
     if (likely(cache->ptr)) {
+        if (attrs.aligned && unlikely(!QEMU_PTR_IS_ALIGNED(cache->ptr,
+                                                           sizeof(uint16_t)))) {
+            if (result) {
+                *result = MEMTX_UNALIGNED_ERROR;
+            }
+            return;
+        }
         if (result) {
             *result = MEMTX_OK;
         }
@@ -104,6 +132,13 @@ static inline void ADDRESS_SPACE_ST_CACHED(l)(MemoryRegionCache *cache,
 {
     assert(addr < cache->len && 4 <= cache->len - addr);
     if (likely(cache->ptr)) {
+        if (attrs.aligned && unlikely(!QEMU_PTR_IS_ALIGNED(cache->ptr,
+                                                           sizeof(uint32_t)))) {
+            if (result) {
+                *result = MEMTX_UNALIGNED_ERROR;
+            }
+            return;
+        }
         if (result) {
             *result = MEMTX_OK;
         }
@@ -119,6 +154,13 @@ static inline void ADDRESS_SPACE_ST_CACHED(q)(MemoryRegionCache *cache,
 {
     assert(addr < cache->len && 8 <= cache->len - addr);
     if (likely(cache->ptr)) {
+        if (attrs.aligned && unlikely(!QEMU_PTR_IS_ALIGNED(cache->ptr,
+                                                           sizeof(uint64_t)))) {
+            if (result) {
+                *result = MEMTX_UNALIGNED_ERROR;
+            }
+            return;
+        }
         if (result) {
             *result = MEMTX_OK;
         }
-- 
2.26.3




* [RFC PATCH 09/12] hw/virtio: Use correct type sizes
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (7 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 08/12] exec/memory_ldst_cached: Check address alignment if requested Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 10/12] hw/virtio: Extract virtio_lduw_phys_cached_with_attrs() Philippe Mathieu-Daudé
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

Use uint16_t for unsigned 16-bit data and uint32_t for unsigned
32-bit data.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/hw/virtio/virtio-access.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/hw/virtio/virtio-access.h b/include/hw/virtio/virtio-access.h
index 6818a23a2d3..ae8c9feffc5 100644
--- a/include/hw/virtio/virtio-access.h
+++ b/include/hw/virtio/virtio-access.h
@@ -120,7 +120,7 @@ static inline void virtio_stq_p(VirtIODevice *vdev, void *ptr, uint64_t v)
     }
 }
 
-static inline int virtio_lduw_p(VirtIODevice *vdev, const void *ptr)
+static inline uint16_t virtio_lduw_p(VirtIODevice *vdev, const void *ptr)
 {
     if (virtio_access_is_big_endian(vdev)) {
         return lduw_be_p(ptr);
@@ -129,7 +129,7 @@ static inline int virtio_lduw_p(VirtIODevice *vdev, const void *ptr)
     }
 }
 
-static inline int virtio_ldl_p(VirtIODevice *vdev, const void *ptr)
+static inline uint32_t virtio_ldl_p(VirtIODevice *vdev, const void *ptr)
 {
     if (virtio_access_is_big_endian(vdev)) {
         return ldl_be_p(ptr);
-- 
2.26.3


* [RFC PATCH 10/12] hw/virtio: Extract virtio_lduw_phys_cached_with_attrs()
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (8 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 09/12] hw/virtio: Use correct type sizes Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 11/12] hw/virtio: Have vring_avail_flags() return a boolean value Philippe Mathieu-Daudé
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

To be able to specify memory transaction attributes (such as the
address alignment), extract virtio_lduw_phys_cached_with_attrs()
from virtio_lduw_phys_cached().

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/hw/virtio/virtio-access.h | 35 +++++++++++++++++++++++++++----
 1 file changed, 31 insertions(+), 4 deletions(-)

diff --git a/include/hw/virtio/virtio-access.h b/include/hw/virtio/virtio-access.h
index ae8c9feffc5..aebf0a088a0 100644
--- a/include/hw/virtio/virtio-access.h
+++ b/include/hw/virtio/virtio-access.h
@@ -156,14 +156,41 @@ static inline uint16_t virtio_tswap16(VirtIODevice *vdev, uint16_t s)
 #endif
 }
 
+/**
+ * virtio_ld*_phys_cached_with_attrs: load from a VirtIO cached #MemoryRegion
+ * virtio_st*_phys_cached_with_attrs: store to a VirtIO cached #MemoryRegion
+ *
+ * These functions perform a load or store of the byte, word,
+ * longword or quad to the specified address.  The address is
+ * a physical address in the VirtIO device AddressSpace, but it must lie within
+ * a #MemoryRegion that was mapped with address_space_cache_init.
+ *
+ * @vdev: virtio device accessed
+ * @cache: previously initialized #MemoryRegionCache to be accessed
+ * @pa: physical address within the address space
+ * @val: data value, for stores
+ * @attrs: memory transaction attributes
+ * @result: location to write the success/failure of the transaction;
+ *   if NULL, this information is discarded
+ */
+
+static inline uint16_t virtio_lduw_phys_cached_with_attrs(VirtIODevice *vdev,
+                                                MemoryRegionCache *cache,
+                                                hwaddr pa, MemTxAttrs attrs,
+                                                MemTxResult *result)
+{
+    if (virtio_access_is_big_endian(vdev)) {
+        return address_space_lduw_be_cached(cache, pa, attrs, result);
+    }
+    return address_space_lduw_le_cached(cache, pa, attrs, result);
+}
+
 static inline uint16_t virtio_lduw_phys_cached(VirtIODevice *vdev,
                                                MemoryRegionCache *cache,
                                                hwaddr pa)
 {
-    if (virtio_access_is_big_endian(vdev)) {
-        return lduw_be_phys_cached(cache, pa);
-    }
-    return lduw_le_phys_cached(cache, pa);
+    return virtio_lduw_phys_cached_with_attrs(vdev, cache, pa,
+                                              MEMTXATTRS_UNSPECIFIED, NULL);
 }
 
 static inline uint32_t virtio_ldl_phys_cached(VirtIODevice *vdev,
-- 
2.26.3


* [RFC PATCH 11/12] hw/virtio: Have vring_avail_flags() return a boolean value
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (9 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 10/12] hw/virtio: Extract virtio_lduw_phys_cached_with_attrs() Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-05-20 11:09 ` [RFC PATCH 12/12] hw/virtio: Display error if vring flag field is not aligned Philippe Mathieu-Daudé
  2021-06-01  3:32 ` [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

To be able to check whether the memory transaction succeeded, change
vring_avail_flags() to take a pointer to the value to read, so it can
return a boolean indicating success or failure.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/virtio/virtio.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index e02544b2df7..1b8bc194ce1 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -287,16 +287,19 @@ static VRingMemoryRegionCaches *vring_get_region_caches(struct VirtQueue *vq)
 }
 
 /* Called within rcu_read_lock().  */
-static inline uint16_t vring_avail_flags(VirtQueue *vq)
+static inline bool vring_avail_flags(VirtQueue *vq, uint16_t *val)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingAvail, flags);
 
     if (!caches) {
-        return 0;
+        *val = 0;
+        return true;
     }
 
-    return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
+    *val = virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
+
+    return true;
 }
 
 /* Called within rcu_read_lock().  */
@@ -2462,7 +2465,9 @@ static bool virtio_split_should_notify(VirtIODevice *vdev, VirtQueue *vq)
     }
 
     if (!virtio_vdev_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX)) {
-        return !(vring_avail_flags(vq) & VRING_AVAIL_F_NO_INTERRUPT);
+        uint16_t flags;
+        return vring_avail_flags(vq, &flags)
+               && !(flags & VRING_AVAIL_F_NO_INTERRUPT);
     }
 
     v = vq->signalled_used_valid;
-- 
2.26.3


* [RFC PATCH 12/12] hw/virtio: Display error if vring flag field is not aligned
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (10 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 11/12] hw/virtio: Have vring_avail_flags() return a boolean value Philippe Mathieu-Daudé
@ 2021-05-20 11:09 ` Philippe Mathieu-Daudé
  2021-06-02 12:44   ` Stefan Hajnoczi
  2021-06-01  3:32 ` [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
  12 siblings, 1 reply; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-05-20 11:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini,
	Philippe Mathieu-Daudé

Per the virtio spec [*], the vring is aligned, so the 'flags' field
is also 16-bit aligned. If it is not, this is a guest error, and
we'd rather report such potentially malicious activity.

Enforce the transaction alignment by setting the 'aligned' attribute
and log unaligned addresses.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/virtio/virtio.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 1b8bc194ce1..f19d12fc71e 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -291,15 +291,24 @@ static inline bool vring_avail_flags(VirtQueue *vq, uint16_t *val)
 {
     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
     hwaddr pa = offsetof(VRingAvail, flags);
+    MemTxAttrs attrs = { .aligned = 1 };
+    MemTxResult res;
 
     if (!caches) {
         *val = 0;
         return true;
     }
 
-    *val = virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
+    *val = virtio_lduw_phys_cached_with_attrs(vq->vdev, &caches->avail,
+                                              pa, attrs, &res);
+    if (res == MEMTX_UNALIGNED_ERROR) {
+        qemu_log_mask(LOG_GUEST_ERROR,
+                      "virtio: vring flag address 0x%" HWADDR_PRIX " "
+                      "is not aligned\n", pa);
+        return false;
+    }
 
-    return true;
+    return res == MEMTX_OK;
 }
 
 /* Called within rcu_read_lock().  */
-- 
2.26.3


* Re: [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses
  2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
                   ` (11 preceding siblings ...)
  2021-05-20 11:09 ` [RFC PATCH 12/12] hw/virtio: Display error if vring flag field is not aligned Philippe Mathieu-Daudé
@ 2021-06-01  3:32 ` Philippe Mathieu-Daudé
  12 siblings, 0 replies; 16+ messages in thread
From: Philippe Mathieu-Daudé @ 2021-06-01  3:32 UTC (permalink / raw)
  To: qemu-devel
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, Peter Xu,
	Bibo Mao, Stefan Hajnoczi, Paolo Bonzini

Hi,

Is there any feedback, or should I discard this experiment?

On 5/20/21 1:09 PM, Philippe Mathieu-Daudé wrote:
> This series is an experiment after chatting with Stefan and having
> received review from Peter / Richard on an orthogonal series aiming
> to handle unaligned pointers (atomically):
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg808954.html
> 
> Here we don't aim to fix non-atomic accesses; however, we are interested
> in catching malicious unaligned accesses from guests.
> 
> For that we introduce the MemTxAttrs::aligned field, which allows
> accessors to return MEMTX_UNALIGNED_ERROR early enough, instead
> of trying the unaligned access which can potentially trigger a
> SIGBUS and abort the process.
> 
> To be able to pass/return alignment information we modify the
> memory load/store cached API, then add the
> virtio_ld/st*_phys_cached_with_attrs() handler (we only implement
> lduw for this experiment).
> 
> Finally, we modify vring_avail_flags() to return whether the guest
> requested an illegal (unaligned) memory access.
> 
> Note: The current virtio_ld/st*_phys_cached_with_attrs() API returns
> the value and takes the MemTxResult as an argument, so I chose to
> return -1 (marked with a /* XXX */ comment). We should switch to an
> API which returns a MemTxResult and takes a pointer to the accessed
> value as an argument, so we don't have to return a meaningless value.
> 
> But this is beyond the scope of this experiment; here we want to
> emphasize the introduction of the MemTxAttrs::aligned field and the
> MEMTX_UNALIGNED_ERROR return value.
> 
> Regards,
> 
> Phil.
> 
> Philippe Mathieu-Daudé (12):
>   exec/memory_ldst: Use correct type sizes
>   exec/memattrs: Add attribute/error for address alignment
>   exec/memory_ldst: Return MEMTX_UNALIGNED_ERROR for unaligned addresses
>   exec/memory_ldst_cached: Sort declarations
>   exec/memory_ldst_cached: Use correct type size
>   exec/memory_ldst_cached: Set MemTxResult on success
>   exec/memory_ldst_cached: Document aligned addresses are expected
>   exec/memory_ldst_cached: Check address alignment if requested
>   hw/virtio: Use correct type sizes
>   hw/virtio: Extract virtio_lduw_phys_cached_with_attrs()
>   hw/virtio: Have vring_avail_flags() return a boolean value
>   hw/virtio: Display error if vring flag field is not aligned
> 
>  include/exec/memattrs.h               |   3 +
>  include/hw/virtio/virtio-access.h     |  39 +++++++--
>  include/exec/memory_ldst.h.inc        |  16 ++--
>  include/exec/memory_ldst_cached.h.inc | 114 ++++++++++++++++++++------
>  hw/virtio/virtio.c                    |  22 ++++-
>  memory_ldst.c.inc                     |  69 +++++++++++++---
>  6 files changed, 211 insertions(+), 52 deletions(-)
> 


* Re: [RFC PATCH 02/12] exec/memattrs: Add attribute/error for address alignment
  2021-05-20 11:09 ` [RFC PATCH 02/12] exec/memattrs: Add attribute/error for address alignment Philippe Mathieu-Daudé
@ 2021-06-02 12:39   ` Stefan Hajnoczi
  0 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2021-06-02 12:39 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, qemu-devel,
	Peter Xu, Bibo Mao, Paolo Bonzini

On Thu, May 20, 2021 at 01:09:09PM +0200, Philippe Mathieu-Daudé wrote:
> A bus master might specify the 'aligned' attribute to enforce
> a transaction using aligned address. If the address is not
> aligned, the accessor will return MEMTX_UNALIGNED_ERROR.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  include/exec/memattrs.h | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
> index 95f2d20d55b..6fe59194e35 100644
> --- a/include/exec/memattrs.h
> +++ b/include/exec/memattrs.h
> @@ -39,6 +39,8 @@ typedef struct MemTxAttrs {
>      unsigned int requester_id:16;
>      /* Invert endianness for this page */
>      unsigned int byte_swap:1;
> +    /* Memory access must be aligned */
> +    unsigned int aligned:1;
>      /*
>       * The following are target-specific page-table bits.  These are not
>       * related to actual memory transactions at all.  However, this structure
> @@ -66,6 +68,7 @@ typedef struct MemTxAttrs {
>  #define MEMTX_OK 0
>  #define MEMTX_ERROR             (1U << 0) /* device returned an error */
>  #define MEMTX_DECODE_ERROR      (1U << 1) /* nothing at that address */
> +#define MEMTX_UNALIGNED_ERROR   (1U << 2) /* address is not aligned */

Most of the time alignment requirements are for "natural alignment"
(i.e. 2-byte accesses must be 2-byte aligned, 4-byte accesses must be
4-byte aligned). I wonder if there exist hardware interfaces that have
other alignment requirements and if the MemTxAttrs::aligned attribute
introduced above will be able to cover those cases?

Stefan



* Re: [RFC PATCH 12/12] hw/virtio: Display error if vring flag field is not aligned
  2021-05-20 11:09 ` [RFC PATCH 12/12] hw/virtio: Display error if vring flag field is not aligned Philippe Mathieu-Daudé
@ 2021-06-02 12:44   ` Stefan Hajnoczi
  0 siblings, 0 replies; 16+ messages in thread
From: Stefan Hajnoczi @ 2021-06-02 12:44 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé
  Cc: Peter Maydell, Michael S. Tsirkin, Richard Henderson, qemu-devel,
	Peter Xu, Bibo Mao, Paolo Bonzini

On Thu, May 20, 2021 at 01:09:19PM +0200, Philippe Mathieu-Daudé wrote:
>  {
>      VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
>      hwaddr pa = offsetof(VRingAvail, flags);
> +    MemTxAttrs attrs = { .aligned = 1 };
> +    MemTxResult res;
>  
>      if (!caches) {
>          *val = 0;
>          return true;
>      }
>  
> -    *val = virtio_lduw_phys_cached_with_attrs(vq->vdev, &caches->avail, pa);
> +    *val = virtio_lduw_phys_cached_with_attrs(vq->vdev, &caches->avail,
> +                                              pa, attrs, &res);
> +    if (res == MEMTX_UNALIGNED_ERROR) {
> +        qemu_log_mask(LOG_GUEST_ERROR,
> +                      "virtio: vring flag address 0x%" HWADDR_PRIX " "
> +                      "is not aligned\n", pa);
> +        return false;
> +    }

Performance-critical code paths could validate the cache and offset
ahead of time to avoid taking the more expensive code path that checks
MemTxAttrs.

The guest driver configures the vring addresses by writing to
virtio-pci/virtio-mmio registers. The alignment check can be performed
at that time (while/before creating the cache).

Stefan



end of thread, other threads:[~2021-06-02 12:45 UTC | newest]

Thread overview: 16+ messages
2021-05-20 11:09 [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 01/12] exec/memory_ldst: Use correct type sizes Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 02/12] exec/memattrs: Add attribute/error for address alignment Philippe Mathieu-Daudé
2021-06-02 12:39   ` Stefan Hajnoczi
2021-05-20 11:09 ` [RFC PATCH 03/12] exec/memory_ldst: Return MEMTX_UNALIGNED_ERROR for unaligned addresses Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 04/12] exec/memory_ldst_cached: Sort declarations Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 05/12] exec/memory_ldst_cached: Use correct type size Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 06/12] exec/memory_ldst_cached: Set MemTxResult on success Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 07/12] exec/memory_ldst_cached: Document aligned addresses are expected Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 08/12] exec/memory_ldst_cached: Check address alignment if requested Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 09/12] hw/virtio: Use correct type sizes Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 10/12] hw/virtio: Extract virtio_lduw_phys_cached_with_attrs() Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 11/12] hw/virtio: Have vring_avail_flags() return a boolean value Philippe Mathieu-Daudé
2021-05-20 11:09 ` [RFC PATCH 12/12] hw/virtio: Display error if vring flag field is not aligned Philippe Mathieu-Daudé
2021-06-02 12:44   ` Stefan Hajnoczi
2021-06-01  3:32 ` [RFC PATCH 00/12] exec/memory: Experimental API to catch unaligned accesses Philippe Mathieu-Daudé
