* [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation
@ 2019-09-11  1:43 Richard Henderson
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 1/3] cputlb: Disable __always_inline__ without optimization Richard Henderson
                   ` (5 more replies)
  0 siblings, 6 replies; 15+ messages in thread
From: Richard Henderson @ 2019-09-11  1:43 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, mark.cave-ayland, tony.nguyen

The version that Tony came up with, and which I reviewed, doesn't actually
work when applied to RAM; it only works for I/O memory.  This was
the root cause of

https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg00036.html

I tried a couple of different approaches in load/store_helper, but
this is the one that didn't affect the normal case -- a simple TLB
miss against (non-swapped) RAM.
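
For anyone who wants the shape of the change without reading the full
diffs, here is a minimal standalone sketch in plain C.  The names are
simplified stand-ins rather than the actual QEMU symbols: the slow path
takes a "direct" callback for the natural byte order and a "direct_swap"
callback for the opposite order, and a TLB_BSWAP-flagged RAM page simply
uses the swapped callback instead of being forced onto the I/O path.

#include <stdint.h>
#include <stdio.h>

/* Callback performing the actual load from a host RAM address. */
typedef uint64_t DirectLoad(const void *haddr);

static uint64_t ld32_le(const void *haddr)
{
    const uint8_t *p = haddr;
    return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
           (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

static uint64_t ld32_be(const void *haddr)
{
    const uint8_t *p = haddr;
    return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
           (uint32_t)p[2] << 8 | (uint32_t)p[3];
}

/* Stand-in for the TLB_BSWAP flag introduced in patch 3. */
#define TLB_BSWAP 1u

static uint64_t load(const void *haddr, unsigned tlb_flags,
                     DirectLoad *direct, DirectLoad *direct_swap)
{
    if (tlb_flags & TLB_BSWAP) {
        /* Byte-swapped RAM page: still a direct host access. */
        return direct_swap(haddr);
    }
    return direct(haddr);
}

int main(void)
{
    uint8_t ram[4] = { 0x12, 0x34, 0x56, 0x78 };

    /* A little-endian 32-bit load, normal and byte-swapped. */
    printf("normal: 0x%08llx\n",
           (unsigned long long)load(ram, 0, ld32_le, ld32_be));
    printf("bswap:  0x%08llx\n",
           (unsigned long long)load(ram, TLB_BSWAP, ld32_le, ld32_be));
    return 0;
}

In the real patches the same pair of callbacks is threaded through both
load_helper and store_helper, one pair per MemOp size and endianness.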

This is able to boot the Solaris 7 notdirty_mem_ops reproducer until
it panics due to no root file system, whereas before it would not
make it to the SunOS banner.

OpenBIOS for Sparc64
Configuration device id QEMU version 1 machine id 0
kernel cmdline 
CPUs: 1 x SUNW,UltraSPARC-IIi
UUID: 00000000-0000-0000-0000-000000000000
Welcome to OpenBIOS v1.1 built on Aug 25 2019 18:20
  Type 'help' for detailed information
Trying cdrom:f...
Not a bootable ELF image
Not a bootable a.out image

Loading FCode image...
Loaded 5936 bytes
entry point is 0x4000
Evaluating FCode...
open isn't unique.
SunOS Release 5.7 Version Generic_106541-06 [UNIX(R) System V Release 4.0]
Copyright (c) 1983-1999, Sun Microsystems, Inc.
WARNING: Interrupt not seen after set_features
Cannot assemble drivers for root /pci@1f,0/pci@1,1/ide@3/cdrom@2,0:b
Cannot mount root on /pci@1f,0/pci@1,1/ide@3/cdrom@2,0:b fstype ufs
panic[cpu0]/thread=10404040: vfs_mountroot: cannot mount root
skipping system dump - no dump device configured
rebooting...
BOOTpanic - kernel: prom_reboot: reboot call returned!
EXIT
0 > 


Richard Henderson (3):
  cputlb: Disable __always_inline__ without optimization
  cputlb: Replace switches in load/store_helper with callback
  cputlb: Introduce TLB_BSWAP

 include/exec/cpu-all.h |   2 +
 accel/tcg/cputlb.c     | 245 ++++++++++++++++++++++-------------------
 2 files changed, 131 insertions(+), 116 deletions(-)

-- 
2.17.1




* [Qemu-devel] [PATCH 1/3] cputlb: Disable __always_inline__ without optimization
  2019-09-11  1:43 [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation Richard Henderson
@ 2019-09-11  1:43 ` Richard Henderson
  2019-09-11  8:34   ` Peter Maydell
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback Richard Henderson
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Richard Henderson @ 2019-09-11  1:43 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, mark.cave-ayland, tony.nguyen

This forced inlining can result in missing symbols,
which makes a debugging build harder to follow.

Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index abae79650c..909f01ebcc 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1269,6 +1269,18 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
     cpu_loop_exit_atomic(env_cpu(env), retaddr);
 }
 
+/*
+ * In order for the expected constant folding to happen,
+ * we require that some functions be inlined.
+ * However, this inlining can make debugging harder for a
+ * non-optimizing build.
+ */
+#ifdef __OPTIMIZE__
+#define ALWAYS_INLINE  __attribute__((always_inline))
+#else
+#define ALWAYS_INLINE
+#endif
+
 /*
  * Load Helpers
  *
@@ -1281,7 +1293,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr,
                                 TCGMemOpIdx oi, uintptr_t retaddr);
 
-static inline uint64_t __attribute__((always_inline))
+static inline uint64_t ALWAYS_INLINE
 load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             uintptr_t retaddr, MemOp op, bool code_read,
             FullLoadHelper *full_load)
@@ -1530,7 +1542,7 @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr,
  * Store Helpers
  */
 
-static inline void __attribute__((always_inline))
+static inline void ALWAYS_INLINE
 store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
              TCGMemOpIdx oi, uintptr_t retaddr, MemOp op)
 {
-- 
2.17.1




* [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback
  2019-09-11  1:43 [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation Richard Henderson
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 1/3] cputlb: Disable __always_inline__ without optimization Richard Henderson
@ 2019-09-11  1:43 ` Richard Henderson
  2019-09-11 10:55   ` Tony Nguyen
  2019-09-11 13:07   ` Philippe Mathieu-Daudé
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 3/3] cputlb: Introduce TLB_BSWAP Richard Henderson
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 15+ messages in thread
From: Richard Henderson @ 2019-09-11  1:43 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, mark.cave-ayland, tony.nguyen

Add a function parameter to perform the actual load/store to ram.
With optimization, this results in identical code.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 157 +++++++++++++++++++++++----------------------
 1 file changed, 81 insertions(+), 76 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 909f01ebcc..e6229d100a 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1292,11 +1292,37 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 
 typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr,
                                 TCGMemOpIdx oi, uintptr_t retaddr);
+typedef uint64_t DirectLoadHelper(const void *);
+
+static inline uint64_t direct_ldub(const void *haddr)
+{
+    return *(uint8_t *)haddr;
+}
+
+static inline uint64_t direct_lduw_be(const void *haddr)
+{
+    return lduw_be_p(haddr);
+}
+
+static inline uint64_t direct_lduw_le(const void *haddr)
+{
+    return lduw_le_p(haddr);
+}
+
+static inline uint64_t direct_ldul_be(const void *haddr)
+{
+    return (uint32_t)ldl_be_p(haddr);
+}
+
+static inline uint64_t direct_ldul_le(const void *haddr)
+{
+    return (uint32_t)ldl_le_p(haddr);
+}
 
 static inline uint64_t ALWAYS_INLINE
 load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             uintptr_t retaddr, MemOp op, bool code_read,
-            FullLoadHelper *full_load)
+            FullLoadHelper *full_load, DirectLoadHelper *direct)
 {
     uintptr_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1385,33 +1411,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
 
  do_aligned_access:
     haddr = (void *)((uintptr_t)addr + entry->addend);
-    switch (op) {
-    case MO_UB:
-        res = ldub_p(haddr);
-        break;
-    case MO_BEUW:
-        res = lduw_be_p(haddr);
-        break;
-    case MO_LEUW:
-        res = lduw_le_p(haddr);
-        break;
-    case MO_BEUL:
-        res = (uint32_t)ldl_be_p(haddr);
-        break;
-    case MO_LEUL:
-        res = (uint32_t)ldl_le_p(haddr);
-        break;
-    case MO_BEQ:
-        res = ldq_be_p(haddr);
-        break;
-    case MO_LEQ:
-        res = ldq_le_p(haddr);
-        break;
-    default:
-        g_assert_not_reached();
-    }
-
-    return res;
+    return direct(haddr);
 }
 
 /*
@@ -1427,7 +1427,8 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
 static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr,
                               TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, MO_UB, false, full_ldub_mmu);
+    return load_helper(env, addr, oi, retaddr, MO_UB, false,
+                       full_ldub_mmu, direct_ldub);
 }
 
 tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
@@ -1440,7 +1441,7 @@ static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr,
                                  TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEUW, false,
-                       full_le_lduw_mmu);
+                       full_le_lduw_mmu, direct_lduw_le);
 }
 
 tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
@@ -1453,7 +1454,7 @@ static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr,
                                  TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEUW, false,
-                       full_be_lduw_mmu);
+                       full_be_lduw_mmu, direct_lduw_be);
 }
 
 tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr,
@@ -1466,7 +1467,7 @@ static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr,
                                  TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEUL, false,
-                       full_le_ldul_mmu);
+                       full_le_ldul_mmu, direct_ldul_le);
 }
 
 tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr,
@@ -1479,7 +1480,7 @@ static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr,
                                  TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEUL, false,
-                       full_be_ldul_mmu);
+                       full_be_ldul_mmu, direct_ldul_be);
 }
 
 tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
@@ -1492,14 +1493,14 @@ uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr,
                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEQ, false,
-                       helper_le_ldq_mmu);
+                       helper_le_ldq_mmu, ldq_le_p);
 }
 
 uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEQ, false,
-                       helper_be_ldq_mmu);
+                       helper_be_ldq_mmu, ldq_be_p);
 }
 
 /*
@@ -1542,9 +1543,37 @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr,
  * Store Helpers
  */
 
+typedef void DirectStoreHelper(void *, uint64_t);
+
+static inline void direct_stb(void *haddr, uint64_t val)
+{
+    *(uint8_t *)haddr = val;
+}
+
+static inline void direct_stw_be(void *haddr, uint64_t val)
+{
+    stw_be_p(haddr, val);
+}
+
+static inline void direct_stw_le(void *haddr, uint64_t val)
+{
+    stw_le_p(haddr, val);
+}
+
+static inline void direct_stl_be(void *haddr, uint64_t val)
+{
+    stl_be_p(haddr, val);
+}
+
+static inline void direct_stl_le(void *haddr, uint64_t val)
+{
+    stl_le_p(haddr, val);
+}
+
 static inline void ALWAYS_INLINE
 store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
-             TCGMemOpIdx oi, uintptr_t retaddr, MemOp op)
+             TCGMemOpIdx oi, uintptr_t retaddr, MemOp op,
+             DirectStoreHelper *direct)
 {
     uintptr_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1669,74 +1698,49 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
 
  do_aligned_access:
     haddr = (void *)((uintptr_t)addr + entry->addend);
-    switch (op) {
-    case MO_UB:
-        stb_p(haddr, val);
-        break;
-    case MO_BEUW:
-        stw_be_p(haddr, val);
-        break;
-    case MO_LEUW:
-        stw_le_p(haddr, val);
-        break;
-    case MO_BEUL:
-        stl_be_p(haddr, val);
-        break;
-    case MO_LEUL:
-        stl_le_p(haddr, val);
-        break;
-    case MO_BEQ:
-        stq_be_p(haddr, val);
-        break;
-    case MO_LEQ:
-        stq_le_p(haddr, val);
-        break;
-    default:
-        g_assert_not_reached();
-        break;
-    }
+    direct(haddr, val);
 }
 
 void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
                         TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_UB);
+    store_helper(env, addr, val, oi, retaddr, MO_UB, direct_stb);
 }
 
 void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_LEUW);
+    store_helper(env, addr, val, oi, retaddr, MO_LEUW, direct_stw_le);
 }
 
 void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_BEUW);
+    store_helper(env, addr, val, oi, retaddr, MO_BEUW, direct_stw_be);
 }
 
 void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_LEUL);
+    store_helper(env, addr, val, oi, retaddr, MO_LEUL, direct_stl_le);
 }
 
 void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_BEUL);
+    store_helper(env, addr, val, oi, retaddr, MO_BEUL, direct_stl_be);
 }
 
 void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_LEQ);
+    store_helper(env, addr, val, oi, retaddr, MO_LEQ, stq_le_p);
 }
 
 void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_BEQ);
+    store_helper(env, addr, val, oi, retaddr, MO_BEQ, stq_be_p);
 }
 
 /* First set of helpers allows passing in of OI and RETADDR.  This makes
@@ -1801,7 +1805,8 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
 static uint64_t full_ldub_cmmu(CPUArchState *env, target_ulong addr,
                                TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    return load_helper(env, addr, oi, retaddr, MO_8, true, full_ldub_cmmu);
+    return load_helper(env, addr, oi, retaddr, MO_8, true,
+                       full_ldub_cmmu, direct_ldub);
 }
 
 uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr,
@@ -1814,7 +1819,7 @@ static uint64_t full_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
                                   TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEUW, true,
-                       full_le_lduw_cmmu);
+                       full_le_lduw_cmmu, direct_lduw_le);
 }
 
 uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr,
@@ -1827,7 +1832,7 @@ static uint64_t full_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
                                   TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEUW, true,
-                       full_be_lduw_cmmu);
+                       full_be_lduw_cmmu, direct_lduw_be);
 }
 
 uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr,
@@ -1840,7 +1845,7 @@ static uint64_t full_le_ldul_cmmu(CPUArchState *env, target_ulong addr,
                                   TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEUL, true,
-                       full_le_ldul_cmmu);
+                       full_le_ldul_cmmu, direct_ldul_le);
 }
 
 uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
@@ -1853,7 +1858,7 @@ static uint64_t full_be_ldul_cmmu(CPUArchState *env, target_ulong addr,
                                   TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEUL, true,
-                       full_be_ldul_cmmu);
+                       full_be_ldul_cmmu, direct_ldul_be);
 }
 
 uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
@@ -1866,12 +1871,12 @@ uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEQ, true,
-                       helper_le_ldq_cmmu);
+                       helper_le_ldq_cmmu, ldq_le_p);
 }
 
 uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEQ, true,
-                       helper_be_ldq_cmmu);
+                       helper_be_ldq_cmmu, ldq_be_p);
 }
-- 
2.17.1




* [Qemu-devel] [PATCH 3/3] cputlb: Introduce TLB_BSWAP
  2019-09-11  1:43 [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation Richard Henderson
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 1/3] cputlb: Disable __always_inline__ without optimization Richard Henderson
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback Richard Henderson
@ 2019-09-11  1:43 ` Richard Henderson
  2019-09-11 14:56   ` Tony Nguyen
  2019-09-11  3:46 ` [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation no-reply
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 15+ messages in thread
From: Richard Henderson @ 2019-09-11  1:43 UTC (permalink / raw)
  To: qemu-devel; +Cc: peter.maydell, mark.cave-ayland, tony.nguyen

Handle bswap on RAM directly in load/store_helper.  This fixes a
bug in the previous implementation, which forced byte-swapped accesses
through the I/O path even though that path cannot be used for RAM.

Fixes: a26fc6f5152b47f1
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-all.h |   2 +
 accel/tcg/cputlb.c     | 118 ++++++++++++++++++++---------------------
 2 files changed, 59 insertions(+), 61 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index d2d443c4f9..3928edab9a 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -331,6 +331,8 @@ CPUArchState *cpu_copy(CPUArchState *env);
 #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
 /* Set if TLB entry contains a watchpoint.  */
 #define TLB_WATCHPOINT      (1 << (TARGET_PAGE_BITS - 4))
+/* Set if TLB entry requires byte swap.  */
+#define TLB_BSWAP           (1 << (TARGET_PAGE_BITS - 5))
 
 /* Use this mask to check interception with an alignment mask
  * in a TCG backend.
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e6229d100a..eace6c82e3 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -737,8 +737,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
         address |= TLB_INVALID_MASK;
     }
     if (attrs.byte_swap) {
-        /* Force the access through the I/O slow path.  */
-        address |= TLB_MMIO;
+        address |= TLB_BSWAP;
     }
     if (!memory_region_is_ram(section->mr) &&
         !memory_region_is_romd(section->mr)) {
@@ -901,10 +900,6 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
-    if (iotlbentry->attrs.byte_swap) {
-        op ^= MO_BSWAP;
-    }
-
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -947,10 +942,6 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;
 
-    if (iotlbentry->attrs.byte_swap) {
-        op ^= MO_BSWAP;
-    }
-
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -1322,7 +1313,8 @@ static inline uint64_t direct_ldul_le(const void *haddr)
 static inline uint64_t ALWAYS_INLINE
 load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             uintptr_t retaddr, MemOp op, bool code_read,
-            FullLoadHelper *full_load, DirectLoadHelper *direct)
+            FullLoadHelper *full_load, DirectLoadHelper *direct,
+            DirectLoadHelper *direct_swap)
 {
     uintptr_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1372,26 +1364,27 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             /* On watchpoint hit, this will longjmp out.  */
             cpu_check_watchpoint(env_cpu(env), addr, size,
                                  iotlbentry->attrs, BP_MEM_READ, retaddr);
-
-            /* The backing page may or may not require I/O.  */
-            tlb_addr &= ~TLB_WATCHPOINT;
-            if ((tlb_addr & ~TARGET_PAGE_MASK) == 0) {
-                goto do_aligned_access;
-            }
         }
 
         /* Handle I/O access.  */
-        return io_readx(env, iotlbentry, mmu_idx, addr,
-                        retaddr, access_type, op);
-    }
+        if (likely(tlb_addr & TLB_MMIO)) {
+            return io_readx(env, iotlbentry, mmu_idx, addr,
+                            retaddr, access_type,
+                            op ^ (tlb_addr & TLB_BSWAP ? MO_BSWAP : 0));
+        }
 
-    /* Handle slow unaligned access (it spans two pages or IO).  */
-    if (size > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
-                    >= TARGET_PAGE_SIZE)) {
+        if (unlikely(tlb_addr & TLB_BSWAP)) {
+            haddr = (void *)((uintptr_t)addr + entry->addend);
+            return direct_swap(haddr);
+        }
+    } else if (size > 1
+               && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
+                           >= TARGET_PAGE_SIZE)) {
+        /* Handle slow unaligned access (it spans two pages or IO).  */
         target_ulong addr1, addr2;
         uint64_t r1, r2;
         unsigned shift;
+
     do_unaligned_access:
         addr1 = addr & ~((target_ulong)size - 1);
         addr2 = addr1 + size;
@@ -1409,7 +1402,6 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
         return res & MAKE_64BIT_MASK(0, size * 8);
     }
 
- do_aligned_access:
     haddr = (void *)((uintptr_t)addr + entry->addend);
     return direct(haddr);
 }
@@ -1428,7 +1420,7 @@ static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr,
                               TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_UB, false,
-                       full_ldub_mmu, direct_ldub);
+                       full_ldub_mmu, direct_ldub, direct_ldub);
 }
 
 tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
@@ -1441,7 +1433,7 @@ static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr,
                                  TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEUW, false,
-                       full_le_lduw_mmu, direct_lduw_le);
+                       full_le_lduw_mmu, direct_lduw_le, direct_lduw_be);
 }
 
 tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
@@ -1454,7 +1446,7 @@ static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr,
                                  TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEUW, false,
-                       full_be_lduw_mmu, direct_lduw_be);
+                       full_be_lduw_mmu, direct_lduw_be, direct_lduw_le);
 }
 
 tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr,
@@ -1467,7 +1459,7 @@ static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr,
                                  TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEUL, false,
-                       full_le_ldul_mmu, direct_ldul_le);
+                       full_le_ldul_mmu, direct_ldul_le, direct_ldul_be);
 }
 
 tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr,
@@ -1480,7 +1472,7 @@ static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr,
                                  TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEUL, false,
-                       full_be_ldul_mmu, direct_ldul_be);
+                       full_be_ldul_mmu, direct_ldul_be, direct_ldul_le);
 }
 
 tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
@@ -1493,14 +1485,14 @@ uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr,
                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEQ, false,
-                       helper_le_ldq_mmu, ldq_le_p);
+                       helper_le_ldq_mmu, ldq_le_p, ldq_be_p);
 }
 
 uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
                            TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEQ, false,
-                       helper_be_ldq_mmu, ldq_be_p);
+                       helper_be_ldq_mmu, ldq_be_p, ldq_le_p);
 }
 
 /*
@@ -1573,7 +1565,7 @@ static inline void direct_stl_le(void *haddr, uint64_t val)
 static inline void ALWAYS_INLINE
 store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
              TCGMemOpIdx oi, uintptr_t retaddr, MemOp op,
-             DirectStoreHelper *direct)
+             DirectStoreHelper *direct, DirectStoreHelper *direct_swap)
 {
     uintptr_t mmu_idx = get_mmuidx(oi);
     uintptr_t index = tlb_index(env, mmu_idx, addr);
@@ -1618,23 +1610,24 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             /* On watchpoint hit, this will longjmp out.  */
             cpu_check_watchpoint(env_cpu(env), addr, size,
                                  iotlbentry->attrs, BP_MEM_WRITE, retaddr);
-
-            /* The backing page may or may not require I/O.  */
-            tlb_addr &= ~TLB_WATCHPOINT;
-            if ((tlb_addr & ~TARGET_PAGE_MASK) == 0) {
-                goto do_aligned_access;
-            }
         }
 
         /* Handle I/O access.  */
-        io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr, op);
-        return;
-    }
+        if (likely(tlb_addr & TLB_MMIO)) {
+            io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
+                      op ^ (tlb_addr & TLB_BSWAP ? MO_BSWAP : 0));
+            return;
+        }
 
-    /* Handle slow unaligned access (it spans two pages or IO).  */
-    if (size > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
-                     >= TARGET_PAGE_SIZE)) {
+        if (unlikely(tlb_addr & TLB_BSWAP)) {
+            haddr = (void *)((uintptr_t)addr + entry->addend);
+            direct_swap(haddr, val);
+            return;
+        }
+    } else if (size > 1
+               && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
+                            >= TARGET_PAGE_SIZE)) {
+        /* Handle slow unaligned access (it spans two pages or IO).  */
         int i;
         uintptr_t index2;
         CPUTLBEntry *entry2;
@@ -1696,7 +1689,6 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
         return;
     }
 
- do_aligned_access:
     haddr = (void *)((uintptr_t)addr + entry->addend);
     direct(haddr, val);
 }
@@ -1704,43 +1696,47 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
 void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
                         TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_UB, direct_stb);
+    store_helper(env, addr, val, oi, retaddr, MO_UB, direct_stb, direct_stb);
 }
 
 void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_LEUW, direct_stw_le);
+    store_helper(env, addr, val, oi, retaddr, MO_LEUW,
+                 direct_stw_le, direct_stw_be);
 }
 
 void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_BEUW, direct_stw_be);
+    store_helper(env, addr, val, oi, retaddr, MO_BEUW,
+                 direct_stw_be, direct_stw_le);
 }
 
 void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_LEUL, direct_stl_le);
+    store_helper(env, addr, val, oi, retaddr, MO_LEUL,
+                 direct_stl_le, direct_stl_be);
 }
 
 void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_BEUL, direct_stl_be);
+    store_helper(env, addr, val, oi, retaddr, MO_BEUL,
+                 direct_stl_be, direct_stl_le);
 }
 
 void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_LEQ, stq_le_p);
+    store_helper(env, addr, val, oi, retaddr, MO_LEQ, stq_le_p, stq_be_p);
 }
 
 void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
-    store_helper(env, addr, val, oi, retaddr, MO_BEQ, stq_be_p);
+    store_helper(env, addr, val, oi, retaddr, MO_BEQ, stq_be_p, stq_le_p);
 }
 
 /* First set of helpers allows passing in of OI and RETADDR.  This makes
@@ -1806,7 +1802,7 @@ static uint64_t full_ldub_cmmu(CPUArchState *env, target_ulong addr,
                                TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_8, true,
-                       full_ldub_cmmu, direct_ldub);
+                       full_ldub_cmmu, direct_ldub, direct_ldub);
 }
 
 uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr,
@@ -1819,7 +1815,7 @@ static uint64_t full_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
                                   TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEUW, true,
-                       full_le_lduw_cmmu, direct_lduw_le);
+                       full_le_lduw_cmmu, direct_lduw_le, direct_lduw_be);
 }
 
 uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr,
@@ -1832,7 +1828,7 @@ static uint64_t full_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
                                   TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEUW, true,
-                       full_be_lduw_cmmu, direct_lduw_be);
+                       full_be_lduw_cmmu, direct_lduw_be, direct_lduw_le);
 }
 
 uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr,
@@ -1845,7 +1841,7 @@ static uint64_t full_le_ldul_cmmu(CPUArchState *env, target_ulong addr,
                                   TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEUL, true,
-                       full_le_ldul_cmmu, direct_ldul_le);
+                       full_le_ldul_cmmu, direct_ldul_le, direct_ldul_be);
 }
 
 uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
@@ -1858,7 +1854,7 @@ static uint64_t full_be_ldul_cmmu(CPUArchState *env, target_ulong addr,
                                   TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEUL, true,
-                       full_be_ldul_cmmu, direct_ldul_be);
+                       full_be_ldul_cmmu, direct_ldul_be, direct_ldul_le);
 }
 
 uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
@@ -1871,12 +1867,12 @@ uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_LEQ, true,
-                       helper_le_ldq_cmmu, ldq_le_p);
+                       helper_le_ldq_cmmu, ldq_le_p, ldq_be_p);
 }
 
 uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
                             TCGMemOpIdx oi, uintptr_t retaddr)
 {
     return load_helper(env, addr, oi, retaddr, MO_BEQ, true,
-                       helper_be_ldq_cmmu, ldq_be_p);
+                       helper_be_ldq_cmmu, ldq_be_p, ldq_le_p);
 }
-- 
2.17.1




* Re: [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation
  2019-09-11  1:43 [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation Richard Henderson
                   ` (2 preceding siblings ...)
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 3/3] cputlb: Introduce TLB_BSWAP Richard Henderson
@ 2019-09-11  3:46 ` no-reply
  2019-09-11  9:55 ` no-reply
  2019-09-11 13:07 ` no-reply
  5 siblings, 0 replies; 15+ messages in thread
From: no-reply @ 2019-09-11  3:46 UTC (permalink / raw)
  To: richard.henderson
  Cc: peter.maydell, mark.cave-ayland, qemu-devel, tony.nguyen

Patchew URL: https://patchew.org/QEMU/20190911014353.5926-1-richard.henderson@linaro.org/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

libudev           no
default devices   yes

warning: Python 2 support is deprecated
warning: Python 3 will be required for building future versions of QEMU

NOTE: cross-compilers enabled:  'cc'
  GEN     aarch64-softmmu/config-devices.mak.tmp
---
Memory content inconsistency at 4018f000 first_byte = e4 last_byte = e4 current = e5 hit_edge = 0
Memory content inconsistency at 40190000 first_byte = e4 last_byte = e4 current = e5 hit_edge = 0
and in another 25199 pages**
ERROR:/tmp/qemu-test/src/tests/migration-test.c:342:check_guests_ram: assertion failed: (bad == 0)
ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/migration-test.c:342:check_guests_ram: assertion failed: (bad == 0)
make: *** [check-qtest-aarch64] Error 1
make: *** Waiting for unfinished jobs....
  TEST    check-unit: tests/test-throttle
---
  TEST    iotest-qcow2: 252
Passed all 106 tests
**
ERROR:/tmp/qemu-test/src/tests/boot-sector.c:161:boot_sector_test: assertion failed (signature == SIGNATURE): (0x00000000 == 0x0000dead)
ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/boot-sector.c:161:boot_sector_test: assertion failed (signature == SIGNATURE): (0x00000000 == 0x0000dead)
make: *** [check-qtest-x86_64] Error 1
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 659, in <module>


The full log is available at
http://patchew.org/logs/20190911014353.5926-1-richard.henderson@linaro.org/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


* Re: [Qemu-devel] [PATCH 1/3] cputlb: Disable __always_inline__ without optimization
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 1/3] cputlb: Disable __always_inline__ without optimization Richard Henderson
@ 2019-09-11  8:34   ` Peter Maydell
  0 siblings, 0 replies; 15+ messages in thread
From: Peter Maydell @ 2019-09-11  8:34 UTC (permalink / raw)
  To: Richard Henderson; +Cc: Tony Nguyen, Mark Cave-Ayland, QEMU Developers

On Wed, 11 Sep 2019 at 02:43, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> This forced inlining can result in missing symbols,
> which makes a debugging build harder to follow.
>
> Reported-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  accel/tcg/cputlb.c | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index abae79650c..909f01ebcc 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1269,6 +1269,18 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
>      cpu_loop_exit_atomic(env_cpu(env), retaddr);
>  }
>
> +/*
> + * In order for the expected constant folding to happen,
> + * we require that some functions be inlined.
> + * However, this inlining can make debugging harder for a
> + * non-optimizing build.
> + */
> +#ifdef __OPTIMIZE__
> +#define ALWAYS_INLINE  __attribute__((always_inline))
> +#else
> +#define ALWAYS_INLINE
> +#endif
> +

Maybe this should go in compiler.h ?
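
Something like the following, say (untested sketch, assuming
include/qemu/compiler.h is the header meant, and with a name picked only
to match the QEMU_ prefix convention there, not anything already agreed):

/* in include/qemu/compiler.h */
#ifdef __OPTIMIZE__
#define QEMU_ALWAYS_INLINE  __attribute__((always_inline))
#else
#define QEMU_ALWAYS_INLINE
#endif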

>  /*
>   * Load Helpers
>   *
> @@ -1281,7 +1293,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
>  typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr,
>                                  TCGMemOpIdx oi, uintptr_t retaddr);
>
> -static inline uint64_t __attribute__((always_inline))
> +static inline uint64_t ALWAYS_INLINE
>  load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>              uintptr_t retaddr, MemOp op, bool code_read,
>              FullLoadHelper *full_load)
> @@ -1530,7 +1542,7 @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr,
>   * Store Helpers
>   */
>
> -static inline void __attribute__((always_inline))
> +static inline void ALWAYS_INLINE
>  store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
>               TCGMemOpIdx oi, uintptr_t retaddr, MemOp op)
>  {
> --
> 2.17.1

Either way

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM



* Re: [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation
  2019-09-11  1:43 [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation Richard Henderson
                   ` (3 preceding siblings ...)
  2019-09-11  3:46 ` [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation no-reply
@ 2019-09-11  9:55 ` no-reply
  2019-09-11 13:07 ` no-reply
  5 siblings, 0 replies; 15+ messages in thread
From: no-reply @ 2019-09-11  9:55 UTC (permalink / raw)
  To: richard.henderson
  Cc: peter.maydell, mark.cave-ayland, qemu-devel, tony.nguyen

Patchew URL: https://patchew.org/QEMU/20190911014353.5926-1-richard.henderson@linaro.org/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

libudev           no
default devices   yes

warning: Python 2 support is deprecated
warning: Python 3 will be required for building future versions of QEMU

NOTE: cross-compilers enabled:  'cc'
  GEN     x86_64-softmmu/config-devices.mak.tmp
---
  TEST    check-qtest-x86_64: tests/ahci-test
  TEST    check-unit: tests/test-aio-multithread
and in another 25208 pages**
ERROR:/tmp/qemu-test/src/tests/migration-test.c:342:check_guests_ram: assertion failed: (bad == 0)
ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/migration-test.c:342:check_guests_ram: assertion failed: (bad == 0)
make: *** [check-qtest-aarch64] Error 1
make: *** Waiting for unfinished jobs....
  TEST    check-unit: tests/test-throttle
---
  TEST    iotest-qcow2: 252
Passed all 106 tests
**
ERROR:/tmp/qemu-test/src/tests/boot-sector.c:161:boot_sector_test: assertion failed (signature == SIGNATURE): (0x00000000 == 0x0000dead)
ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/boot-sector.c:161:boot_sector_test: assertion failed (signature == SIGNATURE): (0x00000000 == 0x0000dead)
make: *** [check-qtest-x86_64] Error 1
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 659, in <module>


The full log is available at
http://patchew.org/logs/20190911014353.5926-1-richard.henderson@linaro.org/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


* Re: [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback Richard Henderson
@ 2019-09-11 10:55   ` Tony Nguyen
  2019-09-11 13:22     ` Richard Henderson
  2019-09-11 13:07   ` Philippe Mathieu-Daudé
  1 sibling, 1 reply; 15+ messages in thread
From: Tony Nguyen @ 2019-09-11 10:55 UTC (permalink / raw)
  To: Richard Henderson; +Cc: peter.maydell, mark.cave-ayland, qemu-devel

On Tue, Sep 10, 2019 at 09:43:52PM -0400, Richard Henderson wrote:
> Add a function parameter to perform the actual load/store to ram.
> With optimization, this results in identical code.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  accel/tcg/cputlb.c | 157 +++++++++++++++++++++++----------------------
>  1 file changed, 81 insertions(+), 76 deletions(-)
> 
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 909f01ebcc..e6229d100a 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1292,11 +1292,37 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
>  
>  typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr,
>                                  TCGMemOpIdx oi, uintptr_t retaddr);
> +typedef uint64_t DirectLoadHelper(const void *);

Would 'Load' instead of 'DirectLoadHelper' have enough clarity?

If so, consider also dropping the 'direct_' prefix in the functions below.

> +
> +static inline uint64_t direct_ldub(const void *haddr)
> +{
> +    return *(uint8_t *)haddr;
> +}
> +
> +static inline uint64_t direct_lduw_be(const void *haddr)
> +{
> +    return lduw_be_p(haddr);
> +}
> +
> +static inline uint64_t direct_lduw_le(const void *haddr)
> +{
> +    return lduw_le_p(haddr);
> +}
> +
> +static inline uint64_t direct_ldul_be(const void *haddr)
> +{
> +    return (uint32_t)ldl_be_p(haddr);
> +}
> +
> +static inline uint64_t direct_ldul_le(const void *haddr)
> +{
> +    return (uint32_t)ldl_le_p(haddr);
> +}
>  
>  static inline uint64_t ALWAYS_INLINE
>  load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>              uintptr_t retaddr, MemOp op, bool code_read,
> -            FullLoadHelper *full_load)
> +            FullLoadHelper *full_load, DirectLoadHelper *direct)
>  {
>      uintptr_t mmu_idx = get_mmuidx(oi);
>      uintptr_t index = tlb_index(env, mmu_idx, addr);
> @@ -1385,33 +1411,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>  
>   do_aligned_access:
>      haddr = (void *)((uintptr_t)addr + entry->addend);
> -    switch (op) {
> -    case MO_UB:
> -        res = ldub_p(haddr);
> -        break;
> -    case MO_BEUW:
> -        res = lduw_be_p(haddr);
> -        break;
> -    case MO_LEUW:
> -        res = lduw_le_p(haddr);
> -        break;
> -    case MO_BEUL:
> -        res = (uint32_t)ldl_be_p(haddr);
> -        break;
> -    case MO_LEUL:
> -        res = (uint32_t)ldl_le_p(haddr);
> -        break;
> -    case MO_BEQ:
> -        res = ldq_be_p(haddr);
> -        break;
> -    case MO_LEQ:
> -        res = ldq_le_p(haddr);
> -        break;
> -    default:
> -        g_assert_not_reached();
> -    }
> -
> -    return res;
> +    return direct(haddr);
>  }
>  
>  /*
> @@ -1427,7 +1427,8 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>  static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr,
>                                TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    return load_helper(env, addr, oi, retaddr, MO_UB, false, full_ldub_mmu);
> +    return load_helper(env, addr, oi, retaddr, MO_UB, false,
> +                       full_ldub_mmu, direct_ldub);
>  }
>  
>  tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
> @@ -1440,7 +1441,7 @@ static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUW, false,
> -                       full_le_lduw_mmu);
> +                       full_le_lduw_mmu, direct_lduw_le);
>  }
>  
>  tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
> @@ -1453,7 +1454,7 @@ static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUW, false,
> -                       full_be_lduw_mmu);
> +                       full_be_lduw_mmu, direct_lduw_be);
>  }
>  
>  tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr,
> @@ -1466,7 +1467,7 @@ static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUL, false,
> -                       full_le_ldul_mmu);
> +                       full_le_ldul_mmu, direct_ldul_le);
>  }
>  
>  tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr,
> @@ -1479,7 +1480,7 @@ static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUL, false,
> -                       full_be_ldul_mmu);
> +                       full_be_ldul_mmu, direct_ldul_be);
>  }
>  
>  tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
> @@ -1492,14 +1493,14 @@ uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr,
>                             TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEQ, false,
> -                       helper_le_ldq_mmu);
> +                       helper_le_ldq_mmu, ldq_le_p);
>  }
>  
>  uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
>                             TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEQ, false,
> -                       helper_be_ldq_mmu);
> +                       helper_be_ldq_mmu, ldq_be_p);
>  }
>  
>  /*
> @@ -1542,9 +1543,37 @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr,
>   * Store Helpers
>   */
>  
> +typedef void DirectStoreHelper(void *, uint64_t);

Like 'Load', would 'Store' instead of 'DirectStoreHelper' have enough clarity?

> +
> +static inline void direct_stb(void *haddr, uint64_t val)
> +{
> +    *(uint8_t *)haddr = val;
> +}
> +
> +static inline void direct_stw_be(void *haddr, uint64_t val)
> +{
> +    stw_be_p(haddr, val);
> +}
> +
> +static inline void direct_stw_le(void *haddr, uint64_t val)
> +{
> +    stw_le_p(haddr, val);
> +}
> +
> +static inline void direct_stl_be(void *haddr, uint64_t val)
> +{
> +    stl_be_p(haddr, val);
> +}
> +
> +static inline void direct_stl_le(void *haddr, uint64_t val)
> +{
> +    stl_le_p(haddr, val);
> +}
> +
>  static inline void ALWAYS_INLINE
>  store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
> -             TCGMemOpIdx oi, uintptr_t retaddr, MemOp op)
> +             TCGMemOpIdx oi, uintptr_t retaddr, MemOp op,
> +             DirectStoreHelper *direct)
>  {
>      uintptr_t mmu_idx = get_mmuidx(oi);
>      uintptr_t index = tlb_index(env, mmu_idx, addr);
> @@ -1669,74 +1698,49 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
>  
>   do_aligned_access:
>      haddr = (void *)((uintptr_t)addr + entry->addend);
> -    switch (op) {
> -    case MO_UB:
> -        stb_p(haddr, val);
> -        break;
> -    case MO_BEUW:
> -        stw_be_p(haddr, val);
> -        break;
> -    case MO_LEUW:
> -        stw_le_p(haddr, val);
> -        break;
> -    case MO_BEUL:
> -        stl_be_p(haddr, val);
> -        break;
> -    case MO_LEUL:
> -        stl_le_p(haddr, val);
> -        break;
> -    case MO_BEQ:
> -        stq_be_p(haddr, val);
> -        break;
> -    case MO_LEQ:
> -        stq_le_p(haddr, val);
> -        break;
> -    default:
> -        g_assert_not_reached();
> -        break;
> -    }
> +    direct(haddr, val);
>  }
>  
>  void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
>                          TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_UB);
> +    store_helper(env, addr, val, oi, retaddr, MO_UB, direct_stb);
>  }
>  
>  void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_LEUW);
> +    store_helper(env, addr, val, oi, retaddr, MO_LEUW, direct_stw_le);
>  }
>  
>  void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_BEUW);
> +    store_helper(env, addr, val, oi, retaddr, MO_BEUW, direct_stw_be);
>  }
>  
>  void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_LEUL);
> +    store_helper(env, addr, val, oi, retaddr, MO_LEUL, direct_stl_le);
>  }
>  
>  void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_BEUL);
> +    store_helper(env, addr, val, oi, retaddr, MO_BEUL, direct_stl_be);
>  }
>  
>  void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_LEQ);
> +    store_helper(env, addr, val, oi, retaddr, MO_LEQ, stq_le_p);
>  }
>  
>  void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_BEQ);
> +    store_helper(env, addr, val, oi, retaddr, MO_BEQ, stq_be_p);
>  }
>  
>  /* First set of helpers allows passing in of OI and RETADDR.  This makes
> @@ -1801,7 +1805,8 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
>  static uint64_t full_ldub_cmmu(CPUArchState *env, target_ulong addr,
>                                 TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    return load_helper(env, addr, oi, retaddr, MO_8, true, full_ldub_cmmu);
> +    return load_helper(env, addr, oi, retaddr, MO_8, true,
> +                       full_ldub_cmmu, direct_ldub);
>  }
>  
>  uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1814,7 +1819,7 @@ static uint64_t full_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUW, true,
> -                       full_le_lduw_cmmu);
> +                       full_le_lduw_cmmu, direct_lduw_le);
>  }
>  
>  uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1827,7 +1832,7 @@ static uint64_t full_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUW, true,
> -                       full_be_lduw_cmmu);
> +                       full_be_lduw_cmmu, direct_lduw_be);
>  }
>  
>  uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1840,7 +1845,7 @@ static uint64_t full_le_ldul_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUL, true,
> -                       full_le_ldul_cmmu);
> +                       full_le_ldul_cmmu, direct_ldul_le);
>  }
>  
>  uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1853,7 +1858,7 @@ static uint64_t full_be_ldul_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUL, true,
> -                       full_be_ldul_cmmu);
> +                       full_be_ldul_cmmu, direct_ldul_be);
>  }
>  
>  uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1866,12 +1871,12 @@ uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
>                              TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEQ, true,
> -                       helper_le_ldq_cmmu);
> +                       helper_le_ldq_cmmu, ldq_le_p);
>  }
>  
>  uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
>                              TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEQ, true,
> -                       helper_be_ldq_cmmu);
> +                       helper_be_ldq_cmmu, ldq_be_p);
>  }
> -- 
> 2.17.1
> 
> 

Reviewed-by: Tony Nguyen <tony.nguyen@bt.com>



* Re: [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation
  2019-09-11  1:43 [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation Richard Henderson
                   ` (4 preceding siblings ...)
  2019-09-11  9:55 ` no-reply
@ 2019-09-11 13:07 ` no-reply
  5 siblings, 0 replies; 15+ messages in thread
From: no-reply @ 2019-09-11 13:07 UTC (permalink / raw)
  To: richard.henderson
  Cc: peter.maydell, mark.cave-ayland, qemu-devel, tony.nguyen

Patchew URL: https://patchew.org/QEMU/20190911014353.5926-1-richard.henderson@linaro.org/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

libudev           no
default devices   yes

warning: Python 2 support is deprecated
warning: Python 3 will be required for building future versions of QEMU

NOTE: cross-compilers enabled:  'cc'
  GEN     x86_64-softmmu/config-devices.mak.tmp
---
Memory content inconsistency at 401aa000 first_byte = e4 last_byte = e4 current = e5 hit_edge = 0
  TEST    check-qtest-x86_64: tests/ahci-test
and in another 25173 pages**
ERROR:/tmp/qemu-test/src/tests/migration-test.c:342:check_guests_ram: assertion failed: (bad == 0)
ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/migration-test.c:342:check_guests_ram: assertion failed: (bad == 0)
make: *** [check-qtest-aarch64] Error 1
make: *** Waiting for unfinished jobs....
  TEST    check-unit: tests/test-aio-multithread
---
  TEST    iotest-qcow2: 252
Passed all 106 tests
**
ERROR:/tmp/qemu-test/src/tests/boot-sector.c:161:boot_sector_test: assertion failed (signature == SIGNATURE): (0x00000000 == 0x0000dead)
ERROR - Bail out! ERROR:/tmp/qemu-test/src/tests/boot-sector.c:161:boot_sector_test: assertion failed (signature == SIGNATURE): (0x00000000 == 0x0000dead)
make: *** [check-qtest-x86_64] Error 1
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 659, in <module>


The full log is available at
http://patchew.org/logs/20190911014353.5926-1-richard.henderson@linaro.org/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


* Re: [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback Richard Henderson
  2019-09-11 10:55   ` Tony Nguyen
@ 2019-09-11 13:07   ` Philippe Mathieu-Daudé
  2019-09-11 13:20     ` Richard Henderson
  1 sibling, 1 reply; 15+ messages in thread
From: Philippe Mathieu-Daudé @ 2019-09-11 13:07 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: peter.maydell, mark.cave-ayland, tony.nguyen

On 9/11/19 3:43 AM, Richard Henderson wrote:
> Add a function parameter to perform the actual load/store to ram.
> With optimization, this results in identical code.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  accel/tcg/cputlb.c | 157 +++++++++++++++++++++++----------------------
>  1 file changed, 81 insertions(+), 76 deletions(-)
> 
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 909f01ebcc..e6229d100a 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1292,11 +1292,37 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
>  
>  typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr,
>                                  TCGMemOpIdx oi, uintptr_t retaddr);
> +typedef uint64_t DirectLoadHelper(const void *);
> +
> +static inline uint64_t direct_ldub(const void *haddr)
> +{
> +    return *(uint8_t *)haddr;
> +}
> +
> +static inline uint64_t direct_lduw_be(const void *haddr)
> +{
> +    return lduw_be_p(haddr);
> +}
> +
> +static inline uint64_t direct_lduw_le(const void *haddr)
> +{
> +    return lduw_le_p(haddr);
> +}
> +
> +static inline uint64_t direct_ldul_be(const void *haddr)
> +{
> +    return (uint32_t)ldl_be_p(haddr);
> +}
> +
> +static inline uint64_t direct_ldul_le(const void *haddr)
> +{
> +    return (uint32_t)ldl_le_p(haddr);
> +}
>  
>  static inline uint64_t ALWAYS_INLINE
>  load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>              uintptr_t retaddr, MemOp op, bool code_read,
> -            FullLoadHelper *full_load)
> +            FullLoadHelper *full_load, DirectLoadHelper *direct)
>  {
>      uintptr_t mmu_idx = get_mmuidx(oi);
>      uintptr_t index = tlb_index(env, mmu_idx, addr);
> @@ -1385,33 +1411,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>  
>   do_aligned_access:
>      haddr = (void *)((uintptr_t)addr + entry->addend);
> -    switch (op) {
> -    case MO_UB:
> -        res = ldub_p(haddr);
> -        break;
> -    case MO_BEUW:
> -        res = lduw_be_p(haddr);
> -        break;
> -    case MO_LEUW:
> -        res = lduw_le_p(haddr);
> -        break;
> -    case MO_BEUL:
> -        res = (uint32_t)ldl_be_p(haddr);
> -        break;
> -    case MO_LEUL:
> -        res = (uint32_t)ldl_le_p(haddr);
> -        break;
> -    case MO_BEQ:
> -        res = ldq_be_p(haddr);
> -        break;
> -    case MO_LEQ:
> -        res = ldq_le_p(haddr);
> -        break;
> -    default:
> -        g_assert_not_reached();
> -    }
> -
> -    return res;
> +    return direct(haddr);
>  }
>  
>  /*
> @@ -1427,7 +1427,8 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>  static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr,
>                                TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    return load_helper(env, addr, oi, retaddr, MO_UB, false, full_ldub_mmu);
> +    return load_helper(env, addr, oi, retaddr, MO_UB, false,
> +                       full_ldub_mmu, direct_ldub);
>  }
>  
>  tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
> @@ -1440,7 +1441,7 @@ static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUW, false,
> -                       full_le_lduw_mmu);
> +                       full_le_lduw_mmu, direct_lduw_le);

Why not cast lduw_le_p? (except for direct_ldub).

      return load_helper(env, addr, oi, retaddr, MO_LEUW, false,
                       full_le_lduw_mmu, (DirectLoadHelper)lduw_le_p);

Useful to set a breakpoint while debugging?

>  }
>  
>  tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
> @@ -1453,7 +1454,7 @@ static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUW, false,
> -                       full_be_lduw_mmu);
> +                       full_be_lduw_mmu, direct_lduw_be);
>  }
>  
>  tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr,
> @@ -1466,7 +1467,7 @@ static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUL, false,
> -                       full_le_ldul_mmu);
> +                       full_le_ldul_mmu, direct_ldul_le);
>  }
>  
>  tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr,
> @@ -1479,7 +1480,7 @@ static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUL, false,
> -                       full_be_ldul_mmu);
> +                       full_be_ldul_mmu, direct_ldul_be);
>  }
>  
>  tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
> @@ -1492,14 +1493,14 @@ uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr,
>                             TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEQ, false,
> -                       helper_le_ldq_mmu);
> +                       helper_le_ldq_mmu, ldq_le_p);
>  }
>  
>  uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
>                             TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEQ, false,
> -                       helper_be_ldq_mmu);
> +                       helper_be_ldq_mmu, ldq_be_p);
>  }
>  
>  /*
> @@ -1542,9 +1543,37 @@ tcg_target_ulong helper_be_ldsl_mmu(CPUArchState *env, target_ulong addr,
>   * Store Helpers
>   */
>  
> +typedef void DirectStoreHelper(void *, uint64_t);
> +
> +static inline void direct_stb(void *haddr, uint64_t val)
> +{
> +    *(uint8_t *)haddr = val;
> +}
> +
> +static inline void direct_stw_be(void *haddr, uint64_t val)
> +{
> +    stw_be_p(haddr, val);
> +}
> +
> +static inline void direct_stw_le(void *haddr, uint64_t val)
> +{
> +    stw_le_p(haddr, val);
> +}
> +
> +static inline void direct_stl_be(void *haddr, uint64_t val)
> +{
> +    stl_be_p(haddr, val);
> +}
> +
> +static inline void direct_stl_le(void *haddr, uint64_t val)
> +{
> +    stl_le_p(haddr, val);
> +}
> +
>  static inline void ALWAYS_INLINE
>  store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
> -             TCGMemOpIdx oi, uintptr_t retaddr, MemOp op)
> +             TCGMemOpIdx oi, uintptr_t retaddr, MemOp op,
> +             DirectStoreHelper *direct)
>  {
>      uintptr_t mmu_idx = get_mmuidx(oi);
>      uintptr_t index = tlb_index(env, mmu_idx, addr);
> @@ -1669,74 +1698,49 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
>  
>   do_aligned_access:
>      haddr = (void *)((uintptr_t)addr + entry->addend);
> -    switch (op) {
> -    case MO_UB:
> -        stb_p(haddr, val);
> -        break;
> -    case MO_BEUW:
> -        stw_be_p(haddr, val);
> -        break;
> -    case MO_LEUW:
> -        stw_le_p(haddr, val);
> -        break;
> -    case MO_BEUL:
> -        stl_be_p(haddr, val);
> -        break;
> -    case MO_LEUL:
> -        stl_le_p(haddr, val);
> -        break;
> -    case MO_BEQ:
> -        stq_be_p(haddr, val);
> -        break;
> -    case MO_LEQ:
> -        stq_le_p(haddr, val);
> -        break;
> -    default:
> -        g_assert_not_reached();
> -        break;
> -    }
> +    direct(haddr, val);
>  }
>  
>  void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
>                          TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_UB);
> +    store_helper(env, addr, val, oi, retaddr, MO_UB, direct_stb);
>  }
>  
>  void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_LEUW);
> +    store_helper(env, addr, val, oi, retaddr, MO_LEUW, direct_stw_le);

Ditto, why not cast with DirectStoreHelper? (except for direct_stb).

>  }
>  
>  void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_BEUW);
> +    store_helper(env, addr, val, oi, retaddr, MO_BEUW, direct_stw_be);
>  }
>  
>  void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_LEUL);
> +    store_helper(env, addr, val, oi, retaddr, MO_LEUL, direct_stl_le);
>  }
>  
>  void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_BEUL);
> +    store_helper(env, addr, val, oi, retaddr, MO_BEUL, direct_stl_be);
>  }
>  
>  void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_LEQ);
> +    store_helper(env, addr, val, oi, retaddr, MO_LEQ, stq_le_p);
>  }
>  
>  void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_BEQ);
> +    store_helper(env, addr, val, oi, retaddr, MO_BEQ, stq_be_p);
>  }
>  
>  /* First set of helpers allows passing in of OI and RETADDR.  This makes
> @@ -1801,7 +1805,8 @@ void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
>  static uint64_t full_ldub_cmmu(CPUArchState *env, target_ulong addr,
>                                 TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    return load_helper(env, addr, oi, retaddr, MO_8, true, full_ldub_cmmu);
> +    return load_helper(env, addr, oi, retaddr, MO_8, true,
> +                       full_ldub_cmmu, direct_ldub);
>  }
>  
>  uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1814,7 +1819,7 @@ static uint64_t full_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUW, true,
> -                       full_le_lduw_cmmu);
> +                       full_le_lduw_cmmu, direct_lduw_le);
>  }
>  
>  uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1827,7 +1832,7 @@ static uint64_t full_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUW, true,
> -                       full_be_lduw_cmmu);
> +                       full_be_lduw_cmmu, direct_lduw_be);
>  }
>  
>  uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1840,7 +1845,7 @@ static uint64_t full_le_ldul_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUL, true,
> -                       full_le_ldul_cmmu);
> +                       full_le_ldul_cmmu, direct_ldul_le);
>  }
>  
>  uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1853,7 +1858,7 @@ static uint64_t full_be_ldul_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUL, true,
> -                       full_be_ldul_cmmu);
> +                       full_be_ldul_cmmu, direct_ldul_be);
>  }
>  
>  uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1866,12 +1871,12 @@ uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
>                              TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEQ, true,
> -                       helper_le_ldq_cmmu);
> +                       helper_le_ldq_cmmu, ldq_le_p);
>  }
>  
>  uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
>                              TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEQ, true,
> -                       helper_be_ldq_cmmu);
> +                       helper_be_ldq_cmmu, ldq_be_p);
>  }
> 


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback
  2019-09-11 13:07   ` Philippe Mathieu-Daudé
@ 2019-09-11 13:20     ` Richard Henderson
  0 siblings, 0 replies; 15+ messages in thread
From: Richard Henderson @ 2019-09-11 13:20 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, qemu-devel
  Cc: peter.maydell, mark.cave-ayland, tony.nguyen

On 9/11/19 9:07 AM, Philippe Mathieu-Daudé wrote:
>>  {
>>      return load_helper(env, addr, oi, retaddr, MO_LEUW, false,
>> -                       full_le_lduw_mmu);
>> +                       full_le_lduw_mmu, direct_lduw_le);
> 
> Why not cast lduw_le_p? (except for direct_ldub).
> 
>       return load_helper(env, addr, oi, retaddr, MO_LEUW, false,
>                        full_le_lduw_mmu, (DirectLoadHelper)lduw_le_p);
> 
> Useful to set a breakpoint while debugging?

Because the types in the function signature are different,
and such a cast will cause misbehavior.
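
For illustration, a standalone sketch of the prototype mismatch being
described (none of this is QEMU code: fake_lduw_le_p and wrap_lduw_le are
made-up stand-ins, and lduw_le_p is assumed to return plain int as declared
in include/qemu/bswap.h):

/* Standalone sketch -- not QEMU code. */
#include <inttypes.h>
#include <stdio.h>

typedef uint64_t DirectLoadHelper(const void *);

/* Stand-in for lduw_le_p: note the narrow (int) return type. */
static int fake_lduw_le_p(const void *p)
{
    const uint8_t *b = p;
    return b[0] | (b[1] << 8);
}

/* Wrapper with the signature load_helper expects, as in the patch. */
static uint64_t wrap_lduw_le(const void *p)
{
    return fake_lduw_le_p(p);   /* implicit widening here is well defined */
}

int main(void)
{
    uint8_t mem[2] = { 0x34, 0x12 };
    DirectLoadHelper *good = wrap_lduw_le;
    DirectLoadHelper *bad = (DirectLoadHelper *)fake_lduw_le_p;

    printf("0x%" PRIx64 "\n", good(mem));   /* prints 0x1234 */
    /* Calling bad(mem) goes through an incompatible function pointer
     * type, which is undefined behavior even where the ABI happens to
     * make it work. */
    (void)bad;
    return 0;
}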


r~


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback
  2019-09-11 10:55   ` Tony Nguyen
@ 2019-09-11 13:22     ` Richard Henderson
  2019-09-11 15:17       ` Tony Nguyen
  0 siblings, 1 reply; 15+ messages in thread
From: Richard Henderson @ 2019-09-11 13:22 UTC (permalink / raw)
  To: Tony Nguyen; +Cc: peter.maydell, mark.cave-ayland, qemu-devel

On 9/11/19 6:55 AM, Tony Nguyen wrote:
>>  typedef uint64_t FullLoadHelper(CPUArchState *env, target_ulong addr,
>>                                  TCGMemOpIdx oi, uintptr_t retaddr);
>> +typedef uint64_t DirectLoadHelper(const void *);
> 
> Would 'Load' instead of 'DirectLoadHelper' have enough clarity?

I suppose so, yes.

> If so, consider also dropping the 'direct_' prefix in the functions below.
> 
>> +
>> +static inline uint64_t direct_ldub(const void *haddr)
>> +{
>> +    return *(uint8_t *)haddr;
>> +}
>> +
>> +static inline uint64_t direct_lduw_be(const void *haddr)
>> +{
>> +    return lduw_be_p(haddr);
>> +}

I would be hesitant to call this just "lduw_be"; I think that's confusing when
it's calling lduw_be_p.  But perhaps "wrap_*" or "wide_*"?


r~


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 3/3] cputlb: Introduce TLB_BSWAP
  2019-09-11  1:43 ` [Qemu-devel] [PATCH 3/3] cputlb: Introduce TLB_BSWAP Richard Henderson
@ 2019-09-11 14:56   ` Tony Nguyen
  2019-09-12 19:07     ` Richard Henderson
  0 siblings, 1 reply; 15+ messages in thread
From: Tony Nguyen @ 2019-09-11 14:56 UTC (permalink / raw)
  To: Richard Henderson; +Cc: peter.maydell, mark.cave-ayland, qemu-devel

On Tue, Sep 10, 2019 at 09:43:53PM -0400, Richard Henderson wrote:
> Handle bswap on ram directly in load/store_helper.  This fixes a
> bug with the previous implementation in that one cannot use the
> I/O path for RAM.
> 
> Fixes: a26fc6f5152b47f1
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  include/exec/cpu-all.h |   2 +
>  accel/tcg/cputlb.c     | 118 ++++++++++++++++++++---------------------
>  2 files changed, 59 insertions(+), 61 deletions(-)
> 
> diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
> index d2d443c4f9..3928edab9a 100644
> --- a/include/exec/cpu-all.h
> +++ b/include/exec/cpu-all.h
> @@ -331,6 +331,8 @@ CPUArchState *cpu_copy(CPUArchState *env);
>  #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
>  /* Set if TLB entry contains a watchpoint.  */
>  #define TLB_WATCHPOINT      (1 << (TARGET_PAGE_BITS - 4))
> +/* Set if TLB entry requires byte swap.  */
> +#define TLB_BSWAP           (1 << (TARGET_PAGE_BITS - 5))
>  
>  /* Use this mask to check interception with an alignment mask
>   * in a TCG backend.
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index e6229d100a..eace6c82e3 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -737,8 +737,7 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
>          address |= TLB_INVALID_MASK;
>      }
>      if (attrs.byte_swap) {
> -        /* Force the access through the I/O slow path.  */
> -        address |= TLB_MMIO;
> +        address |= TLB_BSWAP;
>      }
>      if (!memory_region_is_ram(section->mr) &&
>          !memory_region_is_romd(section->mr)) {
> @@ -901,10 +900,6 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>      bool locked = false;
>      MemTxResult r;
>  
> -    if (iotlbentry->attrs.byte_swap) {
> -        op ^= MO_BSWAP;
> -    }
> -
>      section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
>      mr = section->mr;
>      mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
> @@ -947,10 +942,6 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>      bool locked = false;
>      MemTxResult r;
>  
> -    if (iotlbentry->attrs.byte_swap) {
> -        op ^= MO_BSWAP;
> -    }
> -
>      section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
>      mr = section->mr;
>      mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
> @@ -1322,7 +1313,8 @@ static inline uint64_t direct_ldul_le(const void *haddr)
>  static inline uint64_t ALWAYS_INLINE
>  load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>              uintptr_t retaddr, MemOp op, bool code_read,
> -            FullLoadHelper *full_load, DirectLoadHelper *direct)
> +            FullLoadHelper *full_load, DirectLoadHelper *direct,
> +            DirectLoadHelper *direct_swap)
>  {
>      uintptr_t mmu_idx = get_mmuidx(oi);
>      uintptr_t index = tlb_index(env, mmu_idx, addr);
> @@ -1372,26 +1364,27 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>              /* On watchpoint hit, this will longjmp out.  */
>              cpu_check_watchpoint(env_cpu(env), addr, size,
>                                   iotlbentry->attrs, BP_MEM_READ, retaddr);
> -
> -            /* The backing page may or may not require I/O.  */
> -            tlb_addr &= ~TLB_WATCHPOINT;
> -            if ((tlb_addr & ~TARGET_PAGE_MASK) == 0) {
> -                goto do_aligned_access;
> -            }
>          }
>  
>          /* Handle I/O access.  */
> -        return io_readx(env, iotlbentry, mmu_idx, addr,
> -                        retaddr, access_type, op);
> -    }
> +        if (likely(tlb_addr & TLB_MMIO)) {
> +            return io_readx(env, iotlbentry, mmu_idx, addr,
> +                            retaddr, access_type,
> +                            op ^ (tlb_addr & TLB_BSWAP ? MO_BSWAP : 0));
> +        }

Previously, the end of the if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) branch
called and returned the result of io_readx.

io_readx calls cpu_transaction_failed if memory_region_dispatch_read does not
return MEMTX_OK.

Is the end of the branch missing a cpu_transaction_failed call?

Similar question for store_helper.
>  
> -    /* Handle slow unaligned access (it spans two pages or IO).  */
> -    if (size > 1
> -        && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
> -                    >= TARGET_PAGE_SIZE)) {
> +        if (unlikely(tlb_addr & TLB_BSWAP)) {
> +            haddr = (void *)((uintptr_t)addr + entry->addend);
> +            return direct_swap(haddr);
> +        }
> +    } else if (size > 1
> +               && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
> +                           >= TARGET_PAGE_SIZE)) {
> +        /* Handle slow unaligned access (it spans two pages or IO).  */
>          target_ulong addr1, addr2;
>          uint64_t r1, r2;
>          unsigned shift;
> +
>      do_unaligned_access:
>          addr1 = addr & ~((target_ulong)size - 1);
>          addr2 = addr1 + size;
> @@ -1409,7 +1402,6 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>          return res & MAKE_64BIT_MASK(0, size * 8);
>      }
>  
> - do_aligned_access:
>      haddr = (void *)((uintptr_t)addr + entry->addend);
>      return direct(haddr);
>  }
> @@ -1428,7 +1420,7 @@ static uint64_t full_ldub_mmu(CPUArchState *env, target_ulong addr,
>                                TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_UB, false,
> -                       full_ldub_mmu, direct_ldub);
> +                       full_ldub_mmu, direct_ldub, direct_ldub);
>  }
>  
>  tcg_target_ulong helper_ret_ldub_mmu(CPUArchState *env, target_ulong addr,
> @@ -1441,7 +1433,7 @@ static uint64_t full_le_lduw_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUW, false,
> -                       full_le_lduw_mmu, direct_lduw_le);
> +                       full_le_lduw_mmu, direct_lduw_le, direct_lduw_be);
>  }
>  
>  tcg_target_ulong helper_le_lduw_mmu(CPUArchState *env, target_ulong addr,
> @@ -1454,7 +1446,7 @@ static uint64_t full_be_lduw_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUW, false,
> -                       full_be_lduw_mmu, direct_lduw_be);
> +                       full_be_lduw_mmu, direct_lduw_be, direct_lduw_le);
>  }
>  
>  tcg_target_ulong helper_be_lduw_mmu(CPUArchState *env, target_ulong addr,
> @@ -1467,7 +1459,7 @@ static uint64_t full_le_ldul_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUL, false,
> -                       full_le_ldul_mmu, direct_ldul_le);
> +                       full_le_ldul_mmu, direct_ldul_le, direct_ldul_be);
>  }
>  
>  tcg_target_ulong helper_le_ldul_mmu(CPUArchState *env, target_ulong addr,
> @@ -1480,7 +1472,7 @@ static uint64_t full_be_ldul_mmu(CPUArchState *env, target_ulong addr,
>                                   TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUL, false,
> -                       full_be_ldul_mmu, direct_ldul_be);
> +                       full_be_ldul_mmu, direct_ldul_be, direct_ldul_le);
>  }
>  
>  tcg_target_ulong helper_be_ldul_mmu(CPUArchState *env, target_ulong addr,
> @@ -1493,14 +1485,14 @@ uint64_t helper_le_ldq_mmu(CPUArchState *env, target_ulong addr,
>                             TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEQ, false,
> -                       helper_le_ldq_mmu, ldq_le_p);
> +                       helper_le_ldq_mmu, ldq_le_p, ldq_be_p);
>  }
>  
>  uint64_t helper_be_ldq_mmu(CPUArchState *env, target_ulong addr,
>                             TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEQ, false,
> -                       helper_be_ldq_mmu, ldq_be_p);
> +                       helper_be_ldq_mmu, ldq_be_p, ldq_le_p);
>  }
>  
>  /*
> @@ -1573,7 +1565,7 @@ static inline void direct_stl_le(void *haddr, uint64_t val)
>  static inline void ALWAYS_INLINE
>  store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
>               TCGMemOpIdx oi, uintptr_t retaddr, MemOp op,
> -             DirectStoreHelper *direct)
> +             DirectStoreHelper *direct, DirectStoreHelper *direct_swap)
>  {
>      uintptr_t mmu_idx = get_mmuidx(oi);
>      uintptr_t index = tlb_index(env, mmu_idx, addr);
> @@ -1618,23 +1610,24 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
>              /* On watchpoint hit, this will longjmp out.  */
>              cpu_check_watchpoint(env_cpu(env), addr, size,
>                                   iotlbentry->attrs, BP_MEM_WRITE, retaddr);
> -
> -            /* The backing page may or may not require I/O.  */
> -            tlb_addr &= ~TLB_WATCHPOINT;
> -            if ((tlb_addr & ~TARGET_PAGE_MASK) == 0) {
> -                goto do_aligned_access;
> -            }
>          }
>  
>          /* Handle I/O access.  */
> -        io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr, op);
> -        return;
> -    }
> +        if (likely(tlb_addr & TLB_MMIO)) {
> +            io_writex(env, iotlbentry, mmu_idx, val, addr, retaddr,
> +                      op ^ (tlb_addr & TLB_BSWAP ? MO_BSWAP : 0));
> +            return;
> +        }
>  
> -    /* Handle slow unaligned access (it spans two pages or IO).  */
> -    if (size > 1
> -        && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
> -                     >= TARGET_PAGE_SIZE)) {
> +        if (unlikely(tlb_addr & TLB_BSWAP)) {
> +            haddr = (void *)((uintptr_t)addr + entry->addend);
> +            direct_swap(haddr, val);
> +            return;
> +        }
> +    } else if (size > 1
> +               && unlikely((addr & ~TARGET_PAGE_MASK) + size - 1
> +                            >= TARGET_PAGE_SIZE)) {
> +        /* Handle slow unaligned access (it spans two pages or IO).  */
>          int i;
>          uintptr_t index2;
>          CPUTLBEntry *entry2;
> @@ -1696,7 +1689,6 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
>          return;
>      }
>  
> - do_aligned_access:
>      haddr = (void *)((uintptr_t)addr + entry->addend);
>      direct(haddr, val);
>  }
> @@ -1704,43 +1696,47 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
>  void helper_ret_stb_mmu(CPUArchState *env, target_ulong addr, uint8_t val,
>                          TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_UB, direct_stb);
> +    store_helper(env, addr, val, oi, retaddr, MO_UB, direct_stb, direct_stb);
>  }
>  
>  void helper_le_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_LEUW, direct_stw_le);
> +    store_helper(env, addr, val, oi, retaddr, MO_LEUW,
> +                 direct_stw_le, direct_stw_be);
>  }
>  
>  void helper_be_stw_mmu(CPUArchState *env, target_ulong addr, uint16_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_BEUW, direct_stw_be);
> +    store_helper(env, addr, val, oi, retaddr, MO_BEUW,
> +                 direct_stw_be, direct_stw_le);
>  }
>  
>  void helper_le_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_LEUL, direct_stl_le);
> +    store_helper(env, addr, val, oi, retaddr, MO_LEUL,
> +                 direct_stl_le, direct_stl_be);
>  }
>  
>  void helper_be_stl_mmu(CPUArchState *env, target_ulong addr, uint32_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_BEUL, direct_stl_be);
> +    store_helper(env, addr, val, oi, retaddr, MO_BEUL,
> +                 direct_stl_be, direct_stl_le);
>  }
>  
>  void helper_le_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_LEQ, stq_le_p);
> +    store_helper(env, addr, val, oi, retaddr, MO_LEQ, stq_le_p, stq_be_p);
>  }
>  
>  void helper_be_stq_mmu(CPUArchState *env, target_ulong addr, uint64_t val,
>                         TCGMemOpIdx oi, uintptr_t retaddr)
>  {
> -    store_helper(env, addr, val, oi, retaddr, MO_BEQ, stq_be_p);
> +    store_helper(env, addr, val, oi, retaddr, MO_BEQ, stq_be_p, stq_le_p);
>  }
>  
>  /* First set of helpers allows passing in of OI and RETADDR.  This makes
> @@ -1806,7 +1802,7 @@ static uint64_t full_ldub_cmmu(CPUArchState *env, target_ulong addr,
>                                 TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_8, true,
> -                       full_ldub_cmmu, direct_ldub);
> +                       full_ldub_cmmu, direct_ldub, direct_ldub);
>  }
>  
>  uint8_t helper_ret_ldb_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1819,7 +1815,7 @@ static uint64_t full_le_lduw_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUW, true,
> -                       full_le_lduw_cmmu, direct_lduw_le);
> +                       full_le_lduw_cmmu, direct_lduw_le, direct_lduw_be);
>  }
>  
>  uint16_t helper_le_ldw_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1832,7 +1828,7 @@ static uint64_t full_be_lduw_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUW, true,
> -                       full_be_lduw_cmmu, direct_lduw_be);
> +                       full_be_lduw_cmmu, direct_lduw_be, direct_lduw_le);
>  }
>  
>  uint16_t helper_be_ldw_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1845,7 +1841,7 @@ static uint64_t full_le_ldul_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEUL, true,
> -                       full_le_ldul_cmmu, direct_ldul_le);
> +                       full_le_ldul_cmmu, direct_ldul_le, direct_ldul_be);
>  }
>  
>  uint32_t helper_le_ldl_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1858,7 +1854,7 @@ static uint64_t full_be_ldul_cmmu(CPUArchState *env, target_ulong addr,
>                                    TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEUL, true,
> -                       full_be_ldul_cmmu, direct_ldul_be);
> +                       full_be_ldul_cmmu, direct_ldul_be, direct_ldul_le);
>  }
>  
>  uint32_t helper_be_ldl_cmmu(CPUArchState *env, target_ulong addr,
> @@ -1871,12 +1867,12 @@ uint64_t helper_le_ldq_cmmu(CPUArchState *env, target_ulong addr,
>                              TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_LEQ, true,
> -                       helper_le_ldq_cmmu, ldq_le_p);
> +                       helper_le_ldq_cmmu, ldq_le_p, ldq_be_p);
>  }
>  
>  uint64_t helper_be_ldq_cmmu(CPUArchState *env, target_ulong addr,
>                              TCGMemOpIdx oi, uintptr_t retaddr)
>  {
>      return load_helper(env, addr, oi, retaddr, MO_BEQ, true,
> -                       helper_be_ldq_cmmu, ldq_be_p);
> +                       helper_be_ldq_cmmu, ldq_be_p, ldq_le_p);
>  }
> -- 
> 2.17.1
> 
> 


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback
  2019-09-11 13:22     ` Richard Henderson
@ 2019-09-11 15:17       ` Tony Nguyen
  0 siblings, 0 replies; 15+ messages in thread
From: Tony Nguyen @ 2019-09-11 15:17 UTC (permalink / raw)
  To: Richard Henderson; +Cc: peter.maydell, mark.cave-ayland, qemu-devel

On Wed, Sep 11, 2019 at 09:22:39AM -0400, Richard Henderson wrote:
> I would be hesitant to call this just "lduw_be"; I think that's confusing when
> it's calling lduw_be_p.  But perhaps "wrap_*" or "wide_*"?

Agree, some Hamming distance is needed.

"wrap_*", "wide_*", or "direct_*" all work for me.


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH 3/3] cputlb: Introduce TLB_BSWAP
  2019-09-11 14:56   ` Tony Nguyen
@ 2019-09-12 19:07     ` Richard Henderson
  0 siblings, 0 replies; 15+ messages in thread
From: Richard Henderson @ 2019-09-12 19:07 UTC (permalink / raw)
  To: Tony Nguyen; +Cc: peter.maydell, mark.cave-ayland, qemu-devel

On 9/11/19 10:56 AM, Tony Nguyen wrote:
>> @@ -1372,26 +1364,27 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>>              /* On watchpoint hit, this will longjmp out.  */
>>              cpu_check_watchpoint(env_cpu(env), addr, size,
>>                                   iotlbentry->attrs, BP_MEM_READ, retaddr);
>> -
>> -            /* The backing page may or may not require I/O.  */
>> -            tlb_addr &= ~TLB_WATCHPOINT;
>> -            if ((tlb_addr & ~TARGET_PAGE_MASK) == 0) {
>> -                goto do_aligned_access;
>> -            }
>>          }
>>  
>>          /* Handle I/O access.  */
>> -        return io_readx(env, iotlbentry, mmu_idx, addr,
>> -                        retaddr, access_type, op);
>> -    }
>> +        if (likely(tlb_addr & TLB_MMIO)) {
>> +            return io_readx(env, iotlbentry, mmu_idx, addr,
>> +                            retaddr, access_type,
>> +                            op ^ (tlb_addr & TLB_BSWAP ? MO_BSWAP : 0));
>> +        }
> 
> Previously, the end of the if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) branch
> called and returned the result of io_readx.

Correct.  However, rather than clearing TLB_WATCHPOINT and TLB_BSWAP, it
seemed easier to test for those bits that *do* require that we call io_readx.

As we've seen from the bug leading to this patch set, it's invalid to call
io_readx on anything that doesn't have TLB_MMIO set -- we'll either crash due
to the missing read accessor or reach the point at which we issue a bus error
for an i/o operation without a device.

BTW, there's a bug in this same location for store_helper in that I need to
also test for TLB_NOTDIRTY, which also goes through io_writex for the moment.
That bug is trivially shown during the make check migration tests.  Due to the
late hour I failed to run those before posting this patch set.  Will be fixed
in v2.
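
As a rough sketch of the shape that test might take once TLB_NOTDIRTY is
folded in (toy bit values and the needs_io_path helper are illustrative
only, not the actual v2 change):

/* Toy model -- not QEMU code; the real flags sit just below TARGET_PAGE_BITS. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_MMIO      (1u << 0)
#define TLB_NOTDIRTY  (1u << 1)
#define TLB_BSWAP     (1u << 2)

/* A store must take the io_writex path if the page is MMIO or still
 * marked not-dirty; TLB_BSWAP alone only byte-swaps the direct RAM access. */
static bool needs_io_path(uint32_t tlb_flags)
{
    return tlb_flags & (TLB_MMIO | TLB_NOTDIRTY);
}

int main(void)
{
    printf("%d\n", needs_io_path(TLB_BSWAP));                 /* 0: direct, swapped */
    printf("%d\n", needs_io_path(TLB_NOTDIRTY | TLB_BSWAP));  /* 1: via io_writex */
    printf("%d\n", needs_io_path(TLB_MMIO));                  /* 1: via io_writex */
    return 0;
}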


r~


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2019-09-12 19:08 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-11  1:43 [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation Richard Henderson
2019-09-11  1:43 ` [Qemu-devel] [PATCH 1/3] cputlb: Disable __always_inline__ without optimization Richard Henderson
2019-09-11  8:34   ` Peter Maydell
2019-09-11  1:43 ` [Qemu-devel] [PATCH 2/3] cputlb: Replace switches in load/store_helper with callback Richard Henderson
2019-09-11 10:55   ` Tony Nguyen
2019-09-11 13:22     ` Richard Henderson
2019-09-11 15:17       ` Tony Nguyen
2019-09-11 13:07   ` Philippe Mathieu-Daudé
2019-09-11 13:20     ` Richard Henderson
2019-09-11  1:43 ` [Qemu-devel] [PATCH 3/3] cputlb: Introduce TLB_BSWAP Richard Henderson
2019-09-11 14:56   ` Tony Nguyen
2019-09-12 19:07     ` Richard Henderson
2019-09-11  3:46 ` [Qemu-devel] [PATCH 0/3] cputlb: Adjust tlb bswap implementation no-reply
2019-09-11  9:55 ` no-reply
2019-09-11 13:07 ` no-reply
