* [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed
@ 2022-02-27  2:04 Richard Henderson
  2022-02-27  2:04 ` [PATCH v2 1/9] tcg: Add TCG_TARGET_SIGNED_ADDR32 Richard Henderson
                   ` (8 more replies)
  0 siblings, 9 replies; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel


We have three TCG hosts (MIPS, RISC-V, and LoongArch) that naturally
produce sign-extended 32-bit values, and which have to work extra hard
(with 1 or 2 extra insns) to produce the zero-extended addresses that
we expect today.

However, it's a simple matter of arithmetic for the middle-end
to require sign-extended addresses instead.  For user-only, we
do have to be careful not to allow a guest object to wrap around
the signed boundary, but that's fairly easily done.
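
To illustrate the arithmetic, here is a stand-alone sketch (not QEMU
code; the helper names are invented) of the two conventions for mapping
a 32-bit guest address into a 64-bit host address space:

    /* Illustrative sketch only; assumes the usual two's-complement
       conversion for the (int32_t) cast. */
    #include <stdint.h>

    /* Today: the guest address is zero-extended before the base or
       tlb addend is applied. */
    static uint64_t host_addr_zext(uint64_t base, uint32_t gaddr)
    {
        return base + (uint64_t)gaddr;
    }

    /* With TCG_TARGET_SIGNED_ADDR32: the same 32 bits are treated as
       signed, and the constant difference for "negative" addresses is
       folded into the base or tlb addend instead. */
    static uint64_t host_addr_sext(uint64_t base, uint32_t gaddr)
    {
        return base + (uint64_t)(int64_t)(int32_t)gaddr;
    }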

Tested with aarch64, as that's the best hw currently available.

Patches lacking review:
  03-accel-tcg-Support-TCG_TARGET_SIGNED_ADDR32-for-so.patch
  06-tcg-aarch64-Support-TCG_TARGET_SIGNED_ADDR32.patch
  07-tcg-mips-Support-TCG_TARGET_SIGNED_ADDR32.patch
  09-tcg-loongarch64-Support-TCG_TARGET_SIGNED_ADDR32.patch (new)


r~

Version 1: https://lore.kernel.org/qemu-devel/20211010174401.141339-1-richard.henderson@linaro.org/


Richard Henderson (9):
  tcg: Add TCG_TARGET_SIGNED_ADDR32
  accel/tcg: Split out g2h_tlbe
  accel/tcg: Support TCG_TARGET_SIGNED_ADDR32 for softmmu
  accel/tcg: Add guest_base_signed_addr32 for user-only
  linux-user: Support TCG_TARGET_SIGNED_ADDR32
  tcg/aarch64: Support TCG_TARGET_SIGNED_ADDR32
  tcg/mips: Support TCG_TARGET_SIGNED_ADDR32
  tcg/riscv: Support TCG_TARGET_SIGNED_ADDR32
  tcg/loongarch64: Support TCG_TARGET_SIGNED_ADDR32

 include/exec/cpu-all.h            | 20 +++++++--
 include/exec/cpu_ldst.h           |  3 +-
 tcg/aarch64/tcg-target-sa32.h     |  7 ++++
 tcg/arm/tcg-target-sa32.h         |  1 +
 tcg/i386/tcg-target-sa32.h        |  1 +
 tcg/loongarch64/tcg-target-sa32.h |  1 +
 tcg/mips/tcg-target-sa32.h        |  9 ++++
 tcg/ppc/tcg-target-sa32.h         |  1 +
 tcg/riscv/tcg-target-sa32.h       |  5 +++
 tcg/s390x/tcg-target-sa32.h       |  1 +
 tcg/sparc/tcg-target-sa32.h       |  1 +
 tcg/tci/tcg-target-sa32.h         |  1 +
 accel/tcg/cputlb.c                | 36 +++++++++++-----
 bsd-user/main.c                   |  4 ++
 linux-user/elfload.c              | 62 +++++++++++++++++++++------
 linux-user/main.c                 |  3 ++
 tcg/tcg.c                         |  4 ++
 tcg/aarch64/tcg-target.c.inc      | 69 ++++++++++++++++++++-----------
 tcg/loongarch64/tcg-target.c.inc  | 15 +++----
 tcg/mips/tcg-target.c.inc         | 10 +----
 tcg/riscv/tcg-target.c.inc        |  8 +---
 21 files changed, 187 insertions(+), 75 deletions(-)
 create mode 100644 tcg/aarch64/tcg-target-sa32.h
 create mode 100644 tcg/arm/tcg-target-sa32.h
 create mode 100644 tcg/i386/tcg-target-sa32.h
 create mode 100644 tcg/loongarch64/tcg-target-sa32.h
 create mode 100644 tcg/mips/tcg-target-sa32.h
 create mode 100644 tcg/ppc/tcg-target-sa32.h
 create mode 100644 tcg/riscv/tcg-target-sa32.h
 create mode 100644 tcg/s390x/tcg-target-sa32.h
 create mode 100644 tcg/sparc/tcg-target-sa32.h
 create mode 100644 tcg/tci/tcg-target-sa32.h

-- 
2.25.1




* [PATCH v2 1/9] tcg: Add TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed Richard Henderson
@ 2022-02-27  2:04 ` Richard Henderson
  2022-02-27  2:04 ` [PATCH v2 2/9] accel/tcg: Split out g2h_tlbe Richard Henderson
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel
  Cc: WANG Xuerui, Alistair Francis, Alex Bennée,
	Philippe Mathieu-Daudé

Define as 0 for all tcg hosts.  Put this in a separate header,
because we'll want this in places that do not ordinarily have
access to all of tcg/tcg.h.

Reviewed-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/aarch64/tcg-target-sa32.h     | 1 +
 tcg/arm/tcg-target-sa32.h         | 1 +
 tcg/i386/tcg-target-sa32.h        | 1 +
 tcg/loongarch64/tcg-target-sa32.h | 1 +
 tcg/mips/tcg-target-sa32.h        | 1 +
 tcg/ppc/tcg-target-sa32.h         | 1 +
 tcg/riscv/tcg-target-sa32.h       | 1 +
 tcg/s390x/tcg-target-sa32.h       | 1 +
 tcg/sparc/tcg-target-sa32.h       | 1 +
 tcg/tci/tcg-target-sa32.h         | 1 +
 tcg/tcg.c                         | 4 ++++
 11 files changed, 14 insertions(+)
 create mode 100644 tcg/aarch64/tcg-target-sa32.h
 create mode 100644 tcg/arm/tcg-target-sa32.h
 create mode 100644 tcg/i386/tcg-target-sa32.h
 create mode 100644 tcg/loongarch64/tcg-target-sa32.h
 create mode 100644 tcg/mips/tcg-target-sa32.h
 create mode 100644 tcg/ppc/tcg-target-sa32.h
 create mode 100644 tcg/riscv/tcg-target-sa32.h
 create mode 100644 tcg/s390x/tcg-target-sa32.h
 create mode 100644 tcg/sparc/tcg-target-sa32.h
 create mode 100644 tcg/tci/tcg-target-sa32.h

diff --git a/tcg/aarch64/tcg-target-sa32.h b/tcg/aarch64/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/aarch64/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/arm/tcg-target-sa32.h b/tcg/arm/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/arm/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/i386/tcg-target-sa32.h b/tcg/i386/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/i386/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/loongarch64/tcg-target-sa32.h b/tcg/loongarch64/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/loongarch64/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/mips/tcg-target-sa32.h b/tcg/mips/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/mips/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/ppc/tcg-target-sa32.h b/tcg/ppc/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/ppc/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/riscv/tcg-target-sa32.h b/tcg/riscv/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/riscv/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/s390x/tcg-target-sa32.h b/tcg/s390x/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/s390x/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/sparc/tcg-target-sa32.h b/tcg/sparc/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/sparc/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/tci/tcg-target-sa32.h b/tcg/tci/tcg-target-sa32.h
new file mode 100644
index 0000000000..cb185b1526
--- /dev/null
+++ b/tcg/tci/tcg-target-sa32.h
@@ -0,0 +1 @@
+#define TCG_TARGET_SIGNED_ADDR32 0
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 528277d1d3..b3e32bc215 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -61,6 +61,10 @@
 #include "exec/log.h"
 #include "tcg/tcg-ldst.h"
 #include "tcg-internal.h"
+#include "tcg-target-sa32.h"
+
+/* Sanity check for TCG_TARGET_SIGNED_ADDR32. */
+QEMU_BUILD_BUG_ON(TCG_TARGET_REG_BITS == 32 && TCG_TARGET_SIGNED_ADDR32);
 
 #ifdef CONFIG_TCG_INTERPRETER
 #include <ffi.h>
-- 
2.25.1




* [PATCH v2 2/9] accel/tcg: Split out g2h_tlbe
  2022-02-27  2:04 [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed Richard Henderson
  2022-02-27  2:04 ` [PATCH v2 1/9] tcg: Add TCG_TARGET_SIGNED_ADDR32 Richard Henderson
@ 2022-02-27  2:04 ` Richard Henderson
  2022-02-27  2:04 ` [PATCH v2 3/9] accel/tcg: Support TCG_TARGET_SIGNED_ADDR32 for softmmu Richard Henderson
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel
  Cc: WANG Xuerui, Alistair Francis, Alex Bennée,
	Philippe Mathieu-Daudé

Create a new function to combine a CPUTLBEntry addend
with the guest address to form a host address.

Reviewed-by: WANG Xuerui <git@xen0n.name>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 3b918fe018..0e62aa5d7c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -91,6 +91,11 @@ static inline size_t sizeof_tlb(CPUTLBDescFast *fast)
     return fast->mask + (1 << CPU_TLB_ENTRY_BITS);
 }
 
+static inline uintptr_t g2h_tlbe(const CPUTLBEntry *tlb, target_ulong gaddr)
+{
+    return tlb->addend + (uintptr_t)gaddr;
+}
+
 static void tlb_window_reset(CPUTLBDesc *desc, int64_t ns,
                              size_t max_entries)
 {
@@ -986,8 +991,7 @@ static void tlb_reset_dirty_range_locked(CPUTLBEntry *tlb_entry,
 
     if ((addr & (TLB_INVALID_MASK | TLB_MMIO |
                  TLB_DISCARD_WRITE | TLB_NOTDIRTY)) == 0) {
-        addr &= TARGET_PAGE_MASK;
-        addr += tlb_entry->addend;
+        addr = g2h_tlbe(tlb_entry, addr & TARGET_PAGE_MASK);
         if ((addr - start) < length) {
 #if TCG_OVERSIZED_GUEST
             tlb_entry->addr_write |= TLB_NOTDIRTY;
@@ -1537,7 +1541,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, target_ulong addr,
         return -1;
     }
 
-    p = (void *)((uintptr_t)addr + entry->addend);
+    p = (void *)g2h_tlbe(entry, addr);
     if (hostp) {
         *hostp = p;
     }
@@ -1629,7 +1633,7 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
     }
 
     /* Everything else is RAM. */
-    *phost = (void *)((uintptr_t)addr + entry->addend);
+    *phost = (void *)g2h_tlbe(entry, addr);
     return flags;
 }
 
@@ -1737,7 +1741,7 @@ bool tlb_plugin_lookup(CPUState *cpu, target_ulong addr, int mmu_idx,
             data->v.io.offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
         } else {
             data->is_io = false;
-            data->v.ram.hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
+            data->v.ram.hostaddr = (void *)g2h_tlbe(tlbe, addr);
         }
         return true;
     } else {
@@ -1836,7 +1840,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
         goto stop_the_world;
     }
 
-    hostaddr = (void *)((uintptr_t)addr + tlbe->addend);
+    hostaddr = (void *)g2h_tlbe(tlbe, addr);
 
     if (unlikely(tlb_addr & TLB_NOTDIRTY)) {
         notdirty_write(env_cpu(env), addr, size,
@@ -1967,7 +1971,7 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi,
                             access_type, op ^ (need_swap * MO_BSWAP));
         }
 
-        haddr = (void *)((uintptr_t)addr + entry->addend);
+        haddr = (void *)g2h_tlbe(entry, addr);
 
         /*
          * Keep these two load_memop separate to ensure that the compiler
@@ -2004,7 +2008,7 @@ load_helper(CPUArchState *env, target_ulong addr, MemOpIdx oi,
         return res & MAKE_64BIT_MASK(0, size * 8);
     }
 
-    haddr = (void *)((uintptr_t)addr + entry->addend);
+    haddr = (void *)g2h_tlbe(entry, addr);
     return load_memop(haddr, op);
 }
 
@@ -2375,7 +2379,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             notdirty_write(env_cpu(env), addr, size, iotlbentry, retaddr);
         }
 
-        haddr = (void *)((uintptr_t)addr + entry->addend);
+        haddr = (void *)g2h_tlbe(entry, addr);
 
         /*
          * Keep these two store_memop separate to ensure that the compiler
@@ -2400,7 +2404,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
         return;
     }
 
-    haddr = (void *)((uintptr_t)addr + entry->addend);
+    haddr = (void *)g2h_tlbe(entry, addr);
     store_memop(haddr, val, op);
 }
 
-- 
2.25.1




* [PATCH v2 3/9] accel/tcg: Support TCG_TARGET_SIGNED_ADDR32 for softmmu
  2022-02-27  2:04 [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed Richard Henderson
  2022-02-27  2:04 ` [PATCH v2 1/9] tcg: Add TCG_TARGET_SIGNED_ADDR32 Richard Henderson
  2022-02-27  2:04 ` [PATCH v2 2/9] accel/tcg: Split out g2h_tlbe Richard Henderson
@ 2022-02-27  2:04 ` Richard Henderson
  2022-02-27 22:32   ` Philippe Mathieu-Daudé
  2022-03-03 15:14   ` Peter Maydell
  2022-02-27  2:04 ` [PATCH v2 4/9] accel/tcg: Add guest_base_signed_addr32 for user-only Richard Henderson
                   ` (5 subsequent siblings)
  8 siblings, 2 replies; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel

When TCG_TARGET_SIGNED_ADDR32 is set, adjust the tlb addend to
allow the 32-bit guest address to be sign extended within the
64-bit host register instead of zero extended.

This will simplify tcg hosts like MIPS, RISC-V, and LoongArch,
which naturally sign-extend 32-bit values, in contrast to x86_64
and AArch64 which zero-extend them.
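
As a sanity-check sketch (illustrative only, not part of the patch;
the function name is invented): a page is aligned, so every address
within it has the same sign bit as vaddr_page, and biasing both the
addend and the lookup by the same sign extension yields the same host
pointer as the zero-extended scheme:

    #include <assert.h>
    #include <stdint.h>

    /* Precondition: addr lies within the page starting at vaddr_page. */
    static void check_same_host_addr(uintptr_t host_page,
                                     uint32_t vaddr_page, uint32_t addr)
    {
        /* Zero-extended convention (current code). */
        uintptr_t addend_z = host_page - (uintptr_t)vaddr_page;
        uintptr_t host_z   = addend_z + (uintptr_t)addr;

        /* Sign-extended convention (TCG_TARGET_SIGNED_ADDR32). */
        uintptr_t addend_s = host_page - (uintptr_t)(intptr_t)(int32_t)vaddr_page;
        uintptr_t host_s   = addend_s + (uintptr_t)(intptr_t)(int32_t)addr;

        /* The sign-extension bias cancels between addend and lookup. */
        assert(host_z == host_s);
    }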

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 accel/tcg/cputlb.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 0e62aa5d7c..0dbc3efbc7 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -40,6 +40,7 @@
 #include "qemu/plugin-memory.h"
 #endif
 #include "tcg/tcg-ldst.h"
+#include "tcg-target-sa32.h"
 
 /* DEBUG defines, enable DEBUG_TLB_LOG to log to the CPU_LOG_MMU target */
 /* #define DEBUG_TLB */
@@ -93,6 +94,9 @@ static inline size_t sizeof_tlb(CPUTLBDescFast *fast)
 
 static inline uintptr_t g2h_tlbe(const CPUTLBEntry *tlb, target_ulong gaddr)
 {
+    if (TCG_TARGET_SIGNED_ADDR32 && TARGET_LONG_BITS == 32) {
+        return tlb->addend + (int32_t)gaddr;
+    }
     return tlb->addend + (uintptr_t)gaddr;
 }
 
@@ -1244,7 +1248,13 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
     desc->iotlb[index].attrs = attrs;
 
     /* Now calculate the new entry */
-    tn.addend = addend - vaddr_page;
+
+    if (TCG_TARGET_SIGNED_ADDR32 && TARGET_LONG_BITS == 32) {
+        tn.addend = addend - (int32_t)vaddr_page;
+    } else {
+        tn.addend = addend - vaddr_page;
+    }
+
     if (prot & PAGE_READ) {
         tn.addr_read = address;
         if (wp_flags & BP_MEM_READ) {
-- 
2.25.1




* [PATCH v2 4/9] accel/tcg: Add guest_base_signed_addr32 for user-only
  2022-02-27  2:04 [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed Richard Henderson
                   ` (2 preceding siblings ...)
  2022-02-27  2:04 ` [PATCH v2 3/9] accel/tcg: Support TCG_TARGET_SIGNED_ADDR32 for softmmu Richard Henderson
@ 2022-02-27  2:04 ` Richard Henderson
  2022-03-03 15:14   ` Peter Maydell
  2022-02-27  2:04 ` [PATCH v2 5/9] linux-user: Support TCG_TARGET_SIGNED_ADDR32 Richard Henderson
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel; +Cc: Philippe Mathieu-Daudé

While the host may prefer to treat 32-bit addresses as signed,
there are edge cases of guests that cannot be implemented if
addresses 0x7fff_ffff and 0x8000_0000 are not consecutive in host
memory.
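
For illustration (a hypothetical sketch, not part of the patch): under
the signed convention, and without further adjustment of guest_base,
those two guest addresses land about 4GiB apart in host memory, so a
single guest object spanning them would not be contiguous:

    #include <stdint.h>

    /* Returns a - b; -1 would mean "b immediately follows a". */
    static intptr_t signed_addr32_gap(uintptr_t guest_base)
    {
        uintptr_t a = guest_base + (uintptr_t)(intptr_t)(int32_t)0x7fffffffu;
        uintptr_t b = guest_base + (uintptr_t)(intptr_t)(int32_t)0x80000000u;
        /* a == guest_base + 0x7fff_ffff, b == guest_base - 0x8000_0000,
           so this returns 0xffffffff rather than -1. */
        return (intptr_t)(a - b);
    }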

Therefore, default guest_base_signed_addr32 to false, and allow
probe_guest_base to determine whether it is possible to set it
to true.  A tcg backend which sets TCG_TARGET_SIGNED_ADDR32 will
have to cope with either setting for user-only.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-all.h  | 16 ++++++++++++++++
 include/exec/cpu_ldst.h |  3 ++-
 bsd-user/main.c         |  4 ++++
 linux-user/main.c       |  3 +++
 4 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 84caf5c3d9..26ecd3c886 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -146,6 +146,7 @@ static inline void tswap64s(uint64_t *s)
 
 #if defined(CONFIG_USER_ONLY)
 #include "exec/user/abitypes.h"
+#include "tcg-target-sa32.h"
 
 /* On some host systems the guest address space is reserved on the host.
  * This allows the guest address space to be offset to a convenient location.
@@ -154,6 +155,21 @@ extern uintptr_t guest_base;
 extern bool have_guest_base;
 extern unsigned long reserved_va;
 
+#if TCG_TARGET_SIGNED_ADDR32 && TARGET_LONG_BITS == 32
+extern bool guest_base_signed_addr32;
+#else
+#define guest_base_signed_addr32  false
+#endif
+
+static inline void set_guest_base_signed_addr32(void)
+{
+#ifdef guest_base_signed_addr32
+    qemu_build_not_reached();
+#else
+    guest_base_signed_addr32 = true;
+#endif
+}
+
 /*
  * Limit the guest addresses as best we can.
  *
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index da987fe8ad..add45499ee 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -87,7 +87,8 @@ static inline abi_ptr cpu_untagged_addr(CPUState *cs, abi_ptr x)
 /* All direct uses of g2h and h2g need to go away for usermode softmmu.  */
 static inline void *g2h_untagged(abi_ptr x)
 {
-    return (void *)((uintptr_t)(x) + guest_base);
+    uintptr_t hx = guest_base_signed_addr32 ? (int32_t)x : (uintptr_t)x;
+    return (void *)(guest_base + hx);
 }
 
 static inline void *g2h(CPUState *cs, abi_ptr x)
diff --git a/bsd-user/main.c b/bsd-user/main.c
index f1d58e905e..cca4b9a502 100644
--- a/bsd-user/main.c
+++ b/bsd-user/main.c
@@ -54,6 +54,10 @@
 int singlestep;
 uintptr_t guest_base;
 bool have_guest_base;
+#ifndef guest_base_signed_addr32
+bool guest_base_signed_addr32;
+#endif
+
 /*
  * When running 32-on-64 we should make sure we can fit all of the possible
  * guest address space into a contiguous chunk of virtual host memory.
diff --git a/linux-user/main.c b/linux-user/main.c
index fbc9bcfd5f..5d963ddb64 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -72,6 +72,9 @@ static const char *seed_optarg;
 unsigned long mmap_min_addr;
 uintptr_t guest_base;
 bool have_guest_base;
+#ifndef guest_base_signed_addr32
+bool guest_base_signed_addr32;
+#endif
 
 /*
  * Used to implement backwards-compatibility for the `-strace`, and
-- 
2.25.1




* [PATCH v2 5/9] linux-user: Support TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed Richard Henderson
                   ` (3 preceding siblings ...)
  2022-02-27  2:04 ` [PATCH v2 4/9] accel/tcg: Add guest_base_signed_addr32 for user-only Richard Henderson
@ 2022-02-27  2:04 ` Richard Henderson
  2022-02-27 22:48   ` Philippe Mathieu-Daudé
  2022-02-27  2:04 ` [PATCH v2 6/9] tcg/aarch64: " Richard Henderson
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel; +Cc: Alex Bennée

When using reserved_va, which is the default for a 64-bit host
and a 32-bit guest, set guest_base_signed_addr32 if it is requested
by TCG_TARGET_SIGNED_ADDR32 and the executable layout allows it.
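
In outline, the decision made below can be restated as the following
stand-alone paraphrase (hypothetical; the enum and function names are
invented, and the real code acts on globals rather than parameters):

    #include <stdint.h>

    typedef enum {
        ADDR32_UNSIGNED,      /* keep zero-extended guest addresses */
        ADDR32_SIGNED,        /* enable guest_base_signed_addr32 as-is */
        ADDR32_SIGNED_SPLIT,  /* enable it, but reserve the full 4GiB,
                                 keep the page below 0x8000_0000 unusable,
                                 and bias guest_base by 2GiB */
    } Addr32Mode;

    static Addr32Mode pick_addr32_mode(uint64_t loaddr, uint64_t hiaddr,
                                       uint64_t reserved_va)
    {
        if (loaddr < 0x80000000u && hiaddr > 0x80000000u) {
            /* The executable itself crosses the sign boundary. */
            return ADDR32_UNSIGNED;
        }
        if (reserved_va <= 0x80000000u) {
            /* No guest address is ever "negative". */
            return ADDR32_SIGNED;
        }
        return ADDR32_SIGNED_SPLIT;
    }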

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 include/exec/cpu-all.h |  4 ---
 linux-user/elfload.c   | 62 ++++++++++++++++++++++++++++++++++--------
 2 files changed, 50 insertions(+), 16 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 26ecd3c886..8bea0e069e 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -269,11 +269,7 @@ extern const TargetPageBits target_page;
 #define PAGE_RESET     0x0040
 /* For linux-user, indicates that the page is MAP_ANON. */
 #define PAGE_ANON      0x0080
-
-#if defined(CONFIG_BSD) && defined(CONFIG_USER_ONLY)
-/* FIXME: Code that sets/uses this is broken and needs to go away.  */
 #define PAGE_RESERVED  0x0100
-#endif
 /* Target-specific bits that will be used via page_get_flags().  */
 #define PAGE_TARGET_1  0x0200
 #define PAGE_TARGET_2  0x0400
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 9628a38361..5522f9e721 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -2482,34 +2482,72 @@ static void pgb_dynamic(const char *image_name, long align)
 static void pgb_reserved_va(const char *image_name, abi_ulong guest_loaddr,
                             abi_ulong guest_hiaddr, long align)
 {
-    int flags = MAP_ANONYMOUS | MAP_PRIVATE | MAP_NORESERVE;
+    int flags = (MAP_ANONYMOUS | MAP_PRIVATE |
+                 MAP_NORESERVE | MAP_FIXED_NOREPLACE);
+    unsigned long local_rva = reserved_va;
+    bool protect_wrap = false;
     void *addr, *test;
 
-    if (guest_hiaddr > reserved_va) {
+    if (guest_hiaddr > local_rva) {
         error_report("%s: requires more than reserved virtual "
                      "address space (0x%" PRIx64 " > 0x%lx)",
-                     image_name, (uint64_t)guest_hiaddr, reserved_va);
+                     image_name, (uint64_t)guest_hiaddr, local_rva);
         exit(EXIT_FAILURE);
     }
 
-    /* Widen the "image" to the entire reserved address space. */
-    pgb_static(image_name, 0, reserved_va, align);
+    if (TCG_TARGET_SIGNED_ADDR32 && TARGET_LONG_BITS == 32) {
+        if (guest_loaddr < 0x80000000u && guest_hiaddr > 0x80000000u) {
+            /*
+             * The executable itself wraps on signed addresses.
+             * Without per-page translation, we must keep the
+             * guest address 0x7fff_ffff adjacent to 0x8000_0000
+             * consecutive in host memory: unsigned addresses.
+             */
+        } else {
+            set_guest_base_signed_addr32();
+            if (local_rva <= 0x80000000u) {
+                /* No guest addresses are "negative": win! */
+            } else {
+                /* Begin by allocating the entire address space. */
+                local_rva = 0xfffffffful + 1;
+                protect_wrap = true;
+            }
+        }
+    }
 
-    /* osdep.h defines this as 0 if it's missing */
-    flags |= MAP_FIXED_NOREPLACE;
+    /* Widen the "image" to the entire reserved address space. */
+    pgb_static(image_name, 0, local_rva, align);
+    assert(guest_base != 0);
 
     /* Reserve the memory on the host. */
-    assert(guest_base != 0);
     test = g2h_untagged(0);
-    addr = mmap(test, reserved_va, PROT_NONE, flags, -1, 0);
+    addr = mmap(test, local_rva, PROT_NONE, flags, -1, 0);
     if (addr == MAP_FAILED || addr != test) {
+        /*
+         * If protect_wrap, we could try again with the original reserved_va
+         * setting, but the edge case of low ulimit vm setting on a 64-bit
+         * host is probably useless.
+         */
         error_report("Unable to reserve 0x%lx bytes of virtual address "
-                     "space at %p (%s) for use as guest address space (check your"
-                     "virtual memory ulimit setting, min_mmap_addr or reserve less "
-                     "using -R option)", reserved_va, test, strerror(errno));
+                     "space at %p (%s) for use as guest address space "
+                     "(check your virtual memory ulimit setting, "
+                     "min_mmap_addr or reserve less using -R option)",
+                     local_rva, test, strerror(errno));
         exit(EXIT_FAILURE);
     }
 
+    if (protect_wrap) {
+        /*
+         * Prevent the page just before 0x80000000 from being allocated.
+         * This prevents a single guest object/allocation from crossing
+         * the signed wrap, and thus being discontiguous in host memory.
+         */
+        page_set_flags(0x7fffffff & TARGET_PAGE_MASK, 0x80000000u,
+                       PAGE_RESERVED);
+        /* Adjust guest_base so that 0 is in the middle of the reservation. */
+        guest_base += 0x80000000ul;
+    }
+
     qemu_log_mask(CPU_LOG_PAGE, "%s: base @ %p for %lu bytes\n",
                   __func__, addr, reserved_va);
 }
-- 
2.25.1




* [PATCH v2 6/9] tcg/aarch64: Support TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed Richard Henderson
                   ` (4 preceding siblings ...)
  2022-02-27  2:04 ` [PATCH v2 5/9] linux-user: Support TCG_TARGET_SIGNED_ADDR32 Richard Henderson
@ 2022-02-27  2:04 ` Richard Henderson
  2022-03-03 15:04   ` Peter Maydell
  2022-02-27  2:04 ` [PATCH v2 7/9] tcg/mips: " Richard Henderson
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel

AArch64 has both sign- and zero-extending addressing modes, which
means that either treatment of guest addresses is equally efficient.
Enabling this for AArch64 gives us testing of the feature in CI.
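
For reference when reading the diff below: the raw 'option' values it
passes are the LDR/STR (register) extend encodings; spelled out as a
sketch (the constant names themselves are invented), they are:

    /* Values as returned by ldst_ext_option() in the patch. */
    enum {
        AARCH64_LDST_UXTW = 2,  /* zero-extend 32-bit offset register */
        AARCH64_LDST_LSL  = 3,  /* 64-bit offset register, LSL #0 */
        AARCH64_LDST_SXTW = 6,  /* sign-extend 32-bit offset register */
    };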

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/aarch64/tcg-target-sa32.h |  8 +++-
 tcg/aarch64/tcg-target.c.inc  | 69 +++++++++++++++++++++++------------
 2 files changed, 52 insertions(+), 25 deletions(-)

diff --git a/tcg/aarch64/tcg-target-sa32.h b/tcg/aarch64/tcg-target-sa32.h
index cb185b1526..c99e502e4c 100644
--- a/tcg/aarch64/tcg-target-sa32.h
+++ b/tcg/aarch64/tcg-target-sa32.h
@@ -1 +1,7 @@
-#define TCG_TARGET_SIGNED_ADDR32 0
+/*
+ * AArch64 has both SXTW and UXTW addressing modes, which means that
+ * it is agnostic to how guest addresses should be represented.
+ * Because aarch64 is more common than the other hosts that will
+ * want to use this feature, enable it for continuous testing.
+ */
+#define TCG_TARGET_SIGNED_ADDR32 1
diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
index 077fc51401..65cab73ea0 100644
--- a/tcg/aarch64/tcg-target.c.inc
+++ b/tcg/aarch64/tcg-target.c.inc
@@ -806,12 +806,12 @@ static void tcg_out_insn_3617(TCGContext *s, AArch64Insn insn, bool q,
 }
 
 static void tcg_out_insn_3310(TCGContext *s, AArch64Insn insn,
-                              TCGReg rd, TCGReg base, TCGType ext,
+                              TCGReg rd, TCGReg base, int option,
                               TCGReg regoff)
 {
     /* Note the AArch64Insn constants above are for C3.3.12.  Adjust.  */
     tcg_out32(s, insn | I3312_TO_I3310 | regoff << 16 |
-              0x4000 | ext << 13 | base << 5 | (rd & 0x1f));
+              option << 13 | base << 5 | (rd & 0x1f));
 }
 
 static void tcg_out_insn_3312(TCGContext *s, AArch64Insn insn,
@@ -1126,7 +1126,7 @@ static void tcg_out_ldst(TCGContext *s, AArch64Insn insn, TCGReg rd,
 
     /* Worst-case scenario, move offset to temp register, use reg offset.  */
     tcg_out_movi(s, TCG_TYPE_I64, TCG_REG_TMP, offset);
-    tcg_out_ldst_r(s, insn, rd, rn, TCG_TYPE_I64, TCG_REG_TMP);
+    tcg_out_ldst_r(s, insn, rd, rn, 3 /* LSL #0 */, TCG_REG_TMP);
 }
 
 static bool tcg_out_mov(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg)
@@ -1765,31 +1765,31 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 
 static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
                                    TCGReg data_r, TCGReg addr_r,
-                                   TCGType otype, TCGReg off_r)
+                                   int option, TCGReg off_r)
 {
     switch (memop & MO_SSIZE) {
     case MO_UB:
-        tcg_out_ldst_r(s, I3312_LDRB, data_r, addr_r, otype, off_r);
+        tcg_out_ldst_r(s, I3312_LDRB, data_r, addr_r, option, off_r);
         break;
     case MO_SB:
         tcg_out_ldst_r(s, ext ? I3312_LDRSBX : I3312_LDRSBW,
-                       data_r, addr_r, otype, off_r);
+                       data_r, addr_r, option, off_r);
         break;
     case MO_UW:
-        tcg_out_ldst_r(s, I3312_LDRH, data_r, addr_r, otype, off_r);
+        tcg_out_ldst_r(s, I3312_LDRH, data_r, addr_r, option, off_r);
         break;
     case MO_SW:
         tcg_out_ldst_r(s, (ext ? I3312_LDRSHX : I3312_LDRSHW),
-                       data_r, addr_r, otype, off_r);
+                       data_r, addr_r, option, off_r);
         break;
     case MO_UL:
-        tcg_out_ldst_r(s, I3312_LDRW, data_r, addr_r, otype, off_r);
+        tcg_out_ldst_r(s, I3312_LDRW, data_r, addr_r, option, off_r);
         break;
     case MO_SL:
-        tcg_out_ldst_r(s, I3312_LDRSWX, data_r, addr_r, otype, off_r);
+        tcg_out_ldst_r(s, I3312_LDRSWX, data_r, addr_r, option, off_r);
         break;
     case MO_UQ:
-        tcg_out_ldst_r(s, I3312_LDRX, data_r, addr_r, otype, off_r);
+        tcg_out_ldst_r(s, I3312_LDRX, data_r, addr_r, option, off_r);
         break;
     default:
         tcg_abort();
@@ -1798,31 +1798,52 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
 
 static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
                                    TCGReg data_r, TCGReg addr_r,
-                                   TCGType otype, TCGReg off_r)
+                                   int option, TCGReg off_r)
 {
     switch (memop & MO_SIZE) {
     case MO_8:
-        tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r);
+        tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, option, off_r);
         break;
     case MO_16:
-        tcg_out_ldst_r(s, I3312_STRH, data_r, addr_r, otype, off_r);
+        tcg_out_ldst_r(s, I3312_STRH, data_r, addr_r, option, off_r);
         break;
     case MO_32:
-        tcg_out_ldst_r(s, I3312_STRW, data_r, addr_r, otype, off_r);
+        tcg_out_ldst_r(s, I3312_STRW, data_r, addr_r, option, off_r);
         break;
     case MO_64:
-        tcg_out_ldst_r(s, I3312_STRX, data_r, addr_r, otype, off_r);
+        tcg_out_ldst_r(s, I3312_STRX, data_r, addr_r, option, off_r);
         break;
     default:
         tcg_abort();
     }
 }
 
+/*
+ * Bits for the option field of LDR/STR (register),
+ * for application to a guest address.
+ */
+static int ldst_ext_option(void)
+{
+#ifdef CONFIG_USER_ONLY
+    bool signed_addr32 = guest_base_signed_addr32;
+#else
+    bool signed_addr32 = TCG_TARGET_SIGNED_ADDR32;
+#endif
+
+    if (TARGET_LONG_BITS == 64) {
+        return 3; /* LSL #0 */
+    } else if (signed_addr32) {
+        return 6; /* SXTW */
+    } else {
+        return 2; /* UXTW */
+    }
+}
+
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             MemOpIdx oi, TCGType ext)
 {
     MemOp memop = get_memop(oi);
-    const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
+    int option = ldst_ext_option();
 
     /* Byte swapping is left to middle-end expansion. */
     tcg_debug_assert((memop & MO_BSWAP) == 0);
@@ -1833,7 +1854,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 
     tcg_out_tlb_read(s, addr_reg, memop, &label_ptr, mem_index, 1);
     tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
-                           TCG_REG_X1, otype, addr_reg);
+                           TCG_REG_X1, option, addr_reg);
     add_qemu_ldst_label(s, true, oi, ext, data_reg, addr_reg,
                         s->code_ptr, label_ptr);
 #else /* !CONFIG_SOFTMMU */
@@ -1843,10 +1864,10 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
     }
     if (USE_GUEST_BASE) {
         tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
-                               TCG_REG_GUEST_BASE, otype, addr_reg);
+                               TCG_REG_GUEST_BASE, option, addr_reg);
     } else {
         tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
-                               addr_reg, TCG_TYPE_I64, TCG_REG_XZR);
+                               addr_reg, option, TCG_REG_XZR);
     }
 #endif /* CONFIG_SOFTMMU */
 }
@@ -1855,7 +1876,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             MemOpIdx oi)
 {
     MemOp memop = get_memop(oi);
-    const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
+    int option = ldst_ext_option();
 
     /* Byte swapping is left to middle-end expansion. */
     tcg_debug_assert((memop & MO_BSWAP) == 0);
@@ -1866,7 +1887,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 
     tcg_out_tlb_read(s, addr_reg, memop, &label_ptr, mem_index, 0);
     tcg_out_qemu_st_direct(s, memop, data_reg,
-                           TCG_REG_X1, otype, addr_reg);
+                           TCG_REG_X1, option, addr_reg);
     add_qemu_ldst_label(s, false, oi, (memop & MO_SIZE)== MO_64,
                         data_reg, addr_reg, s->code_ptr, label_ptr);
 #else /* !CONFIG_SOFTMMU */
@@ -1876,10 +1897,10 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
     }
     if (USE_GUEST_BASE) {
         tcg_out_qemu_st_direct(s, memop, data_reg,
-                               TCG_REG_GUEST_BASE, otype, addr_reg);
+                               TCG_REG_GUEST_BASE, option, addr_reg);
     } else {
         tcg_out_qemu_st_direct(s, memop, data_reg,
-                               addr_reg, TCG_TYPE_I64, TCG_REG_XZR);
+                               addr_reg, option, TCG_REG_XZR);
     }
 #endif /* CONFIG_SOFTMMU */
 }
-- 
2.25.1




* [PATCH v2 7/9] tcg/mips: Support TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed Richard Henderson
                   ` (5 preceding siblings ...)
  2022-02-27  2:04 ` [PATCH v2 6/9] tcg/aarch64: " Richard Henderson
@ 2022-02-27  2:04 ` Richard Henderson
  2022-02-27 22:51   ` Philippe Mathieu-Daudé
  2022-02-27  2:04 ` [PATCH v2 8/9] tcg/riscv: " Richard Henderson
  2022-02-27  2:04 ` [PATCH v2 9/9] tcg/loongarch64: " Richard Henderson
  8 siblings, 1 reply; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel
  Cc: Huacai Chen, Aleksandar Rikalo, Philippe Mathieu-Daudé,
	Aurelien Jarno

All 32-bit MIPS operations sign-extend the output, so we are easily
able to keep TCG_TYPE_I32 values sign-extended in host registers.

Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Huacai Chen <chenhuacai@kernel.org>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/mips/tcg-target-sa32.h |  8 ++++++++
 tcg/mips/tcg-target.c.inc  | 10 ++--------
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/tcg/mips/tcg-target-sa32.h b/tcg/mips/tcg-target-sa32.h
index cb185b1526..51255e7cba 100644
--- a/tcg/mips/tcg-target-sa32.h
+++ b/tcg/mips/tcg-target-sa32.h
@@ -1 +1,9 @@
+/*
+ * Do not set TCG_TARGET_SIGNED_ADDR32 for mips32;
+ * TCG expects this to only be set for 64-bit hosts.
+ */
+#ifdef __mips64
+#define TCG_TARGET_SIGNED_ADDR32 1
+#else
 #define TCG_TARGET_SIGNED_ADDR32 0
+#endif
diff --git a/tcg/mips/tcg-target.c.inc b/tcg/mips/tcg-target.c.inc
index 993149d18a..b97c032ded 100644
--- a/tcg/mips/tcg-target.c.inc
+++ b/tcg/mips/tcg-target.c.inc
@@ -1168,12 +1168,6 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
                      TCG_TMP0, TCG_TMP3, cmp_off);
     }
 
-    /* Zero extend a 32-bit guest address for a 64-bit host. */
-    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS) {
-        tcg_out_ext32u(s, base, addrl);
-        addrl = base;
-    }
-
     /*
      * Mask the page bits, keeping the alignment bits to compare against.
      * For unaligned accesses, compare against the end of the access to
@@ -1679,7 +1673,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
                         data_regl, data_regh, addr_regl, addr_regh,
                         s->code_ptr, label_ptr);
 #else
-    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS) {
+    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS && !guest_base_signed_addr32) {
         tcg_out_ext32u(s, base, addr_regl);
         addr_regl = base;
     }
@@ -1878,7 +1872,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
                         data_regl, data_regh, addr_regl, addr_regh,
                         s->code_ptr, label_ptr);
 #else
-    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS) {
+    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS && !guest_base_signed_addr32) {
         tcg_out_ext32u(s, base, addr_regl);
         addr_regl = base;
     }
-- 
2.25.1




* [PATCH v2 8/9] tcg/riscv: Support TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed Richard Henderson
                   ` (6 preceding siblings ...)
  2022-02-27  2:04 ` [PATCH v2 7/9] tcg/mips: " Richard Henderson
@ 2022-02-27  2:04 ` Richard Henderson
  2022-02-27  2:04 ` [PATCH v2 9/9] tcg/loongarch64: " Richard Henderson
  8 siblings, 0 replies; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel; +Cc: Alistair Francis, Philippe Mathieu-Daudé

All RV64 32-bit operations sign-extend the output, so we are easily
able to keep TCG_TYPE_I32 values sign-extended in host registers.

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/riscv/tcg-target-sa32.h | 6 +++++-
 tcg/riscv/tcg-target.c.inc  | 8 ++------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/tcg/riscv/tcg-target-sa32.h b/tcg/riscv/tcg-target-sa32.h
index cb185b1526..703467b37a 100644
--- a/tcg/riscv/tcg-target-sa32.h
+++ b/tcg/riscv/tcg-target-sa32.h
@@ -1 +1,5 @@
-#define TCG_TARGET_SIGNED_ADDR32 0
+/*
+ * Do not set TCG_TARGET_SIGNED_ADDR32 for RV32;
+ * TCG expects this to only be set for 64-bit hosts.
+ */
+#define TCG_TARGET_SIGNED_ADDR32  (__riscv_xlen == 64)
diff --git a/tcg/riscv/tcg-target.c.inc b/tcg/riscv/tcg-target.c.inc
index 6409d9c3d5..c999711494 100644
--- a/tcg/riscv/tcg-target.c.inc
+++ b/tcg/riscv/tcg-target.c.inc
@@ -951,10 +951,6 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
     tcg_out_opc_branch(s, OPC_BNE, TCG_REG_TMP0, TCG_REG_TMP1, 0);
 
     /* TLB Hit - translate address using addend.  */
-    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS) {
-        tcg_out_ext32u(s, TCG_REG_TMP0, addrl);
-        addrl = TCG_REG_TMP0;
-    }
     tcg_out_opc_reg(s, OPC_ADD, TCG_REG_TMP0, TCG_REG_TMP2, addrl);
 }
 
@@ -1175,7 +1171,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
                         data_regl, data_regh, addr_regl, addr_regh,
                         s->code_ptr, label_ptr);
 #else
-    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS) {
+    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS && !guest_base_signed_addr32) {
         tcg_out_ext32u(s, base, addr_regl);
         addr_regl = base;
     }
@@ -1247,7 +1243,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
                         data_regl, data_regh, addr_regl, addr_regh,
                         s->code_ptr, label_ptr);
 #else
-    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS) {
+    if (TCG_TARGET_REG_BITS > TARGET_LONG_BITS && !guest_base_signed_addr32) {
         tcg_out_ext32u(s, base, addr_regl);
         addr_regl = base;
     }
-- 
2.25.1




* [PATCH v2 9/9] tcg/loongarch64: Support TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 [PATCH v2 0/9] tcg: support 32-bit guest addresses as signed Richard Henderson
                   ` (7 preceding siblings ...)
  2022-02-27  2:04 ` [PATCH v2 8/9] tcg/riscv: " Richard Henderson
@ 2022-02-27  2:04 ` Richard Henderson
  2022-02-27 22:52   ` Philippe Mathieu-Daudé
  8 siblings, 1 reply; 19+ messages in thread
From: Richard Henderson @ 2022-02-27  2:04 UTC (permalink / raw)
  To: qemu-devel; +Cc: WANG Xuerui

All 32-bit LoongArch operations sign-extend the output, so we are easily
able to keep TCG_TYPE_I32 values sign-extended in host registers.

Cc: WANG Xuerui <git@xen0n.name>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 tcg/loongarch64/tcg-target-sa32.h |  2 +-
 tcg/loongarch64/tcg-target.c.inc  | 15 ++++++---------
 2 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/tcg/loongarch64/tcg-target-sa32.h b/tcg/loongarch64/tcg-target-sa32.h
index cb185b1526..aaffd777bf 100644
--- a/tcg/loongarch64/tcg-target-sa32.h
+++ b/tcg/loongarch64/tcg-target-sa32.h
@@ -1 +1 @@
-#define TCG_TARGET_SIGNED_ADDR32 0
+#define TCG_TARGET_SIGNED_ADDR32 1
diff --git a/tcg/loongarch64/tcg-target.c.inc b/tcg/loongarch64/tcg-target.c.inc
index a3debf6da7..425f6629ca 100644
--- a/tcg/loongarch64/tcg-target.c.inc
+++ b/tcg/loongarch64/tcg-target.c.inc
@@ -880,8 +880,6 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     return tcg_out_fail_alignment(s, l);
 }
 
-#endif /* CONFIG_SOFTMMU */
-
 /*
  * `ext32u` the address register into the temp register given,
  * if target is 32-bit, no-op otherwise.
@@ -891,12 +889,13 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static TCGReg tcg_out_zext_addr_if_32_bit(TCGContext *s,
                                           TCGReg addr, TCGReg tmp)
 {
-    if (TARGET_LONG_BITS == 32) {
+    if (TARGET_LONG_BITS == 32 && !guest_base_signed_addr32) {
         tcg_out_ext32u(s, tmp, addr);
         return tmp;
     }
     return addr;
 }
+#endif /* CONFIG_SOFTMMU */
 
 static void tcg_out_qemu_ld_indexed(TCGContext *s, TCGReg rd, TCGReg rj,
                                    TCGReg rk, MemOp opc, TCGType type)
@@ -944,8 +943,8 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, TCGType type)
     tcg_insn_unit *label_ptr[1];
 #else
     unsigned a_bits;
-#endif
     TCGReg base;
+#endif
 
     data_regl = *args++;
     addr_regl = *args++;
@@ -954,8 +953,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, TCGType type)
 
 #if defined(CONFIG_SOFTMMU)
     tcg_out_tlb_load(s, addr_regl, oi, label_ptr, 1);
-    base = tcg_out_zext_addr_if_32_bit(s, addr_regl, TCG_REG_TMP0);
-    tcg_out_qemu_ld_indexed(s, data_regl, base, TCG_REG_TMP2, opc, type);
+    tcg_out_qemu_ld_indexed(s, data_regl, addr_regl, TCG_REG_TMP2, opc, type);
     add_qemu_ldst_label(s, 1, oi, type,
                         data_regl, addr_regl,
                         s->code_ptr, label_ptr);
@@ -1004,8 +1002,8 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args)
     tcg_insn_unit *label_ptr[1];
 #else
     unsigned a_bits;
-#endif
     TCGReg base;
+#endif
 
     data_regl = *args++;
     addr_regl = *args++;
@@ -1014,8 +1012,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args)
 
 #if defined(CONFIG_SOFTMMU)
     tcg_out_tlb_load(s, addr_regl, oi, label_ptr, 0);
-    base = tcg_out_zext_addr_if_32_bit(s, addr_regl, TCG_REG_TMP0);
-    tcg_out_qemu_st_indexed(s, data_regl, base, TCG_REG_TMP2, opc);
+    tcg_out_qemu_st_indexed(s, data_regl, addr_regl, TCG_REG_TMP2, opc);
     add_qemu_ldst_label(s, 0, oi,
                         0, /* type param is unused for stores */
                         data_regl, addr_regl,
-- 
2.25.1




* Re: [PATCH v2 3/9] accel/tcg: Support TCG_TARGET_SIGNED_ADDR32 for softmmu
  2022-02-27  2:04 ` [PATCH v2 3/9] accel/tcg: Support TCG_TARGET_SIGNED_ADDR32 for softmmu Richard Henderson
@ 2022-02-27 22:32   ` Philippe Mathieu-Daudé
  2022-03-03 15:14   ` Peter Maydell
  1 sibling, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-02-27 22:32 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel

On 27/2/22 03:04, Richard Henderson wrote:
> When TCG_TARGET_SIGNED_ADDR32 is set, adjust the tlb addend to
> allow the 32-bit guest address to be sign extended within the
> 64-bit host register instead of zero extended.
> 
> This will simplify tcg hosts like MIPS, RISC-V, and LoongArch,
> which naturally sign-extend 32-bit values, in contrast to x86_64
> and AArch64 which zero-extend them.
> 
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   accel/tcg/cputlb.c | 12 +++++++++++-
>   1 file changed, 11 insertions(+), 1 deletion(-)

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>



* Re: [PATCH v2 5/9] linux-user: Support TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 ` [PATCH v2 5/9] linux-user: Support TCG_TARGET_SIGNED_ADDR32 Richard Henderson
@ 2022-02-27 22:48   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-02-27 22:48 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée

On 27/2/22 03:04, Richard Henderson wrote:
> When using reserved_va, which is the default for a 64-bit host
> and a 32-bit guest, set guest_base_signed_addr32 if requested
> by TCG_TARGET_SIGNED_ADDR32, and the executable layout allows.
> 
> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   include/exec/cpu-all.h |  4 ---
>   linux-user/elfload.c   | 62 ++++++++++++++++++++++++++++++++++--------
>   2 files changed, 50 insertions(+), 16 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>



* Re: [PATCH v2 7/9] tcg/mips: Support TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 ` [PATCH v2 7/9] tcg/mips: " Richard Henderson
@ 2022-02-27 22:51   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-02-27 22:51 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel
  Cc: Huacai Chen, Aleksandar Rikalo, Philippe Mathieu-Daudé,
	Aurelien Jarno

On 27/2/22 03:04, Richard Henderson wrote:
> All 32-bit mips operations sign-extend the output, so we are easily
> able to keep TCG_TYPE_I32 values sign-extended in host registers.
> 
> Cc: Philippe Mathieu-Daudé <f4bug@amsat.org>
> Cc: Aurelien Jarno <aurelien@aurel32.net>
> Cc: Huacai Chen <chenhuacai@kernel.org>
> Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
> Cc: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   tcg/mips/tcg-target-sa32.h |  8 ++++++++
>   tcg/mips/tcg-target.c.inc  | 10 ++--------
>   2 files changed, 10 insertions(+), 8 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>



* Re: [PATCH v2 9/9] tcg/loongarch64: Support TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 ` [PATCH v2 9/9] tcg/loongarch64: " Richard Henderson
@ 2022-02-27 22:52   ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 19+ messages in thread
From: Philippe Mathieu-Daudé @ 2022-02-27 22:52 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: WANG Xuerui

On 27/2/22 03:04, Richard Henderson wrote:
> All 32-bit LoongArch operations sign-extend the output, so we are easily
> able to keep TCG_TYPE_I32 values sign-extended in host registers.
> 
> Cc: WANG Xuerui <git@xen0n.name>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>   tcg/loongarch64/tcg-target-sa32.h |  2 +-
>   tcg/loongarch64/tcg-target.c.inc  | 15 ++++++---------
>   2 files changed, 7 insertions(+), 10 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>



* Re: [PATCH v2 6/9] tcg/aarch64: Support TCG_TARGET_SIGNED_ADDR32
  2022-02-27  2:04 ` [PATCH v2 6/9] tcg/aarch64: " Richard Henderson
@ 2022-03-03 15:04   ` Peter Maydell
  2022-03-03 15:43     ` Richard Henderson
  0 siblings, 1 reply; 19+ messages in thread
From: Peter Maydell @ 2022-03-03 15:04 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel

On Sun, 27 Feb 2022 at 02:10, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> AArch64 has both sign and zero-extending addressing modes, which
> means that either treatment of guest addresses is equally efficient.
> Enabling this for AArch64 gives us testing of the feature in CI.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
>  tcg/aarch64/tcg-target-sa32.h |  8 +++-
>  tcg/aarch64/tcg-target.c.inc  | 69 +++++++++++++++++++++++------------
>  2 files changed, 52 insertions(+), 25 deletions(-)
>
> diff --git a/tcg/aarch64/tcg-target-sa32.h b/tcg/aarch64/tcg-target-sa32.h
> index cb185b1526..c99e502e4c 100644
> --- a/tcg/aarch64/tcg-target-sa32.h
> +++ b/tcg/aarch64/tcg-target-sa32.h
> @@ -1 +1,7 @@
> -#define TCG_TARGET_SIGNED_ADDR32 0
> +/*
> + * AArch64 has both SXTW and UXTW addressing modes, which means that
> + * it is agnostic to how guest addresses should be represented.
> + * Because aarch64 is more common than the other hosts that will
> + * want to use this feature, enable it for continuous testing.
> + */
> +#define TCG_TARGET_SIGNED_ADDR32 1
> diff --git a/tcg/aarch64/tcg-target.c.inc b/tcg/aarch64/tcg-target.c.inc
> index 077fc51401..65cab73ea0 100644
> --- a/tcg/aarch64/tcg-target.c.inc
> +++ b/tcg/aarch64/tcg-target.c.inc


>  static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>                              MemOpIdx oi, TCGType ext)
>  {
>      MemOp memop = get_memop(oi);
> -    const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
> +    int option = ldst_ext_option();
>
>      /* Byte swapping is left to middle-end expansion. */
>      tcg_debug_assert((memop & MO_BSWAP) == 0);
> @@ -1833,7 +1854,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>
>      tcg_out_tlb_read(s, addr_reg, memop, &label_ptr, mem_index, 1);
>      tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
> -                           TCG_REG_X1, otype, addr_reg);
> +                           TCG_REG_X1, option, addr_reg);
>      add_qemu_ldst_label(s, true, oi, ext, data_reg, addr_reg,
>                          s->code_ptr, label_ptr);
>  #else /* !CONFIG_SOFTMMU */
> @@ -1843,10 +1864,10 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>      }
>      if (USE_GUEST_BASE) {
>          tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
> -                               TCG_REG_GUEST_BASE, otype, addr_reg);
> +                               TCG_REG_GUEST_BASE, option, addr_reg);
>      } else {
>          tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
> -                               addr_reg, TCG_TYPE_I64, TCG_REG_XZR);
> +                               addr_reg, option, TCG_REG_XZR);

This doesn't look right. 'option' specifies how we extend the offset
register, but here that is XZR, which is 0 no matter how we choose
to extend it, whereas we aren't going to be extending the base
register 'addr_reg' which is what we do need to either zero or
sign extend. Unfortunately we can't just flip addr_reg and XZR
around, because XZR isn't valid as the base reg.

Is this a pre-existing bug? If addr_reg needs zero extending
we won't be doing that.

>      }
>  #endif /* CONFIG_SOFTMMU */
>  }
> @@ -1855,7 +1876,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>                              MemOpIdx oi)
>  {
>      MemOp memop = get_memop(oi);
> -    const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
> +    int option = ldst_ext_option();
>
>      /* Byte swapping is left to middle-end expansion. */
>      tcg_debug_assert((memop & MO_BSWAP) == 0);
> @@ -1866,7 +1887,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>
>      tcg_out_tlb_read(s, addr_reg, memop, &label_ptr, mem_index, 0);
>      tcg_out_qemu_st_direct(s, memop, data_reg,
> -                           TCG_REG_X1, otype, addr_reg);
> +                           TCG_REG_X1, option, addr_reg);
>      add_qemu_ldst_label(s, false, oi, (memop & MO_SIZE)== MO_64,
>                          data_reg, addr_reg, s->code_ptr, label_ptr);
>  #else /* !CONFIG_SOFTMMU */
> @@ -1876,10 +1897,10 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>      }
>      if (USE_GUEST_BASE) {
>          tcg_out_qemu_st_direct(s, memop, data_reg,
> -                               TCG_REG_GUEST_BASE, otype, addr_reg);
> +                               TCG_REG_GUEST_BASE, option, addr_reg);
>      } else {
>          tcg_out_qemu_st_direct(s, memop, data_reg,
> -                               addr_reg, TCG_TYPE_I64, TCG_REG_XZR);
> +                               addr_reg, option, TCG_REG_XZR);
>

Similarly here.

thanks
-- PMM



* Re: [PATCH v2 4/9] accel/tcg: Add guest_base_signed_addr32 for user-only
  2022-02-27  2:04 ` [PATCH v2 4/9] accel/tcg: Add guest_base_signed_addr32 for user-only Richard Henderson
@ 2022-03-03 15:14   ` Peter Maydell
  0 siblings, 0 replies; 19+ messages in thread
From: Peter Maydell @ 2022-03-03 15:14 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel, Philippe Mathieu-Daudé

On Sun, 27 Feb 2022 at 02:08, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> While the host may prefer to treat 32-bit addresses as signed,
> there are edge cases of guests that cannot be implemented with
> addresses 0x7fff_ffff and 0x8000_0000 being non-consecutive.
>
> Therefore, default to guest_base_signed_addr32 false, and allow
> probe_guest_base to determine whether it is possible to set it
> to true.  A tcg backend which sets TCG_TARGET_SIGNED_ADDR32 will
> have to cope with either setting for user-only.
>
> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM



* Re: [PATCH v2 3/9] accel/tcg: Support TCG_TARGET_SIGNED_ADDR32 for softmmu
  2022-02-27  2:04 ` [PATCH v2 3/9] accel/tcg: Support TCG_TARGET_SIGNED_ADDR32 for softmmu Richard Henderson
  2022-02-27 22:32   ` Philippe Mathieu-Daudé
@ 2022-03-03 15:14   ` Peter Maydell
  1 sibling, 0 replies; 19+ messages in thread
From: Peter Maydell @ 2022-03-03 15:14 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel

On Sun, 27 Feb 2022 at 02:08, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> When TCG_TARGET_SIGNED_ADDR32 is set, adjust the tlb addend to
> allow the 32-bit guest address to be sign extended within the
> 64-bit host register instead of zero extended.
>
> This will simplify tcg hosts like MIPS, RISC-V, and LoongArch,
> which naturally sign-extend 32-bit values, in contrast to x86_64
> and AArch64 which zero-extend them.
>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM



* Re: [PATCH v2 6/9] tcg/aarch64: Support TCG_TARGET_SIGNED_ADDR32
  2022-03-03 15:04   ` Peter Maydell
@ 2022-03-03 15:43     ` Richard Henderson
  2022-03-03 16:19       ` Peter Maydell
  0 siblings, 1 reply; 19+ messages in thread
From: Richard Henderson @ 2022-03-03 15:43 UTC (permalink / raw)
  To: Peter Maydell; +Cc: qemu-devel

On 3/3/22 05:04, Peter Maydell wrote:
>>       if (USE_GUEST_BASE) {
>>           tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
>> -                               TCG_REG_GUEST_BASE, otype, addr_reg);
>> +                               TCG_REG_GUEST_BASE, option, addr_reg);
>>       } else {
>>           tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
>> -                               addr_reg, TCG_TYPE_I64, TCG_REG_XZR);
>> +                               addr_reg, option, TCG_REG_XZR);
> 
> This doesn't look right. 'option' specifies how we extend the offset
> register, but here that is XZR, which is 0 no matter how we choose
> to extend it, whereas we aren't going to be extending the base
> register 'addr_reg' which is what we do need to either zero or
> sign extend. Unfortunately we can't just flip addr_reg and XZR
> around, because XZR isn't valid as the base reg.
> 
> Is this a pre-existing bug? If addr_reg needs zero extending
> we won't be doing that.

It's just confusing, because stuff is hidden in macros:

#define USE_GUEST_BASE     (guest_base != 0 || TARGET_LONG_BITS == 32)

We *always* use TCG_REG_GUEST_BASE when we require an extension, so the else case you 
point out will always have option == 3 /* LSL #0 */.

Previously I had a named constant I could use here, but I didn't create names for the full 
'option' field being filled, so I thought it clearer to just pass along the variable. 
Would it be clearer as

     3 /* LSL #0 */

or with some LDST_OPTION_LSL0?


r~



* Re: [PATCH v2 6/9] tcg/aarch64: Support TCG_TARGET_SIGNED_ADDR32
  2022-03-03 15:43     ` Richard Henderson
@ 2022-03-03 16:19       ` Peter Maydell
  0 siblings, 0 replies; 19+ messages in thread
From: Peter Maydell @ 2022-03-03 16:19 UTC (permalink / raw)
  To: Richard Henderson; +Cc: qemu-devel

On Thu, 3 Mar 2022 at 15:43, Richard Henderson
<richard.henderson@linaro.org> wrote:
>
> On 3/3/22 05:04, Peter Maydell wrote:
> >>       if (USE_GUEST_BASE) {
> >>           tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
> >> -                               TCG_REG_GUEST_BASE, otype, addr_reg);
> >> +                               TCG_REG_GUEST_BASE, option, addr_reg);
> >>       } else {
> >>           tcg_out_qemu_ld_direct(s, memop, ext, data_reg,
> >> -                               addr_reg, TCG_TYPE_I64, TCG_REG_XZR);
> >> +                               addr_reg, option, TCG_REG_XZR);
> >
> > This doesn't look right. 'option' specifies how we extend the offset
> > register, but here that is XZR, which is 0 no matter how we choose
> > to extend it, whereas we aren't going to be extending the base
> > register 'addr_reg' which is what we do need to either zero or
> > sign extend. Unfortunately we can't just flip addr_reg and XZR
> > around, because XZR isn't valid as the base reg.
> >
> > Is this a pre-existing bug? If addr_reg needs zero extending
> > we won't be doing that.
>
> It's just confusing, because stuff is hidden in macros:
>
> #define USE_GUEST_BASE     (guest_base != 0 || TARGET_LONG_BITS == 32)
>
> We *always* use TCG_REG_GUEST_BASE when we require an extension, so the else case you
> point out will always have option == 3 /* LSL #0 */.
>
> Previously I had a named constant I could use here, but I didn't create names for the full
> 'option' field being filled, so I thought it clearer to just pass along the variable.
> Would it be clearer as
>
>      3 /* LSL #0 */
>
> or with some LDST_OPTION_LSL0?

I think that using something that says it's LSL 0 (either comment as done
elsewhere in the patch, or maybe better with some symbolic constant)
would help, yes. Plus an assert or a comment that we know we don't
need to extend addr_reg in this half of the if().

thanks
-- PMM


