* [Qemu-devel] [PATCH v2 00/20] Invert Endian bit in SPARCv9 MMU TTE
From: tony.nguyen @ 2019-07-22 15:34 UTC
  To: qemu-devel
  Cc: peter.maydell, walling, david, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, mst, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, claudio.fontana, qemu-s390x, qemu-ppc,
	amarkovic, pbonzini, aurelien

This patchset implements the IE (Invert Endian) bit in SPARCv9 MMU TTE.

It is an attempt at the approach outlined by Richard Henderson to Mark
Cave-Ayland.

Tested with OpenBSD on sun4u. Solaris 10 is my actual goal, but unfortunately
a separate keyboard issue still stands in the way.

On 01/11/17 19:15, Mark Cave-Ayland wrote:

>On 15/08/17 19:10, Richard Henderson wrote:
>
>> [CC Peter re MemTxAttrs below]
>> 
>> On 08/15/2017 09:38 AM, Mark Cave-Ayland wrote:
>>> Working through an incorrect endian issue on qemu-system-sparc64, it has
>>> become apparent that at least one OS makes use of the IE (Invert Endian)
>>> bit in the SPARCv9 MMU TTE to map PCI memory space without the
>>> programmer having to manually endian-swap accesses.
>>>
>>> In other words, to quote the UltraSPARC specification: "if this bit is
>>> set, accesses to the associated page are processed with inverse
>>> endianness from what is specified by the instruction (big-for-little and
>>> little-for-big)".

A good explanation by Mark of why the IE bit is required.
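
Concretely, the IE bit adds one byte swap on top of whatever byte order the
load/store instruction itself encoded. A minimal sketch in the MO_BSWAP
terms used later in this series (TTE_IE here is a hypothetical name for the
IE bit test):

    /* Sketch only: an access through an IE-mapped page inverts the
     * endianness encoded by the instruction. */
    if (tte & TTE_IE) {        /* hypothetical IE bit test */
        memop ^= MO_BSWAP;     /* big-for-little and little-for-big */
    }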

>>>
>>> Looking through various bits of code, I'm trying to get a feel for the
>>> best way to implement this in an efficient manner. From what I can see
>>> this could be solved using an additional MMU index, however I'm not
>>> overly familiar with the memory and softmmu subsystems.
>> 
>> No, it can't be solved with an MMU index.
>> 
>>> Can anyone point me in the right direction as to what would be the best
>>> way to implement this feature within QEMU?
>> 
>> It's definitely tricky.
>> 
>> We definitely need some TLB_FLAGS_MASK bit set so that we're forced through the
>> memory slow path.  There is no other way to bypass the endianness that we've
>> already encoded from the target instruction.
>> 
>> Given the tlb_set_page_with_attrs interface, I would think that we need a new
>> bit in MemTxAttrs, so that the target/sparc tlb_fill (and subroutines) can
>> pass along the TTE bit for the given page.
>> 
>> We have an existing problem in softmmu_template.h,
>> 
>>     /* ??? Note that the io helpers always read data in the target
>>        byte ordering.  We should push the LE/BE request down into io.  */
>>     res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
>>     res = TGT_BE(res);
>> 
>> We do not want to add a third(!) byte swap along the i/o path.  We need to
>> collapse the two that we have already before considering this one.
>> 
>> This probably takes the form of:
>> 
>> (1) Replacing the "int size" argument with "TCGMemOp memop" for
>>       a) io_{read,write}x in accel/tcg/cputlb.c,
>>       b) memory_region_dispatch_{read,write} in memory.c,
>>       c) adjust_endianness in memory.c.
>>     This carries size+sign+endianness down to the next level.
>> 
>> (2) In memory.c, adjust_endianness,
>> 
>>      if (memory_region_wrong_endianness(mr)) {
>> -        switch (size) {
>> +        memop ^= MO_BSWAP;
>> +    }
>> +    if (memop & MO_BSWAP) {
>> 
>>     For extra credit, re-arrange memory_region_wrong_endianness
>>     to something more explicit -- "wrong" isn't helpful.
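
For reference, step (1) amounts to a signature change along these lines
(a sketch under the naming above; the exact prototypes are settled in the
patches below):

    /* "int size" becomes a memop argument, carrying size + sign +
     * endianness down to the next level in one value. */
    MemTxResult memory_region_dispatch_read(MemoryRegion *mr, hwaddr addr,
                                            uint64_t *pval, MemOp op,
                                            MemTxAttrs attrs);
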
>
>Finally I've had a bit of spare time to experiment with this approach,
>and from what I can see there are currently 2 issues:
>
>
>1) Using TCGMemOp in memory.c means it is no longer accelerator agnostic
>
>For the moment I've defined a separate MemOp in memory.h and provided a
>mapping function in io_{read,write}x to map from TCGMemOp to MemOp and
>then pass that into memory_region_dispatch_{read,write}.
>
>Other than not referencing TCGMemOp in the memory API, another reason
>for doing this was that I wasn't convinced that all the MO_ attributes
>were valid outside of TCG. I do, of course, strongly defer to other
>people's knowledge in this area though.
>
>
>2) The above changes to adjust_endianness() fail when
>memory_region_dispatch_{read,write} are called recursively
>
>Whilst booting qemu-system-sparc64 I see that
>memory_region_dispatch_{read,write} get called recursively - once via
>io_{read,write}x and then again via flatview_read_continue() in exec.c.
>
>The net effect of this is that we perform the bswap correctly at the
>tail of the recursion, but then as we travel back up the stack we hit
>memory_region_dispatch_{read,write} once again causing a second bswap
>which means the value is returned with the incorrect endian again.
>
>
>My understanding from your softmmu_template.h comment above is that the
>memory API should do the endian swapping internally allowing the removal
>of the final TGT_BE/TGT_LE applied to the result, or did I get this wrong?
>
>> (3) In tlb_set_page_with_attrs, notice attrs.byte_swap and set
>>     a new TLB_FORCE_SLOW bit within TLB_FLAGS_MASK.
>> 
>> (4) In io_{read,write}x, if iotlbentry->attrs.byte_swap is set,
>>     then memop ^= MO_BSWAP.
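
Taken together, (3) and (4) come out roughly as below (sketch only, using
the names from the outline above; TLB_FORCE_SLOW and attrs.byte_swap are
introduced later in this series):

    /* (3) tlb_set_page_with_attrs(): force the slow path for this page. */
    if (attrs.byte_swap) {
        address |= TLB_FORCE_SLOW;
    }

    /* (4) io_{read,write}x(): the single swap along the I/O path. */
    if (iotlbentry->attrs.byte_swap) {
        op ^= MO_BSWAP;
    }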

Thanks for v1 feedback.

v2:
- Moved size+sign+endianness attributes from TCGMemOp into MemOp.
  In v1 TCGMemOp was re-purposed entirely into MemOp.
- Replaced MemOp MO_{8|16|32|64} with TCGMemOp MO_{UB|UW|UL|UQ} aliases.
  This is to avoid warnings on comparing and coercing different enums.
- Renamed get_memop to get_tcgmemop for clarity.
- MEMOP is now SIZE_MEMOP, which is just ctzl(size); see the sketch after
  this list.
- Split patch 3/4 so one memory_region_dispatch_{read|write} interface
  is converted per patch.
- Do not reuse TLB_RECHECK; use the new TLB_FORCE_SLOW instead.
- Split patch 4/4 so adding the MemTxAttrs parameters and converting
  tlb_set_page() to tlb_set_page_with_attrs() is separate from usage.
- CC'd maintainers.
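
For reference, a sketch of the SIZE_MEMOP mapping mentioned above (using
the ctzl() noted there; sizes in bytes map onto the MO_* size values):

    /* 1 -> MO_UB, 2 -> MO_UW, 4 -> MO_UL, 8 -> MO_UQ */
    #define SIZE_MEMOP(ul) ((MemOp)ctzl(ul))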

Tony Nguyen (20):
  tcg: Replace MO_8 with MO_UB alias
  tcg: Replace MO_16 with MO_UW alias
  tcg: Replace MO_32 with MO_UL alias
  tcg: Replace MO_64 with MO_UQ alias
  tcg: Move size+sign+endian from TCGMemOp to MemOp
  tcg: Rename get_memop to get_tcgmemop
  memory: Access MemoryRegion with MemOp
  target/mips: Access MemoryRegion with MemOp
  hw/s390x: Access MemoryRegion with MemOp
  hw/intc/armv7m_nvic: Access MemoryRegion with MemOp
  hw/virtio: Access MemoryRegion with MemOp
  hw/vfio: Access MemoryRegion with MemOp
  exec: Access MemoryRegion with MemOp
  cputlb: Access MemoryRegion with MemOp
  memory: Access MemoryRegion with MemOp semantics
  memory: Single byte swap along the I/O path
  cpu: TLB_FLAGS_MASK bit to force memory slow path
  cputlb: Byte swap memory transaction attribute
  target/sparc: Add TLB entry with attributes
  target/sparc: sun4u Invert Endian TTE bit

 MAINTAINERS                         |   1 +
 accel/tcg/cputlb.c                  |  75 +++---
 exec.c                              |   6 +-
 hw/intc/armv7m_nvic.c               |  12 +-
 hw/s390x/s390-pci-inst.c            |   8 +-
 hw/vfio/pci-quirks.c                |   5 +-
 hw/virtio/virtio-pci.c              |   7 +-
 include/exec/cpu-all.h              |  10 +-
 include/exec/memattrs.h             |   2 +
 include/exec/memop.h                |  30 +++
 include/exec/memory.h               |   9 +-
 memory.c                            |  37 +--
 memory_ldst.inc.c                   |  18 +-
 target/arm/sve_helper.c             |  16 +-
 target/arm/translate-a64.c          | 516 ++++++++++++++++++------------------
 target/arm/translate-sve.c          |  74 +++---
 target/arm/translate-vfp.inc.c      |  10 +-
 target/arm/translate.c              | 134 +++++-----
 target/i386/translate.c             | 506 +++++++++++++++++------------------
 target/mips/op_helper.c             |   5 +-
 target/mips/translate.c             |   8 +-
 target/ppc/translate.c              |  28 +-
 target/ppc/translate/fp-impl.inc.c  |   4 +-
 target/ppc/translate/vmx-impl.inc.c | 118 ++++-----
 target/ppc/translate/vsx-impl.inc.c |  22 +-
 target/s390x/translate.c            |  10 +-
 target/s390x/translate_vx.inc.c     |  12 +-
 target/s390x/vec.h                  |  16 +-
 target/sparc/cpu.h                  |   2 +
 target/sparc/mmu_helper.c           |  40 +--
 target/sparc/translate.c            |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  82 +++---
 tcg/arm/tcg-target.inc.c            |  38 +--
 tcg/i386/tcg-target.inc.c           | 176 ++++++------
 tcg/mips/tcg-target.inc.c           |  38 +--
 tcg/optimize.c                      |   2 +-
 tcg/ppc/tcg-target.inc.c            |  28 +-
 tcg/riscv/tcg-target.inc.c          |  26 +-
 tcg/s390/tcg-target.inc.c           |  18 +-
 tcg/sparc/tcg-target.inc.c          |  18 +-
 tcg/tcg-op-gvec.c                   | 342 ++++++++++++------------
 tcg/tcg-op-vec.c                    |  30 +--
 tcg/tcg-op.c                        |  66 ++---
 tcg/tcg.c                           |   2 +-
 tcg/tcg.h                           |  34 ++-
 tcg/tci.c                           |   8 +-
 46 files changed, 1365 insertions(+), 1288 deletions(-)
 create mode 100644 include/exec/memop.h

-- 
1.8.3.1


* [Qemu-devel] [PATCH v2 01/20] tcg: Replace MO_8 with MO_UB alias
From: tony.nguyen @ 2019-07-22 15:38 UTC
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Preparation for splitting MO_8 out of TCGMemOp into a new
accelerator-independent MemOp.

As MO_8 will be a value of MemOp, existing TCGMemOp comparisons and
coercions will trigger -Wenum-compare and -Wenum-conversion.
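
A toy illustration of the hazard (hypothetical enum definitions, not the
real QEMU ones): once MO_8 and MO_UB live in different enums, mixed
comparisons draw the warnings above.

    typedef enum { MO_8 = 0 } MemOp;        /* hypothetical */
    typedef enum { MO_UB = 0 } TCGMemOp;    /* hypothetical */

    static int is_byte(TCGMemOp op)
    {
        return op == MO_8;  /* -Wenum-compare: distinct enum types */
    }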

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |  4 +-
 target/arm/translate-a64.c          | 14 +++----
 target/arm/translate-sve.c          |  4 +-
 target/arm/translate.c              | 38 +++++++++----------
 target/i386/translate.c             | 72 +++++++++++++++++------------------
 target/mips/translate.c             |  4 +-
 target/ppc/translate/vmx-impl.inc.c | 28 +++++++-------
 target/s390x/translate.c            |  2 +-
 target/s390x/translate_vx.inc.c     |  4 +-
 target/s390x/vec.h                  |  4 +-
 tcg/aarch64/tcg-target.inc.c        | 16 ++++----
 tcg/arm/tcg-target.inc.c            |  6 +--
 tcg/i386/tcg-target.inc.c           | 54 +++++++++++++-------------
 tcg/mips/tcg-target.inc.c           |  4 +-
 tcg/riscv/tcg-target.inc.c          |  4 +-
 tcg/sparc/tcg-target.inc.c          |  2 +-
 tcg/tcg-op-gvec.c                   | 76 ++++++++++++++++++-------------------
 tcg/tcg-op-vec.c                    | 10 ++---
 tcg/tcg-op.c                        |  6 +--
 tcg/tcg.h                           |  2 +-
 20 files changed, 177 insertions(+), 177 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index fc0c175..4c7e11f 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1531,7 +1531,7 @@ void HELPER(sve_cpy_m_b)(void *vd, void *vn, void *vg,
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;

-    mm = dup_const(MO_8, mm);
+    mm = dup_const(MO_UB, mm);
     for (i = 0; i < opr_sz; i += 1) {
         uint64_t nn = n[i];
         uint64_t pp = expand_pred_b(pg[H1(i)]);
@@ -1588,7 +1588,7 @@ void HELPER(sve_cpy_z_b)(void *vd, void *vg, uint64_t val, uint32_t desc)
     uint64_t *d = vd;
     uint8_t *pg = vg;

-    val = dup_const(MO_8, val);
+    val = dup_const(MO_UB, val);
     for (i = 0; i < opr_sz; i += 1) {
         d[i] = val & expand_pred_b(pg[H1(i)]);
     }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d323147..f840b43 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -993,7 +993,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ld8u_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16:
@@ -1002,7 +1002,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_32:
         tcg_gen_ld32u_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_8|MO_SIGN:
+    case MO_SB:
         tcg_gen_ld8s_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16|MO_SIGN:
@@ -1025,13 +1025,13 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ld8u_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16:
         tcg_gen_ld16u_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_8|MO_SIGN:
+    case MO_SB:
         tcg_gen_ld8s_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16|MO_SIGN:
@@ -1052,7 +1052,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i64(tcg_src, cpu_env, vect_off);
         break;
     case MO_16:
@@ -1074,7 +1074,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i32(tcg_src, cpu_env, vect_off);
         break;
     case MO_16:
@@ -12885,7 +12885,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     default: /* integer */
         switch (size) {
-        case MO_8:
+        case MO_UB:
         case MO_64:
             unallocated_encoding(s);
             return;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fa068b0..ec5fb11 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1665,7 +1665,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
     desc = tcg_const_i32(simd_desc(vsz, vsz, 0));

     switch (esz) {
-    case MO_8:
+    case MO_UB:
         t32 = tcg_temp_new_i32();
         tcg_gen_extrl_i64_i32(t32, val);
         if (d) {
@@ -3308,7 +3308,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_b,
           .opt_opc = vecop_list,
-          .vece = MO_8,
+          .vece = MO_UB,
           .scalar_first = true },
         { .fni8 = tcg_gen_vec_sub16_i64,
           .fniv = tcg_gen_sub_vec,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462..39266cf 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1474,7 +1474,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     long offset = neon_element_offset(reg, ele, size);

     switch (size) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i32(var, cpu_env, offset);
         break;
     case MO_16:
@@ -1493,7 +1493,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     long offset = neon_element_offset(reg, ele, size);

     switch (size) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i64(var, cpu_env, offset);
         break;
     case MO_16:
@@ -4262,7 +4262,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni8 = gen_ssra16_i64,
       .fniv = gen_ssra_vec,
       .load_dest = true,
@@ -4320,7 +4320,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_8, },
+      .vece = MO_UB, },
     { .fni8 = gen_usra16_i64,
       .fniv = gen_usra_vec,
       .load_dest = true,
@@ -4341,7 +4341,7 @@ const GVecGen2i usra_op[4] = {

 static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_8, 0xff >> shift);
+    uint64_t mask = dup_const(MO_UB, 0xff >> shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shri_i64(t, a, shift);
@@ -4400,7 +4400,7 @@ const GVecGen2i sri_op[4] = {
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni8 = gen_shr16_ins_i64,
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
@@ -4421,7 +4421,7 @@ const GVecGen2i sri_op[4] = {

 static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_8, 0xff << shift);
+    uint64_t mask = dup_const(MO_UB, 0xff << shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shli_i64(t, a, shift);
@@ -4478,7 +4478,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni8 = gen_shl16_ins_i64,
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
@@ -4574,7 +4574,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni4 = gen_mla16_i32,
       .fniv = gen_mla_vec,
       .load_dest = true,
@@ -4598,7 +4598,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni4 = gen_mls16_i32,
       .fniv = gen_mls_vec,
       .load_dest = true,
@@ -4645,7 +4645,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_helper_neon_tst_u8,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni4 = gen_helper_neon_tst_u16,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
@@ -4681,7 +4681,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_b,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_h,
       .write_aofs = true,
@@ -4719,7 +4719,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_b,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_h,
       .opt_opc = vecop_list_sqadd,
@@ -4757,7 +4757,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_b,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_h,
       .opt_opc = vecop_list_uqsub,
@@ -4795,7 +4795,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_b,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_h,
       .opt_opc = vecop_list_sqsub,
@@ -4972,15 +4972,15 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                                  vec_size, vec_size);
                 break;
             case 5: /* VBSL */
-                tcg_gen_gvec_bitsel(MO_8, rd_ofs, rd_ofs, rn_ofs, rm_ofs,
+                tcg_gen_gvec_bitsel(MO_UB, rd_ofs, rd_ofs, rn_ofs, rm_ofs,
                                     vec_size, vec_size);
                 break;
             case 6: /* VBIT */
-                tcg_gen_gvec_bitsel(MO_8, rd_ofs, rm_ofs, rn_ofs, rd_ofs,
+                tcg_gen_gvec_bitsel(MO_UB, rd_ofs, rm_ofs, rn_ofs, rd_ofs,
                                     vec_size, vec_size);
                 break;
             case 7: /* VBIF */
-                tcg_gen_gvec_bitsel(MO_8, rd_ofs, rm_ofs, rd_ofs, rn_ofs,
+                tcg_gen_gvec_bitsel(MO_UB, rd_ofs, rm_ofs, rd_ofs, rn_ofs,
                                     vec_size, vec_size);
                 break;
             }
@@ -6873,7 +6873,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     return 1;
                 }
                 if (insn & (1 << 16)) {
-                    size = MO_8;
+                    size = MO_UB;
                     element = (insn >> 17) & 7;
                 } else if (insn & (1 << 17)) {
                     size = MO_16;
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 03150a8..0e45300 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -349,20 +349,20 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)
    byte vs word opcodes.  */
 static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
 {
-    return b & 1 ? ot : MO_8;
+    return b & 1 ? ot : MO_UB;
 }

 /* Select size 8 if lsb of B is clear, else OT capped at 32.
    Used for decoding operand size of port opcodes.  */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
+    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_UB;
 }

 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 {
     switch(ot) {
-    case MO_8:
+    case MO_UB:
         if (!byte_reg_is_xH(s, reg)) {
             tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 8);
         } else {
@@ -390,7 +390,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 static inline
 void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
 {
-    if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
+    if (ot == MO_UB && byte_reg_is_xH(s, reg)) {
         tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
     } else {
         tcg_gen_mov_tl(t0, cpu_regs[reg]);
@@ -523,7 +523,7 @@ static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
 static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
 {
     switch (size) {
-    case MO_8:
+    case MO_UB:
         if (sign) {
             tcg_gen_ext8s_tl(dst, src);
         } else {
@@ -580,7 +580,7 @@ void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
 static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
 {
     switch (ot) {
-    case MO_8:
+    case MO_UB:
         gen_helper_inb(v, cpu_env, n);
         break;
     case MO_16:
@@ -597,7 +597,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
 static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
 {
     switch (ot) {
-    case MO_8:
+    case MO_UB:
         gen_helper_outb(cpu_env, v, n);
         break;
     case MO_16:
@@ -619,7 +619,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     if (s->pe && (s->cpl > s->iopl || s->vm86)) {
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
         switch (ot) {
-        case MO_8:
+        case MO_UB:
             gen_helper_check_iob(cpu_env, s->tmp2_i32);
             break;
         case MO_16:
@@ -1557,7 +1557,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
     tcg_gen_andi_tl(s->T1, s->T1, mask);

     switch (ot) {
-    case MO_8:
+    case MO_UB:
         /* Replicate the 8-bit input so that a 32-bit rotate works.  */
         tcg_gen_ext8u_tl(s->T0, s->T0);
         tcg_gen_muli_tl(s->T0, s->T0, 0x01010101);
@@ -1661,7 +1661,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
                 tcg_gen_rotli_tl(s->T0, s->T0, op2);
             }
             break;
-        case MO_8:
+        case MO_UB:
             mask = 7;
             goto do_shifts;
         case MO_16:
@@ -1719,7 +1719,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,

     if (is_right) {
         switch (ot) {
-        case MO_8:
+        case MO_UB:
             gen_helper_rcrb(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_16:
@@ -1738,7 +1738,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         }
     } else {
         switch (ot) {
-        case MO_8:
+        case MO_UB:
             gen_helper_rclb(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_16:
@@ -2184,7 +2184,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     uint32_t ret;

     switch (ot) {
-    case MO_8:
+    case MO_UB:
         ret = x86_ldub_code(env, s);
         break;
     case MO_16:
@@ -3784,7 +3784,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     goto illegal_op;
                 }
                 if ((b & 0xff) == 0xf0) {
-                    ot = MO_8;
+                    ot = MO_UB;
                 } else if (s->dflag != MO_64) {
                     ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
                 } else {
@@ -4760,7 +4760,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 val = insn_get(env, s, ot);
                 break;
             case 0x83:
-                val = (int8_t)insn_get(env, s, MO_8);
+                val = (int8_t)insn_get(env, s, MO_UB);
                 break;
             }
             tcg_gen_movi_tl(s->T1, val);
@@ -4866,8 +4866,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 4: /* mul */
             switch(ot) {
-            case MO_8:
-                gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX);
+            case MO_UB:
+                gen_op_mov_v_reg(s, MO_UB, s->T1, R_EAX);
                 tcg_gen_ext8u_tl(s->T0, s->T0);
                 tcg_gen_ext8u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
@@ -4915,8 +4915,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 5: /* imul */
             switch(ot) {
-            case MO_8:
-                gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX);
+            case MO_UB:
+                gen_op_mov_v_reg(s, MO_UB, s->T1, R_EAX);
                 tcg_gen_ext8s_tl(s->T0, s->T0);
                 tcg_gen_ext8s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
@@ -4969,7 +4969,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 6: /* div */
             switch(ot) {
-            case MO_8:
+            case MO_UB:
                 gen_helper_divb_AL(cpu_env, s->T0);
                 break;
             case MO_16:
@@ -4988,7 +4988,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 7: /* idiv */
             switch(ot) {
-            case MO_8:
+            case MO_UB:
                 gen_helper_idivb_AL(cpu_env, s->T0);
                 break;
             case MO_16:
@@ -5157,7 +5157,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
             break;
         case MO_16:
-            gen_op_mov_v_reg(s, MO_8, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
             tcg_gen_ext8s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
             break;
@@ -5205,7 +5205,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             val = insn_get(env, s, ot);
             tcg_gen_movi_tl(s->T1, val);
         } else if (b == 0x6b) {
-            val = (int8_t)insn_get(env, s, MO_8);
+            val = (int8_t)insn_get(env, s, MO_UB);
             tcg_gen_movi_tl(s->T1, val);
         } else {
             gen_op_mov_v_reg(s, ot, s->T1, reg);
@@ -5419,7 +5419,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (b == 0x68)
             val = insn_get(env, s, ot);
         else
-            val = (int8_t)insn_get(env, s, MO_8);
+            val = (int8_t)insn_get(env, s, MO_UB);
         tcg_gen_movi_tl(s->T0, val);
         gen_push_v(s, s->T0);
         break;
@@ -5573,7 +5573,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             /* d_ot is the size of destination */
             d_ot = dflag;
             /* ot is the size of source */
-            ot = (b & 1) + MO_8;
+            ot = (b & 1) + MO_UB;
             /* s_ot is the sign+size of source */
             s_ot = b & 8 ? MO_SIGN | ot : ot;

@@ -5661,13 +5661,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         tcg_gen_add_tl(s->A0, s->A0, s->T0);
         gen_extu(s->aflag, s->A0);
         gen_add_A0_ds_seg(s);
-        gen_op_ld_v(s, MO_8, s->T0, s->A0);
-        gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0);
+        gen_op_ld_v(s, MO_UB, s->T0, s->A0);
+        gen_op_mov_reg_v(s, MO_UB, R_EAX, s->T0);
         break;
     case 0xb0 ... 0xb7: /* mov R, Ib */
-        val = insn_get(env, s, MO_8);
+        val = insn_get(env, s, MO_UB);
         tcg_gen_movi_tl(s->T0, val);
-        gen_op_mov_reg_v(s, MO_8, (b & 7) | REX_B(s), s->T0);
+        gen_op_mov_reg_v(s, MO_UB, (b & 7) | REX_B(s), s->T0);
         break;
     case 0xb8 ... 0xbf: /* mov R, Iv */
 #ifdef TARGET_X86_64
@@ -6637,7 +6637,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         }
         goto do_ljmp;
     case 0xeb: /* jmp Jb */
-        tval = (int8_t)insn_get(env, s, MO_8);
+        tval = (int8_t)insn_get(env, s, MO_UB);
         tval += s->pc - s->cs_base;
         if (dflag == MO_16) {
             tval &= 0xffff;
@@ -6645,7 +6645,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_jmp(s, tval);
         break;
     case 0x70 ... 0x7f: /* jcc Jb */
-        tval = (int8_t)insn_get(env, s, MO_8);
+        tval = (int8_t)insn_get(env, s, MO_UB);
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
         if (dflag != MO_16) {
@@ -6666,7 +6666,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x190 ... 0x19f: /* setcc Gv */
         modrm = x86_ldub_code(env, s);
         gen_setcc1(s, b, s->T0);
-        gen_ldst_modrm(env, s, modrm, MO_8, OR_TMP0, 1);
+        gen_ldst_modrm(env, s, modrm, MO_UB, OR_TMP0, 1);
         break;
     case 0x140 ... 0x14f: /* cmov Gv, Ev */
         if (!(s->cpuid_features & CPUID_CMOV)) {
@@ -6751,7 +6751,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x9e: /* sahf */
         if (CODE64(s) && !(s->cpuid_ext3_features & CPUID_EXT3_LAHF_LM))
             goto illegal_op;
-        gen_op_mov_v_reg(s, MO_8, s->T0, R_AH);
+        gen_op_mov_v_reg(s, MO_UB, s->T0, R_AH);
         gen_compute_eflags(s);
         tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, CC_O);
         tcg_gen_andi_tl(s->T0, s->T0, CC_S | CC_Z | CC_A | CC_P | CC_C);
@@ -6763,7 +6763,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_compute_eflags(s);
         /* Note: gen_compute_eflags() only gives the condition codes */
         tcg_gen_ori_tl(s->T0, cpu_cc_src, 0x02);
-        gen_op_mov_reg_v(s, MO_8, R_AH, s->T0);
+        gen_op_mov_reg_v(s, MO_UB, R_AH, s->T0);
         break;
     case 0xf5: /* cmc */
         gen_compute_eflags(s);
@@ -7137,7 +7137,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             goto illegal_op;
         gen_compute_eflags_c(s, s->T0);
         tcg_gen_neg_tl(s->T0, s->T0);
-        gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0);
+        gen_op_mov_reg_v(s, MO_UB, R_EAX, s->T0);
         break;
     case 0xe0: /* loopnz */
     case 0xe1: /* loopz */
@@ -7146,7 +7146,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         {
             TCGLabel *l1, *l2, *l3;

-            tval = (int8_t)insn_get(env, s, MO_8);
+            tval = (int8_t)insn_get(env, s, MO_UB);
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
             if (dflag == MO_16) {
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 3575eff..20a9777 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -3684,7 +3684,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,
         mem_idx = MIPS_HFLAG_UM;
         /* fall through */
     case OPC_SB:
-        tcg_gen_qemu_st_tl(t1, t0, mem_idx, MO_8);
+        tcg_gen_qemu_st_tl(t1, t0, mem_idx, MO_UB);
         break;
     case OPC_SWLE:
         mem_idx = MIPS_HFLAG_UM;
@@ -20193,7 +20193,7 @@ static void gen_p_lsx(DisasContext *ctx, int rd, int rs, int rt)
         check_nms(ctx);
         gen_load_gpr(t1, rd);
         tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx,
-                           MO_8);
+                           MO_UB);
         break;
     case NM_SHX:
     /*case NM_SHXS:*/
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 663275b..4130dd1 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -403,7 +403,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
     tcg_temp_free_ptr(rb);                                              \
 }

-GEN_VXFORM_V(vaddubm, MO_8, tcg_gen_gvec_add, 0, 0);
+GEN_VXFORM_V(vaddubm, MO_UB, tcg_gen_gvec_add, 0, 0);
 GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10cuq, PPC_NONE, PPC2_ISA300, 0x0000F800)
 GEN_VXFORM_V(vadduhm, MO_16, tcg_gen_gvec_add, 0, 1);
@@ -411,23 +411,23 @@ GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2);
 GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
-GEN_VXFORM_V(vsububm, MO_8, tcg_gen_gvec_sub, 0, 16);
+GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
 GEN_VXFORM_V(vsubuhm, MO_16, tcg_gen_gvec_sub, 0, 17);
 GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18);
 GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
-GEN_VXFORM_V(vmaxub, MO_8, tcg_gen_gvec_umax, 1, 0);
+GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
 GEN_VXFORM_V(vmaxuh, MO_16, tcg_gen_gvec_umax, 1, 1);
 GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2);
 GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
-GEN_VXFORM_V(vmaxsb, MO_8, tcg_gen_gvec_smax, 1, 4);
+GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
 GEN_VXFORM_V(vmaxsh, MO_16, tcg_gen_gvec_smax, 1, 5);
 GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6);
 GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
-GEN_VXFORM_V(vminub, MO_8, tcg_gen_gvec_umin, 1, 8);
+GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
 GEN_VXFORM_V(vminuh, MO_16, tcg_gen_gvec_umin, 1, 9);
 GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10);
 GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
-GEN_VXFORM_V(vminsb, MO_8, tcg_gen_gvec_smin, 1, 12);
+GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
 GEN_VXFORM_V(vminsh, MO_16, tcg_gen_gvec_smin, 1, 13);
 GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14);
 GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
@@ -530,18 +530,18 @@ GEN_VXFORM(vmuleuw, 4, 10);
 GEN_VXFORM(vmulesb, 4, 12);
 GEN_VXFORM(vmulesh, 4, 13);
 GEN_VXFORM(vmulesw, 4, 14);
-GEN_VXFORM_V(vslb, MO_8, tcg_gen_gvec_shlv, 2, 4);
+GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4);
 GEN_VXFORM_V(vslh, MO_16, tcg_gen_gvec_shlv, 2, 5);
 GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
-GEN_VXFORM_V(vsrb, MO_8, tcg_gen_gvec_shrv, 2, 8);
+GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
 GEN_VXFORM_V(vsrh, MO_16, tcg_gen_gvec_shrv, 2, 9);
 GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10);
 GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
-GEN_VXFORM_V(vsrab, MO_8, tcg_gen_gvec_sarv, 2, 12);
+GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
 GEN_VXFORM_V(vsrah, MO_16, tcg_gen_gvec_sarv, 2, 13);
 GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14);
 GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
@@ -589,20 +589,20 @@ static void glue(gen_, NAME)(DisasContext *ctx)                         \
                    16, 16, &g);                                         \
 }

-GEN_VXFORM_SAT(vaddubs, MO_8, add, usadd, 0, 8);
+GEN_VXFORM_SAT(vaddubs, MO_UB, add, usadd, 0, 8);
 GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10uq, PPC_NONE, PPC2_ISA300, 0x0000F800)
 GEN_VXFORM_SAT(vadduhs, MO_16, add, usadd, 0, 9);
 GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \
                 vmul10euq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10);
-GEN_VXFORM_SAT(vaddsbs, MO_8, add, ssadd, 0, 12);
+GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12);
 GEN_VXFORM_SAT(vaddshs, MO_16, add, ssadd, 0, 13);
 GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14);
-GEN_VXFORM_SAT(vsububs, MO_8, sub, ussub, 0, 24);
+GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24);
 GEN_VXFORM_SAT(vsubuhs, MO_16, sub, ussub, 0, 25);
 GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
-GEN_VXFORM_SAT(vsubsbs, MO_8, sub, sssub, 0, 28);
+GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
 GEN_VXFORM_SAT(vsubshs, MO_16, sub, sssub, 0, 29);
 GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
@@ -912,7 +912,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
         tcg_temp_free_ptr(rd);                                          \
     }

-GEN_VXFORM_VSPLT(vspltb, MO_8, 6, 8);
+GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
 GEN_VXFORM_VSPLT(vsplth, MO_16, 6, 9);
 GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index ac0d8b6..415747f 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -154,7 +154,7 @@ static inline int vec_full_reg_offset(uint8_t reg)

 static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
 {
-    /* Convert element size (es) - e.g. MO_8 - to bytes */
+    /* Convert element size (es) - e.g. MO_UB - to bytes */
     const uint8_t bytes = 1 << es;
     int offs = enr * bytes;

diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf8..bb424c8 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -30,7 +30,7 @@
  * Sizes:
  *  On s390x, the operand size (oprsz) and the maximum size (maxsz) are
  *  always 16 (128 bit). What gvec code calls "vece", s390x calls "es",
- *  a.k.a. "element size". These values nicely map to MO_8 ... MO_64. Only
+ *  a.k.a. "element size". These values nicely map to MO_UB ... MO_64. Only
  *  128 bit element size has to be treated in a special way (MO_64 + 1).
  *  We will use ES_* instead of MO_* for this reason in this file.
  *
@@ -46,7 +46,7 @@
 #define NUM_VEC_ELEMENTS(es) (16 / NUM_VEC_ELEMENT_BYTES(es))
 #define NUM_VEC_ELEMENT_BITS(es) (NUM_VEC_ELEMENT_BYTES(es) * BITS_PER_BYTE)

-#define ES_8    MO_8
+#define ES_8    MO_UB
 #define ES_16   MO_16
 #define ES_32   MO_32
 #define ES_64   MO_64
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index a6e3618..b813054 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -76,7 +76,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
                                              uint8_t es)
 {
     switch (es) {
-    case MO_8:
+    case MO_UB:
         return s390_vec_read_element8(v, enr);
     case MO_16:
         return s390_vec_read_element16(v, enr);
@@ -121,7 +121,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
                                           uint8_t es, uint64_t data)
 {
     switch (es) {
-    case MO_8:
+    case MO_UB:
         s390_vec_write_element8(v, enr, data);
         break;
     case MO_16:
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0713448..e4e0845 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -429,20 +429,20 @@ typedef enum {

     /* Load/store register.  Described here as 3.3.12, but the helper
        that emits them can transform to 3.3.10 or 3.3.13.  */
-    I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_8 << 30,
+    I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
     I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_16 << 30,
     I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_32 << 30,
     I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,

-    I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_8 << 30,
+    I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
     I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_16 << 30,
     I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_32 << 30,
     I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,

-    I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_8 << 30,
+    I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
     I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_16 << 30,

-    I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_8 << 30,
+    I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30,
     I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_16 << 30,
     I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30,

@@ -862,7 +862,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
     int cmode, imm8, i;

     /* Test all bytes equal first.  */
-    if (v64 == dup_const(MO_8, v64)) {
+    if (v64 == dup_const(MO_UB, v64)) {
         imm8 = (uint8_t)v64;
         tcg_out_insn(s, 3606, MOVI, q, rd, 0, 0xe, imm8);
         return;
@@ -1772,7 +1772,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
     const TCGMemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r);
         break;
     case MO_16:
@@ -2186,7 +2186,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,

     case INDEX_op_ext8s_i64:
     case INDEX_op_ext8s_i32:
-        tcg_out_sxt(s, ext, MO_8, a0, a1);
+        tcg_out_sxt(s, ext, MO_UB, a0, a1);
         break;
     case INDEX_op_ext16s_i64:
     case INDEX_op_ext16s_i32:
@@ -2198,7 +2198,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext8u_i64:
     case INDEX_op_ext8u_i32:
-        tcg_out_uxt(s, MO_8, a0, a1);
+        tcg_out_uxt(s, MO_UB, a0, a1);
         break;
     case INDEX_op_ext16u_i64:
     case INDEX_op_ext16u_i32:
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index ece88dc..542ffa8 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1429,7 +1429,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     datalo = lb->datalo_reg;
     datahi = lb->datahi_reg;
     switch (opc & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         argreg = tcg_out_arg_reg8(s, argreg, datalo);
         break;
     case MO_16:
@@ -1621,7 +1621,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     TCGMemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_st8_r(s, cond, datalo, addrlo, addend);
         break;
     case MO_16:
@@ -1666,7 +1666,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
     TCGMemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_st8_12(s, COND_AL, datalo, addrlo, 0);
         break;
     case MO_16:
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 6ddeebf..0d68ba4 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -888,7 +888,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
         tcg_out_vex_modrm(s, avx2_dup_insn[vece] + vex_l, r, 0, a);
     } else {
         switch (vece) {
-        case MO_8:
+        case MO_UB:
             /* ??? With zero in a register, use PSHUFB.  */
             tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a);
             a = r;
@@ -932,7 +932,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
             break;
-        case MO_8:
+        case MO_UB:
             tcg_out_vex_modrm_offset(s, OPC_VPINSRB, r, r, base, offset);
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
@@ -2154,7 +2154,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
     }

     switch (memop & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         /* In 32-bit mode, 8-bit stores can only happen from [abcd]x.
            Use the scratch register if necessary.  */
         if (TCG_TARGET_REG_BITS == 32 && datalo >= 4) {
@@ -2901,7 +2901,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         tcg_debug_assert(vece != MO_64);
         sub = 4;
     gen_shift:
-        tcg_debug_assert(vece != MO_8);
+        tcg_debug_assert(vece != MO_UB);
         insn = shift_imm_insn[vece];
         if (type == TCG_TYPE_V256) {
             insn |= P_VEXL;
@@ -3273,12 +3273,12 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)

     case INDEX_op_shli_vec:
     case INDEX_op_shri_vec:
-        /* We must expand the operation for MO_8.  */
-        return vece == MO_8 ? -1 : 1;
+        /* We must expand the operation for MO_UB.  */
+        return vece == MO_UB ? -1 : 1;

     case INDEX_op_sari_vec:
-        /* We must expand the operation for MO_8.  */
-        if (vece == MO_8) {
+        /* We must expand the operation for MO_UB.  */
+        if (vece == MO_UB) {
             return -1;
         }
         /* We can emulate this for MO_64, but it does not pay off
@@ -3301,8 +3301,8 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
         return have_avx2 && vece == MO_32;

     case INDEX_op_mul_vec:
-        if (vece == MO_8) {
-            /* We can expand the operation for MO_8.  */
+        if (vece == MO_UB) {
+            /* We can expand the operation for MO_UB.  */
             return -1;
         }
         if (vece == MO_64) {
@@ -3332,7 +3332,7 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
 {
     TCGv_vec t1, t2;

-    tcg_debug_assert(vece == MO_8);
+    tcg_debug_assert(vece == MO_UB);

     t1 = tcg_temp_new_vec(type);
     t2 = tcg_temp_new_vec(type);
@@ -3346,9 +3346,9 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
        (3) Step 2 leaves high half zero such that PACKUSWB
            (pack with unsigned saturation) does not modify
            the quantity.  */
-    vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8,
+    vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB,
               tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
-    vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8,
+    vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
               tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));

     if (shr) {
@@ -3361,7 +3361,7 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
         tcg_gen_shri_vec(MO_16, t2, t2, 8);
     }

-    vec_gen_3(INDEX_op_x86_packus_vec, type, MO_8,
+    vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
               tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2));
     tcg_temp_free_vec(t1);
     tcg_temp_free_vec(t2);
@@ -3373,17 +3373,17 @@ static void expand_vec_sari(TCGType type, unsigned vece,
     TCGv_vec t1, t2;

     switch (vece) {
-    case MO_8:
+    case MO_UB:
         /* Unpack to W, shift, and repack, as in expand_vec_shi.  */
         t1 = tcg_temp_new_vec(type);
         t2 = tcg_temp_new_vec(type);
-        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
-        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
         tcg_gen_sari_vec(MO_16, t1, t1, imm + 8);
         tcg_gen_sari_vec(MO_16, t2, t2, imm + 8);
-        vec_gen_3(INDEX_op_x86_packss_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_packss_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2));
         tcg_temp_free_vec(t1);
         tcg_temp_free_vec(t2);
@@ -3425,7 +3425,7 @@ static void expand_vec_mul(TCGType type, unsigned vece,
 {
     TCGv_vec t1, t2, t3, t4;

-    tcg_debug_assert(vece == MO_8);
+    tcg_debug_assert(vece == MO_UB);

     /*
      * Unpack v1 bytes to words, 0 | x.
@@ -3442,13 +3442,13 @@ static void expand_vec_mul(TCGType type, unsigned vece,
         t1 = tcg_temp_new_vec(TCG_TYPE_V128);
         t2 = tcg_temp_new_vec(TCG_TYPE_V128);
         tcg_gen_dup16i_vec(t2, 0);
-        vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t2));
-        vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(t2), tcgv_vec_arg(v2));
         tcg_gen_mul_vec(MO_16, t1, t1, t2);
         tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_8,
+        vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t1));
         tcg_temp_free_vec(t1);
         tcg_temp_free_vec(t2);
@@ -3461,19 +3461,19 @@ static void expand_vec_mul(TCGType type, unsigned vece,
         t3 = tcg_temp_new_vec(type);
         t4 = tcg_temp_new_vec(type);
         tcg_gen_dup16i_vec(t4, 0);
-        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t4));
-        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(t4), tcgv_vec_arg(v2));
-        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t3), tcgv_vec_arg(v1), tcgv_vec_arg(t4));
-        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t4), tcgv_vec_arg(t4), tcgv_vec_arg(v2));
         tcg_gen_mul_vec(MO_16, t1, t1, t2);
         tcg_gen_mul_vec(MO_16, t3, t3, t4);
         tcg_gen_shri_vec(MO_16, t1, t1, 8);
         tcg_gen_shri_vec(MO_16, t3, t3, 8);
-        vec_gen_3(INDEX_op_x86_packus_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t3));
         tcg_temp_free_vec(t1);
         tcg_temp_free_vec(t2);
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 41bff32..c6d13ea 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1380,7 +1380,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
         i = tcg_out_call_iarg_reg(s, i, l->addrlo_reg);
     }
     switch (s_bits) {
-    case MO_8:
+    case MO_UB:
         i = tcg_out_call_iarg_reg8(s, i, l->datalo_reg);
         break;
     case MO_16:
@@ -1566,7 +1566,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     }

     switch (opc & (MO_SIZE | MO_BSWAP)) {
-    case MO_8:
+    case MO_UB:
         tcg_out_opc_imm(s, OPC_SB, lo, base, 0);
         break;

diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 3e76bf5..9c60c0f 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1101,7 +1101,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     tcg_out_mov(s, TCG_TYPE_PTR, a1, l->addrlo_reg);
     tcg_out_mov(s, TCG_TYPE_PTR, a2, l->datalo_reg);
     switch (s_bits) {
-    case MO_8:
+    case MO_UB:
         tcg_out_ext8u(s, a2, a2);
         break;
     case MO_16:
@@ -1216,7 +1216,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     g_assert(!bswap);

     switch (opc & (MO_SSIZE)) {
-    case MO_8:
+    case MO_UB:
         tcg_out_opc_store(s, OPC_SB, base, lo, 0);
         break;
     case MO_16:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 10b1cea..479ee2e 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -882,7 +882,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
      * required by the MO_* value op; do nothing for 64 bit.
      */
     switch (op & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_arithi(s, r, r, 0xff, ARITH_AND);
         break;
     case MO_16:
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 17679b6..9658c36 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -306,7 +306,7 @@ static void expand_clr(uint32_t dofs, uint32_t maxsz);
 uint64_t (dup_const)(unsigned vece, uint64_t c)
 {
     switch (vece) {
-    case MO_8:
+    case MO_UB:
         return 0x0101010101010101ull * (uint8_t)c;
     case MO_16:
         return 0x0001000100010001ull * (uint16_t)c;
@@ -323,7 +323,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
 static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
 {
     switch (vece) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ext8u_i32(out, in);
         tcg_gen_muli_i32(out, out, 0x01010101);
         break;
@@ -341,7 +341,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
 static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
 {
     switch (vece) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ext8u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0101010101010101ull);
         break;
@@ -556,7 +556,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
             t_32 = tcg_temp_new_i32();
             if (in_64) {
                 tcg_gen_extrl_i64_i32(t_32, in_64);
-            } else if (vece == MO_8) {
+            } else if (vece == MO_UB) {
                 tcg_gen_movi_i32(t_32, in_c & 0xff);
             } else if (vece == MO_16) {
                 tcg_gen_movi_i32(t_32, in_c & 0xffff);
@@ -581,7 +581,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
 /* Likewise, but with zero.  */
 static void expand_clr(uint32_t dofs, uint32_t maxsz)
 {
-    do_dup(MO_8, dofs, maxsz, maxsz, NULL, NULL, 0);
+    do_dup(MO_UB, dofs, maxsz, maxsz, NULL, NULL, 0);
 }

 /* Expand OPSZ bytes worth of two-operand operations using i32 elements.  */
@@ -1456,7 +1456,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
         } else if (vece <= MO_32) {
             TCGv_i32 in = tcg_temp_new_i32();
             switch (vece) {
-            case MO_8:
+            case MO_UB:
                 tcg_gen_ld8u_i32(in, cpu_env, aofs);
                 break;
             case MO_16:
@@ -1533,7 +1533,7 @@ void tcg_gen_gvec_dup8i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint8_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_8, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UB, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_not(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -1572,7 +1572,7 @@ static void gen_addv_mask(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 m)

 void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_8, 0x80));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UB, 0x80));
     gen_addv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1608,7 +1608,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add8,
           .opt_opc = vecop_list_add,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_add16_i64,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add16,
@@ -1639,7 +1639,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds8,
           .opt_opc = vecop_list_add,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_add16_i64,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds16,
@@ -1680,7 +1680,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs8,
           .opt_opc = vecop_list_sub,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_sub16_i64,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs16,
@@ -1725,7 +1725,7 @@ static void gen_subv_mask(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 m)

 void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_8, 0x80));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UB, 0x80));
     gen_subv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1759,7 +1759,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub8,
           .opt_opc = vecop_list_sub,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_sub16_i64,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub16,
@@ -1791,7 +1791,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul8,
           .opt_opc = vecop_list_mul,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul16,
           .opt_opc = vecop_list_mul,
@@ -1820,7 +1820,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls8,
           .opt_opc = vecop_list_mul,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls16,
           .opt_opc = vecop_list_mul,
@@ -1858,7 +1858,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd16,
           .opt_opc = vecop_list,
@@ -1884,7 +1884,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub16,
           .opt_opc = vecop_list,
@@ -1926,7 +1926,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd16,
           .opt_opc = vecop_list,
@@ -1970,7 +1970,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub16,
           .opt_opc = vecop_list,
@@ -1998,7 +1998,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin16,
           .opt_opc = vecop_list,
@@ -2026,7 +2026,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin16,
           .opt_opc = vecop_list,
@@ -2054,7 +2054,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax16,
           .opt_opc = vecop_list,
@@ -2082,7 +2082,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax16,
           .opt_opc = vecop_list,
@@ -2120,7 +2120,7 @@ static void gen_negv_mask(TCGv_i64 d, TCGv_i64 b, TCGv_i64 m)

 void tcg_gen_vec_neg8_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_8, 0x80));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UB, 0x80));
     gen_negv_mask(d, b, m);
     tcg_temp_free_i64(m);
 }
@@ -2155,7 +2155,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_neg16_i64,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg16,
@@ -2201,7 +2201,7 @@ static void gen_absv_mask(TCGv_i64 d, TCGv_i64 b, unsigned vece)

 static void tcg_gen_vec_abs8_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    gen_absv_mask(d, b, MO_8);
+    gen_absv_mask(d, b, MO_UB);
 }

 static void tcg_gen_vec_abs16_i64(TCGv_i64 d, TCGv_i64 b)
@@ -2218,7 +2218,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_abs16_i64,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs16,
@@ -2454,7 +2454,7 @@ void tcg_gen_gvec_ori(unsigned vece, uint32_t dofs, uint32_t aofs,

 void tcg_gen_vec_shl8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_8, 0xff << c);
+    uint64_t mask = dup_const(MO_UB, 0xff << c);
     tcg_gen_shli_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2475,7 +2475,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl8i,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_shl16i_i64,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl16i,
@@ -2505,7 +2505,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,

 void tcg_gen_vec_shr8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_8, 0xff >> c);
+    uint64_t mask = dup_const(MO_UB, 0xff >> c);
     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2526,7 +2526,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr8i,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_shr16i_i64,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr16i,
@@ -2556,8 +2556,8 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,

 void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t s_mask = dup_const(MO_8, 0x80 >> c);
-    uint64_t c_mask = dup_const(MO_8, 0xff >> c);
+    uint64_t s_mask = dup_const(MO_UB, 0x80 >> c);
+    uint64_t c_mask = dup_const(MO_UB, 0xff >> c);
     TCGv_i64 s = tcg_temp_new_i64();

     tcg_gen_shri_i64(d, a, c);
@@ -2591,7 +2591,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar8i,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_sar16i_i64,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar16i,
@@ -2880,7 +2880,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl8v,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl16v,
           .opt_opc = vecop_list,
@@ -2943,7 +2943,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr8v,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr16v,
           .opt_opc = vecop_list,
@@ -3006,7 +3006,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar8v,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar16v,
           .opt_opc = vecop_list,
@@ -3129,7 +3129,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
     check_overlap_3(dofs, aofs, bofs, maxsz);

     if (cond == TCG_COND_NEVER || cond == TCG_COND_ALWAYS) {
-        do_dup(MO_8, dofs, oprsz, maxsz,
+        do_dup(MO_UB, dofs, oprsz, maxsz,
                NULL, NULL, -(cond == TCG_COND_ALWAYS));
         return;
     }
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index 6714991..d7ffc9e 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -275,7 +275,7 @@ void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a)

 void tcg_gen_dup8i_vec(TCGv_vec r, uint32_t a)
 {
-    do_dupi_vec(r, MO_REG, dup_const(MO_8, a));
+    do_dupi_vec(r, MO_REG, dup_const(MO_UB, a));
 }

 void tcg_gen_dupi_vec(unsigned vece, TCGv_vec r, uint64_t a)
@@ -752,13 +752,13 @@ void tcg_gen_bitsel_vec(unsigned vece, TCGv_vec r, TCGv_vec a,
     tcg_debug_assert(ct->base_type >= type);

     if (TCG_TARGET_HAS_bitsel_vec) {
-        vec_gen_4(INDEX_op_bitsel_vec, type, MO_8,
+        vec_gen_4(INDEX_op_bitsel_vec, type, MO_UB,
                   temp_arg(rt), temp_arg(at), temp_arg(bt), temp_arg(ct));
     } else {
         TCGv_vec t = tcg_temp_new_vec(type);
-        tcg_gen_and_vec(MO_8, t, a, b);
-        tcg_gen_andc_vec(MO_8, r, c, a);
-        tcg_gen_or_vec(MO_8, r, r, t);
+        tcg_gen_and_vec(MO_UB, t, a, b);
+        tcg_gen_andc_vec(MO_UB, r, c, a);
+        tcg_gen_or_vec(MO_UB, r, r, t);
         tcg_temp_free_vec(t);
     }
 }
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 587d092..61eda33 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2720,7 +2720,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
     (void)get_alignment_bits(op);

     switch (op & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         op &= ~MO_BSWAP;
         break;
     case MO_16:
@@ -3024,7 +3024,7 @@ typedef void (*gen_atomic_op_i64)(TCGv_i64, TCGv_env, TCGv, TCGv_i64);
 #endif

 static void * const table_cmpxchg[16] = {
-    [MO_8] = gen_helper_atomic_cmpxchgb,
+    [MO_UB] = gen_helper_atomic_cmpxchgb,
     [MO_16 | MO_LE] = gen_helper_atomic_cmpxchgw_le,
     [MO_16 | MO_BE] = gen_helper_atomic_cmpxchgw_be,
     [MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le,
@@ -3248,7 +3248,7 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,

 #define GEN_ATOMIC_HELPER(NAME, OP, NEW)                                \
 static void * const table_##NAME[16] = {                                \
-    [MO_8] = gen_helper_atomic_##NAME##b,                               \
+    [MO_UB] = gen_helper_atomic_##NAME##b,                               \
     [MO_16 | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
     [MO_16 | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
     [MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index b411e17..5636d6b 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -1302,7 +1302,7 @@ uint64_t dup_const(unsigned vece, uint64_t c);

 #define dup_const(VECE, C)                                         \
     (__builtin_constant_p(VECE)                                    \
-     ? (  (VECE) == MO_8  ? 0x0101010101010101ull * (uint8_t)(C)   \
+     ?   ((VECE) == MO_UB ? 0x0101010101010101ull * (uint8_t)(C)   \
         : (VECE) == MO_16 ? 0x0001000100010001ull * (uint16_t)(C)  \
         : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C)  \
         : dup_const(VECE, C))                                      \
--
1.8.3.1

* [Qemu-riscv] [Qemu-devel] [PATCH v2 01/20] tcg: Replace MO_8 with MO_UB alias
@ 2019-07-22 15:38   ` tony.nguyen
  0 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:38 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, david, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, mst, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, claudio.fontana, qemu-s390x, qemu-ppc,
	amarkovic, pbonzini, aurelien

Preparation for splitting MO_8 out of TCGMemOp into a new
accelerator-independent MemOp.
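
For reference, MO_UB is not a new name: tcg/tcg.h already defines it as
an alias of MO_8 within TCGMemOp, so the substitution is purely
mechanical. An abridged sketch of the relevant definitions (elided
members marked; values as in tcg/tcg.h):

    typedef enum TCGMemOp {
        MO_8  = 0,
        MO_16 = 1,
        MO_32 = 2,
        MO_64 = 3,
        /* ... */
        MO_UB = MO_8,   /* unsigned-byte alias, same value as MO_8 */
        /* ... */
    } TCGMemOp;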

Once MO_8 becomes a value of MemOp, existing comparisons and coercions
between it and TCGMemOp values will trigger -Wenum-compare and
-Wenum-conversion.
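
For illustration, a minimal sketch of the mismatch those warnings catch.
The enums below are hypothetical and abridged, not the real QEMU
definitions:

    /* Hypothetical split: MO_8 lives in MemOp, MO_UB in TCGMemOp. */
    typedef enum MemOp    { MO_8 = 0 }  MemOp;
    typedef enum TCGMemOp { MO_UB = 0 } TCGMemOp;

    static int is_byte(TCGMemOp op)
    {
        /* -Wenum-compare: comparing a TCGMemOp with a MemOp value */
        return op == MO_8;
    }

    static MemOp as_memop(TCGMemOp op)
    {
        /* -Wenum-conversion: implicit conversion between enum types */
        return op;
    }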

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |  4 +-
 target/arm/translate-a64.c          | 14 +++----
 target/arm/translate-sve.c          |  4 +-
 target/arm/translate.c              | 38 +++++++++----------
 target/i386/translate.c             | 72 +++++++++++++++++------------------
 target/mips/translate.c             |  4 +-
 target/ppc/translate/vmx-impl.inc.c | 28 +++++++-------
 target/s390x/translate.c            |  2 +-
 target/s390x/translate_vx.inc.c     |  4 +-
 target/s390x/vec.h                  |  4 +-
 tcg/aarch64/tcg-target.inc.c        | 16 ++++----
 tcg/arm/tcg-target.inc.c            |  6 +--
 tcg/i386/tcg-target.inc.c           | 54 +++++++++++++-------------
 tcg/mips/tcg-target.inc.c           |  4 +-
 tcg/riscv/tcg-target.inc.c          |  4 +-
 tcg/sparc/tcg-target.inc.c          |  2 +-
 tcg/tcg-op-gvec.c                   | 76 ++++++++++++++++++-------------------
 tcg/tcg-op-vec.c                    | 10 ++---
 tcg/tcg-op.c                        |  6 +--
 tcg/tcg.h                           |  2 +-
 20 files changed, 177 insertions(+), 177 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index fc0c175..4c7e11f 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1531,7 +1531,7 @@ void HELPER(sve_cpy_m_b)(void *vd, void *vn, void *vg,
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;

-    mm = dup_const(MO_8, mm);
+    mm = dup_const(MO_UB, mm);
     for (i = 0; i < opr_sz; i += 1) {
         uint64_t nn = n[i];
         uint64_t pp = expand_pred_b(pg[H1(i)]);
@@ -1588,7 +1588,7 @@ void HELPER(sve_cpy_z_b)(void *vd, void *vg, uint64_t val, uint32_t desc)
     uint64_t *d = vd;
     uint8_t *pg = vg;

-    val = dup_const(MO_8, val);
+    val = dup_const(MO_UB, val);
     for (i = 0; i < opr_sz; i += 1) {
         d[i] = val & expand_pred_b(pg[H1(i)]);
     }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d323147..f840b43 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -993,7 +993,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ld8u_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16:
@@ -1002,7 +1002,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_32:
         tcg_gen_ld32u_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_8|MO_SIGN:
+    case MO_SB:
         tcg_gen_ld8s_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16|MO_SIGN:
@@ -1025,13 +1025,13 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ld8u_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16:
         tcg_gen_ld16u_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_8|MO_SIGN:
+    case MO_SB:
         tcg_gen_ld8s_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_16|MO_SIGN:
@@ -1052,7 +1052,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i64(tcg_src, cpu_env, vect_off);
         break;
     case MO_16:
@@ -1074,7 +1074,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i32(tcg_src, cpu_env, vect_off);
         break;
     case MO_16:
@@ -12885,7 +12885,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     default: /* integer */
         switch (size) {
-        case MO_8:
+        case MO_UB:
         case MO_64:
             unallocated_encoding(s);
             return;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fa068b0..ec5fb11 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1665,7 +1665,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
     desc = tcg_const_i32(simd_desc(vsz, vsz, 0));

     switch (esz) {
-    case MO_8:
+    case MO_UB:
         t32 = tcg_temp_new_i32();
         tcg_gen_extrl_i64_i32(t32, val);
         if (d) {
@@ -3308,7 +3308,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_b,
           .opt_opc = vecop_list,
-          .vece = MO_8,
+          .vece = MO_UB,
           .scalar_first = true },
         { .fni8 = tcg_gen_vec_sub16_i64,
           .fniv = tcg_gen_sub_vec,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462..39266cf 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1474,7 +1474,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     long offset = neon_element_offset(reg, ele, size);

     switch (size) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i32(var, cpu_env, offset);
         break;
     case MO_16:
@@ -1493,7 +1493,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     long offset = neon_element_offset(reg, ele, size);

     switch (size) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_st8_i64(var, cpu_env, offset);
         break;
     case MO_16:
@@ -4262,7 +4262,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni8 = gen_ssra16_i64,
       .fniv = gen_ssra_vec,
       .load_dest = true,
@@ -4320,7 +4320,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_8, },
+      .vece = MO_UB, },
     { .fni8 = gen_usra16_i64,
       .fniv = gen_usra_vec,
       .load_dest = true,
@@ -4341,7 +4341,7 @@ const GVecGen2i usra_op[4] = {

 static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_8, 0xff >> shift);
+    uint64_t mask = dup_const(MO_UB, 0xff >> shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shri_i64(t, a, shift);
@@ -4400,7 +4400,7 @@ const GVecGen2i sri_op[4] = {
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni8 = gen_shr16_ins_i64,
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
@@ -4421,7 +4421,7 @@ const GVecGen2i sri_op[4] = {

 static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_8, 0xff << shift);
+    uint64_t mask = dup_const(MO_UB, 0xff << shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shli_i64(t, a, shift);
@@ -4478,7 +4478,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni8 = gen_shl16_ins_i64,
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
@@ -4574,7 +4574,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni4 = gen_mla16_i32,
       .fniv = gen_mla_vec,
       .load_dest = true,
@@ -4598,7 +4598,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni4 = gen_mls16_i32,
       .fniv = gen_mls_vec,
       .load_dest = true,
@@ -4645,7 +4645,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_helper_neon_tst_u8,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fni4 = gen_helper_neon_tst_u16,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
@@ -4681,7 +4681,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_b,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_h,
       .write_aofs = true,
@@ -4719,7 +4719,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_b,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_h,
       .opt_opc = vecop_list_sqadd,
@@ -4757,7 +4757,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_b,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_h,
       .opt_opc = vecop_list_uqsub,
@@ -4795,7 +4795,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_b,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_8 },
+      .vece = MO_UB },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_h,
       .opt_opc = vecop_list_sqsub,
@@ -4972,15 +4972,15 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                                  vec_size, vec_size);
                 break;
             case 5: /* VBSL */
-                tcg_gen_gvec_bitsel(MO_8, rd_ofs, rd_ofs, rn_ofs, rm_ofs,
+                tcg_gen_gvec_bitsel(MO_UB, rd_ofs, rd_ofs, rn_ofs, rm_ofs,
                                     vec_size, vec_size);
                 break;
             case 6: /* VBIT */
-                tcg_gen_gvec_bitsel(MO_8, rd_ofs, rm_ofs, rn_ofs, rd_ofs,
+                tcg_gen_gvec_bitsel(MO_UB, rd_ofs, rm_ofs, rn_ofs, rd_ofs,
                                     vec_size, vec_size);
                 break;
             case 7: /* VBIF */
-                tcg_gen_gvec_bitsel(MO_8, rd_ofs, rm_ofs, rd_ofs, rn_ofs,
+                tcg_gen_gvec_bitsel(MO_UB, rd_ofs, rm_ofs, rd_ofs, rn_ofs,
                                     vec_size, vec_size);
                 break;
             }
@@ -6873,7 +6873,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     return 1;
                 }
                 if (insn & (1 << 16)) {
-                    size = MO_8;
+                    size = MO_UB;
                     element = (insn >> 17) & 7;
                 } else if (insn & (1 << 17)) {
                     size = MO_16;
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 03150a8..0e45300 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -349,20 +349,20 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)
    byte vs word opcodes.  */
 static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
 {
-    return b & 1 ? ot : MO_8;
+    return b & 1 ? ot : MO_UB;
 }

 /* Select size 8 if lsb of B is clear, else OT capped at 32.
    Used for decoding operand size of port opcodes.  */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
+    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_UB;
 }

 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 {
     switch(ot) {
-    case MO_8:
+    case MO_UB:
         if (!byte_reg_is_xH(s, reg)) {
             tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 8);
         } else {
@@ -390,7 +390,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 static inline
 void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
 {
-    if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
+    if (ot == MO_UB && byte_reg_is_xH(s, reg)) {
         tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
     } else {
         tcg_gen_mov_tl(t0, cpu_regs[reg]);
@@ -523,7 +523,7 @@ static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
 static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
 {
     switch (size) {
-    case MO_8:
+    case MO_UB:
         if (sign) {
             tcg_gen_ext8s_tl(dst, src);
         } else {
@@ -580,7 +580,7 @@ void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
 static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
 {
     switch (ot) {
-    case MO_8:
+    case MO_UB:
         gen_helper_inb(v, cpu_env, n);
         break;
     case MO_16:
@@ -597,7 +597,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
 static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
 {
     switch (ot) {
-    case MO_8:
+    case MO_UB:
         gen_helper_outb(cpu_env, v, n);
         break;
     case MO_16:
@@ -619,7 +619,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     if (s->pe && (s->cpl > s->iopl || s->vm86)) {
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
         switch (ot) {
-        case MO_8:
+        case MO_UB:
             gen_helper_check_iob(cpu_env, s->tmp2_i32);
             break;
         case MO_16:
@@ -1557,7 +1557,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
     tcg_gen_andi_tl(s->T1, s->T1, mask);

     switch (ot) {
-    case MO_8:
+    case MO_UB:
         /* Replicate the 8-bit input so that a 32-bit rotate works.  */
         tcg_gen_ext8u_tl(s->T0, s->T0);
         tcg_gen_muli_tl(s->T0, s->T0, 0x01010101);
@@ -1661,7 +1661,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
                 tcg_gen_rotli_tl(s->T0, s->T0, op2);
             }
             break;
-        case MO_8:
+        case MO_UB:
             mask = 7;
             goto do_shifts;
         case MO_16:
@@ -1719,7 +1719,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,

     if (is_right) {
         switch (ot) {
-        case MO_8:
+        case MO_UB:
             gen_helper_rcrb(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_16:
@@ -1738,7 +1738,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         }
     } else {
         switch (ot) {
-        case MO_8:
+        case MO_UB:
             gen_helper_rclb(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_16:
@@ -2184,7 +2184,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     uint32_t ret;

     switch (ot) {
-    case MO_8:
+    case MO_UB:
         ret = x86_ldub_code(env, s);
         break;
     case MO_16:
@@ -3784,7 +3784,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     goto illegal_op;
                 }
                 if ((b & 0xff) == 0xf0) {
-                    ot = MO_8;
+                    ot = MO_UB;
                 } else if (s->dflag != MO_64) {
                     ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
                 } else {
@@ -4760,7 +4760,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 val = insn_get(env, s, ot);
                 break;
             case 0x83:
-                val = (int8_t)insn_get(env, s, MO_8);
+                val = (int8_t)insn_get(env, s, MO_UB);
                 break;
             }
             tcg_gen_movi_tl(s->T1, val);
@@ -4866,8 +4866,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 4: /* mul */
             switch(ot) {
-            case MO_8:
-                gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX);
+            case MO_UB:
+                gen_op_mov_v_reg(s, MO_UB, s->T1, R_EAX);
                 tcg_gen_ext8u_tl(s->T0, s->T0);
                 tcg_gen_ext8u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
@@ -4915,8 +4915,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 5: /* imul */
             switch(ot) {
-            case MO_8:
-                gen_op_mov_v_reg(s, MO_8, s->T1, R_EAX);
+            case MO_UB:
+                gen_op_mov_v_reg(s, MO_UB, s->T1, R_EAX);
                 tcg_gen_ext8s_tl(s->T0, s->T0);
                 tcg_gen_ext8s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
@@ -4969,7 +4969,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 6: /* div */
             switch(ot) {
-            case MO_8:
+            case MO_UB:
                 gen_helper_divb_AL(cpu_env, s->T0);
                 break;
             case MO_16:
@@ -4988,7 +4988,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 7: /* idiv */
             switch(ot) {
-            case MO_8:
+            case MO_UB:
                 gen_helper_idivb_AL(cpu_env, s->T0);
                 break;
             case MO_16:
@@ -5157,7 +5157,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
             break;
         case MO_16:
-            gen_op_mov_v_reg(s, MO_8, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
             tcg_gen_ext8s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
             break;
@@ -5205,7 +5205,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             val = insn_get(env, s, ot);
             tcg_gen_movi_tl(s->T1, val);
         } else if (b == 0x6b) {
-            val = (int8_t)insn_get(env, s, MO_8);
+            val = (int8_t)insn_get(env, s, MO_UB);
             tcg_gen_movi_tl(s->T1, val);
         } else {
             gen_op_mov_v_reg(s, ot, s->T1, reg);
@@ -5419,7 +5419,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (b == 0x68)
             val = insn_get(env, s, ot);
         else
-            val = (int8_t)insn_get(env, s, MO_8);
+            val = (int8_t)insn_get(env, s, MO_UB);
         tcg_gen_movi_tl(s->T0, val);
         gen_push_v(s, s->T0);
         break;
@@ -5573,7 +5573,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             /* d_ot is the size of destination */
             d_ot = dflag;
             /* ot is the size of source */
-            ot = (b & 1) + MO_8;
+            ot = (b & 1) + MO_UB;
             /* s_ot is the sign+size of source */
             s_ot = b & 8 ? MO_SIGN | ot : ot;

@@ -5661,13 +5661,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         tcg_gen_add_tl(s->A0, s->A0, s->T0);
         gen_extu(s->aflag, s->A0);
         gen_add_A0_ds_seg(s);
-        gen_op_ld_v(s, MO_8, s->T0, s->A0);
-        gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0);
+        gen_op_ld_v(s, MO_UB, s->T0, s->A0);
+        gen_op_mov_reg_v(s, MO_UB, R_EAX, s->T0);
         break;
     case 0xb0 ... 0xb7: /* mov R, Ib */
-        val = insn_get(env, s, MO_8);
+        val = insn_get(env, s, MO_UB);
         tcg_gen_movi_tl(s->T0, val);
-        gen_op_mov_reg_v(s, MO_8, (b & 7) | REX_B(s), s->T0);
+        gen_op_mov_reg_v(s, MO_UB, (b & 7) | REX_B(s), s->T0);
         break;
     case 0xb8 ... 0xbf: /* mov R, Iv */
 #ifdef TARGET_X86_64
@@ -6637,7 +6637,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         }
         goto do_ljmp;
     case 0xeb: /* jmp Jb */
-        tval = (int8_t)insn_get(env, s, MO_8);
+        tval = (int8_t)insn_get(env, s, MO_UB);
         tval += s->pc - s->cs_base;
         if (dflag == MO_16) {
             tval &= 0xffff;
@@ -6645,7 +6645,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_jmp(s, tval);
         break;
     case 0x70 ... 0x7f: /* jcc Jb */
-        tval = (int8_t)insn_get(env, s, MO_8);
+        tval = (int8_t)insn_get(env, s, MO_UB);
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
         if (dflag != MO_16) {
@@ -6666,7 +6666,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x190 ... 0x19f: /* setcc Gv */
         modrm = x86_ldub_code(env, s);
         gen_setcc1(s, b, s->T0);
-        gen_ldst_modrm(env, s, modrm, MO_8, OR_TMP0, 1);
+        gen_ldst_modrm(env, s, modrm, MO_UB, OR_TMP0, 1);
         break;
     case 0x140 ... 0x14f: /* cmov Gv, Ev */
         if (!(s->cpuid_features & CPUID_CMOV)) {
@@ -6751,7 +6751,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x9e: /* sahf */
         if (CODE64(s) && !(s->cpuid_ext3_features & CPUID_EXT3_LAHF_LM))
             goto illegal_op;
-        gen_op_mov_v_reg(s, MO_8, s->T0, R_AH);
+        gen_op_mov_v_reg(s, MO_UB, s->T0, R_AH);
         gen_compute_eflags(s);
         tcg_gen_andi_tl(cpu_cc_src, cpu_cc_src, CC_O);
         tcg_gen_andi_tl(s->T0, s->T0, CC_S | CC_Z | CC_A | CC_P | CC_C);
@@ -6763,7 +6763,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_compute_eflags(s);
         /* Note: gen_compute_eflags() only gives the condition codes */
         tcg_gen_ori_tl(s->T0, cpu_cc_src, 0x02);
-        gen_op_mov_reg_v(s, MO_8, R_AH, s->T0);
+        gen_op_mov_reg_v(s, MO_UB, R_AH, s->T0);
         break;
     case 0xf5: /* cmc */
         gen_compute_eflags(s);
@@ -7137,7 +7137,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             goto illegal_op;
         gen_compute_eflags_c(s, s->T0);
         tcg_gen_neg_tl(s->T0, s->T0);
-        gen_op_mov_reg_v(s, MO_8, R_EAX, s->T0);
+        gen_op_mov_reg_v(s, MO_UB, R_EAX, s->T0);
         break;
     case 0xe0: /* loopnz */
     case 0xe1: /* loopz */
@@ -7146,7 +7146,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         {
             TCGLabel *l1, *l2, *l3;

-            tval = (int8_t)insn_get(env, s, MO_8);
+            tval = (int8_t)insn_get(env, s, MO_UB);
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
             if (dflag == MO_16) {
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 3575eff..20a9777 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -3684,7 +3684,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,
         mem_idx = MIPS_HFLAG_UM;
         /* fall through */
     case OPC_SB:
-        tcg_gen_qemu_st_tl(t1, t0, mem_idx, MO_8);
+        tcg_gen_qemu_st_tl(t1, t0, mem_idx, MO_UB);
         break;
     case OPC_SWLE:
         mem_idx = MIPS_HFLAG_UM;
@@ -20193,7 +20193,7 @@ static void gen_p_lsx(DisasContext *ctx, int rd, int rs, int rt)
         check_nms(ctx);
         gen_load_gpr(t1, rd);
         tcg_gen_qemu_st_tl(t1, t0, ctx->mem_idx,
-                           MO_8);
+                           MO_UB);
         break;
     case NM_SHX:
     /*case NM_SHXS:*/
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 663275b..4130dd1 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -403,7 +403,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
     tcg_temp_free_ptr(rb);                                              \
 }

-GEN_VXFORM_V(vaddubm, MO_8, tcg_gen_gvec_add, 0, 0);
+GEN_VXFORM_V(vaddubm, MO_UB, tcg_gen_gvec_add, 0, 0);
 GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10cuq, PPC_NONE, PPC2_ISA300, 0x0000F800)
 GEN_VXFORM_V(vadduhm, MO_16, tcg_gen_gvec_add, 0, 1);
@@ -411,23 +411,23 @@ GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2);
 GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
-GEN_VXFORM_V(vsububm, MO_8, tcg_gen_gvec_sub, 0, 16);
+GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
 GEN_VXFORM_V(vsubuhm, MO_16, tcg_gen_gvec_sub, 0, 17);
 GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18);
 GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
-GEN_VXFORM_V(vmaxub, MO_8, tcg_gen_gvec_umax, 1, 0);
+GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
 GEN_VXFORM_V(vmaxuh, MO_16, tcg_gen_gvec_umax, 1, 1);
 GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2);
 GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
-GEN_VXFORM_V(vmaxsb, MO_8, tcg_gen_gvec_smax, 1, 4);
+GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
 GEN_VXFORM_V(vmaxsh, MO_16, tcg_gen_gvec_smax, 1, 5);
 GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6);
 GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
-GEN_VXFORM_V(vminub, MO_8, tcg_gen_gvec_umin, 1, 8);
+GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
 GEN_VXFORM_V(vminuh, MO_16, tcg_gen_gvec_umin, 1, 9);
 GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10);
 GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
-GEN_VXFORM_V(vminsb, MO_8, tcg_gen_gvec_smin, 1, 12);
+GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
 GEN_VXFORM_V(vminsh, MO_16, tcg_gen_gvec_smin, 1, 13);
 GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14);
 GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
@@ -530,18 +530,18 @@ GEN_VXFORM(vmuleuw, 4, 10);
 GEN_VXFORM(vmulesb, 4, 12);
 GEN_VXFORM(vmulesh, 4, 13);
 GEN_VXFORM(vmulesw, 4, 14);
-GEN_VXFORM_V(vslb, MO_8, tcg_gen_gvec_shlv, 2, 4);
+GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4);
 GEN_VXFORM_V(vslh, MO_16, tcg_gen_gvec_shlv, 2, 5);
 GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
-GEN_VXFORM_V(vsrb, MO_8, tcg_gen_gvec_shrv, 2, 8);
+GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
 GEN_VXFORM_V(vsrh, MO_16, tcg_gen_gvec_shrv, 2, 9);
 GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10);
 GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
-GEN_VXFORM_V(vsrab, MO_8, tcg_gen_gvec_sarv, 2, 12);
+GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
 GEN_VXFORM_V(vsrah, MO_16, tcg_gen_gvec_sarv, 2, 13);
 GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14);
 GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
@@ -589,20 +589,20 @@ static void glue(gen_, NAME)(DisasContext *ctx)                         \
                    16, 16, &g);                                         \
 }

-GEN_VXFORM_SAT(vaddubs, MO_8, add, usadd, 0, 8);
+GEN_VXFORM_SAT(vaddubs, MO_UB, add, usadd, 0, 8);
 GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10uq, PPC_NONE, PPC2_ISA300, 0x0000F800)
 GEN_VXFORM_SAT(vadduhs, MO_16, add, usadd, 0, 9);
 GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \
                 vmul10euq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10);
-GEN_VXFORM_SAT(vaddsbs, MO_8, add, ssadd, 0, 12);
+GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12);
 GEN_VXFORM_SAT(vaddshs, MO_16, add, ssadd, 0, 13);
 GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14);
-GEN_VXFORM_SAT(vsububs, MO_8, sub, ussub, 0, 24);
+GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24);
 GEN_VXFORM_SAT(vsubuhs, MO_16, sub, ussub, 0, 25);
 GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
-GEN_VXFORM_SAT(vsubsbs, MO_8, sub, sssub, 0, 28);
+GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
 GEN_VXFORM_SAT(vsubshs, MO_16, sub, sssub, 0, 29);
 GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
@@ -912,7 +912,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
         tcg_temp_free_ptr(rd);                                          \
     }

-GEN_VXFORM_VSPLT(vspltb, MO_8, 6, 8);
+GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
 GEN_VXFORM_VSPLT(vsplth, MO_16, 6, 9);
 GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index ac0d8b6..415747f 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -154,7 +154,7 @@ static inline int vec_full_reg_offset(uint8_t reg)

 static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
 {
-    /* Convert element size (es) - e.g. MO_8 - to bytes */
+    /* Convert element size (es) - e.g. MO_UB - to bytes */
     const uint8_t bytes = 1 << es;
     int offs = enr * bytes;

diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf8..bb424c8 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -30,7 +30,7 @@
  * Sizes:
  *  On s390x, the operand size (oprsz) and the maximum size (maxsz) are
  *  always 16 (128 bit). What gvec code calls "vece", s390x calls "es",
- *  a.k.a. "element size". These values nicely map to MO_8 ... MO_64. Only
+ *  a.k.a. "element size". These values nicely map to MO_UB ... MO_64. Only
  *  128 bit element size has to be treated in a special way (MO_64 + 1).
  *  We will use ES_* instead of MO_* for this reason in this file.
  *
@@ -46,7 +46,7 @@
 #define NUM_VEC_ELEMENTS(es) (16 / NUM_VEC_ELEMENT_BYTES(es))
 #define NUM_VEC_ELEMENT_BITS(es) (NUM_VEC_ELEMENT_BYTES(es) * BITS_PER_BYTE)

-#define ES_8    MO_8
+#define ES_8    MO_UB
 #define ES_16   MO_16
 #define ES_32   MO_32
 #define ES_64   MO_64
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index a6e3618..b813054 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -76,7 +76,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
                                              uint8_t es)
 {
     switch (es) {
-    case MO_8:
+    case MO_UB:
         return s390_vec_read_element8(v, enr);
     case MO_16:
         return s390_vec_read_element16(v, enr);
@@ -121,7 +121,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
                                           uint8_t es, uint64_t data)
 {
     switch (es) {
-    case MO_8:
+    case MO_UB:
         s390_vec_write_element8(v, enr, data);
         break;
     case MO_16:
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0713448..e4e0845 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -429,20 +429,20 @@ typedef enum {

     /* Load/store register.  Described here as 3.3.12, but the helper
        that emits them can transform to 3.3.10 or 3.3.13.  */
-    I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_8 << 30,
+    I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
     I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_16 << 30,
     I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_32 << 30,
     I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,

-    I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_8 << 30,
+    I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
     I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_16 << 30,
     I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_32 << 30,
     I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,

-    I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_8 << 30,
+    I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
     I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_16 << 30,

-    I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_8 << 30,
+    I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30,
     I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_16 << 30,
     I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30,

@@ -862,7 +862,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
     int cmode, imm8, i;

     /* Test all bytes equal first.  */
-    if (v64 == dup_const(MO_8, v64)) {
+    if (v64 == dup_const(MO_UB, v64)) {
         imm8 = (uint8_t)v64;
         tcg_out_insn(s, 3606, MOVI, q, rd, 0, 0xe, imm8);
         return;
@@ -1772,7 +1772,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
     const TCGMemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r);
         break;
     case MO_16:
@@ -2186,7 +2186,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,

     case INDEX_op_ext8s_i64:
     case INDEX_op_ext8s_i32:
-        tcg_out_sxt(s, ext, MO_8, a0, a1);
+        tcg_out_sxt(s, ext, MO_UB, a0, a1);
         break;
     case INDEX_op_ext16s_i64:
     case INDEX_op_ext16s_i32:
@@ -2198,7 +2198,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext8u_i64:
     case INDEX_op_ext8u_i32:
-        tcg_out_uxt(s, MO_8, a0, a1);
+        tcg_out_uxt(s, MO_UB, a0, a1);
         break;
     case INDEX_op_ext16u_i64:
     case INDEX_op_ext16u_i32:
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index ece88dc..542ffa8 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1429,7 +1429,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     datalo = lb->datalo_reg;
     datahi = lb->datahi_reg;
     switch (opc & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         argreg = tcg_out_arg_reg8(s, argreg, datalo);
         break;
     case MO_16:
@@ -1621,7 +1621,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     TCGMemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_st8_r(s, cond, datalo, addrlo, addend);
         break;
     case MO_16:
@@ -1666,7 +1666,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
     TCGMemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_st8_12(s, COND_AL, datalo, addrlo, 0);
         break;
     case MO_16:
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 6ddeebf..0d68ba4 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -888,7 +888,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
         tcg_out_vex_modrm(s, avx2_dup_insn[vece] + vex_l, r, 0, a);
     } else {
         switch (vece) {
-        case MO_8:
+        case MO_UB:
             /* ??? With zero in a register, use PSHUFB.  */
             tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a);
             a = r;
@@ -932,7 +932,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
             break;
-        case MO_8:
+        case MO_UB:
             tcg_out_vex_modrm_offset(s, OPC_VPINSRB, r, r, base, offset);
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
@@ -2154,7 +2154,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
     }

     switch (memop & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         /* In 32-bit mode, 8-bit stores can only happen from [abcd]x.
            Use the scratch register if necessary.  */
         if (TCG_TARGET_REG_BITS == 32 && datalo >= 4) {
@@ -2901,7 +2901,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         tcg_debug_assert(vece != MO_64);
         sub = 4;
     gen_shift:
-        tcg_debug_assert(vece != MO_8);
+        tcg_debug_assert(vece != MO_UB);
         insn = shift_imm_insn[vece];
         if (type == TCG_TYPE_V256) {
             insn |= P_VEXL;
@@ -3273,12 +3273,12 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)

     case INDEX_op_shli_vec:
     case INDEX_op_shri_vec:
-        /* We must expand the operation for MO_8.  */
-        return vece == MO_8 ? -1 : 1;
+        /* We must expand the operation for MO_UB.  */
+        return vece == MO_UB ? -1 : 1;

     case INDEX_op_sari_vec:
-        /* We must expand the operation for MO_8.  */
-        if (vece == MO_8) {
+        /* We must expand the operation for MO_UB.  */
+        if (vece == MO_UB) {
             return -1;
         }
         /* We can emulate this for MO_64, but it does not pay off
@@ -3301,8 +3301,8 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
         return have_avx2 && vece == MO_32;

     case INDEX_op_mul_vec:
-        if (vece == MO_8) {
-            /* We can expand the operation for MO_8.  */
+        if (vece == MO_UB) {
+            /* We can expand the operation for MO_UB.  */
             return -1;
         }
         if (vece == MO_64) {
@@ -3332,7 +3332,7 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
 {
     TCGv_vec t1, t2;

-    tcg_debug_assert(vece == MO_8);
+    tcg_debug_assert(vece == MO_UB);

     t1 = tcg_temp_new_vec(type);
     t2 = tcg_temp_new_vec(type);
@@ -3346,9 +3346,9 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
        (3) Step 2 leaves high half zero such that PACKUSWB
            (pack with unsigned saturation) does not modify
            the quantity.  */
-    vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8,
+    vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB,
               tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
-    vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8,
+    vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
               tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));

     if (shr) {
@@ -3361,7 +3361,7 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
         tcg_gen_shri_vec(MO_16, t2, t2, 8);
     }

-    vec_gen_3(INDEX_op_x86_packus_vec, type, MO_8,
+    vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
               tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2));
     tcg_temp_free_vec(t1);
     tcg_temp_free_vec(t2);
@@ -3373,17 +3373,17 @@ static void expand_vec_sari(TCGType type, unsigned vece,
     TCGv_vec t1, t2;

     switch (vece) {
-    case MO_8:
+    case MO_UB:
         /* Unpack to W, shift, and repack, as in expand_vec_shi.  */
         t1 = tcg_temp_new_vec(type);
         t2 = tcg_temp_new_vec(type);
-        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
-        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
         tcg_gen_sari_vec(MO_16, t1, t1, imm + 8);
         tcg_gen_sari_vec(MO_16, t2, t2, imm + 8);
-        vec_gen_3(INDEX_op_x86_packss_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_packss_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2));
         tcg_temp_free_vec(t1);
         tcg_temp_free_vec(t2);
@@ -3425,7 +3425,7 @@ static void expand_vec_mul(TCGType type, unsigned vece,
 {
     TCGv_vec t1, t2, t3, t4;

-    tcg_debug_assert(vece == MO_8);
+    tcg_debug_assert(vece == MO_UB);

     /*
      * Unpack v1 bytes to words, 0 | x.
@@ -3442,13 +3442,13 @@ static void expand_vec_mul(TCGType type, unsigned vece,
         t1 = tcg_temp_new_vec(TCG_TYPE_V128);
         t2 = tcg_temp_new_vec(TCG_TYPE_V128);
         tcg_gen_dup16i_vec(t2, 0);
-        vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t2));
-        vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(t2), tcgv_vec_arg(v2));
         tcg_gen_mul_vec(MO_16, t1, t1, t2);
         tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_8,
+        vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t1));
         tcg_temp_free_vec(t1);
         tcg_temp_free_vec(t2);
@@ -3461,19 +3461,19 @@ static void expand_vec_mul(TCGType type, unsigned vece,
         t3 = tcg_temp_new_vec(type);
         t4 = tcg_temp_new_vec(type);
         tcg_gen_dup16i_vec(t4, 0);
-        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t4));
-        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckl_vec, type, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(t4), tcgv_vec_arg(v2));
-        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t3), tcgv_vec_arg(v1), tcgv_vec_arg(t4));
-        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t4), tcgv_vec_arg(t4), tcgv_vec_arg(v2));
         tcg_gen_mul_vec(MO_16, t1, t1, t2);
         tcg_gen_mul_vec(MO_16, t3, t3, t4);
         tcg_gen_shri_vec(MO_16, t1, t1, 8);
         tcg_gen_shri_vec(MO_16, t3, t3, 8);
-        vec_gen_3(INDEX_op_x86_packus_vec, type, MO_8,
+        vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t3));
         tcg_temp_free_vec(t1);
         tcg_temp_free_vec(t2);
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 41bff32..c6d13ea 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1380,7 +1380,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
         i = tcg_out_call_iarg_reg(s, i, l->addrlo_reg);
     }
     switch (s_bits) {
-    case MO_8:
+    case MO_UB:
         i = tcg_out_call_iarg_reg8(s, i, l->datalo_reg);
         break;
     case MO_16:
@@ -1566,7 +1566,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     }

     switch (opc & (MO_SIZE | MO_BSWAP)) {
-    case MO_8:
+    case MO_UB:
         tcg_out_opc_imm(s, OPC_SB, lo, base, 0);
         break;

diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 3e76bf5..9c60c0f 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1101,7 +1101,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     tcg_out_mov(s, TCG_TYPE_PTR, a1, l->addrlo_reg);
     tcg_out_mov(s, TCG_TYPE_PTR, a2, l->datalo_reg);
     switch (s_bits) {
-    case MO_8:
+    case MO_UB:
         tcg_out_ext8u(s, a2, a2);
         break;
     case MO_16:
@@ -1216,7 +1216,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     g_assert(!bswap);

     switch (opc & (MO_SSIZE)) {
-    case MO_8:
+    case MO_UB:
         tcg_out_opc_store(s, OPC_SB, base, lo, 0);
         break;
     case MO_16:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 10b1cea..479ee2e 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -882,7 +882,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
      * required by the MO_* value op; do nothing for 64 bit.
      */
     switch (op & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         tcg_out_arithi(s, r, r, 0xff, ARITH_AND);
         break;
     case MO_16:
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 17679b6..9658c36 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -306,7 +306,7 @@ static void expand_clr(uint32_t dofs, uint32_t maxsz);
 uint64_t (dup_const)(unsigned vece, uint64_t c)
 {
     switch (vece) {
-    case MO_8:
+    case MO_UB:
         return 0x0101010101010101ull * (uint8_t)c;
     case MO_16:
         return 0x0001000100010001ull * (uint16_t)c;
@@ -323,7 +323,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
 static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
 {
     switch (vece) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ext8u_i32(out, in);
         tcg_gen_muli_i32(out, out, 0x01010101);
         break;
@@ -341,7 +341,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
 static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
 {
     switch (vece) {
-    case MO_8:
+    case MO_UB:
         tcg_gen_ext8u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0101010101010101ull);
         break;
@@ -556,7 +556,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
             t_32 = tcg_temp_new_i32();
             if (in_64) {
                 tcg_gen_extrl_i64_i32(t_32, in_64);
-            } else if (vece == MO_8) {
+            } else if (vece == MO_UB) {
                 tcg_gen_movi_i32(t_32, in_c & 0xff);
             } else if (vece == MO_16) {
                 tcg_gen_movi_i32(t_32, in_c & 0xffff);
@@ -581,7 +581,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
 /* Likewise, but with zero.  */
 static void expand_clr(uint32_t dofs, uint32_t maxsz)
 {
-    do_dup(MO_8, dofs, maxsz, maxsz, NULL, NULL, 0);
+    do_dup(MO_UB, dofs, maxsz, maxsz, NULL, NULL, 0);
 }

 /* Expand OPSZ bytes worth of two-operand operations using i32 elements.  */
@@ -1456,7 +1456,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
         } else if (vece <= MO_32) {
             TCGv_i32 in = tcg_temp_new_i32();
             switch (vece) {
-            case MO_8:
+            case MO_UB:
                 tcg_gen_ld8u_i32(in, cpu_env, aofs);
                 break;
             case MO_16:
@@ -1533,7 +1533,7 @@ void tcg_gen_gvec_dup8i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint8_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_8, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UB, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_not(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -1572,7 +1572,7 @@ static void gen_addv_mask(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 m)

 void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_8, 0x80));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UB, 0x80));
     gen_addv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1608,7 +1608,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add8,
           .opt_opc = vecop_list_add,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_add16_i64,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add16,
@@ -1639,7 +1639,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds8,
           .opt_opc = vecop_list_add,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_add16_i64,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds16,
@@ -1680,7 +1680,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs8,
           .opt_opc = vecop_list_sub,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_sub16_i64,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs16,
@@ -1725,7 +1725,7 @@ static void gen_subv_mask(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b, TCGv_i64 m)

 void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_8, 0x80));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UB, 0x80));
     gen_subv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1759,7 +1759,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub8,
           .opt_opc = vecop_list_sub,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_sub16_i64,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub16,
@@ -1791,7 +1791,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul8,
           .opt_opc = vecop_list_mul,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul16,
           .opt_opc = vecop_list_mul,
@@ -1820,7 +1820,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls8,
           .opt_opc = vecop_list_mul,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls16,
           .opt_opc = vecop_list_mul,
@@ -1858,7 +1858,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd16,
           .opt_opc = vecop_list,
@@ -1884,7 +1884,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub16,
           .opt_opc = vecop_list,
@@ -1926,7 +1926,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd16,
           .opt_opc = vecop_list,
@@ -1970,7 +1970,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub16,
           .opt_opc = vecop_list,
@@ -1998,7 +1998,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin16,
           .opt_opc = vecop_list,
@@ -2026,7 +2026,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin16,
           .opt_opc = vecop_list,
@@ -2054,7 +2054,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax16,
           .opt_opc = vecop_list,
@@ -2082,7 +2082,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax16,
           .opt_opc = vecop_list,
@@ -2120,7 +2120,7 @@ static void gen_negv_mask(TCGv_i64 d, TCGv_i64 b, TCGv_i64 m)

 void tcg_gen_vec_neg8_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_8, 0x80));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UB, 0x80));
     gen_negv_mask(d, b, m);
     tcg_temp_free_i64(m);
 }
@@ -2155,7 +2155,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_neg16_i64,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg16,
@@ -2201,7 +2201,7 @@ static void gen_absv_mask(TCGv_i64 d, TCGv_i64 b, unsigned vece)

 static void tcg_gen_vec_abs8_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    gen_absv_mask(d, b, MO_8);
+    gen_absv_mask(d, b, MO_UB);
 }

 static void tcg_gen_vec_abs16_i64(TCGv_i64 d, TCGv_i64 b)
@@ -2218,7 +2218,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs8,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_abs16_i64,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs16,
@@ -2454,7 +2454,7 @@ void tcg_gen_gvec_ori(unsigned vece, uint32_t dofs, uint32_t aofs,

 void tcg_gen_vec_shl8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_8, 0xff << c);
+    uint64_t mask = dup_const(MO_UB, 0xff << c);
     tcg_gen_shli_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2475,7 +2475,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl8i,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_shl16i_i64,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl16i,
@@ -2505,7 +2505,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,

 void tcg_gen_vec_shr8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_8, 0xff >> c);
+    uint64_t mask = dup_const(MO_UB, 0xff >> c);
     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2526,7 +2526,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr8i,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_shr16i_i64,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr16i,
@@ -2556,8 +2556,8 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,

 void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t s_mask = dup_const(MO_8, 0x80 >> c);
-    uint64_t c_mask = dup_const(MO_8, 0xff >> c);
+    uint64_t s_mask = dup_const(MO_UB, 0x80 >> c);
+    uint64_t c_mask = dup_const(MO_UB, 0xff >> c);
     TCGv_i64 s = tcg_temp_new_i64();

     tcg_gen_shri_i64(d, a, c);
@@ -2591,7 +2591,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar8i,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fni8 = tcg_gen_vec_sar16i_i64,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar16i,
@@ -2880,7 +2880,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl8v,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl16v,
           .opt_opc = vecop_list,
@@ -2943,7 +2943,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr8v,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr16v,
           .opt_opc = vecop_list,
@@ -3006,7 +3006,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar8v,
           .opt_opc = vecop_list,
-          .vece = MO_8 },
+          .vece = MO_UB },
         { .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar16v,
           .opt_opc = vecop_list,
@@ -3129,7 +3129,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
     check_overlap_3(dofs, aofs, bofs, maxsz);

     if (cond == TCG_COND_NEVER || cond == TCG_COND_ALWAYS) {
-        do_dup(MO_8, dofs, oprsz, maxsz,
+        do_dup(MO_UB, dofs, oprsz, maxsz,
                NULL, NULL, -(cond == TCG_COND_ALWAYS));
         return;
     }
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index 6714991..d7ffc9e 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -275,7 +275,7 @@ void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a)

 void tcg_gen_dup8i_vec(TCGv_vec r, uint32_t a)
 {
-    do_dupi_vec(r, MO_REG, dup_const(MO_8, a));
+    do_dupi_vec(r, MO_REG, dup_const(MO_UB, a));
 }

 void tcg_gen_dupi_vec(unsigned vece, TCGv_vec r, uint64_t a)
@@ -752,13 +752,13 @@ void tcg_gen_bitsel_vec(unsigned vece, TCGv_vec r, TCGv_vec a,
     tcg_debug_assert(ct->base_type >= type);

     if (TCG_TARGET_HAS_bitsel_vec) {
-        vec_gen_4(INDEX_op_bitsel_vec, type, MO_8,
+        vec_gen_4(INDEX_op_bitsel_vec, type, MO_UB,
                   temp_arg(rt), temp_arg(at), temp_arg(bt), temp_arg(ct));
     } else {
         TCGv_vec t = tcg_temp_new_vec(type);
-        tcg_gen_and_vec(MO_8, t, a, b);
-        tcg_gen_andc_vec(MO_8, r, c, a);
-        tcg_gen_or_vec(MO_8, r, r, t);
+        tcg_gen_and_vec(MO_UB, t, a, b);
+        tcg_gen_andc_vec(MO_UB, r, c, a);
+        tcg_gen_or_vec(MO_UB, r, r, t);
         tcg_temp_free_vec(t);
     }
 }
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 587d092..61eda33 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2720,7 +2720,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
     (void)get_alignment_bits(op);

     switch (op & MO_SIZE) {
-    case MO_8:
+    case MO_UB:
         op &= ~MO_BSWAP;
         break;
     case MO_16:
@@ -3024,7 +3024,7 @@ typedef void (*gen_atomic_op_i64)(TCGv_i64, TCGv_env, TCGv, TCGv_i64);
 #endif

 static void * const table_cmpxchg[16] = {
-    [MO_8] = gen_helper_atomic_cmpxchgb,
+    [MO_UB] = gen_helper_atomic_cmpxchgb,
     [MO_16 | MO_LE] = gen_helper_atomic_cmpxchgw_le,
     [MO_16 | MO_BE] = gen_helper_atomic_cmpxchgw_be,
     [MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le,
@@ -3248,7 +3248,7 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,

 #define GEN_ATOMIC_HELPER(NAME, OP, NEW)                                \
 static void * const table_##NAME[16] = {                                \
-    [MO_8] = gen_helper_atomic_##NAME##b,                               \
+    [MO_UB] = gen_helper_atomic_##NAME##b,                              \
     [MO_16 | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
     [MO_16 | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
     [MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index b411e17..5636d6b 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -1302,7 +1302,7 @@ uint64_t dup_const(unsigned vece, uint64_t c);

 #define dup_const(VECE, C)                                         \
     (__builtin_constant_p(VECE)                                    \
-     ? (  (VECE) == MO_8  ? 0x0101010101010101ull * (uint8_t)(C)   \
+     ?   ((VECE) == MO_UB ? 0x0101010101010101ull * (uint8_t)(C)   \
         : (VECE) == MO_16 ? 0x0001000100010001ull * (uint16_t)(C)  \
         : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C)  \
         : dup_const(VECE, C))                                      \
--
1.8.3.1
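
An aside for readers tracing the final tcg/tcg.h hunk: dup_const()
replicates a small constant across all elements of a 64-bit value by
multiplying with a repeating-ones pattern. A minimal standalone sketch of
the 8-bit case (illustrative only, not QEMU code):

  #include <inttypes.h>
  #include <stdio.h>

  /* Replicate an 8-bit constant into every byte of a 64-bit word,
     mirroring the MO_UB arm of dup_const(). */
  static uint64_t dup8(uint8_t c)
  {
      return 0x0101010101010101ull * c;
  }

  int main(void)
  {
      printf("%016" PRIx64 "\n", dup8(0x80)); /* prints 8080808080808080 */
      return 0;
  }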





^ permalink raw reply related	[flat|nested] 120+ messages in thread

* Re: [Qemu-devel] [PATCH v2 02/20] tcg: Replace MO_16 with MO_UW alias
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:39   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Preparation for splitting MO_16 out of TCGMemOp into a new
accelerator-independent MemOp.

Once MO_16 becomes a value of MemOp, existing TCGMemOp comparisons and
coercions would trigger -Wenum-compare and -Wenum-conversion, so this
patch switches existing MO_16 uses to the MO_UW alias ahead of that
split.
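
To illustrate the class of warning the alias sidesteps, a minimal toy
sketch (hypothetical enum names, not the real QEMU definitions):

  /* Hypothetical stand-ins: OldOp plays TCGMemOp, NewOp plays MemOp. */
  typedef enum { OLD_MO_16 = 1 } OldOp;
  typedef enum { NEW_MO_UW = 1 } NewOp;

  static int is_16bit(NewOp op)
  {
      /* "op == OLD_MO_16" compares across enum types: -Wenum-compare */
      return op == NEW_MO_UW;   /* same value, same type: no warning */
  }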

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |   4 +-
 target/arm/translate-a64.c          |  90 ++++++++--------
 target/arm/translate-sve.c          |  40 ++++----
 target/arm/translate-vfp.inc.c      |   2 +-
 target/arm/translate.c              |  32 +++---
 target/i386/translate.c             | 200 ++++++++++++++++++------------------
 target/mips/translate.c             |   2 +-
 target/ppc/translate/vmx-impl.inc.c |  28 ++---
 target/s390x/translate_vx.inc.c     |   2 +-
 target/s390x/vec.h                  |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  20 ++--
 tcg/arm/tcg-target.inc.c            |   6 +-
 tcg/i386/tcg-target.inc.c           |  48 ++++-----
 tcg/mips/tcg-target.inc.c           |   6 +-
 tcg/riscv/tcg-target.inc.c          |   4 +-
 tcg/sparc/tcg-target.inc.c          |   2 +-
 tcg/tcg-op-gvec.c                   |  72 ++++++-------
 tcg/tcg-op-vec.c                    |   2 +-
 tcg/tcg-op.c                        |  18 ++--
 tcg/tcg.h                           |   2 +-
 20 files changed, 292 insertions(+), 292 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 4c7e11f..f6bef3d 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1546,7 +1546,7 @@ void HELPER(sve_cpy_m_h)(void *vd, void *vn, void *vg,
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;

-    mm = dup_const(MO_16, mm);
+    mm = dup_const(MO_UW, mm);
     for (i = 0; i < opr_sz; i += 1) {
         uint64_t nn = n[i];
         uint64_t pp = expand_pred_h(pg[H1(i)]);
@@ -1600,7 +1600,7 @@ void HELPER(sve_cpy_z_h)(void *vd, void *vg, uint64_t val, uint32_t desc)
     uint64_t *d = vd;
     uint8_t *pg = vg;

-    val = dup_const(MO_16, val);
+    val = dup_const(MO_UW, val);
     for (i = 0; i < opr_sz; i += 1) {
         d[i] = val & expand_pred_h(pg[H1(i)]);
     }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index f840b43..3acfccb 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -492,7 +492,7 @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg)
 {
     TCGv_i32 v = tcg_temp_new_i32();

-    tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_16));
+    tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_UW));
     return v;
 }

@@ -996,7 +996,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_UB:
         tcg_gen_ld8u_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ld16u_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1005,7 +1005,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_SB:
         tcg_gen_ld8s_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16|MO_SIGN:
+    case MO_SW:
         tcg_gen_ld16s_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32|MO_SIGN:
@@ -1028,13 +1028,13 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
     case MO_UB:
         tcg_gen_ld8u_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ld16u_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_SB:
         tcg_gen_ld8s_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16|MO_SIGN:
+    case MO_SW:
         tcg_gen_ld16s_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1055,7 +1055,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
     case MO_UB:
         tcg_gen_st8_i64(tcg_src, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i64(tcg_src, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1077,7 +1077,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
     case MO_UB:
         tcg_gen_st8_i32(tcg_src, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i32(tcg_src, cpu_env, vect_off);
         break;
     case MO_32:
@@ -5269,7 +5269,7 @@ static void handle_fp_compare(DisasContext *s, int size,
                               bool cmp_with_zero, bool signal_all_nans)
 {
     TCGv_i64 tcg_flags = tcg_temp_new_i64();
-    TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
+    TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);

     if (size == MO_64) {
         TCGv_i64 tcg_vn, tcg_vm;
@@ -5306,7 +5306,7 @@ static void handle_fp_compare(DisasContext *s, int size,
                 gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             }
             break;
-        case MO_16:
+        case MO_UW:
             if (signal_all_nans) {
                 gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             } else {
@@ -5360,7 +5360,7 @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
         size = MO_64;
         break;
     case 3:
-        size = MO_16;
+        size = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -5411,7 +5411,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
         size = MO_64;
         break;
     case 3:
-        size = MO_16;
+        size = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -5477,7 +5477,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
         sz = MO_64;
         break;
     case 3:
-        sz = MO_16;
+        sz = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -6282,7 +6282,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
         sz = MO_64;
         break;
     case 3:
-        sz = MO_16;
+        sz = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -6593,7 +6593,7 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
             break;
         case 3:
             /* 16 bit */
-            tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_16));
+            tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UW));
             break;
         default:
             g_assert_not_reached();
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        TCGMemOp msize = esize == 16 ? MO_UW : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -7204,7 +7204,7 @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
          * Note that correct NaN propagation requires that we do these
          * operations in exactly the order specified by the pseudocode.
          */
-        TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
+        TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);
         int fpopcode = opcode | is_min << 4 | is_u << 5;
         int vmap = (1 << elements) - 1;
         TCGv_i32 tcg_res32 = do_reduction_op(s, fpopcode, rn, esize,
@@ -7591,7 +7591,7 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
             } else {
                 if (o2) {
                     /* FMOV (vector, immediate) - half-precision */
-                    imm = vfp_expand_imm(MO_16, abcdefgh);
+                    imm = vfp_expand_imm(MO_UW, abcdefgh);
                     /* now duplicate across the lanes */
                     imm = bitfield_replicate(imm, 16);
                 } else {
@@ -7699,7 +7699,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
                 unallocated_encoding(s);
                 return;
             } else {
-                size = MO_16;
+                size = MO_UW;
             }
         } else {
             size = extract32(size, 0, 1) ? MO_64 : MO_32;
@@ -7709,7 +7709,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
             return;
         }

-        fpst = get_fpstatus_ptr(size == MO_16);
+        fpst = get_fpstatus_ptr(size == MO_UW);
         break;
     default:
         unallocated_encoding(s);
@@ -7760,7 +7760,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
         read_vec_element_i32(s, tcg_op1, rn, 0, size);
         read_vec_element_i32(s, tcg_op2, rn, 1, size);

-        if (size == MO_16) {
+        if (size == MO_UW) {
             switch (opcode) {
             case 0xc: /* FMAXNMP */
                 gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -8222,7 +8222,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
                                    int elements, int is_signed,
                                    int fracbits, int size)
 {
-    TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
+    TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_UW);
     TCGv_i32 tcg_shift = NULL;

     TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
@@ -8281,7 +8281,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
                     }
                 }
                 break;
-            case MO_16:
+            case MO_UW:
                 if (fracbits) {
                     if (is_signed) {
                         gen_helper_vfp_sltoh(tcg_float, tcg_int32,
@@ -8339,7 +8339,7 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
     } else if (immh & 4) {
         size = MO_32;
     } else if (immh & 2) {
-        size = MO_16;
+        size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
@@ -8384,7 +8384,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     } else if (immh & 0x4) {
         size = MO_32;
     } else if (immh & 0x2) {
-        size = MO_16;
+        size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
@@ -8403,7 +8403,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     assert(!(is_scalar && is_q));

     tcg_rmode = tcg_const_i32(arm_rmode_to_sf(FPROUNDING_ZERO));
-    tcg_fpstatus = get_fpstatus_ptr(size == MO_16);
+    tcg_fpstatus = get_fpstatus_ptr(size == MO_UW);
     gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
     fracbits = (16 << size) - immhb;
     tcg_shift = tcg_const_i32(fracbits);
@@ -8429,7 +8429,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
         int maxpass = is_scalar ? 1 : ((8 << is_q) >> size);

         switch (size) {
-        case MO_16:
+        case MO_UW:
             if (is_u) {
                 fn = gen_helper_vfp_touhh;
             } else {
@@ -9388,7 +9388,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         return;
     }

-    fpst = get_fpstatus_ptr(size == MO_16);
+    fpst = get_fpstatus_ptr(size == MO_UW);

     if (is_double) {
         TCGv_i64 tcg_op = tcg_temp_new_i64();
@@ -9440,7 +9440,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         bool swap = false;
         int pass, maxpasses;

-        if (size == MO_16) {
+        if (size == MO_UW) {
             switch (opcode) {
             case 0x2e: /* FCMLT (zero) */
                 swap = true;
@@ -11422,8 +11422,8 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             int passreg = pass < (maxpass / 2) ? rn : rm;
             int passelt = (pass << 1) & (maxpass - 1);

-            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_16);
-            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_16);
+            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_UW);
+            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_UW);
             tcg_res[pass] = tcg_temp_new_i32();

             switch (fpopcode) {
@@ -11450,7 +11450,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
         }

         for (pass = 0; pass < maxpass; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UW);
             tcg_temp_free_i32(tcg_res[pass]);
         }

@@ -11463,15 +11463,15 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op2 = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_16);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
+            read_vec_element_i32(s, tcg_op1, rn, pass, MO_UW);
+            read_vec_element_i32(s, tcg_op2, rm, pass, MO_UW);

             switch (fpopcode) {
             case 0x0: /* FMAXNM */
                 gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
                 break;
             case 0x1: /* FMLA */
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
                 gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
                                            fpst);
                 break;
@@ -11496,7 +11496,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             case 0x9: /* FMLS */
                 /* As usual for ARM, separate negation for fused multiply-add */
                 tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
                 gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
                                            fpst);
                 break;
@@ -11537,7 +11537,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op1);
             tcg_temp_free_i32(tcg_op2);
@@ -11727,7 +11727,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
         for (pass = 0; pass < 4; pass++) {
             tcg_res[pass] = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_16);
+            read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_UW);
             gen_helper_vfp_fcvt_f16_to_f32(tcg_res[pass], tcg_res[pass],
                                            fpst, ahp);
         }
@@ -11768,7 +11768,7 @@ static void handle_rev(DisasContext *s, int opcode, bool u,

             read_vec_element(s, tcg_tmp, rn, i, grp_size);
             switch (grp_size) {
-            case MO_16:
+            case MO_UW:
                 tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp);
                 break;
             case MO_32:
@@ -12499,7 +12499,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
         if (!fp_access_check(s)) {
             return;
         }
-        handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_16);
+        handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_UW);
         return;
     }
     break;
@@ -12508,7 +12508,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
     case 0x2e: /* FCMLT (zero) */
     case 0x6c: /* FCMGE (zero) */
     case 0x6d: /* FCMLE (zero) */
-        handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_16, rn, rd);
+        handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_UW, rn, rd);
         return;
     case 0x3d: /* FRECPE */
     case 0x3f: /* FRECPX */
@@ -12668,7 +12668,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op, rn, pass, MO_16);
+            read_vec_element_i32(s, tcg_op, rn, pass, MO_UW);

             switch (fpop) {
             case 0x1a: /* FCVTNS */
@@ -12715,7 +12715,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UW);

             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op);
@@ -12839,7 +12839,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        size = MO_16;
+        size = MO_UW;
         /* is_fp, but we pass cpu_env not fp_status.  */
         break;
     default:
@@ -12852,7 +12852,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         /* convert insn encoded size to TCGMemOp size */
         switch (size) {
         case 0: /* half-precision */
-            size = MO_16;
+            size = MO_UW;
             is_fp16 = true;
             break;
         case MO_32: /* single precision */
@@ -12899,7 +12899,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     /* Given TCGMemOp size, adjust register and indexing.  */
     switch (size) {
-    case MO_16:
+    case MO_UW:
         index = h << 2 | l << 1 | m;
         break;
     case MO_32:
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ec5fb11..2bc1bd1 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1679,7 +1679,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
         tcg_temp_free_i32(t32);
         break;

-    case MO_16:
+    case MO_UW:
         t32 = tcg_temp_new_i32();
         tcg_gen_extrl_i64_i32(t32, val);
         if (d) {
@@ -3314,7 +3314,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_h,
           .opt_opc = vecop_list,
-          .vece = MO_16,
+          .vece = MO_UW,
           .scalar_first = true },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
@@ -3468,7 +3468,7 @@ static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)

     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3494,7 +3494,7 @@ static bool trans_FMUL_zzx(DisasContext *s, arg_FMUL_zzx *a)

     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3526,7 +3526,7 @@ static void do_reduce(DisasContext *s, arg_rpr_esz *a,

     tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
     tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
-    status = get_fpstatus_ptr(a->esz == MO_16);
+    status = get_fpstatus_ptr(a->esz == MO_UW);

     fn(temp, t_zn, t_pg, status, t_desc);
     tcg_temp_free_ptr(t_zn);
@@ -3568,7 +3568,7 @@ DO_VPZ(FMAXV, fmaxv)
 static void do_zz_fp(DisasContext *s, arg_rr_esz *a, gen_helper_gvec_2_ptr *fn)
 {
     unsigned vsz = vec_full_reg_size(s);
-    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

     tcg_gen_gvec_2_ptr(vec_full_reg_offset(s, a->rd),
                        vec_full_reg_offset(s, a->rn),
@@ -3616,7 +3616,7 @@ static void do_ppz_fp(DisasContext *s, arg_rpr_esz *a,
                       gen_helper_gvec_3_ptr *fn)
 {
     unsigned vsz = vec_full_reg_size(s);
-    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

     tcg_gen_gvec_3_ptr(pred_full_reg_offset(s, a->rd),
                        vec_full_reg_offset(s, a->rn),
@@ -3668,7 +3668,7 @@ static bool trans_FTMAD(DisasContext *s, arg_FTMAD *a)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3708,7 +3708,7 @@ static bool trans_FADDA(DisasContext *s, arg_rprr_esz *a)
     t_pg = tcg_temp_new_ptr();
     tcg_gen_addi_ptr(t_rm, cpu_env, vec_full_reg_offset(s, a->rm));
     tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
-    t_fpst = get_fpstatus_ptr(a->esz == MO_16);
+    t_fpst = get_fpstatus_ptr(a->esz == MO_UW);
     t_desc = tcg_const_i32(simd_desc(vsz, vsz, 0));

     fns[a->esz - 1](t_val, t_val, t_rm, t_pg, t_fpst, t_desc);
@@ -3735,7 +3735,7 @@ static bool do_zzz_fp(DisasContext *s, arg_rrr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3777,7 +3777,7 @@ static bool do_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3844,7 +3844,7 @@ static void do_fp_imm(DisasContext *s, arg_rpri_esz *a, uint64_t imm,
                       gen_helper_sve_fp2scalar *fn)
 {
     TCGv_i64 temp = tcg_const_i64(imm);
-    do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_16, temp, fn);
+    do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_UW, temp, fn);
     tcg_temp_free_i64(temp);
 }

@@ -3893,7 +3893,7 @@ static bool do_fp_cmp(DisasContext *s, arg_rprr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(pred_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3937,7 +3937,7 @@ static bool trans_FCADD(DisasContext *s, arg_FCADD *a)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -4044,7 +4044,7 @@ static bool trans_FCMLA_zzxz(DisasContext *s, arg_FCMLA_zzxz *a)
     tcg_debug_assert(a->rd == a->ra);
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -4186,7 +4186,7 @@ static bool trans_FRINTI(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16,
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW,
                       frint_fns[a->esz - 1]);
 }

@@ -4200,7 +4200,7 @@ static bool trans_FRINTX(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
@@ -4211,7 +4211,7 @@ static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
         TCGv_i32 tmode = tcg_const_i32(mode);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

         gen_helper_set_rmode(tmode, tmode, status);

@@ -4262,7 +4262,7 @@ static bool trans_FRECPX(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a)
@@ -4275,7 +4275,7 @@ static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a)
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 092eb5e..549874c 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -52,7 +52,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8)
             (extract32(imm8, 0, 6) << 3);
         imm <<= 16;
         break;
-    case MO_16:
+    case MO_UW:
         imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
             (extract32(imm8, 6, 1) ? 0x3000 : 0x4000) |
             (extract32(imm8, 0, 6) << 6);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 39266cf..8d10922 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1477,7 +1477,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     case MO_UB:
         tcg_gen_st8_i32(var, cpu_env, offset);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i32(var, cpu_env, offset);
         break;
     case MO_32:
@@ -1496,7 +1496,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     case MO_UB:
         tcg_gen_st8_i64(var, cpu_env, offset);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i64(var, cpu_env, offset);
         break;
     case MO_32:
@@ -4267,7 +4267,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_ssra32_i32,
       .fniv = gen_ssra_vec,
       .load_dest = true,
@@ -4325,7 +4325,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_16, },
+      .vece = MO_UW, },
     { .fni4 = gen_usra32_i32,
       .fniv = gen_usra_vec,
       .load_dest = true,
@@ -4353,7 +4353,7 @@ static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)

 static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff >> shift);
+    uint64_t mask = dup_const(MO_UW, 0xffff >> shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shri_i64(t, a, shift);
@@ -4405,7 +4405,7 @@ const GVecGen2i sri_op[4] = {
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_shr32_ins_i32,
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
@@ -4433,7 +4433,7 @@ static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)

 static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff << shift);
+    uint64_t mask = dup_const(MO_UW, 0xffff << shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shli_i64(t, a, shift);
@@ -4483,7 +4483,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_shl32_ins_i32,
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
@@ -4579,7 +4579,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_mla32_i32,
       .fniv = gen_mla_vec,
       .load_dest = true,
@@ -4603,7 +4603,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_mls32_i32,
       .fniv = gen_mls_vec,
       .load_dest = true,
@@ -4649,7 +4649,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_helper_neon_tst_u16,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_cmtst_i32,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
@@ -4686,7 +4686,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_h,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_s,
       .write_aofs = true,
@@ -4724,7 +4724,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_h,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_s,
       .opt_opc = vecop_list_sqadd,
@@ -4762,7 +4762,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_h,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_s,
       .opt_opc = vecop_list_uqsub,
@@ -4800,7 +4800,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_h,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_s,
       .opt_opc = vecop_list_sqsub,
@@ -6876,7 +6876,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     size = MO_UB;
                     element = (insn >> 17) & 7;
                 } else if (insn & (1 << 17)) {
-                    size = MO_16;
+                    size = MO_UW;
                     element = (insn >> 18) & 3;
                 } else {
                     size = MO_32;
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 0e45300..0535bae 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -323,7 +323,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 {
     if (CODE64(s)) {
-        return ot == MO_16 ? MO_16 : MO_64;
+        return ot == MO_UW ? MO_UW : MO_64;
     } else {
         return ot;
     }
@@ -332,7 +332,7 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 /* Select the size of the stack pointer.  */
 static inline TCGMemOp mo_stacksize(DisasContext *s)
 {
-    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
@@ -356,7 +356,7 @@ static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
    Used for decoding operand size of port opcodes.  */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_UB;
+    return b & 1 ? (ot == MO_UW ? MO_UW : MO_32) : MO_UB;
 }

 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
@@ -369,7 +369,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
             tcg_gen_deposit_tl(cpu_regs[reg - 4], cpu_regs[reg - 4], t0, 8, 8);
         }
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16);
         break;
     case MO_32:
@@ -473,7 +473,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
             return;
         }
         break;
-    case MO_16:
+    case MO_UW:
         /* 16 bit address */
         tcg_gen_ext16u_tl(s->A0, a0);
         a0 = s->A0;
@@ -530,7 +530,7 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
             tcg_gen_ext8u_tl(dst, src);
         }
         return dst;
-    case MO_16:
+    case MO_UW:
         if (sign) {
             tcg_gen_ext16s_tl(dst, src);
         } else {
@@ -583,7 +583,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     case MO_UB:
         gen_helper_inb(v, cpu_env, n);
         break;
-    case MO_16:
+    case MO_UW:
         gen_helper_inw(v, cpu_env, n);
         break;
     case MO_32:
@@ -600,7 +600,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     case MO_UB:
         gen_helper_outb(cpu_env, v, n);
         break;
-    case MO_16:
+    case MO_UW:
         gen_helper_outw(cpu_env, v, n);
         break;
     case MO_32:
@@ -622,7 +622,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
         case MO_UB:
             gen_helper_check_iob(cpu_env, s->tmp2_i32);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_check_iow(cpu_env, s->tmp2_i32);
             break;
         case MO_32:
@@ -1562,7 +1562,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
         tcg_gen_ext8u_tl(s->T0, s->T0);
         tcg_gen_muli_tl(s->T0, s->T0, 0x01010101);
         goto do_long;
-    case MO_16:
+    case MO_UW:
         /* Replicate the 16-bit input so that a 32-bit rotate works.  */
         tcg_gen_deposit_tl(s->T0, s->T0, s->T0, 16, 16);
         goto do_long;
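The replication above is the standard trick for doing a 16-bit rotate with a
32-bit rotate op: once the halfword is copied into both halves, any 32-bit
rotate leaves the 16-bit result in the low half.  A scalar sketch of one
direction (hypothetical helper, not the generated TCG):

    #include <stdint.h>

    static uint16_t ror16_via_ror32(uint16_t x, unsigned n)
    {
        uint32_t t = ((uint32_t)x << 16) | x;   /* the deposit above */
        n &= 15;
        return n ? (uint16_t)((t >> n) | (t << (32 - n))) : x;
    }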
@@ -1664,7 +1664,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
         case MO_UB:
             mask = 7;
             goto do_shifts;
-        case MO_16:
+        case MO_UW:
             mask = 15;
         do_shifts:
             shift = op2 & mask;
@@ -1722,7 +1722,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UB:
             gen_helper_rcrb(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_rcrw(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_32:
@@ -1741,7 +1741,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UB:
             gen_helper_rclb(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_rclw(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_32:
@@ -1778,7 +1778,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_gen_andi_tl(count, count_in, mask);

     switch (ot) {
-    case MO_16:
+    case MO_UW:
         /* Note: we implement the Intel behaviour for shift count > 16.
            This means "shrdw C, B, A" shifts A:B:A >> C.  Build the B:A
            portion by constructing it as a 32-bit value.  */
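The Intel behaviour described in that comment can be modelled with a 48-bit
intermediate; a sketch of the semantics only, not of the TCG expansion:

    #include <stdint.h>

    /* "shrdw count, b, a": for count in 0..31, Intel shifts A:B:A right. */
    static uint16_t shrdw_model(uint16_t a, uint16_t b, unsigned count)
    {
        uint64_t t = ((uint64_t)a << 32) | ((uint32_t)b << 16) | a;
        return (uint16_t)(t >> (count & 31));
    }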
@@ -1817,7 +1817,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
             tcg_gen_shl_tl(s->T1, s->T1, s->tmp4);
         } else {
             tcg_gen_shl_tl(s->tmp0, s->T0, s->tmp0);
-            if (ot == MO_16) {
+            if (ot == MO_UW) {
                 /* Only needed if count > 16, for Intel behaviour.  */
                 tcg_gen_subfi_tl(s->tmp4, 33, count);
                 tcg_gen_shr_tl(s->tmp4, s->T1, s->tmp4);
@@ -2026,7 +2026,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s,
         }
         break;

-    case MO_16:
+    case MO_UW:
         if (mod == 0) {
             if (rm == 6) {
                 base = -1;
@@ -2187,7 +2187,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     case MO_UB:
         ret = x86_ldub_code(env, s);
         break;
-    case MO_16:
+    case MO_UW:
         ret = x86_lduw_code(env, s);
         break;
     case MO_32:
@@ -2400,12 +2400,12 @@ static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)

 static inline void gen_stack_A0(DisasContext *s)
 {
-    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_16, cpu_regs[R_ESP], R_SS, -1);
+    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_UW, cpu_regs[R_ESP], R_SS, -1);
 }

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2421,7 +2421,7 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s)
 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
     TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
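The "1 << d_ot" here (and "1 << ot" elsewhere) works because the TCGMemOp
size field is assumed to encode log2 of the access size in bytes, which is
also why MO_UW must keep MO_16's value of 1:

    /* Assumed size encoding (flags such as MO_SIGN/MO_BSWAP live in
       higher bits):
           MO_UB = 0  ->  1 << 0 = 1 byte
           MO_UW = 1  ->  1 << 1 = 2 bytes
           MO_32 = 2  ->  1 << 2 = 4 bytes
           MO_64 = 3  ->  1 << 3 = 8 bytes */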
@@ -3613,7 +3613,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
         case 0xc4: /* pinsrw */
         case 0x1c4:
             s->rip_offset = 1;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             val = x86_ldub_code(env, s);
             if (b1) {
                 val &= 7;
@@ -3786,7 +3786,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if ((b & 0xff) == 0xf0) {
                     ot = MO_UB;
                 } else if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
                 } else {
                     ot = MO_64;
                 }
@@ -3815,7 +3815,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     goto illegal_op;
                 }
                 if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
                 } else {
                     ot = MO_64;
                 }
@@ -4630,7 +4630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
            data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
            over 0x66 if both are present.  */
-        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_16 : MO_32);
+        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_32);
         /* In 64-bit mode, 0x67 selects 32-bit addressing.  */
         aflag = (prefixes & PREFIX_ADR ? MO_32 : MO_64);
     } else {
@@ -4638,13 +4638,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (s->code32 ^ ((prefixes & PREFIX_DATA) != 0)) {
             dflag = MO_32;
         } else {
-            dflag = MO_16;
+            dflag = MO_UW;
         }
         /* In 16/32-bit mode, 0x67 selects the opposite addressing.  */
         if (s->code32 ^ ((prefixes & PREFIX_ADR) != 0)) {
             aflag = MO_32;
         }  else {
-            aflag = MO_16;
+            aflag = MO_UW;
         }
     }
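In 16/32-bit mode the 0x66 and 0x67 prefixes simply toggle the CODE32
default, so both selections above reduce to an XOR; a sketch with stand-in
values:

    /* code32  prefix (0x66 or 0x67)  ->  size
          0        0                     MO_UW
          0        1                     MO_32
          1        0                     MO_32
          1        1                     MO_UW  */
    typedef enum { MO_UW = 1, MO_32 = 2 } OpSize;   /* stand-in values */

    static OpSize eff_size(int code32, int prefixed)
    {
        return (code32 ^ prefixed) ? MO_32 : MO_UW;
    }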

@@ -4872,21 +4872,21 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_gen_ext8u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_andi_tl(cpu_cc_src, s->T0, 0xff00);
                 set_cc_op(s, CC_OP_MULB);
                 break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
+            case MO_UW:
+                gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX);
                 tcg_gen_ext16u_tl(s->T0, s->T0);
                 tcg_gen_ext16u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_src, s->T0);
                 set_cc_op(s, CC_OP_MULW);
                 break;
@@ -4921,24 +4921,24 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_gen_ext8s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_ext8s_tl(s->tmp0, s->T0);
                 tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
                 set_cc_op(s, CC_OP_MULB);
                 break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
+            case MO_UW:
+                gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX);
                 tcg_gen_ext16s_tl(s->T0, s->T0);
                 tcg_gen_ext16s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_ext16s_tl(s->tmp0, s->T0);
                 tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
                 tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
                 set_cc_op(s, CC_OP_MULW);
                 break;
             default:
@@ -4972,7 +4972,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             case MO_UB:
                 gen_helper_divb_AL(cpu_env, s->T0);
                 break;
-            case MO_16:
+            case MO_UW:
                 gen_helper_divw_AX(cpu_env, s->T0);
                 break;
             default:
@@ -4991,7 +4991,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             case MO_UB:
                 gen_helper_idivb_AL(cpu_env, s->T0);
                 break;
-            case MO_16:
+            case MO_UW:
                 gen_helper_idivw_AX(cpu_env, s->T0);
                 break;
             default:
@@ -5026,7 +5026,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* operand size for jumps is 64 bit */
                 ot = MO_64;
             } else if (op == 3 || op == 5) {
-                ot = dflag != MO_16 ? MO_32 + (rex_w == 1) : MO_16;
+                ot = dflag != MO_UW ? MO_32 + (rex_w == 1) : MO_UW;
             } else if (op == 6) {
                 /* default push size is 64 bit */
                 ot = mo_pushpop(s, dflag);
@@ -5057,7 +5057,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 2: /* call Ev */
             /* XXX: optimize if memory (no 'and' is necessary) */
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_ext16u_tl(s->T0, s->T0);
             }
             next_eip = s->pc - s->cs_base;
@@ -5070,7 +5070,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 3: /* lcall Ev */
             gen_op_ld_v(s, ot, s->T1, s->A0);
             gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         do_lcall:
             if (s->pe && !s->vm86) {
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -5087,7 +5087,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_jr(s, s->tmp4);
             break;
         case 4: /* jmp Ev */
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_ext16u_tl(s->T0, s->T0);
             }
             gen_op_jmp_v(s->T0);
@@ -5097,7 +5097,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 5: /* ljmp Ev */
             gen_op_ld_v(s, ot, s->T1, s->A0);
             gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         do_ljmp:
             if (s->pe && !s->vm86) {
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -5152,14 +5152,14 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
 #endif
         case MO_32:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
             break;
-        case MO_16:
+        case MO_UW:
             gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
             tcg_gen_ext8s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+            gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
             break;
         default:
             tcg_abort();
@@ -5180,11 +5180,11 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_sari_tl(s->T0, s->T0, 31);
             gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0);
             break;
-        case MO_16:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+        case MO_UW:
+            gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
             tcg_gen_sari_tl(s->T0, s->T0, 15);
-            gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+            gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
             break;
         default:
             tcg_abort();
@@ -5538,7 +5538,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = (modrm >> 3) & 7;
         if (reg >= 6 || reg == R_CS)
             goto illegal_op;
-        gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+        gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
         gen_movl_seg_T0(s, reg);
         /* Note that reg == R_SS in gen_movl_seg_T0 always sets is_jmp.  */
         if (s->base.is_jmp) {
@@ -5558,7 +5558,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (reg >= 6)
             goto illegal_op;
         gen_op_movl_T0_seg(s, reg);
-        ot = mod == 3 ? dflag : MO_16;
+        ot = mod == 3 ? dflag : MO_UW;
         gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
         break;

@@ -5734,7 +5734,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1b5: /* lgs Gv */
         op = R_GS;
     do_lxx:
-        ot = dflag != MO_16 ? MO_32 : MO_16;
+        ot = dflag != MO_UW ? MO_32 : MO_UW;
         modrm = x86_ldub_code(env, s);
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
@@ -5744,7 +5744,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_op_ld_v(s, ot, s->T1, s->A0);
         gen_add_A0_im(s, 1 << ot);
         /* load the segment first to handle exceptions properly */
-        gen_op_ld_v(s, MO_16, s->T0, s->A0);
+        gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         gen_movl_seg_T0(s, op);
         /* then put the data */
         gen_op_mov_reg_v(s, ot, reg, s->T1);
@@ -6287,7 +6287,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 case 0:
                     gen_helper_fnstsw(s->tmp2_i32, cpu_env);
                     tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32);
-                    gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                    gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                     break;
                 default:
                     goto unknown_op;
@@ -6575,14 +6575,14 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         break;
     case 0xe8: /* call im */
         {
-            if (dflag != MO_16) {
+            if (dflag != MO_UW) {
                 tval = (int32_t)insn_get(env, s, MO_32);
             } else {
-                tval = (int16_t)insn_get(env, s, MO_16);
+                tval = (int16_t)insn_get(env, s, MO_UW);
             }
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tval &= 0xffff;
             } else if (!CODE64(s)) {
                 tval &= 0xffffffff;
@@ -6601,20 +6601,20 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             ot = dflag;
             offset = insn_get(env, s, ot);
-            selector = insn_get(env, s, MO_16);
+            selector = insn_get(env, s, MO_UW);

             tcg_gen_movi_tl(s->T0, selector);
             tcg_gen_movi_tl(s->T1, offset);
         }
         goto do_lcall;
     case 0xe9: /* jmp im */
-        if (dflag != MO_16) {
+        if (dflag != MO_UW) {
             tval = (int32_t)insn_get(env, s, MO_32);
         } else {
-            tval = (int16_t)insn_get(env, s, MO_16);
+            tval = (int16_t)insn_get(env, s, MO_UW);
         }
         tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         } else if (!CODE64(s)) {
             tval &= 0xffffffff;
@@ -6630,7 +6630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             ot = dflag;
             offset = insn_get(env, s, ot);
-            selector = insn_get(env, s, MO_16);
+            selector = insn_get(env, s, MO_UW);

             tcg_gen_movi_tl(s->T0, selector);
             tcg_gen_movi_tl(s->T1, offset);
@@ -6639,7 +6639,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0xeb: /* jmp Jb */
         tval = (int8_t)insn_get(env, s, MO_UB);
         tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         }
         gen_jmp(s, tval);
@@ -6648,15 +6648,15 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         tval = (int8_t)insn_get(env, s, MO_UB);
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
-        if (dflag != MO_16) {
+        if (dflag != MO_UW) {
             tval = (int32_t)insn_get(env, s, MO_32);
         } else {
-            tval = (int16_t)insn_get(env, s, MO_16);
+            tval = (int16_t)insn_get(env, s, MO_UW);
         }
     do_jcc:
         next_eip = s->pc - s->cs_base;
         tval += next_eip;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         }
         gen_bnd_jmp(s);
@@ -6697,7 +6697,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         } else {
             ot = gen_pop_T0(s);
             if (s->cpl == 0) {
-                if (dflag != MO_16) {
+                if (dflag != MO_UW) {
                     gen_helper_write_eflags(cpu_env, s->T0,
                                             tcg_const_i32((TF_MASK | AC_MASK |
                                                            ID_MASK | NT_MASK |
@@ -6712,7 +6712,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 }
             } else {
                 if (s->cpl <= s->iopl) {
-                    if (dflag != MO_16) {
+                    if (dflag != MO_UW) {
                         gen_helper_write_eflags(cpu_env, s->T0,
                                                 tcg_const_i32((TF_MASK |
                                                                AC_MASK |
@@ -6729,7 +6729,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                                                               & 0xffff));
                     }
                 } else {
-                    if (dflag != MO_16) {
+                    if (dflag != MO_UW) {
                         gen_helper_write_eflags(cpu_env, s->T0,
                                            tcg_const_i32((TF_MASK | AC_MASK |
                                                           ID_MASK | NT_MASK)));
@@ -7110,7 +7110,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_op_mov_v_reg(s, ot, s->T0, reg);
         gen_lea_modrm(env, s, modrm);
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-        if (ot == MO_16) {
+        if (ot == MO_UW) {
             gen_helper_boundw(cpu_env, s->A0, s->tmp2_i32);
         } else {
             gen_helper_boundl(cpu_env, s->A0, s->tmp2_i32);
@@ -7149,7 +7149,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tval = (int8_t)insn_get(env, s, MO_UB);
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tval &= 0xffff;
             }

@@ -7291,7 +7291,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_READ);
             tcg_gen_ld32u_tl(s->T0, cpu_env,
                              offsetof(CPUX86State, ldt.selector));
-            ot = mod == 3 ? dflag : MO_16;
+            ot = mod == 3 ? dflag : MO_UW;
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
         case 2: /* lldt */
@@ -7301,7 +7301,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 gen_helper_lldt(cpu_env, s->tmp2_i32);
             }
@@ -7312,7 +7312,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_READ);
             tcg_gen_ld32u_tl(s->T0, cpu_env,
                              offsetof(CPUX86State, tr.selector));
-            ot = mod == 3 ? dflag : MO_16;
+            ot = mod == 3 ? dflag : MO_UW;
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
         case 3: /* ltr */
@@ -7322,7 +7322,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 gen_helper_ltr(cpu_env, s->tmp2_i32);
             }
@@ -7331,7 +7331,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 5: /* verw */
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             gen_update_cc_op(s);
             if (op == 4) {
                 gen_helper_verr(cpu_env, s->T0);
@@ -7353,10 +7353,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0,
                              cpu_env, offsetof(CPUX86State, gdt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
+            gen_op_st_v(s, MO_UW, s->T0, s->A0);
             gen_add_A0_im(s, 2);
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
@@ -7408,10 +7408,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_READ);
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
+            gen_op_st_v(s, MO_UW, s->T0, s->A0);
             gen_add_A0_im(s, 2);
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
@@ -7558,10 +7558,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_GDTR_WRITE);
             gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
             gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
@@ -7575,10 +7575,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_WRITE);
             gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
             gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
@@ -7590,9 +7590,9 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, cr[0]));
             if (CODE64(s)) {
                 mod = (modrm >> 6) & 3;
-                ot = (mod != 3 ? MO_16 : s->dflag);
+                ot = (mod != 3 ? MO_UW : s->dflag);
             } else {
-                ot = MO_16;
+                ot = MO_UW;
             }
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
@@ -7619,7 +7619,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 break;
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_WRITE_CR0);
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             gen_helper_lmsw(cpu_env, s->T0);
             gen_jmp_im(s, s->pc - s->cs_base);
             gen_eob(s);
@@ -7720,7 +7720,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             t0 = tcg_temp_local_new();
             t1 = tcg_temp_local_new();
             t2 = tcg_temp_local_new();
-            ot = MO_16;
+            ot = MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = (modrm >> 3) & 7;
             mod = (modrm >> 6) & 3;
@@ -7765,10 +7765,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             TCGv t0;
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            ot = dflag != MO_16 ? MO_32 : MO_16;
+            ot = dflag != MO_UW ? MO_32 : MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = ((modrm >> 3) & 7) | rex_r;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             t0 = tcg_temp_local_new();
             gen_update_cc_op(s);
             if (b == 0x102) {
@@ -7813,7 +7813,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcl */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 gen_bndck(env, s, modrm, TCG_COND_LTU, cpu_bndl[reg]);
@@ -7821,7 +7821,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcu */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 TCGv_i64 notu = tcg_temp_new_i64();
@@ -7830,7 +7830,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_temp_free_i64(notu);
             } else if (prefixes & PREFIX_DATA) {
                 /* bndmov -- from reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
+                if (reg >= 4 || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 if (mod == 3) {
@@ -7865,7 +7865,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
+                    || s->aflag == MO_UW
                     || a.base < -1) {
                     goto illegal_op;
                 }
@@ -7903,7 +7903,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndmk */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
@@ -7931,13 +7931,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcn */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 gen_bndck(env, s, modrm, TCG_COND_GTU, cpu_bndu[reg]);
             } else if (prefixes & PREFIX_DATA) {
                 /* bndmov -- to reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
+                if (reg >= 4 || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 if (mod == 3) {
@@ -7970,7 +7970,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
+                    || s->aflag == MO_UW
                     || a.base < -1) {
                     goto illegal_op;
                 }
@@ -8341,7 +8341,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = ((modrm >> 3) & 7) | rex_r;

         if (s->prefix & PREFIX_DATA) {
-            ot = MO_16;
+            ot = MO_UW;
         } else {
             ot = mo_64_32(dflag);
         }
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 20a9777..525c7fe 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -21087,7 +21087,7 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc,
             imm = sextract32(ctx->opcode, 11, 11);
             imm = (int16_t)(imm << 6) >> 6;
             if (rt != 0) {
-                tcg_gen_movi_tl(cpu_gpr[rt], dup_const(MO_16, imm));
+                tcg_gen_movi_tl(cpu_gpr[rt], dup_const(MO_UW, imm));
             }
         }
         break;
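dup_const(MO_UW, imm) replicates the low halfword of imm across all four
16-bit lanes of the register; the implementation is visible in the
tcg/tcg-op-gvec.c hunk further down:

    /* dup_const(MO_UW, 0x1234) == 0x0001000100010001ull * 0x1234
                                 == 0x1234123412341234ull */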
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 4130dd1..71efef4 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -406,29 +406,29 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
 GEN_VXFORM_V(vaddubm, MO_UB, tcg_gen_gvec_add, 0, 0);
 GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10cuq, PPC_NONE, PPC2_ISA300, 0x0000F800)
-GEN_VXFORM_V(vadduhm, MO_16, tcg_gen_gvec_add, 0, 1);
+GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1);
 GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2);
 GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
 GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
-GEN_VXFORM_V(vsubuhm, MO_16, tcg_gen_gvec_sub, 0, 17);
+GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17);
 GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18);
 GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
 GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
-GEN_VXFORM_V(vmaxuh, MO_16, tcg_gen_gvec_umax, 1, 1);
+GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1);
 GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2);
 GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
 GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
-GEN_VXFORM_V(vmaxsh, MO_16, tcg_gen_gvec_smax, 1, 5);
+GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5);
 GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6);
 GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
 GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
-GEN_VXFORM_V(vminuh, MO_16, tcg_gen_gvec_umin, 1, 9);
+GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9);
 GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10);
 GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
 GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
-GEN_VXFORM_V(vminsh, MO_16, tcg_gen_gvec_smin, 1, 13);
+GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13);
 GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14);
 GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
 GEN_VXFORM(vavgub, 1, 16);
@@ -531,18 +531,18 @@ GEN_VXFORM(vmulesb, 4, 12);
 GEN_VXFORM(vmulesh, 4, 13);
 GEN_VXFORM(vmulesw, 4, 14);
 GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4);
-GEN_VXFORM_V(vslh, MO_16, tcg_gen_gvec_shlv, 2, 5);
+GEN_VXFORM_V(vslh, MO_UW, tcg_gen_gvec_shlv, 2, 5);
 GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
 GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
-GEN_VXFORM_V(vsrh, MO_16, tcg_gen_gvec_shrv, 2, 9);
+GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9);
 GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10);
 GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
 GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
-GEN_VXFORM_V(vsrah, MO_16, tcg_gen_gvec_sarv, 2, 13);
+GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13);
 GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14);
 GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
 GEN_VXFORM(vsrv, 2, 28);
@@ -592,18 +592,18 @@ static void glue(gen_, NAME)(DisasContext *ctx)                         \
 GEN_VXFORM_SAT(vaddubs, MO_UB, add, usadd, 0, 8);
 GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10uq, PPC_NONE, PPC2_ISA300, 0x0000F800)
-GEN_VXFORM_SAT(vadduhs, MO_16, add, usadd, 0, 9);
+GEN_VXFORM_SAT(vadduhs, MO_UW, add, usadd, 0, 9);
 GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \
                 vmul10euq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10);
 GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12);
-GEN_VXFORM_SAT(vaddshs, MO_16, add, ssadd, 0, 13);
+GEN_VXFORM_SAT(vaddshs, MO_UW, add, ssadd, 0, 13);
 GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14);
 GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24);
-GEN_VXFORM_SAT(vsubuhs, MO_16, sub, ussub, 0, 25);
+GEN_VXFORM_SAT(vsubuhs, MO_UW, sub, ussub, 0, 25);
 GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
 GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
-GEN_VXFORM_SAT(vsubshs, MO_16, sub, sssub, 0, 29);
+GEN_VXFORM_SAT(vsubshs, MO_UW, sub, sssub, 0, 29);
 GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
 GEN_VXFORM(vaddcuq, 0, 5);
@@ -913,7 +913,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
     }

 GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
-GEN_VXFORM_VSPLT(vsplth, MO_16, 6, 9);
+GEN_VXFORM_VSPLT(vsplth, MO_UW, 6, 9);
 GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
 GEN_VXFORM_UIMM_SPLAT(vextractuh, 6, 9, 14);
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index bb424c8..65da6b3 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -47,7 +47,7 @@
 #define NUM_VEC_ELEMENT_BITS(es) (NUM_VEC_ELEMENT_BYTES(es) * BITS_PER_BYTE)

 #define ES_8    MO_UB
-#define ES_16   MO_16
+#define ES_16   MO_UW
 #define ES_32   MO_32
 #define ES_64   MO_64
 #define ES_128  4
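ES_16 therefore keeps the numeric value 1.  Assuming NUM_VEC_ELEMENT_BYTES(es)
expands to (1 << (es)), the table reads:

    /* ES_8   = MO_UB = 0  ->  1 byte,    8 bits
       ES_16  = MO_UW = 1  ->  2 bytes,  16 bits
       ES_32  = MO_32 = 2  ->  4 bytes,  32 bits
       ES_64  = MO_64 = 3  ->  8 bytes,  64 bits
       ES_128 =         4  -> 16 bytes, 128 bits */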
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index b813054..28e1b1d 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -78,7 +78,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
     switch (es) {
     case MO_UB:
         return s390_vec_read_element8(v, enr);
-    case MO_16:
+    case MO_UW:
         return s390_vec_read_element16(v, enr);
     case MO_32:
         return s390_vec_read_element32(v, enr);
@@ -124,7 +124,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
     case MO_UB:
         s390_vec_write_element8(v, enr, data);
         break;
-    case MO_16:
+    case MO_UW:
         s390_vec_write_element16(v, enr, data);
         break;
     case MO_32:
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index e4e0845..3d90c4b 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -430,20 +430,20 @@ typedef enum {
     /* Load/store register.  Described here as 3.3.12, but the helper
        that emits them can transform to 3.3.10 or 3.3.13.  */
     I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
-    I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_16 << 30,
+    I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_UW << 30,
     I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_32 << 30,
     I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,

     I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
-    I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_16 << 30,
+    I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_UW << 30,
     I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_32 << 30,
     I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,

     I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
-    I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_16 << 30,
+    I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UW << 30,

     I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30,
-    I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_16 << 30,
+    I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UW << 30,
     I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30,

     I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_32 << 30,
@@ -870,7 +870,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,

     /*
      * Test all bytes 0x00 or 0xff second.  This can match cases that
-     * might otherwise take 2 or 3 insns for MO_16 or MO_32 below.
+     * might otherwise take 2 or 3 insns for MO_UW or MO_32 below.
      */
     for (i = imm8 = 0; i < 8; i++) {
         uint8_t byte = v64 >> (i * 8);
@@ -889,7 +889,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
      * cannot find an expansion there's no point checking a larger
      * width because we already know by replication it cannot match.
      */
-    if (v64 == dup_const(MO_16, v64)) {
+    if (v64 == dup_const(MO_UW, v64)) {
         uint16_t v16 = v64;

         if (is_shimm16(v16, &cmode, &imm8)) {
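The "v64 == dup_const(MO_UW, v64)" test above is just a replication check;
an equivalent stand-alone predicate (hypothetical name):

    #include <stdbool.h>
    #include <stdint.h>

    static bool is_dup16(uint64_t v)
    {
        return v == 0x0001000100010001ull * (uint16_t)v;
    }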
@@ -1733,7 +1733,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
         if (bswap) {
             tcg_out_ldst_r(s, I3312_LDRH, data_r, addr_r, otype, off_r);
             tcg_out_rev16(s, data_r, data_r);
-            tcg_out_sxt(s, ext, MO_16, data_r, data_r);
+            tcg_out_sxt(s, ext, MO_UW, data_r, data_r);
         } else {
             tcg_out_ldst_r(s, (ext ? I3312_LDRSHX : I3312_LDRSHW),
                            data_r, addr_r, otype, off_r);
@@ -1775,7 +1775,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
     case MO_UB:
         tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap && data_r != TCG_REG_XZR) {
             tcg_out_rev16(s, TCG_REG_TMP, data_r);
             data_r = TCG_REG_TMP;
@@ -2190,7 +2190,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext16s_i64:
     case INDEX_op_ext16s_i32:
-        tcg_out_sxt(s, ext, MO_16, a0, a1);
+        tcg_out_sxt(s, ext, MO_UW, a0, a1);
         break;
     case INDEX_op_ext_i32_i64:
     case INDEX_op_ext32s_i64:
@@ -2202,7 +2202,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext16u_i64:
     case INDEX_op_ext16u_i32:
-        tcg_out_uxt(s, MO_16, a0, a1);
+        tcg_out_uxt(s, MO_UW, a0, a1);
         break;
     case INDEX_op_extu_i32_i64:
     case INDEX_op_ext32u_i64:
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 542ffa8..0bd400e 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1432,7 +1432,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     case MO_UB:
         argreg = tcg_out_arg_reg8(s, argreg, datalo);
         break;
-    case MO_16:
+    case MO_UW:
         argreg = tcg_out_arg_reg16(s, argreg, datalo);
         break;
     case MO_32:
@@ -1624,7 +1624,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     case MO_UB:
         tcg_out_st8_r(s, cond, datalo, addrlo, addend);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_bswap16st(s, cond, TCG_REG_R0, datalo);
             tcg_out_st16_r(s, cond, TCG_REG_R0, addrlo, addend);
@@ -1669,7 +1669,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
     case MO_UB:
         tcg_out_st8_12(s, COND_AL, datalo, addrlo, 0);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_bswap16st(s, COND_AL, TCG_REG_R0, datalo);
             tcg_out_st16_8(s, COND_AL, TCG_REG_R0, addrlo, 0);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 0d68ba4..31c3664 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -893,7 +893,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
             tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a);
             a = r;
             /* FALLTHRU */
-        case MO_16:
+        case MO_UW:
             tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a);
             a = r;
             /* FALLTHRU */
@@ -927,7 +927,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
         case MO_32:
             tcg_out_vex_modrm_offset(s, OPC_VBROADCASTSS, r, 0, base, offset);
             break;
-        case MO_16:
+        case MO_UW:
             tcg_out_vex_modrm_offset(s, OPC_VPINSRW, r, r, base, offset);
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
@@ -2164,7 +2164,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         tcg_out_modrm_sib_offset(s, OPC_MOVB_EvGv + P_REXB_R + seg,
                                  datalo, base, index, 0, ofs);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_mov(s, TCG_TYPE_I32, scratch, datalo);
             tcg_out_rolw_8(s, scratch);
@@ -2747,15 +2747,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         OPC_PMAXUB, OPC_PMAXUW, OPC_PMAXUD, OPC_UD2
     };
     static int const shlv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16.  */
+        /* TODO: AVX512 adds support for MO_UW.  */
         OPC_UD2, OPC_UD2, OPC_VPSLLVD, OPC_VPSLLVQ
     };
     static int const shrv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16.  */
+        /* TODO: AVX512 adds support for MO_UW.  */
         OPC_UD2, OPC_UD2, OPC_VPSRLVD, OPC_VPSRLVQ
     };
     static int const sarv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16, MO_64.  */
+        /* TODO: AVX512 adds support for MO_UW, MO_64.  */
         OPC_UD2, OPC_UD2, OPC_VPSRAVD, OPC_UD2
     };
     static int const shls_insn[4] = {
@@ -2925,7 +2925,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         sub = args[3];
         goto gen_simd_imm8;
     case INDEX_op_x86_blend_vec:
-        if (vece == MO_16) {
+        if (vece == MO_UW) {
             insn = OPC_PBLENDW;
         } else if (vece == MO_32) {
             insn = (have_avx2 ? OPC_VPBLENDD : OPC_BLENDPS);
@@ -3290,9 +3290,9 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)

     case INDEX_op_shls_vec:
     case INDEX_op_shrs_vec:
-        return vece >= MO_16;
+        return vece >= MO_UW;
     case INDEX_op_sars_vec:
-        return vece >= MO_16 && vece <= MO_32;
+        return vece >= MO_UW && vece <= MO_32;

     case INDEX_op_shlv_vec:
     case INDEX_op_shrv_vec:
@@ -3314,7 +3314,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_usadd_vec:
     case INDEX_op_sssub_vec:
     case INDEX_op_ussub_vec:
-        return vece <= MO_16;
+        return vece <= MO_UW;
     case INDEX_op_smin_vec:
     case INDEX_op_smax_vec:
     case INDEX_op_umin_vec:
@@ -3352,13 +3352,13 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
               tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));

     if (shr) {
-        tcg_gen_shri_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_shri_vec(MO_16, t2, t2, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t2, t2, imm + 8);
     } else {
-        tcg_gen_shli_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_shli_vec(MO_16, t2, t2, imm + 8);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        tcg_gen_shri_vec(MO_16, t2, t2, 8);
+        tcg_gen_shli_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_shli_vec(MO_UW, t2, t2, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
+        tcg_gen_shri_vec(MO_UW, t2, t2, 8);
     }

     vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
@@ -3381,8 +3381,8 @@ static void expand_vec_sari(TCGType type, unsigned vece,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
         vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
-        tcg_gen_sari_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_sari_vec(MO_16, t2, t2, imm + 8);
+        tcg_gen_sari_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_sari_vec(MO_UW, t2, t2, imm + 8);
         vec_gen_3(INDEX_op_x86_packss_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2));
         tcg_temp_free_vec(t1);
@@ -3446,8 +3446,8 @@ static void expand_vec_mul(TCGType type, unsigned vece,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t2));
         vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(t2), tcgv_vec_arg(v2));
-        tcg_gen_mul_vec(MO_16, t1, t1, t2);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
+        tcg_gen_mul_vec(MO_UW, t1, t1, t2);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
         vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t1));
         tcg_temp_free_vec(t1);
@@ -3469,10 +3469,10 @@ static void expand_vec_mul(TCGType type, unsigned vece,
                   tcgv_vec_arg(t3), tcgv_vec_arg(v1), tcgv_vec_arg(t4));
         vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t4), tcgv_vec_arg(t4), tcgv_vec_arg(v2));
-        tcg_gen_mul_vec(MO_16, t1, t1, t2);
-        tcg_gen_mul_vec(MO_16, t3, t3, t4);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        tcg_gen_shri_vec(MO_16, t3, t3, 8);
+        tcg_gen_mul_vec(MO_UW, t1, t1, t2);
+        tcg_gen_mul_vec(MO_UW, t3, t3, t4);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
+        tcg_gen_shri_vec(MO_UW, t3, t3, 8);
         vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t3));
         tcg_temp_free_vec(t1);
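x86 has no 8-bit vector shifts, so expand_vec_shi above synthesizes them from
MO_UW operations: unpack each byte against itself, shift the 16-bit lanes,
and pack the surviving bytes back.  A scalar model of the shift-right path
(illustrative only, imm < 8):

    #include <stdint.h>

    static uint8_t vec_shr8_model(uint8_t x, unsigned imm)
    {
        uint16_t lane = ((uint16_t)x << 8) | x;   /* punpck with itself */
        return (uint8_t)(lane >> (imm + 8));      /* MO_UW shift, then pack */
    }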
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index c6d13ea..1780cb1 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1383,7 +1383,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UB:
         i = tcg_out_call_iarg_reg8(s, i, l->datalo_reg);
         break;
-    case MO_16:
+    case MO_UW:
         i = tcg_out_call_iarg_reg16(s, i, l->datalo_reg);
         break;
     case MO_32:
@@ -1570,12 +1570,12 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
         tcg_out_opc_imm(s, OPC_SB, lo, base, 0);
         break;

-    case MO_16 | MO_BSWAP:
+    case MO_UW | MO_BSWAP:
         tcg_out_opc_imm(s, OPC_ANDI, TCG_TMP1, lo, 0xffff);
         tcg_out_bswap16(s, TCG_TMP1, TCG_TMP1);
         lo = TCG_TMP1;
         /* FALLTHRU */
-    case MO_16:
+    case MO_UW:
         tcg_out_opc_imm(s, OPC_SH, lo, base, 0);
         break;

diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 9c60c0f..20bc19d 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1104,7 +1104,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UB:
         tcg_out_ext8u(s, a2, a2);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_ext16u(s, a2, a2);
         break;
     default:
@@ -1219,7 +1219,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_UB:
         tcg_out_opc_store(s, OPC_SB, base, lo, 0);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_opc_store(s, OPC_SH, base, lo, 0);
         break;
     case MO_32:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 479ee2e..85550b5 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -885,7 +885,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
     case MO_UB:
         tcg_out_arithi(s, r, r, 0xff, ARITH_AND);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_arithi(s, r, r, 16, SHIFT_SLL);
         tcg_out_arithi(s, r, r, 16, SHIFT_SRL);
         break;
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 9658c36..da409f5 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -308,7 +308,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
     switch (vece) {
     case MO_UB:
         return 0x0101010101010101ull * (uint8_t)c;
-    case MO_16:
+    case MO_UW:
         return 0x0001000100010001ull * (uint16_t)c;
     case MO_32:
         return 0x0000000100000001ull * (uint32_t)c;
@@ -327,7 +327,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
         tcg_gen_ext8u_i32(out, in);
         tcg_gen_muli_i32(out, out, 0x01010101);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_deposit_i32(out, in, in, 16, 16);
         break;
     case MO_32:
@@ -345,7 +345,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
         tcg_gen_ext8u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0101010101010101ull);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ext16u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0001000100010001ull);
         break;
@@ -558,7 +558,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
                 tcg_gen_extrl_i64_i32(t_32, in_64);
             } else if (vece == MO_UB) {
                 tcg_gen_movi_i32(t_32, in_c & 0xff);
-            } else if (vece == MO_16) {
+            } else if (vece == MO_UW) {
                 tcg_gen_movi_i32(t_32, in_c & 0xffff);
             } else {
                 tcg_gen_movi_i32(t_32, in_c);
@@ -1459,7 +1459,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             case MO_UB:
                 tcg_gen_ld8u_i32(in, cpu_env, aofs);
                 break;
-            case MO_16:
+            case MO_UW:
                 tcg_gen_ld16u_i32(in, cpu_env, aofs);
                 break;
             default:
@@ -1526,7 +1526,7 @@ void tcg_gen_gvec_dup16i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint16_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_16, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UW, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_dup8i(uint32_t dofs, uint32_t oprsz,
@@ -1579,7 +1579,7 @@ void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_add16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_addv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1613,7 +1613,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add16,
           .opt_opc = vecop_list_add,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_add_i32,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add32,
@@ -1644,7 +1644,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds16,
           .opt_opc = vecop_list_add,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_add_i32,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds32,
@@ -1685,7 +1685,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs16,
           .opt_opc = vecop_list_sub,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs32,
@@ -1732,7 +1732,7 @@ void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_sub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_subv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1764,7 +1764,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub16,
           .opt_opc = vecop_list_sub,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub32,
@@ -1795,7 +1795,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul16,
           .opt_opc = vecop_list_mul,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_mul_i32,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul32,
@@ -1824,7 +1824,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls16,
           .opt_opc = vecop_list_mul,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_mul_i32,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls32,
@@ -1862,7 +1862,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd32,
           .opt_opc = vecop_list,
@@ -1888,7 +1888,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub32,
           .opt_opc = vecop_list,
@@ -1930,7 +1930,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_usadd_i32,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd32,
@@ -1974,7 +1974,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_ussub_i32,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub32,
@@ -2002,7 +2002,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_smin_i32,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin32,
@@ -2030,7 +2030,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_umin_i32,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin32,
@@ -2058,7 +2058,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_smax_i32,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax32,
@@ -2086,7 +2086,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_umax_i32,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax32,
@@ -2127,7 +2127,7 @@ void tcg_gen_vec_neg8_i64(TCGv_i64 d, TCGv_i64 b)

 void tcg_gen_vec_neg16_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_negv_mask(d, b, m);
     tcg_temp_free_i64(m);
 }
@@ -2160,7 +2160,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_neg_i32,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg32,
@@ -2206,7 +2206,7 @@ static void tcg_gen_vec_abs8_i64(TCGv_i64 d, TCGv_i64 b)

 static void tcg_gen_vec_abs16_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    gen_absv_mask(d, b, MO_16);
+    gen_absv_mask(d, b, MO_UW);
 }

 void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2223,7 +2223,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_abs_i32,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs32,
@@ -2461,7 +2461,7 @@ void tcg_gen_vec_shl8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_shl16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff << c);
+    uint64_t mask = dup_const(MO_UW, 0xffff << c);
     tcg_gen_shli_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2480,7 +2480,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shli_i32,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl32i,
@@ -2512,7 +2512,7 @@ void tcg_gen_vec_shr8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_shr16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff >> c);
+    uint64_t mask = dup_const(MO_UW, 0xffff >> c);
     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2531,7 +2531,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shri_i32,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr32i,
@@ -2570,8 +2570,8 @@ void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_sar16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t s_mask = dup_const(MO_16, 0x8000 >> c);
-    uint64_t c_mask = dup_const(MO_16, 0xffff >> c);
+    uint64_t s_mask = dup_const(MO_UW, 0x8000 >> c);
+    uint64_t c_mask = dup_const(MO_UW, 0xffff >> c);
     TCGv_i64 s = tcg_temp_new_i64();

     tcg_gen_shri_i64(d, a, c);
@@ -2596,7 +2596,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sari_i32,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar32i,
@@ -2884,7 +2884,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shl_mod_i32,
           .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl32v,
@@ -2947,7 +2947,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shr_mod_i32,
           .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr32v,
@@ -3010,7 +3010,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sar_mod_i32,
           .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar32v,
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index d7ffc9e..b0a4d98 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -270,7 +270,7 @@ void tcg_gen_dup32i_vec(TCGv_vec r, uint32_t a)

 void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a)
 {
-    do_dupi_vec(r, MO_REG, dup_const(MO_16, a));
+    do_dupi_vec(r, MO_REG, dup_const(MO_UW, a));
 }

 void tcg_gen_dup8i_vec(TCGv_vec r, uint32_t a)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 61eda33..21d448c 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2723,7 +2723,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
     case MO_UB:
         op &= ~MO_BSWAP;
         break;
-    case MO_16:
+    case MO_UW:
         break;
     case MO_32:
         if (!is64) {
@@ -2810,7 +2810,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)

     if ((orig_memop ^ memop) & MO_BSWAP) {
         switch (orig_memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_bswap16_i32(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext16s_i32(val, val);
@@ -2837,7 +2837,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         swap = tcg_temp_new_i32();
         switch (memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_ext16u_i32(swap, val);
             tcg_gen_bswap16_i32(swap, swap);
             break;
@@ -2890,7 +2890,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)

     if ((orig_memop ^ memop) & MO_BSWAP) {
         switch (orig_memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_bswap16_i64(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext16s_i64(val, val);
@@ -2928,7 +2928,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         swap = tcg_temp_new_i64();
         switch (memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_ext16u_i64(swap, val);
             tcg_gen_bswap16_i64(swap, swap);
             break;
@@ -3025,8 +3025,8 @@ typedef void (*gen_atomic_op_i64)(TCGv_i64, TCGv_env, TCGv, TCGv_i64);

 static void * const table_cmpxchg[16] = {
     [MO_UB] = gen_helper_atomic_cmpxchgb,
-    [MO_16 | MO_LE] = gen_helper_atomic_cmpxchgw_le,
-    [MO_16 | MO_BE] = gen_helper_atomic_cmpxchgw_be,
+    [MO_UW | MO_LE] = gen_helper_atomic_cmpxchgw_le,
+    [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be,
     [MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le,
     [MO_32 | MO_BE] = gen_helper_atomic_cmpxchgl_be,
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
@@ -3249,8 +3249,8 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 #define GEN_ATOMIC_HELPER(NAME, OP, NEW)                                \
 static void * const table_##NAME[16] = {                                \
     [MO_UB] = gen_helper_atomic_##NAME##b,                               \
-    [MO_16 | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
-    [MO_16 | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
+    [MO_UW | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
+    [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
     [MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
     [MO_32 | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 5636d6b..a378887 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -1303,7 +1303,7 @@ uint64_t dup_const(unsigned vece, uint64_t c);
 #define dup_const(VECE, C)                                         \
     (__builtin_constant_p(VECE)                                    \
      ?   ((VECE) == MO_UB ? 0x0101010101010101ull * (uint8_t)(C)   \
-        : (VECE) == MO_16 ? 0x0001000100010001ull * (uint16_t)(C)  \
+        : (VECE) == MO_UW ? 0x0001000100010001ull * (uint16_t)(C)  \
         : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C)  \
         : dup_const(VECE, C))                                      \
      : dup_const(VECE, C))
--
1.8.3.1
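
As a worked example of the lane replication that dup_const performs for
the 16-bit element size (a standalone sketch, not part of the patch; the
helper name dup_const_uw is invented here):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Standalone sketch of dup_const(MO_UW, c): replicate a 16-bit value
 * across the four 16-bit lanes of a 64-bit constant. */
static uint64_t dup_const_uw(uint64_t c)
{
    return 0x0001000100010001ull * (uint16_t)c;
}

int main(void)
{
    /* Prints 0x8000800080008000, the sign-bit mask built by
     * tcg_gen_vec_add16_i64 in the hunk above. */
    printf("0x%016" PRIx64 "\n", dup_const_uw(0x8000));
    return 0;
}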




* Re: [Qemu-riscv] [Qemu-devel] [PATCH v2 02/20] tcg: Replace MO_16 with MO_UW alias
@ 2019-07-22 15:39   ` tony.nguyen
  0 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:39 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, david, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, mst, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, claudio.fontana, qemu-s390x, qemu-ppc,
	amarkovic, pbonzini, aurelien

Preparation for splitting MO_16 out from TCGMemOp into a new
accelerator-independent MemOp.

As MO_16 will become a value of MemOp, existing comparisons and
coercions between it and TCGMemOp values will trigger -Wenum-compare
and -Wenum-conversion.
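
For illustration, a minimal sketch of the warning class this avoids
(hypothetical enum definitions, not the real QEMU ones; -Wenum-compare
is enabled by -Wall, -Wenum-conversion by Clang and by GCC 10+):

/* Hypothetical illustration -- not the QEMU definitions. */
typedef enum { MEMOP_UB, MEMOP_UW } MemOp;      /* new accelerator-independent enum */
typedef enum { TCG_MO_UB, TCG_MO_UW } TCGMemOp; /* existing TCG-only enum */

static int is_halfword(TCGMemOp op)
{
    return op == MEMOP_UW;  /* -Wenum-compare: comparing distinct enum types */
}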

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |   4 +-
 target/arm/translate-a64.c          |  90 ++++++++--------
 target/arm/translate-sve.c          |  40 ++++----
 target/arm/translate-vfp.inc.c      |   2 +-
 target/arm/translate.c              |  32 +++---
 target/i386/translate.c             | 200 ++++++++++++++++++------------------
 target/mips/translate.c             |   2 +-
 target/ppc/translate/vmx-impl.inc.c |  28 ++---
 target/s390x/translate_vx.inc.c     |   2 +-
 target/s390x/vec.h                  |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  20 ++--
 tcg/arm/tcg-target.inc.c            |   6 +-
 tcg/i386/tcg-target.inc.c           |  48 ++++-----
 tcg/mips/tcg-target.inc.c           |   6 +-
 tcg/riscv/tcg-target.inc.c          |   4 +-
 tcg/sparc/tcg-target.inc.c          |   2 +-
 tcg/tcg-op-gvec.c                   |  72 ++++++-------
 tcg/tcg-op-vec.c                    |   2 +-
 tcg/tcg-op.c                        |  18 ++--
 tcg/tcg.h                           |   2 +-
 20 files changed, 292 insertions(+), 292 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 4c7e11f..f6bef3d 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1546,7 +1546,7 @@ void HELPER(sve_cpy_m_h)(void *vd, void *vn, void *vg,
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;

-    mm = dup_const(MO_16, mm);
+    mm = dup_const(MO_UW, mm);
     for (i = 0; i < opr_sz; i += 1) {
         uint64_t nn = n[i];
         uint64_t pp = expand_pred_h(pg[H1(i)]);
@@ -1600,7 +1600,7 @@ void HELPER(sve_cpy_z_h)(void *vd, void *vg, uint64_t val, uint32_t desc)
     uint64_t *d = vd;
     uint8_t *pg = vg;

-    val = dup_const(MO_16, val);
+    val = dup_const(MO_UW, val);
     for (i = 0; i < opr_sz; i += 1) {
         d[i] = val & expand_pred_h(pg[H1(i)]);
     }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index f840b43..3acfccb 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -492,7 +492,7 @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg)
 {
     TCGv_i32 v = tcg_temp_new_i32();

-    tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_16));
+    tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_UW));
     return v;
 }

@@ -996,7 +996,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_UB:
         tcg_gen_ld8u_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ld16u_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1005,7 +1005,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_SB:
         tcg_gen_ld8s_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16|MO_SIGN:
+    case MO_SW:
         tcg_gen_ld16s_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32|MO_SIGN:
@@ -1028,13 +1028,13 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
     case MO_UB:
         tcg_gen_ld8u_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ld16u_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_SB:
         tcg_gen_ld8s_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16|MO_SIGN:
+    case MO_SW:
         tcg_gen_ld16s_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1055,7 +1055,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
     case MO_UB:
         tcg_gen_st8_i64(tcg_src, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i64(tcg_src, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1077,7 +1077,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
     case MO_UB:
         tcg_gen_st8_i32(tcg_src, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i32(tcg_src, cpu_env, vect_off);
         break;
     case MO_32:
@@ -5269,7 +5269,7 @@ static void handle_fp_compare(DisasContext *s, int size,
                               bool cmp_with_zero, bool signal_all_nans)
 {
     TCGv_i64 tcg_flags = tcg_temp_new_i64();
-    TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
+    TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);

     if (size == MO_64) {
         TCGv_i64 tcg_vn, tcg_vm;
@@ -5306,7 +5306,7 @@ static void handle_fp_compare(DisasContext *s, int size,
                 gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             }
             break;
-        case MO_16:
+        case MO_UW:
             if (signal_all_nans) {
                 gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             } else {
@@ -5360,7 +5360,7 @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
         size = MO_64;
         break;
     case 3:
-        size = MO_16;
+        size = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -5411,7 +5411,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
         size = MO_64;
         break;
     case 3:
-        size = MO_16;
+        size = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -5477,7 +5477,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
         sz = MO_64;
         break;
     case 3:
-        sz = MO_16;
+        sz = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -6282,7 +6282,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
         sz = MO_64;
         break;
     case 3:
-        sz = MO_16;
+        sz = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -6593,7 +6593,7 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
             break;
         case 3:
             /* 16 bit */
-            tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_16));
+            tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UW));
             break;
         default:
             g_assert_not_reached();
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        TCGMemOp msize = esize == 16 ? MO_UW : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -7204,7 +7204,7 @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
          * Note that correct NaN propagation requires that we do these
          * operations in exactly the order specified by the pseudocode.
          */
-        TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
+        TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);
         int fpopcode = opcode | is_min << 4 | is_u << 5;
         int vmap = (1 << elements) - 1;
         TCGv_i32 tcg_res32 = do_reduction_op(s, fpopcode, rn, esize,
@@ -7591,7 +7591,7 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
             } else {
                 if (o2) {
                     /* FMOV (vector, immediate) - half-precision */
-                    imm = vfp_expand_imm(MO_16, abcdefgh);
+                    imm = vfp_expand_imm(MO_UW, abcdefgh);
                     /* now duplicate across the lanes */
                     imm = bitfield_replicate(imm, 16);
                 } else {
@@ -7699,7 +7699,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
                 unallocated_encoding(s);
                 return;
             } else {
-                size = MO_16;
+                size = MO_UW;
             }
         } else {
             size = extract32(size, 0, 1) ? MO_64 : MO_32;
@@ -7709,7 +7709,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
             return;
         }

-        fpst = get_fpstatus_ptr(size == MO_16);
+        fpst = get_fpstatus_ptr(size == MO_UW);
         break;
     default:
         unallocated_encoding(s);
@@ -7760,7 +7760,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
         read_vec_element_i32(s, tcg_op1, rn, 0, size);
         read_vec_element_i32(s, tcg_op2, rn, 1, size);

-        if (size == MO_16) {
+        if (size == MO_UW) {
             switch (opcode) {
             case 0xc: /* FMAXNMP */
                 gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -8222,7 +8222,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
                                    int elements, int is_signed,
                                    int fracbits, int size)
 {
-    TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
+    TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_UW);
     TCGv_i32 tcg_shift = NULL;

     TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
@@ -8281,7 +8281,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
                     }
                 }
                 break;
-            case MO_16:
+            case MO_UW:
                 if (fracbits) {
                     if (is_signed) {
                         gen_helper_vfp_sltoh(tcg_float, tcg_int32,
@@ -8339,7 +8339,7 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
     } else if (immh & 4) {
         size = MO_32;
     } else if (immh & 2) {
-        size = MO_16;
+        size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
@@ -8384,7 +8384,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     } else if (immh & 0x4) {
         size = MO_32;
     } else if (immh & 0x2) {
-        size = MO_16;
+        size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
@@ -8403,7 +8403,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     assert(!(is_scalar && is_q));

     tcg_rmode = tcg_const_i32(arm_rmode_to_sf(FPROUNDING_ZERO));
-    tcg_fpstatus = get_fpstatus_ptr(size == MO_16);
+    tcg_fpstatus = get_fpstatus_ptr(size == MO_UW);
     gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
     fracbits = (16 << size) - immhb;
     tcg_shift = tcg_const_i32(fracbits);
@@ -8429,7 +8429,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
         int maxpass = is_scalar ? 1 : ((8 << is_q) >> size);

         switch (size) {
-        case MO_16:
+        case MO_UW:
             if (is_u) {
                 fn = gen_helper_vfp_touhh;
             } else {
@@ -9388,7 +9388,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         return;
     }

-    fpst = get_fpstatus_ptr(size == MO_16);
+    fpst = get_fpstatus_ptr(size == MO_UW);

     if (is_double) {
         TCGv_i64 tcg_op = tcg_temp_new_i64();
@@ -9440,7 +9440,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         bool swap = false;
         int pass, maxpasses;

-        if (size == MO_16) {
+        if (size == MO_UW) {
             switch (opcode) {
             case 0x2e: /* FCMLT (zero) */
                 swap = true;
@@ -11422,8 +11422,8 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             int passreg = pass < (maxpass / 2) ? rn : rm;
             int passelt = (pass << 1) & (maxpass - 1);

-            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_16);
-            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_16);
+            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_UW);
+            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_UW);
             tcg_res[pass] = tcg_temp_new_i32();

             switch (fpopcode) {
@@ -11450,7 +11450,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
         }

         for (pass = 0; pass < maxpass; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UW);
             tcg_temp_free_i32(tcg_res[pass]);
         }

@@ -11463,15 +11463,15 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op2 = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_16);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
+            read_vec_element_i32(s, tcg_op1, rn, pass, MO_UW);
+            read_vec_element_i32(s, tcg_op2, rm, pass, MO_UW);

             switch (fpopcode) {
             case 0x0: /* FMAXNM */
                 gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
                 break;
             case 0x1: /* FMLA */
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
                 gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
                                            fpst);
                 break;
@@ -11496,7 +11496,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             case 0x9: /* FMLS */
                 /* As usual for ARM, separate negation for fused multiply-add */
                 tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
                 gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
                                            fpst);
                 break;
@@ -11537,7 +11537,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op1);
             tcg_temp_free_i32(tcg_op2);
@@ -11727,7 +11727,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
         for (pass = 0; pass < 4; pass++) {
             tcg_res[pass] = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_16);
+            read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_UW);
             gen_helper_vfp_fcvt_f16_to_f32(tcg_res[pass], tcg_res[pass],
                                            fpst, ahp);
         }
@@ -11768,7 +11768,7 @@ static void handle_rev(DisasContext *s, int opcode, bool u,

             read_vec_element(s, tcg_tmp, rn, i, grp_size);
             switch (grp_size) {
-            case MO_16:
+            case MO_UW:
                 tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp);
                 break;
             case MO_32:
@@ -12499,7 +12499,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
         if (!fp_access_check(s)) {
             return;
         }
-        handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_16);
+        handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_UW);
         return;
     }
     break;
@@ -12508,7 +12508,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
     case 0x2e: /* FCMLT (zero) */
     case 0x6c: /* FCMGE (zero) */
     case 0x6d: /* FCMLE (zero) */
-        handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_16, rn, rd);
+        handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_UW, rn, rd);
         return;
     case 0x3d: /* FRECPE */
     case 0x3f: /* FRECPX */
@@ -12668,7 +12668,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op, rn, pass, MO_16);
+            read_vec_element_i32(s, tcg_op, rn, pass, MO_UW);

             switch (fpop) {
             case 0x1a: /* FCVTNS */
@@ -12715,7 +12715,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UW);

             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op);
@@ -12839,7 +12839,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        size = MO_16;
+        size = MO_UW;
         /* is_fp, but we pass cpu_env not fp_status.  */
         break;
     default:
@@ -12852,7 +12852,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         /* convert insn encoded size to TCGMemOp size */
         switch (size) {
         case 0: /* half-precision */
-            size = MO_16;
+            size = MO_UW;
             is_fp16 = true;
             break;
         case MO_32: /* single precision */
@@ -12899,7 +12899,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     /* Given TCGMemOp size, adjust register and indexing.  */
     switch (size) {
-    case MO_16:
+    case MO_UW:
         index = h << 2 | l << 1 | m;
         break;
     case MO_32:
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ec5fb11..2bc1bd1 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1679,7 +1679,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
         tcg_temp_free_i32(t32);
         break;

-    case MO_16:
+    case MO_UW:
         t32 = tcg_temp_new_i32();
         tcg_gen_extrl_i64_i32(t32, val);
         if (d) {
@@ -3314,7 +3314,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_h,
           .opt_opc = vecop_list,
-          .vece = MO_16,
+          .vece = MO_UW,
           .scalar_first = true },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
@@ -3468,7 +3468,7 @@ static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)

     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3494,7 +3494,7 @@ static bool trans_FMUL_zzx(DisasContext *s, arg_FMUL_zzx *a)

     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3526,7 +3526,7 @@ static void do_reduce(DisasContext *s, arg_rpr_esz *a,

     tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
     tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
-    status = get_fpstatus_ptr(a->esz == MO_16);
+    status = get_fpstatus_ptr(a->esz == MO_UW);

     fn(temp, t_zn, t_pg, status, t_desc);
     tcg_temp_free_ptr(t_zn);
@@ -3568,7 +3568,7 @@ DO_VPZ(FMAXV, fmaxv)
 static void do_zz_fp(DisasContext *s, arg_rr_esz *a, gen_helper_gvec_2_ptr *fn)
 {
     unsigned vsz = vec_full_reg_size(s);
-    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

     tcg_gen_gvec_2_ptr(vec_full_reg_offset(s, a->rd),
                        vec_full_reg_offset(s, a->rn),
@@ -3616,7 +3616,7 @@ static void do_ppz_fp(DisasContext *s, arg_rpr_esz *a,
                       gen_helper_gvec_3_ptr *fn)
 {
     unsigned vsz = vec_full_reg_size(s);
-    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

     tcg_gen_gvec_3_ptr(pred_full_reg_offset(s, a->rd),
                        vec_full_reg_offset(s, a->rn),
@@ -3668,7 +3668,7 @@ static bool trans_FTMAD(DisasContext *s, arg_FTMAD *a)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3708,7 +3708,7 @@ static bool trans_FADDA(DisasContext *s, arg_rprr_esz *a)
     t_pg = tcg_temp_new_ptr();
     tcg_gen_addi_ptr(t_rm, cpu_env, vec_full_reg_offset(s, a->rm));
     tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
-    t_fpst = get_fpstatus_ptr(a->esz == MO_16);
+    t_fpst = get_fpstatus_ptr(a->esz == MO_UW);
     t_desc = tcg_const_i32(simd_desc(vsz, vsz, 0));

     fns[a->esz - 1](t_val, t_val, t_rm, t_pg, t_fpst, t_desc);
@@ -3735,7 +3735,7 @@ static bool do_zzz_fp(DisasContext *s, arg_rrr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3777,7 +3777,7 @@ static bool do_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3844,7 +3844,7 @@ static void do_fp_imm(DisasContext *s, arg_rpri_esz *a, uint64_t imm,
                       gen_helper_sve_fp2scalar *fn)
 {
     TCGv_i64 temp = tcg_const_i64(imm);
-    do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_16, temp, fn);
+    do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_UW, temp, fn);
     tcg_temp_free_i64(temp);
 }

@@ -3893,7 +3893,7 @@ static bool do_fp_cmp(DisasContext *s, arg_rprr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(pred_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3937,7 +3937,7 @@ static bool trans_FCADD(DisasContext *s, arg_FCADD *a)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -4044,7 +4044,7 @@ static bool trans_FCMLA_zzxz(DisasContext *s, arg_FCMLA_zzxz *a)
     tcg_debug_assert(a->rd == a->ra);
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -4186,7 +4186,7 @@ static bool trans_FRINTI(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16,
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW,
                       frint_fns[a->esz - 1]);
 }

@@ -4200,7 +4200,7 @@ static bool trans_FRINTX(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
@@ -4211,7 +4211,7 @@ static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
         TCGv_i32 tmode = tcg_const_i32(mode);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

         gen_helper_set_rmode(tmode, tmode, status);

@@ -4262,7 +4262,7 @@ static bool trans_FRECPX(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a)
@@ -4275,7 +4275,7 @@ static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a)
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 092eb5e..549874c 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -52,7 +52,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8)
             (extract32(imm8, 0, 6) << 3);
         imm <<= 16;
         break;
-    case MO_16:
+    case MO_UW:
         imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
             (extract32(imm8, 6, 1) ? 0x3000 : 0x4000) |
             (extract32(imm8, 0, 6) << 6);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 39266cf..8d10922 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1477,7 +1477,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     case MO_UB:
         tcg_gen_st8_i32(var, cpu_env, offset);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i32(var, cpu_env, offset);
         break;
     case MO_32:
@@ -1496,7 +1496,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     case MO_UB:
         tcg_gen_st8_i64(var, cpu_env, offset);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i64(var, cpu_env, offset);
         break;
     case MO_32:
@@ -4267,7 +4267,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_ssra32_i32,
       .fniv = gen_ssra_vec,
       .load_dest = true,
@@ -4325,7 +4325,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_16, },
+      .vece = MO_UW, },
     { .fni4 = gen_usra32_i32,
       .fniv = gen_usra_vec,
       .load_dest = true,
@@ -4353,7 +4353,7 @@ static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)

 static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff >> shift);
+    uint64_t mask = dup_const(MO_UW, 0xffff >> shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shri_i64(t, a, shift);
@@ -4405,7 +4405,7 @@ const GVecGen2i sri_op[4] = {
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_shr32_ins_i32,
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
@@ -4433,7 +4433,7 @@ static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)

 static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff << shift);
+    uint64_t mask = dup_const(MO_UW, 0xffff << shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shli_i64(t, a, shift);
@@ -4483,7 +4483,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_shl32_ins_i32,
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
@@ -4579,7 +4579,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_mla32_i32,
       .fniv = gen_mla_vec,
       .load_dest = true,
@@ -4603,7 +4603,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_mls32_i32,
       .fniv = gen_mls_vec,
       .load_dest = true,
@@ -4649,7 +4649,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_helper_neon_tst_u16,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_cmtst_i32,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
@@ -4686,7 +4686,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_h,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_s,
       .write_aofs = true,
@@ -4724,7 +4724,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_h,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_s,
       .opt_opc = vecop_list_sqadd,
@@ -4762,7 +4762,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_h,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_s,
       .opt_opc = vecop_list_uqsub,
@@ -4800,7 +4800,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_h,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_s,
       .opt_opc = vecop_list_sqsub,
@@ -6876,7 +6876,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     size = MO_UB;
                     element = (insn >> 17) & 7;
                 } else if (insn & (1 << 17)) {
-                    size = MO_16;
+                    size = MO_UW;
                     element = (insn >> 18) & 3;
                 } else {
                     size = MO_32;
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 0e45300..0535bae 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -323,7 +323,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 {
     if (CODE64(s)) {
-        return ot == MO_16 ? MO_16 : MO_64;
+        return ot == MO_UW ? MO_UW : MO_64;
     } else {
         return ot;
     }
@@ -332,7 +332,7 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 /* Select the size of the stack pointer.  */
 static inline TCGMemOp mo_stacksize(DisasContext *s)
 {
-    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
@@ -356,7 +356,7 @@ static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
    Used for decoding operand size of port opcodes.  */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_UB;
+    return b & 1 ? (ot == MO_UW ? MO_UW : MO_32) : MO_UB;
 }

 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
@@ -369,7 +369,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
             tcg_gen_deposit_tl(cpu_regs[reg - 4], cpu_regs[reg - 4], t0, 8, 8);
         }
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16);
         break;
     case MO_32:
@@ -473,7 +473,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
             return;
         }
         break;
-    case MO_16:
+    case MO_UW:
         /* 16 bit address */
         tcg_gen_ext16u_tl(s->A0, a0);
         a0 = s->A0;
@@ -530,7 +530,7 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
             tcg_gen_ext8u_tl(dst, src);
         }
         return dst;
-    case MO_16:
+    case MO_UW:
         if (sign) {
             tcg_gen_ext16s_tl(dst, src);
         } else {
@@ -583,7 +583,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     case MO_UB:
         gen_helper_inb(v, cpu_env, n);
         break;
-    case MO_16:
+    case MO_UW:
         gen_helper_inw(v, cpu_env, n);
         break;
     case MO_32:
@@ -600,7 +600,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     case MO_UB:
         gen_helper_outb(cpu_env, v, n);
         break;
-    case MO_16:
+    case MO_UW:
         gen_helper_outw(cpu_env, v, n);
         break;
     case MO_32:
@@ -622,7 +622,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
         case MO_UB:
             gen_helper_check_iob(cpu_env, s->tmp2_i32);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_check_iow(cpu_env, s->tmp2_i32);
             break;
         case MO_32:
@@ -1562,7 +1562,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
         tcg_gen_ext8u_tl(s->T0, s->T0);
         tcg_gen_muli_tl(s->T0, s->T0, 0x01010101);
         goto do_long;
-    case MO_16:
+    case MO_UW:
         /* Replicate the 16-bit input so that a 32-bit rotate works.  */
         tcg_gen_deposit_tl(s->T0, s->T0, s->T0, 16, 16);
         goto do_long;
@@ -1664,7 +1664,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
         case MO_UB:
             mask = 7;
             goto do_shifts;
-        case MO_16:
+        case MO_UW:
             mask = 15;
         do_shifts:
             shift = op2 & mask;
@@ -1722,7 +1722,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UB:
             gen_helper_rcrb(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_rcrw(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_32:
@@ -1741,7 +1741,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UB:
             gen_helper_rclb(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_rclw(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_32:
@@ -1778,7 +1778,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_gen_andi_tl(count, count_in, mask);

     switch (ot) {
-    case MO_16:
+    case MO_UW:
         /* Note: we implement the Intel behaviour for shift count > 16.
            This means "shrdw C, B, A" shifts A:B:A >> C.  Build the B:A
            portion by constructing it as a 32-bit value.  */
@@ -1817,7 +1817,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
             tcg_gen_shl_tl(s->T1, s->T1, s->tmp4);
         } else {
             tcg_gen_shl_tl(s->tmp0, s->T0, s->tmp0);
-            if (ot == MO_16) {
+            if (ot == MO_UW) {
                 /* Only needed if count > 16, for Intel behaviour.  */
                 tcg_gen_subfi_tl(s->tmp4, 33, count);
                 tcg_gen_shr_tl(s->tmp4, s->T1, s->tmp4);
@@ -2026,7 +2026,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s,
         }
         break;

-    case MO_16:
+    case MO_UW:
         if (mod == 0) {
             if (rm == 6) {
                 base = -1;
@@ -2187,7 +2187,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     case MO_UB:
         ret = x86_ldub_code(env, s);
         break;
-    case MO_16:
+    case MO_UW:
         ret = x86_lduw_code(env, s);
         break;
     case MO_32:
@@ -2400,12 +2400,12 @@ static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)

 static inline void gen_stack_A0(DisasContext *s)
 {
-    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_16, cpu_regs[R_ESP], R_SS, -1);
+    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_UW, cpu_regs[R_ESP], R_SS, -1);
 }

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2421,7 +2421,7 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s)
 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
     TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -3613,7 +3613,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
         case 0xc4: /* pinsrw */
         case 0x1c4:
             s->rip_offset = 1;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             val = x86_ldub_code(env, s);
             if (b1) {
                 val &= 7;
@@ -3786,7 +3786,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if ((b & 0xff) == 0xf0) {
                     ot = MO_UB;
                 } else if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
                 } else {
                     ot = MO_64;
                 }
@@ -3815,7 +3815,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     goto illegal_op;
                 }
                 if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
                 } else {
                     ot = MO_64;
                 }
@@ -4630,7 +4630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
            data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
            over 0x66 if both are present.  */
-        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_16 : MO_32);
+        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_32);
         /* In 64-bit mode, 0x67 selects 32-bit addressing.  */
         aflag = (prefixes & PREFIX_ADR ? MO_32 : MO_64);
     } else {
@@ -4638,13 +4638,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (s->code32 ^ ((prefixes & PREFIX_DATA) != 0)) {
             dflag = MO_32;
         } else {
-            dflag = MO_16;
+            dflag = MO_UW;
         }
         /* In 16/32-bit mode, 0x67 selects the opposite addressing.  */
         if (s->code32 ^ ((prefixes & PREFIX_ADR) != 0)) {
             aflag = MO_32;
         }  else {
-            aflag = MO_16;
+            aflag = MO_UW;
         }
     }

@@ -4872,21 +4872,21 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_gen_ext8u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_andi_tl(cpu_cc_src, s->T0, 0xff00);
                 set_cc_op(s, CC_OP_MULB);
                 break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
+            case MO_UW:
+                gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX);
                 tcg_gen_ext16u_tl(s->T0, s->T0);
                 tcg_gen_ext16u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_src, s->T0);
                 set_cc_op(s, CC_OP_MULW);
                 break;
@@ -4921,24 +4921,24 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_gen_ext8s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_ext8s_tl(s->tmp0, s->T0);
                 tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
                 set_cc_op(s, CC_OP_MULB);
                 break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
+            case MO_UW:
+                gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX);
                 tcg_gen_ext16s_tl(s->T0, s->T0);
                 tcg_gen_ext16s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_ext16s_tl(s->tmp0, s->T0);
                 tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
                 tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
                 set_cc_op(s, CC_OP_MULW);
                 break;
             default:
@@ -4972,7 +4972,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             case MO_UB:
                 gen_helper_divb_AL(cpu_env, s->T0);
                 break;
-            case MO_16:
+            case MO_UW:
                 gen_helper_divw_AX(cpu_env, s->T0);
                 break;
             default:
@@ -4991,7 +4991,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             case MO_UB:
                 gen_helper_idivb_AL(cpu_env, s->T0);
                 break;
-            case MO_16:
+            case MO_UW:
                 gen_helper_idivw_AX(cpu_env, s->T0);
                 break;
             default:
@@ -5026,7 +5026,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* operand size for jumps is 64 bit */
                 ot = MO_64;
             } else if (op == 3 || op == 5) {
-                ot = dflag != MO_16 ? MO_32 + (rex_w == 1) : MO_16;
+                ot = dflag != MO_UW ? MO_32 + (rex_w == 1) : MO_UW;
             } else if (op == 6) {
                 /* default push size is 64 bit */
                 ot = mo_pushpop(s, dflag);
@@ -5057,7 +5057,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 2: /* call Ev */
             /* XXX: optimize if memory (no 'and' is necessary) */
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_ext16u_tl(s->T0, s->T0);
             }
             next_eip = s->pc - s->cs_base;
@@ -5070,7 +5070,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 3: /* lcall Ev */
             gen_op_ld_v(s, ot, s->T1, s->A0);
             gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         do_lcall:
             if (s->pe && !s->vm86) {
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -5087,7 +5087,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_jr(s, s->tmp4);
             break;
         case 4: /* jmp Ev */
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_ext16u_tl(s->T0, s->T0);
             }
             gen_op_jmp_v(s->T0);
@@ -5097,7 +5097,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 5: /* ljmp Ev */
             gen_op_ld_v(s, ot, s->T1, s->A0);
             gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         do_ljmp:
             if (s->pe && !s->vm86) {
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -5152,14 +5152,14 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
 #endif
         case MO_32:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
             break;
-        case MO_16:
+        case MO_UW:
             gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
             tcg_gen_ext8s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+            gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
             break;
         default:
             tcg_abort();
@@ -5180,11 +5180,11 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_sari_tl(s->T0, s->T0, 31);
             gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0);
             break;
-        case MO_16:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+        case MO_UW:
+            gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
             tcg_gen_sari_tl(s->T0, s->T0, 15);
-            gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+            gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
             break;
         default:
             tcg_abort();
@@ -5538,7 +5538,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = (modrm >> 3) & 7;
         if (reg >= 6 || reg == R_CS)
             goto illegal_op;
-        gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+        gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
         gen_movl_seg_T0(s, reg);
         /* Note that reg == R_SS in gen_movl_seg_T0 always sets is_jmp.  */
         if (s->base.is_jmp) {
@@ -5558,7 +5558,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (reg >= 6)
             goto illegal_op;
         gen_op_movl_T0_seg(s, reg);
-        ot = mod == 3 ? dflag : MO_16;
+        ot = mod == 3 ? dflag : MO_UW;
         gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
         break;

@@ -5734,7 +5734,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1b5: /* lgs Gv */
         op = R_GS;
     do_lxx:
-        ot = dflag != MO_16 ? MO_32 : MO_16;
+        ot = dflag != MO_UW ? MO_32 : MO_UW;
         modrm = x86_ldub_code(env, s);
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
@@ -5744,7 +5744,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_op_ld_v(s, ot, s->T1, s->A0);
         gen_add_A0_im(s, 1 << ot);
         /* load the segment first to handle exceptions properly */
-        gen_op_ld_v(s, MO_16, s->T0, s->A0);
+        gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         gen_movl_seg_T0(s, op);
         /* then put the data */
         gen_op_mov_reg_v(s, ot, reg, s->T1);
@@ -6287,7 +6287,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 case 0:
                     gen_helper_fnstsw(s->tmp2_i32, cpu_env);
                     tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32);
-                    gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                    gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                     break;
                 default:
                     goto unknown_op;
@@ -6575,14 +6575,14 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         break;
     case 0xe8: /* call im */
         {
-            if (dflag != MO_16) {
+            if (dflag != MO_UW) {
                 tval = (int32_t)insn_get(env, s, MO_32);
             } else {
-                tval = (int16_t)insn_get(env, s, MO_16);
+                tval = (int16_t)insn_get(env, s, MO_UW);
             }
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tval &= 0xffff;
             } else if (!CODE64(s)) {
                 tval &= 0xffffffff;
@@ -6601,20 +6601,20 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             ot = dflag;
             offset = insn_get(env, s, ot);
-            selector = insn_get(env, s, MO_16);
+            selector = insn_get(env, s, MO_UW);

             tcg_gen_movi_tl(s->T0, selector);
             tcg_gen_movi_tl(s->T1, offset);
         }
         goto do_lcall;
     case 0xe9: /* jmp im */
-        if (dflag != MO_16) {
+        if (dflag != MO_UW) {
             tval = (int32_t)insn_get(env, s, MO_32);
         } else {
-            tval = (int16_t)insn_get(env, s, MO_16);
+            tval = (int16_t)insn_get(env, s, MO_UW);
         }
         tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         } else if (!CODE64(s)) {
             tval &= 0xffffffff;
@@ -6630,7 +6630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             ot = dflag;
             offset = insn_get(env, s, ot);
-            selector = insn_get(env, s, MO_16);
+            selector = insn_get(env, s, MO_UW);

             tcg_gen_movi_tl(s->T0, selector);
             tcg_gen_movi_tl(s->T1, offset);
@@ -6639,7 +6639,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0xeb: /* jmp Jb */
         tval = (int8_t)insn_get(env, s, MO_UB);
         tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         }
         gen_jmp(s, tval);
@@ -6648,15 +6648,15 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         tval = (int8_t)insn_get(env, s, MO_UB);
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
-        if (dflag != MO_16) {
+        if (dflag != MO_UW) {
             tval = (int32_t)insn_get(env, s, MO_32);
         } else {
-            tval = (int16_t)insn_get(env, s, MO_16);
+            tval = (int16_t)insn_get(env, s, MO_UW);
         }
     do_jcc:
         next_eip = s->pc - s->cs_base;
         tval += next_eip;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         }
         gen_bnd_jmp(s);
@@ -6697,7 +6697,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         } else {
             ot = gen_pop_T0(s);
             if (s->cpl == 0) {
-                if (dflag != MO_16) {
+                if (dflag != MO_UW) {
                     gen_helper_write_eflags(cpu_env, s->T0,
                                             tcg_const_i32((TF_MASK | AC_MASK |
                                                            ID_MASK | NT_MASK |
@@ -6712,7 +6712,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 }
             } else {
                 if (s->cpl <= s->iopl) {
-                    if (dflag != MO_16) {
+                    if (dflag != MO_UW) {
                         gen_helper_write_eflags(cpu_env, s->T0,
                                                 tcg_const_i32((TF_MASK |
                                                                AC_MASK |
@@ -6729,7 +6729,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                                                               & 0xffff));
                     }
                 } else {
-                    if (dflag != MO_16) {
+                    if (dflag != MO_UW) {
                         gen_helper_write_eflags(cpu_env, s->T0,
                                            tcg_const_i32((TF_MASK | AC_MASK |
                                                           ID_MASK | NT_MASK)));
@@ -7110,7 +7110,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_op_mov_v_reg(s, ot, s->T0, reg);
         gen_lea_modrm(env, s, modrm);
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-        if (ot == MO_16) {
+        if (ot == MO_UW) {
             gen_helper_boundw(cpu_env, s->A0, s->tmp2_i32);
         } else {
             gen_helper_boundl(cpu_env, s->A0, s->tmp2_i32);
@@ -7149,7 +7149,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tval = (int8_t)insn_get(env, s, MO_UB);
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tval &= 0xffff;
             }

@@ -7291,7 +7291,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_READ);
             tcg_gen_ld32u_tl(s->T0, cpu_env,
                              offsetof(CPUX86State, ldt.selector));
-            ot = mod == 3 ? dflag : MO_16;
+            ot = mod == 3 ? dflag : MO_UW;
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
         case 2: /* lldt */
@@ -7301,7 +7301,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 gen_helper_lldt(cpu_env, s->tmp2_i32);
             }
@@ -7312,7 +7312,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_READ);
             tcg_gen_ld32u_tl(s->T0, cpu_env,
                              offsetof(CPUX86State, tr.selector));
-            ot = mod == 3 ? dflag : MO_16;
+            ot = mod == 3 ? dflag : MO_UW;
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
         case 3: /* ltr */
@@ -7322,7 +7322,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 gen_helper_ltr(cpu_env, s->tmp2_i32);
             }
@@ -7331,7 +7331,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 5: /* verw */
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             gen_update_cc_op(s);
             if (op == 4) {
                 gen_helper_verr(cpu_env, s->T0);
@@ -7353,10 +7353,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0,
                              cpu_env, offsetof(CPUX86State, gdt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
+            gen_op_st_v(s, MO_UW, s->T0, s->A0);
             gen_add_A0_im(s, 2);
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
@@ -7408,10 +7408,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_READ);
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
+            gen_op_st_v(s, MO_UW, s->T0, s->A0);
             gen_add_A0_im(s, 2);
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
@@ -7558,10 +7558,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_GDTR_WRITE);
             gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
             gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
@@ -7575,10 +7575,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_WRITE);
             gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
             gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
@@ -7590,9 +7590,9 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, cr[0]));
             if (CODE64(s)) {
                 mod = (modrm >> 6) & 3;
-                ot = (mod != 3 ? MO_16 : s->dflag);
+                ot = (mod != 3 ? MO_UW : s->dflag);
             } else {
-                ot = MO_16;
+                ot = MO_UW;
             }
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
@@ -7619,7 +7619,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 break;
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_WRITE_CR0);
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             gen_helper_lmsw(cpu_env, s->T0);
             gen_jmp_im(s, s->pc - s->cs_base);
             gen_eob(s);
@@ -7720,7 +7720,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             t0 = tcg_temp_local_new();
             t1 = tcg_temp_local_new();
             t2 = tcg_temp_local_new();
-            ot = MO_16;
+            ot = MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = (modrm >> 3) & 7;
             mod = (modrm >> 6) & 3;
@@ -7765,10 +7765,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             TCGv t0;
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            ot = dflag != MO_16 ? MO_32 : MO_16;
+            ot = dflag != MO_UW ? MO_32 : MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = ((modrm >> 3) & 7) | rex_r;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             t0 = tcg_temp_local_new();
             gen_update_cc_op(s);
             if (b == 0x102) {
@@ -7813,7 +7813,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcl */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 gen_bndck(env, s, modrm, TCG_COND_LTU, cpu_bndl[reg]);
@@ -7821,7 +7821,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcu */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 TCGv_i64 notu = tcg_temp_new_i64();
@@ -7830,7 +7830,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_temp_free_i64(notu);
             } else if (prefixes & PREFIX_DATA) {
                 /* bndmov -- from reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
+                if (reg >= 4 || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 if (mod == 3) {
@@ -7865,7 +7865,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
+                    || s->aflag == MO_UW
                     || a.base < -1) {
                     goto illegal_op;
                 }
@@ -7903,7 +7903,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndmk */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
@@ -7931,13 +7931,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcn */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 gen_bndck(env, s, modrm, TCG_COND_GTU, cpu_bndu[reg]);
             } else if (prefixes & PREFIX_DATA) {
                 /* bndmov -- to reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
+                if (reg >= 4 || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 if (mod == 3) {
@@ -7970,7 +7970,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
+                    || s->aflag == MO_UW
                     || a.base < -1) {
                     goto illegal_op;
                 }
@@ -8341,7 +8341,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = ((modrm >> 3) & 7) | rex_r;

         if (s->prefix & PREFIX_DATA) {
-            ot = MO_16;
+            ot = MO_UW;
         } else {
             ot = mo_64_32(dflag);
         }
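
The operand-size flags above lean on the size codes being log2 of the
access size in bytes, e.g. "int size = 1 << d_ot" in gen_pusha() and
"MO_32 + (rex_w == 1)" yielding MO_64. A minimal sketch of that
arithmetic, assuming the usual MO_UB..MO_64 numbering of 0..3:

#include <assert.h>

enum { MO_UB = 0, MO_UW = 1, MO_32 = 2, MO_64 = 3 };  /* size codes only */

int main(void)
{
    assert((1 << MO_UW) == 2);    /* 16-bit operand -> 2 bytes */
    assert(MO_32 + 1 == MO_64);   /* rex_w bumps word to quadword */
    return 0;
}

So the MO_16 -> MO_UW rename is value-preserving; only the spelling of
the constant changes.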
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 20a9777..525c7fe 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -21087,7 +21087,7 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc,
             imm = sextract32(ctx->opcode, 11, 11);
             imm = (int16_t)(imm << 6) >> 6;
             if (rt != 0) {
-                tcg_gen_movi_tl(cpu_gpr[rt], dup_const(MO_16, imm));
+                tcg_gen_movi_tl(cpu_gpr[rt], dup_const(MO_UW, imm));
             }
         }
         break;
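
Here dup_const() replicates the 16-bit immediate across the target
register. A worked example restating just the halfword arm of
dup_const() (the full switch appears in the tcg/tcg-op-gvec.c hunk
further down):

#include <assert.h>
#include <stdint.h>

static uint64_t dup16(uint64_t c)
{
    /* Multiplying by 0x0001000100010001 copies the low 16 bits of c
       into all four halfwords of a 64-bit value. */
    return 0x0001000100010001ull * (uint16_t)c;
}

int main(void)
{
    assert(dup16(0x8000) == 0x8000800080008000ull);
    assert(dup16(0x12345678) == 0x5678567856785678ull); /* high bits dropped */
    return 0;
}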
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 4130dd1..71efef4 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -406,29 +406,29 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
 GEN_VXFORM_V(vaddubm, MO_UB, tcg_gen_gvec_add, 0, 0);
 GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10cuq, PPC_NONE, PPC2_ISA300, 0x0000F800)
-GEN_VXFORM_V(vadduhm, MO_16, tcg_gen_gvec_add, 0, 1);
+GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1);
 GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2);
 GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
 GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
-GEN_VXFORM_V(vsubuhm, MO_16, tcg_gen_gvec_sub, 0, 17);
+GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17);
 GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18);
 GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
 GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
-GEN_VXFORM_V(vmaxuh, MO_16, tcg_gen_gvec_umax, 1, 1);
+GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1);
 GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2);
 GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
 GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
-GEN_VXFORM_V(vmaxsh, MO_16, tcg_gen_gvec_smax, 1, 5);
+GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5);
 GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6);
 GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
 GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
-GEN_VXFORM_V(vminuh, MO_16, tcg_gen_gvec_umin, 1, 9);
+GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9);
 GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10);
 GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
 GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
-GEN_VXFORM_V(vminsh, MO_16, tcg_gen_gvec_smin, 1, 13);
+GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13);
 GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14);
 GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
 GEN_VXFORM(vavgub, 1, 16);
@@ -531,18 +531,18 @@ GEN_VXFORM(vmulesb, 4, 12);
 GEN_VXFORM(vmulesh, 4, 13);
 GEN_VXFORM(vmulesw, 4, 14);
 GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4);
-GEN_VXFORM_V(vslh, MO_16, tcg_gen_gvec_shlv, 2, 5);
+GEN_VXFORM_V(vslh, MO_UW, tcg_gen_gvec_shlv, 2, 5);
 GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
 GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
-GEN_VXFORM_V(vsrh, MO_16, tcg_gen_gvec_shrv, 2, 9);
+GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9);
 GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10);
 GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
 GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
-GEN_VXFORM_V(vsrah, MO_16, tcg_gen_gvec_sarv, 2, 13);
+GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13);
 GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14);
 GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
 GEN_VXFORM(vsrv, 2, 28);
@@ -592,18 +592,18 @@ static void glue(gen_, NAME)(DisasContext *ctx)                         \
 GEN_VXFORM_SAT(vaddubs, MO_UB, add, usadd, 0, 8);
 GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10uq, PPC_NONE, PPC2_ISA300, 0x0000F800)
-GEN_VXFORM_SAT(vadduhs, MO_16, add, usadd, 0, 9);
+GEN_VXFORM_SAT(vadduhs, MO_UW, add, usadd, 0, 9);
 GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \
                 vmul10euq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10);
 GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12);
-GEN_VXFORM_SAT(vaddshs, MO_16, add, ssadd, 0, 13);
+GEN_VXFORM_SAT(vaddshs, MO_UW, add, ssadd, 0, 13);
 GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14);
 GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24);
-GEN_VXFORM_SAT(vsubuhs, MO_16, sub, ussub, 0, 25);
+GEN_VXFORM_SAT(vsubuhs, MO_UW, sub, ussub, 0, 25);
 GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
 GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
-GEN_VXFORM_SAT(vsubshs, MO_16, sub, sssub, 0, 29);
+GEN_VXFORM_SAT(vsubshs, MO_UW, sub, sssub, 0, 29);
 GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
 GEN_VXFORM(vaddcuq, 0, 5);
@@ -913,7 +913,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
     }

 GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
-GEN_VXFORM_VSPLT(vsplth, MO_16, 6, 9);
+GEN_VXFORM_VSPLT(vsplth, MO_UW, 6, 9);
 GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
 GEN_VXFORM_UIMM_SPLAT(vextractuh, 6, 9, 14);
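
The MO_* argument these macros forward ends up as the gvec element
size (vece), i.e. the log2 width of each lane. A small sketch of the
lane-count arithmetic for a 16-byte Altivec register, assuming the
same 0..3 numbering as above:

#include <assert.h>

enum { MO_UB = 0, MO_UW = 1, MO_32 = 2, MO_64 = 3 };

int main(void)
{
    assert((16 >> MO_UW) == 8);   /* vadduhm: 8 halfword lanes */
    assert((16 >> MO_32) == 4);   /* vadduwm: 4 word lanes */
    return 0;
}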
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index bb424c8..65da6b3 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -47,7 +47,7 @@
 #define NUM_VEC_ELEMENT_BITS(es) (NUM_VEC_ELEMENT_BYTES(es) * BITS_PER_BYTE)

 #define ES_8    MO_UB
-#define ES_16   MO_16
+#define ES_16   MO_UW
 #define ES_32   MO_32
 #define ES_64   MO_64
 #define ES_128  4
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index b813054..28e1b1d 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -78,7 +78,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
     switch (es) {
     case MO_UB:
         return s390_vec_read_element8(v, enr);
-    case MO_16:
+    case MO_UW:
         return s390_vec_read_element16(v, enr);
     case MO_32:
         return s390_vec_read_element32(v, enr);
@@ -124,7 +124,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
     case MO_UB:
         s390_vec_write_element8(v, enr, data);
         break;
-    case MO_16:
+    case MO_UW:
         s390_vec_write_element16(v, enr, data);
         break;
     case MO_32:
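
The ES_* values in translate_vx.inc.c and the switches here both treat
the element size as log2 bytes, which is why ES_128 extends the
MO_UB..MO_64 sequence with 4. A small sketch:

#include <assert.h>

enum { ES_8 = 0, ES_16 = 1, ES_32 = 2, ES_64 = 3, ES_128 = 4 };

int main(void)
{
    assert((1 << ES_16) == 2);         /* halfword element: 2 bytes */
    assert((1 << ES_128) * 8 == 128);  /* full vector: 128 bits */
    return 0;
}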
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index e4e0845..3d90c4b 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -430,20 +430,20 @@ typedef enum {
     /* Load/store register.  Described here as 3.3.12, but the helper
        that emits them can transform to 3.3.10 or 3.3.13.  */
     I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
-    I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_16 << 30,
+    I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_UW << 30,
     I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_32 << 30,
     I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,

     I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
-    I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_16 << 30,
+    I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_UW << 30,
     I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_32 << 30,
     I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,

     I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
-    I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_16 << 30,
+    I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UW << 30,

     I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30,
-    I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_16 << 30,
+    I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UW << 30,
     I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30,

     I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_32 << 30,
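
These encodings work because the memop size code is exactly the
AArch64 load/store "size" field, shifted into bits [31:30]. A worked
example, assuming LDST_ST encodes as 0 here:

#include <assert.h>
#include <stdint.h>

enum { MO_UB = 0, MO_UW = 1, MO_32 = 2, MO_64 = 3 };
enum { LDST_ST = 0 };  /* assumed value, for illustration only */

int main(void)
{
    uint32_t strh = 0x38000000u | (LDST_ST << 22) | ((uint32_t)MO_UW << 30);
    assert(strh == 0x78000000u);  /* size=01 selects the halfword store */
    return 0;
}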
@@ -870,7 +870,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,

     /*
      * Test all bytes 0x00 or 0xff second.  This can match cases that
-     * might otherwise take 2 or 3 insns for MO_16 or MO_32 below.
+     * might otherwise take 2 or 3 insns for MO_UW or MO_32 below.
      */
     for (i = imm8 = 0; i < 8; i++) {
         uint8_t byte = v64 >> (i * 8);
@@ -889,7 +889,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
      * cannot find an expansion there's no point checking a larger
      * width because we already know by replication it cannot match.
      */
-    if (v64 == dup_const(MO_16, v64)) {
+    if (v64 == dup_const(MO_UW, v64)) {
         uint16_t v16 = v64;

         if (is_shimm16(v16, &cmode, &imm8)) {
@@ -1733,7 +1733,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
         if (bswap) {
             tcg_out_ldst_r(s, I3312_LDRH, data_r, addr_r, otype, off_r);
             tcg_out_rev16(s, data_r, data_r);
-            tcg_out_sxt(s, ext, MO_16, data_r, data_r);
+            tcg_out_sxt(s, ext, MO_UW, data_r, data_r);
         } else {
             tcg_out_ldst_r(s, (ext ? I3312_LDRSHX : I3312_LDRSHW),
                            data_r, addr_r, otype, off_r);
@@ -1775,7 +1775,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
     case MO_UB:
         tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap && data_r != TCG_REG_XZR) {
             tcg_out_rev16(s, TCG_REG_TMP, data_r);
             data_r = TCG_REG_TMP;
@@ -2190,7 +2190,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext16s_i64:
     case INDEX_op_ext16s_i32:
-        tcg_out_sxt(s, ext, MO_16, a0, a1);
+        tcg_out_sxt(s, ext, MO_UW, a0, a1);
         break;
     case INDEX_op_ext_i32_i64:
     case INDEX_op_ext32s_i64:
@@ -2202,7 +2202,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext16u_i64:
     case INDEX_op_ext16u_i32:
-        tcg_out_uxt(s, MO_16, a0, a1);
+        tcg_out_uxt(s, MO_UW, a0, a1);
         break;
     case INDEX_op_extu_i32_i64:
     case INDEX_op_ext32u_i64:
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 542ffa8..0bd400e 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1432,7 +1432,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     case MO_UB:
         argreg = tcg_out_arg_reg8(s, argreg, datalo);
         break;
-    case MO_16:
+    case MO_UW:
         argreg = tcg_out_arg_reg16(s, argreg, datalo);
         break;
     case MO_32:
@@ -1624,7 +1624,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     case MO_UB:
         tcg_out_st8_r(s, cond, datalo, addrlo, addend);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_bswap16st(s, cond, TCG_REG_R0, datalo);
             tcg_out_st16_r(s, cond, TCG_REG_R0, addrlo, addend);
@@ -1669,7 +1669,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
     case MO_UB:
         tcg_out_st8_12(s, COND_AL, datalo, addrlo, 0);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_bswap16st(s, COND_AL, TCG_REG_R0, datalo);
             tcg_out_st16_8(s, COND_AL, TCG_REG_R0, addrlo, 0);
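
When guest and host endianness differ, the halfword is swapped into a
scratch register before the 16-bit store. A scalar sketch of the swap
tcg_out_bswap16st() is assumed to arrange at run time:

#include <assert.h>
#include <stdint.h>

static uint16_t bswap16(uint16_t x)
{
    return (uint16_t)((x << 8) | (x >> 8));  /* exchange the two bytes */
}

int main(void)
{
    assert(bswap16(0x1234) == 0x3412);
    return 0;
}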
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 0d68ba4..31c3664 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -893,7 +893,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
             tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a);
             a = r;
             /* FALLTHRU */
-        case MO_16:
+        case MO_UW:
             tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a);
             a = r;
             /* FALLTHRU */
@@ -927,7 +927,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
         case MO_32:
             tcg_out_vex_modrm_offset(s, OPC_VBROADCASTSS, r, 0, base, offset);
             break;
-        case MO_16:
+        case MO_UW:
             tcg_out_vex_modrm_offset(s, OPC_VPINSRW, r, r, base, offset);
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
@@ -2164,7 +2164,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         tcg_out_modrm_sib_offset(s, OPC_MOVB_EvGv + P_REXB_R + seg,
                                  datalo, base, index, 0, ofs);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_mov(s, TCG_TYPE_I32, scratch, datalo);
             tcg_out_rolw_8(s, scratch);
@@ -2747,15 +2747,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         OPC_PMAXUB, OPC_PMAXUW, OPC_PMAXUD, OPC_UD2
     };
     static int const shlv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16.  */
+        /* TODO: AVX512 adds support for MO_UW.  */
         OPC_UD2, OPC_UD2, OPC_VPSLLVD, OPC_VPSLLVQ
     };
     static int const shrv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16.  */
+        /* TODO: AVX512 adds support for MO_UW.  */
         OPC_UD2, OPC_UD2, OPC_VPSRLVD, OPC_VPSRLVQ
     };
     static int const sarv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16, MO_64.  */
+        /* TODO: AVX512 adds support for MO_UW, MO_64.  */
         OPC_UD2, OPC_UD2, OPC_VPSRAVD, OPC_UD2
     };
     static int const shls_insn[4] = {
@@ -2925,7 +2925,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         sub = args[3];
         goto gen_simd_imm8;
     case INDEX_op_x86_blend_vec:
-        if (vece == MO_16) {
+        if (vece == MO_UW) {
             insn = OPC_PBLENDW;
         } else if (vece == MO_32) {
             insn = (have_avx2 ? OPC_VPBLENDD : OPC_BLENDPS);
@@ -3290,9 +3290,9 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)

     case INDEX_op_shls_vec:
     case INDEX_op_shrs_vec:
-        return vece >= MO_16;
+        return vece >= MO_UW;
     case INDEX_op_sars_vec:
-        return vece >= MO_16 && vece <= MO_32;
+        return vece >= MO_UW && vece <= MO_32;

     case INDEX_op_shlv_vec:
     case INDEX_op_shrv_vec:
@@ -3314,7 +3314,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_usadd_vec:
     case INDEX_op_sssub_vec:
     case INDEX_op_ussub_vec:
-        return vece <= MO_16;
+        return vece <= MO_UW;
     case INDEX_op_smin_vec:
     case INDEX_op_smax_vec:
     case INDEX_op_umin_vec:
@@ -3352,13 +3352,13 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
               tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));

     if (shr) {
-        tcg_gen_shri_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_shri_vec(MO_16, t2, t2, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t2, t2, imm + 8);
     } else {
-        tcg_gen_shli_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_shli_vec(MO_16, t2, t2, imm + 8);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        tcg_gen_shri_vec(MO_16, t2, t2, 8);
+        tcg_gen_shli_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_shli_vec(MO_UW, t2, t2, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
+        tcg_gen_shri_vec(MO_UW, t2, t2, 8);
     }

     vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
@@ -3381,8 +3381,8 @@ static void expand_vec_sari(TCGType type, unsigned vece,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
         vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
-        tcg_gen_sari_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_sari_vec(MO_16, t2, t2, imm + 8);
+        tcg_gen_sari_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_sari_vec(MO_UW, t2, t2, imm + 8);
         vec_gen_3(INDEX_op_x86_packss_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2));
         tcg_temp_free_vec(t1);
@@ -3446,8 +3446,8 @@ static void expand_vec_mul(TCGType type, unsigned vece,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t2));
         vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(t2), tcgv_vec_arg(v2));
-        tcg_gen_mul_vec(MO_16, t1, t1, t2);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
+        tcg_gen_mul_vec(MO_UW, t1, t1, t2);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
         vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t1));
         tcg_temp_free_vec(t1);
@@ -3469,10 +3469,10 @@ static void expand_vec_mul(TCGType type, unsigned vece,
                   tcgv_vec_arg(t3), tcgv_vec_arg(v1), tcgv_vec_arg(t4));
         vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t4), tcgv_vec_arg(t4), tcgv_vec_arg(v2));
-        tcg_gen_mul_vec(MO_16, t1, t1, t2);
-        tcg_gen_mul_vec(MO_16, t3, t3, t4);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        tcg_gen_shri_vec(MO_16, t3, t3, 8);
+        tcg_gen_mul_vec(MO_UW, t1, t1, t2);
+        tcg_gen_mul_vec(MO_UW, t3, t3, t4);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
+        tcg_gen_shri_vec(MO_UW, t3, t3, 8);
         vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t3));
         tcg_temp_free_vec(t1);
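
x86 has no byte-element vector multiply, so expand_vec_mul() widens
each byte into a halfword lane, multiplies at MO_UW, and packs back.
A scalar model of one lane, assuming the operands are unpacked as
0|x and y|0 so the product lands with 8 bits of padding:

#include <assert.h>
#include <stdint.h>

static uint8_t mul8_lane(uint8_t x, uint8_t y)
{
    /* The MO_UW multiply of (0|x) by (y|0) leaves x*y shifted left 8;
       the shift right by 8 then exposes the byte product for packus. */
    uint16_t lane = (uint16_t)((uint32_t)x * ((uint32_t)y << 8));
    return (uint8_t)(lane >> 8);
}

int main(void)
{
    assert(mul8_lane(200, 3) == (uint8_t)(200 * 3));  /* 600 mod 256 == 88 */
    return 0;
}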
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index c6d13ea..1780cb1 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1383,7 +1383,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UB:
         i = tcg_out_call_iarg_reg8(s, i, l->datalo_reg);
         break;
-    case MO_16:
+    case MO_UW:
         i = tcg_out_call_iarg_reg16(s, i, l->datalo_reg);
         break;
     case MO_32:
@@ -1570,12 +1570,12 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
         tcg_out_opc_imm(s, OPC_SB, lo, base, 0);
         break;

-    case MO_16 | MO_BSWAP:
+    case MO_UW | MO_BSWAP:
         tcg_out_opc_imm(s, OPC_ANDI, TCG_TMP1, lo, 0xffff);
         tcg_out_bswap16(s, TCG_TMP1, TCG_TMP1);
         lo = TCG_TMP1;
         /* FALLTHRU */
-    case MO_16:
+    case MO_UW:
         tcg_out_opc_imm(s, OPC_SH, lo, base, 0);
         break;

diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 9c60c0f..20bc19d 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1104,7 +1104,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UB:
         tcg_out_ext8u(s, a2, a2);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_ext16u(s, a2, a2);
         break;
     default:
@@ -1219,7 +1219,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_UB:
         tcg_out_opc_store(s, OPC_SB, base, lo, 0);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_opc_store(s, OPC_SH, base, lo, 0);
         break;
     case MO_32:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 479ee2e..85550b5 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -885,7 +885,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
     case MO_UB:
         tcg_out_arithi(s, r, r, 0xff, ARITH_AND);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_arithi(s, r, r, 16, SHIFT_SLL);
         tcg_out_arithi(s, r, r, 16, SHIFT_SRL);
         break;
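
The MO_UW case here is the classic shift-left/shift-right
zero-extension idiom. Its C analogue:

#include <assert.h>
#include <stdint.h>

static uint32_t zext16(uint32_t r)
{
    return (r << 16) >> 16;   /* SHIFT_SLL then SHIFT_SRL by 16 */
}

int main(void)
{
    assert(zext16(0xdeadbeefu) == 0x0000beefu);
    return 0;
}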
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 9658c36..da409f5 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -308,7 +308,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
     switch (vece) {
     case MO_UB:
         return 0x0101010101010101ull * (uint8_t)c;
-    case MO_16:
+    case MO_UW:
         return 0x0001000100010001ull * (uint16_t)c;
     case MO_32:
         return 0x0000000100000001ull * (uint32_t)c;
@@ -327,7 +327,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
         tcg_gen_ext8u_i32(out, in);
         tcg_gen_muli_i32(out, out, 0x01010101);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_deposit_i32(out, in, in, 16, 16);
         break;
     case MO_32:
@@ -345,7 +345,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
         tcg_gen_ext8u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0101010101010101ull);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ext16u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0001000100010001ull);
         break;
@@ -558,7 +558,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
                 tcg_gen_extrl_i64_i32(t_32, in_64);
             } else if (vece == MO_UB) {
                 tcg_gen_movi_i32(t_32, in_c & 0xff);
-            } else if (vece == MO_16) {
+            } else if (vece == MO_UW) {
                 tcg_gen_movi_i32(t_32, in_c & 0xffff);
             } else {
                 tcg_gen_movi_i32(t_32, in_c);
@@ -1459,7 +1459,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             case MO_UB:
                 tcg_gen_ld8u_i32(in, cpu_env, aofs);
                 break;
-            case MO_16:
+            case MO_UW:
                 tcg_gen_ld16u_i32(in, cpu_env, aofs);
                 break;
             default:
@@ -1526,7 +1526,7 @@ void tcg_gen_gvec_dup16i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint16_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_16, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UW, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_dup8i(uint32_t dofs, uint32_t oprsz,
@@ -1579,7 +1579,7 @@ void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_add16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_addv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
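
The 0x8000 mask makes the plain 64-bit add behave as four independent
16-bit adds: clearing each lane's top bit first means no carry can
cross a lane boundary, and a final xor restores the true top bits. A
scalar sketch of that trick, assuming gen_addv_mask() computes
d = ((a & ~m) + (b & ~m)) ^ ((a ^ b) & m):

#include <assert.h>
#include <stdint.h>

static uint64_t add16x4(uint64_t a, uint64_t b)
{
    const uint64_t m = 0x8000800080008000ull;  /* dup_const(MO_UW, 0x8000) */
    return ((a & ~m) + (b & ~m)) ^ ((a ^ b) & m);
}

int main(void)
{
    /* 0xffff + 1 wraps within its lane instead of carrying out. */
    assert(add16x4(0x0001ffff00010001ull, 0x0001000100010001ull)
           == 0x0002000000020002ull);
    return 0;
}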
@@ -1613,7 +1613,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add16,
           .opt_opc = vecop_list_add,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_add_i32,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add32,
@@ -1644,7 +1644,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds16,
           .opt_opc = vecop_list_add,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_add_i32,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds32,
@@ -1685,7 +1685,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs16,
           .opt_opc = vecop_list_sub,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs32,
@@ -1732,7 +1732,7 @@ void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_sub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_subv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1764,7 +1764,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub16,
           .opt_opc = vecop_list_sub,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub32,
@@ -1795,7 +1795,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul16,
           .opt_opc = vecop_list_mul,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_mul_i32,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul32,
@@ -1824,7 +1824,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls16,
           .opt_opc = vecop_list_mul,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_mul_i32,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls32,
@@ -1862,7 +1862,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd32,
           .opt_opc = vecop_list,
@@ -1888,7 +1888,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub32,
           .opt_opc = vecop_list,
@@ -1930,7 +1930,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_usadd_i32,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd32,
@@ -1974,7 +1974,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_ussub_i32,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub32,
@@ -2002,7 +2002,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_smin_i32,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin32,
@@ -2030,7 +2030,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_umin_i32,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin32,
@@ -2058,7 +2058,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_smax_i32,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax32,
@@ -2086,7 +2086,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_umax_i32,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax32,
@@ -2127,7 +2127,7 @@ void tcg_gen_vec_neg8_i64(TCGv_i64 d, TCGv_i64 b)

 void tcg_gen_vec_neg16_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_negv_mask(d, b, m);
     tcg_temp_free_i64(m);
 }
@@ -2160,7 +2160,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_neg_i32,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg32,
@@ -2206,7 +2206,7 @@ static void tcg_gen_vec_abs8_i64(TCGv_i64 d, TCGv_i64 b)

 static void tcg_gen_vec_abs16_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    gen_absv_mask(d, b, MO_16);
+    gen_absv_mask(d, b, MO_UW);
 }

 void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2223,7 +2223,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_abs_i32,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs32,
@@ -2461,7 +2461,7 @@ void tcg_gen_vec_shl8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_shl16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff << c);
+    uint64_t mask = dup_const(MO_UW, 0xffff << c);
     tcg_gen_shli_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2480,7 +2480,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shli_i32,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl32i,
@@ -2512,7 +2512,7 @@ void tcg_gen_vec_shr8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_shr16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff >> c);
+    uint64_t mask = dup_const(MO_UW, 0xffff >> c);
     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2531,7 +2531,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shri_i32,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr32i,
@@ -2570,8 +2570,8 @@ void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_sar16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t s_mask = dup_const(MO_16, 0x8000 >> c);
-    uint64_t c_mask = dup_const(MO_16, 0xffff >> c);
+    uint64_t s_mask = dup_const(MO_UW, 0x8000 >> c);
+    uint64_t c_mask = dup_const(MO_UW, 0xffff >> c);
     TCGv_i64 s = tcg_temp_new_i64();

     tcg_gen_shri_i64(d, a, c);
@@ -2596,7 +2596,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sari_i32,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar32i,
@@ -2884,7 +2884,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shl_mod_i32,
           .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl32v,
@@ -2947,7 +2947,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shr_mod_i32,
           .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr32v,
@@ -3010,7 +3010,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sar_mod_i32,
           .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar32v,
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index d7ffc9e..b0a4d98 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -270,7 +270,7 @@ void tcg_gen_dup32i_vec(TCGv_vec r, uint32_t a)

 void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a)
 {
-    do_dupi_vec(r, MO_REG, dup_const(MO_16, a));
+    do_dupi_vec(r, MO_REG, dup_const(MO_UW, a));
 }

 void tcg_gen_dup8i_vec(TCGv_vec r, uint32_t a)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 61eda33..21d448c 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2723,7 +2723,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
     case MO_UB:
         op &= ~MO_BSWAP;
         break;
-    case MO_16:
+    case MO_UW:
         break;
     case MO_32:
         if (!is64) {
@@ -2810,7 +2810,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)

     if ((orig_memop ^ memop) & MO_BSWAP) {
         switch (orig_memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_bswap16_i32(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext16s_i32(val, val);
@@ -2837,7 +2837,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         swap = tcg_temp_new_i32();
         switch (memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_ext16u_i32(swap, val);
             tcg_gen_bswap16_i32(swap, swap);
             break;
@@ -2890,7 +2890,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)

     if ((orig_memop ^ memop) & MO_BSWAP) {
         switch (orig_memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_bswap16_i64(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext16s_i64(val, val);
@@ -2928,7 +2928,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         swap = tcg_temp_new_i64();
         switch (memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_ext16u_i64(swap, val);
             tcg_gen_bswap16_i64(swap, swap);
             break;
@@ -3025,8 +3025,8 @@ typedef void (*gen_atomic_op_i64)(TCGv_i64, TCGv_env, TCGv, TCGv_i64);

 static void * const table_cmpxchg[16] = {
     [MO_UB] = gen_helper_atomic_cmpxchgb,
-    [MO_16 | MO_LE] = gen_helper_atomic_cmpxchgw_le,
-    [MO_16 | MO_BE] = gen_helper_atomic_cmpxchgw_be,
+    [MO_UW | MO_LE] = gen_helper_atomic_cmpxchgw_le,
+    [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be,
     [MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le,
     [MO_32 | MO_BE] = gen_helper_atomic_cmpxchgl_be,
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
@@ -3249,8 +3249,8 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 #define GEN_ATOMIC_HELPER(NAME, OP, NEW)                                \
 static void * const table_##NAME[16] = {                                \
     [MO_UB] = gen_helper_atomic_##NAME##b,                               \
-    [MO_16 | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
-    [MO_16 | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
+    [MO_UW | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
+    [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
     [MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
     [MO_32 | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 5636d6b..a378887 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -1303,7 +1303,7 @@ uint64_t dup_const(unsigned vece, uint64_t c);
 #define dup_const(VECE, C)                                         \
     (__builtin_constant_p(VECE)                                    \
      ?   ((VECE) == MO_UB ? 0x0101010101010101ull * (uint8_t)(C)   \
-        : (VECE) == MO_16 ? 0x0001000100010001ull * (uint16_t)(C)  \
+        : (VECE) == MO_UW ? 0x0001000100010001ull * (uint16_t)(C)  \
         : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C)  \
         : dup_const(VECE, C))                                      \
      : dup_const(VECE, C))
--
1.8.3.1


* [Qemu-devel] [PATCH v2 02/20] tcg: Replace MO_16 with MO_UW alias
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:40   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Preparation for splitting MO_16 out of TCGMemOp into a new
accelerator-independent MemOp.

As MO_16 will become a value of MemOp, any remaining TCGMemOp comparisons
and coercions against it would trigger -Wenum-compare and -Wenum-conversion,
so rename it to MO_UW within TCGMemOp first.
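
(Illustration only, not part of the patch: the warnings fire whenever values
of two distinct enum types are mixed. A minimal sketch with hypothetical
stand-in names for TCGMemOp and MemOp, assuming GCC or Clang with
-Wenum-compare and -Wenum-conversion enabled:

    /* Stand-ins for TCGMemOp and MemOp; these names are hypothetical. */
    typedef enum { DEMO_TCG_MO_UW = 1 } DemoTCGMemOp;
    typedef enum { DEMO_MO_16 = 1 } DemoMemOp;

    int is_16bit(DemoTCGMemOp op)
    {
        return op == DEMO_MO_16;      /* -Wenum-compare: comparison between
                                         values of distinct enum types */
    }

    void coerce(void)
    {
        DemoTCGMemOp op = DEMO_MO_16; /* -Wenum-conversion: implicit
                                         conversion between enum types */
        (void)op;
    }

Renaming the TCGMemOp member to MO_UW keeps existing code warning-free while
MO_16 migrates into MemOp.)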

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |   4 +-
 target/arm/translate-a64.c          |  90 ++++++++--------
 target/arm/translate-sve.c          |  40 ++++----
 target/arm/translate-vfp.inc.c      |   2 +-
 target/arm/translate.c              |  32 +++---
 target/i386/translate.c             | 200 ++++++++++++++++++------------------
 target/mips/translate.c             |   2 +-
 target/ppc/translate/vmx-impl.inc.c |  28 ++---
 target/s390x/translate_vx.inc.c     |   2 +-
 target/s390x/vec.h                  |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  20 ++--
 tcg/arm/tcg-target.inc.c            |   6 +-
 tcg/i386/tcg-target.inc.c           |  48 ++++-----
 tcg/mips/tcg-target.inc.c           |   6 +-
 tcg/riscv/tcg-target.inc.c          |   4 +-
 tcg/sparc/tcg-target.inc.c          |   2 +-
 tcg/tcg-op-gvec.c                   |  72 ++++++-------
 tcg/tcg-op-vec.c                    |   2 +-
 tcg/tcg-op.c                        |  18 ++--
 tcg/tcg.h                           |   2 +-
 20 files changed, 292 insertions(+), 292 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 4c7e11f..f6bef3d 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1546,7 +1546,7 @@ void HELPER(sve_cpy_m_h)(void *vd, void *vn, void *vg,
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;

-    mm = dup_const(MO_16, mm);
+    mm = dup_const(MO_UW, mm);
     for (i = 0; i < opr_sz; i += 1) {
         uint64_t nn = n[i];
         uint64_t pp = expand_pred_h(pg[H1(i)]);
@@ -1600,7 +1600,7 @@ void HELPER(sve_cpy_z_h)(void *vd, void *vg, uint64_t val, uint32_t desc)
     uint64_t *d = vd;
     uint8_t *pg = vg;

-    val = dup_const(MO_16, val);
+    val = dup_const(MO_UW, val);
     for (i = 0; i < opr_sz; i += 1) {
         d[i] = val & expand_pred_h(pg[H1(i)]);
     }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index f840b43..3acfccb 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -492,7 +492,7 @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg)
 {
     TCGv_i32 v = tcg_temp_new_i32();

-    tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_16));
+    tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_UW));
     return v;
 }

@@ -996,7 +996,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_UB:
         tcg_gen_ld8u_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ld16u_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1005,7 +1005,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_SB:
         tcg_gen_ld8s_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16|MO_SIGN:
+    case MO_SW:
         tcg_gen_ld16s_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32|MO_SIGN:
@@ -1028,13 +1028,13 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
     case MO_UB:
         tcg_gen_ld8u_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ld16u_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_SB:
         tcg_gen_ld8s_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16|MO_SIGN:
+    case MO_SW:
         tcg_gen_ld16s_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1055,7 +1055,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
     case MO_UB:
         tcg_gen_st8_i64(tcg_src, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i64(tcg_src, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1077,7 +1077,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
     case MO_UB:
         tcg_gen_st8_i32(tcg_src, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i32(tcg_src, cpu_env, vect_off);
         break;
     case MO_32:
@@ -5269,7 +5269,7 @@ static void handle_fp_compare(DisasContext *s, int size,
                               bool cmp_with_zero, bool signal_all_nans)
 {
     TCGv_i64 tcg_flags = tcg_temp_new_i64();
-    TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
+    TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);

     if (size == MO_64) {
         TCGv_i64 tcg_vn, tcg_vm;
@@ -5306,7 +5306,7 @@ static void handle_fp_compare(DisasContext *s, int size,
                 gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             }
             break;
-        case MO_16:
+        case MO_UW:
             if (signal_all_nans) {
                 gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             } else {
@@ -5360,7 +5360,7 @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
         size = MO_64;
         break;
     case 3:
-        size = MO_16;
+        size = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -5411,7 +5411,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
         size = MO_64;
         break;
     case 3:
-        size = MO_16;
+        size = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -5477,7 +5477,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
         sz = MO_64;
         break;
     case 3:
-        sz = MO_16;
+        sz = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -6282,7 +6282,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
         sz = MO_64;
         break;
     case 3:
-        sz = MO_16;
+        sz = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -6593,7 +6593,7 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
             break;
         case 3:
             /* 16 bit */
-            tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_16));
+            tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UW));
             break;
         default:
             g_assert_not_reached();
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        TCGMemOp msize = esize == 16 ? MO_UW : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -7204,7 +7204,7 @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
          * Note that correct NaN propagation requires that we do these
          * operations in exactly the order specified by the pseudocode.
          */
-        TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
+        TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);
         int fpopcode = opcode | is_min << 4 | is_u << 5;
         int vmap = (1 << elements) - 1;
         TCGv_i32 tcg_res32 = do_reduction_op(s, fpopcode, rn, esize,
@@ -7591,7 +7591,7 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
             } else {
                 if (o2) {
                     /* FMOV (vector, immediate) - half-precision */
-                    imm = vfp_expand_imm(MO_16, abcdefgh);
+                    imm = vfp_expand_imm(MO_UW, abcdefgh);
                     /* now duplicate across the lanes */
                     imm = bitfield_replicate(imm, 16);
                 } else {
@@ -7699,7 +7699,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
                 unallocated_encoding(s);
                 return;
             } else {
-                size = MO_16;
+                size = MO_UW;
             }
         } else {
             size = extract32(size, 0, 1) ? MO_64 : MO_32;
@@ -7709,7 +7709,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
             return;
         }

-        fpst = get_fpstatus_ptr(size == MO_16);
+        fpst = get_fpstatus_ptr(size == MO_UW);
         break;
     default:
         unallocated_encoding(s);
@@ -7760,7 +7760,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
         read_vec_element_i32(s, tcg_op1, rn, 0, size);
         read_vec_element_i32(s, tcg_op2, rn, 1, size);

-        if (size == MO_16) {
+        if (size == MO_UW) {
             switch (opcode) {
             case 0xc: /* FMAXNMP */
                 gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -8222,7 +8222,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
                                    int elements, int is_signed,
                                    int fracbits, int size)
 {
-    TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
+    TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_UW);
     TCGv_i32 tcg_shift = NULL;

     TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
@@ -8281,7 +8281,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
                     }
                 }
                 break;
-            case MO_16:
+            case MO_UW:
                 if (fracbits) {
                     if (is_signed) {
                         gen_helper_vfp_sltoh(tcg_float, tcg_int32,
@@ -8339,7 +8339,7 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
     } else if (immh & 4) {
         size = MO_32;
     } else if (immh & 2) {
-        size = MO_16;
+        size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
@@ -8384,7 +8384,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     } else if (immh & 0x4) {
         size = MO_32;
     } else if (immh & 0x2) {
-        size = MO_16;
+        size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
@@ -8403,7 +8403,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     assert(!(is_scalar && is_q));

     tcg_rmode = tcg_const_i32(arm_rmode_to_sf(FPROUNDING_ZERO));
-    tcg_fpstatus = get_fpstatus_ptr(size == MO_16);
+    tcg_fpstatus = get_fpstatus_ptr(size == MO_UW);
     gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
     fracbits = (16 << size) - immhb;
     tcg_shift = tcg_const_i32(fracbits);
@@ -8429,7 +8429,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
         int maxpass = is_scalar ? 1 : ((8 << is_q) >> size);

         switch (size) {
-        case MO_16:
+        case MO_UW:
             if (is_u) {
                 fn = gen_helper_vfp_touhh;
             } else {
@@ -9388,7 +9388,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         return;
     }

-    fpst = get_fpstatus_ptr(size == MO_16);
+    fpst = get_fpstatus_ptr(size == MO_UW);

     if (is_double) {
         TCGv_i64 tcg_op = tcg_temp_new_i64();
@@ -9440,7 +9440,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         bool swap = false;
         int pass, maxpasses;

-        if (size == MO_16) {
+        if (size == MO_UW) {
             switch (opcode) {
             case 0x2e: /* FCMLT (zero) */
                 swap = true;
@@ -11422,8 +11422,8 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             int passreg = pass < (maxpass / 2) ? rn : rm;
             int passelt = (pass << 1) & (maxpass - 1);

-            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_16);
-            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_16);
+            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_UW);
+            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_UW);
             tcg_res[pass] = tcg_temp_new_i32();

             switch (fpopcode) {
@@ -11450,7 +11450,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
         }

         for (pass = 0; pass < maxpass; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UW);
             tcg_temp_free_i32(tcg_res[pass]);
         }

@@ -11463,15 +11463,15 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op2 = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_16);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
+            read_vec_element_i32(s, tcg_op1, rn, pass, MO_UW);
+            read_vec_element_i32(s, tcg_op2, rm, pass, MO_UW);

             switch (fpopcode) {
             case 0x0: /* FMAXNM */
                 gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
                 break;
             case 0x1: /* FMLA */
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
                 gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
                                            fpst);
                 break;
@@ -11496,7 +11496,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             case 0x9: /* FMLS */
                 /* As usual for ARM, separate negation for fused multiply-add */
                 tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
                 gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
                                            fpst);
                 break;
@@ -11537,7 +11537,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op1);
             tcg_temp_free_i32(tcg_op2);
@@ -11727,7 +11727,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
         for (pass = 0; pass < 4; pass++) {
             tcg_res[pass] = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_16);
+            read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_UW);
             gen_helper_vfp_fcvt_f16_to_f32(tcg_res[pass], tcg_res[pass],
                                            fpst, ahp);
         }
@@ -11768,7 +11768,7 @@ static void handle_rev(DisasContext *s, int opcode, bool u,

             read_vec_element(s, tcg_tmp, rn, i, grp_size);
             switch (grp_size) {
-            case MO_16:
+            case MO_UW:
                 tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp);
                 break;
             case MO_32:
@@ -12499,7 +12499,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
         if (!fp_access_check(s)) {
             return;
         }
-        handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_16);
+        handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_UW);
         return;
     }
     break;
@@ -12508,7 +12508,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
     case 0x2e: /* FCMLT (zero) */
     case 0x6c: /* FCMGE (zero) */
     case 0x6d: /* FCMLE (zero) */
-        handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_16, rn, rd);
+        handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_UW, rn, rd);
         return;
     case 0x3d: /* FRECPE */
     case 0x3f: /* FRECPX */
@@ -12668,7 +12668,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op, rn, pass, MO_16);
+            read_vec_element_i32(s, tcg_op, rn, pass, MO_UW);

             switch (fpop) {
             case 0x1a: /* FCVTNS */
@@ -12715,7 +12715,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UW);

             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op);
@@ -12839,7 +12839,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        size = MO_16;
+        size = MO_UW;
         /* is_fp, but we pass cpu_env not fp_status.  */
         break;
     default:
@@ -12852,7 +12852,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         /* convert insn encoded size to TCGMemOp size */
         switch (size) {
         case 0: /* half-precision */
-            size = MO_16;
+            size = MO_UW;
             is_fp16 = true;
             break;
         case MO_32: /* single precision */
@@ -12899,7 +12899,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     /* Given TCGMemOp size, adjust register and indexing.  */
     switch (size) {
-    case MO_16:
+    case MO_UW:
         index = h << 2 | l << 1 | m;
         break;
     case MO_32:
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ec5fb11..2bc1bd1 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1679,7 +1679,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
         tcg_temp_free_i32(t32);
         break;

-    case MO_16:
+    case MO_UW:
         t32 = tcg_temp_new_i32();
         tcg_gen_extrl_i64_i32(t32, val);
         if (d) {
@@ -3314,7 +3314,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_h,
           .opt_opc = vecop_list,
-          .vece = MO_16,
+          .vece = MO_UW,
           .scalar_first = true },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
@@ -3468,7 +3468,7 @@ static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)

     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3494,7 +3494,7 @@ static bool trans_FMUL_zzx(DisasContext *s, arg_FMUL_zzx *a)

     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3526,7 +3526,7 @@ static void do_reduce(DisasContext *s, arg_rpr_esz *a,

     tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
     tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
-    status = get_fpstatus_ptr(a->esz == MO_16);
+    status = get_fpstatus_ptr(a->esz == MO_UW);

     fn(temp, t_zn, t_pg, status, t_desc);
     tcg_temp_free_ptr(t_zn);
@@ -3568,7 +3568,7 @@ DO_VPZ(FMAXV, fmaxv)
 static void do_zz_fp(DisasContext *s, arg_rr_esz *a, gen_helper_gvec_2_ptr *fn)
 {
     unsigned vsz = vec_full_reg_size(s);
-    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

     tcg_gen_gvec_2_ptr(vec_full_reg_offset(s, a->rd),
                        vec_full_reg_offset(s, a->rn),
@@ -3616,7 +3616,7 @@ static void do_ppz_fp(DisasContext *s, arg_rpr_esz *a,
                       gen_helper_gvec_3_ptr *fn)
 {
     unsigned vsz = vec_full_reg_size(s);
-    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

     tcg_gen_gvec_3_ptr(pred_full_reg_offset(s, a->rd),
                        vec_full_reg_offset(s, a->rn),
@@ -3668,7 +3668,7 @@ static bool trans_FTMAD(DisasContext *s, arg_FTMAD *a)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3708,7 +3708,7 @@ static bool trans_FADDA(DisasContext *s, arg_rprr_esz *a)
     t_pg = tcg_temp_new_ptr();
     tcg_gen_addi_ptr(t_rm, cpu_env, vec_full_reg_offset(s, a->rm));
     tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
-    t_fpst = get_fpstatus_ptr(a->esz == MO_16);
+    t_fpst = get_fpstatus_ptr(a->esz == MO_UW);
     t_desc = tcg_const_i32(simd_desc(vsz, vsz, 0));

     fns[a->esz - 1](t_val, t_val, t_rm, t_pg, t_fpst, t_desc);
@@ -3735,7 +3735,7 @@ static bool do_zzz_fp(DisasContext *s, arg_rrr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3777,7 +3777,7 @@ static bool do_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3844,7 +3844,7 @@ static void do_fp_imm(DisasContext *s, arg_rpri_esz *a, uint64_t imm,
                       gen_helper_sve_fp2scalar *fn)
 {
     TCGv_i64 temp = tcg_const_i64(imm);
-    do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_16, temp, fn);
+    do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_UW, temp, fn);
     tcg_temp_free_i64(temp);
 }

@@ -3893,7 +3893,7 @@ static bool do_fp_cmp(DisasContext *s, arg_rprr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(pred_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3937,7 +3937,7 @@ static bool trans_FCADD(DisasContext *s, arg_FCADD *a)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -4044,7 +4044,7 @@ static bool trans_FCMLA_zzxz(DisasContext *s, arg_FCMLA_zzxz *a)
     tcg_debug_assert(a->rd == a->ra);
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -4186,7 +4186,7 @@ static bool trans_FRINTI(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16,
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW,
                       frint_fns[a->esz - 1]);
 }

@@ -4200,7 +4200,7 @@ static bool trans_FRINTX(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
@@ -4211,7 +4211,7 @@ static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
         TCGv_i32 tmode = tcg_const_i32(mode);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

         gen_helper_set_rmode(tmode, tmode, status);

@@ -4262,7 +4262,7 @@ static bool trans_FRECPX(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a)
@@ -4275,7 +4275,7 @@ static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a)
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 092eb5e..549874c 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -52,7 +52,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8)
             (extract32(imm8, 0, 6) << 3);
         imm <<= 16;
         break;
-    case MO_16:
+    case MO_UW:
         imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
             (extract32(imm8, 6, 1) ? 0x3000 : 0x4000) |
             (extract32(imm8, 0, 6) << 6);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 39266cf..8d10922 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1477,7 +1477,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     case MO_UB:
         tcg_gen_st8_i32(var, cpu_env, offset);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i32(var, cpu_env, offset);
         break;
     case MO_32:
@@ -1496,7 +1496,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     case MO_UB:
         tcg_gen_st8_i64(var, cpu_env, offset);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i64(var, cpu_env, offset);
         break;
     case MO_32:
@@ -4267,7 +4267,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_ssra32_i32,
       .fniv = gen_ssra_vec,
       .load_dest = true,
@@ -4325,7 +4325,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_16, },
+      .vece = MO_UW, },
     { .fni4 = gen_usra32_i32,
       .fniv = gen_usra_vec,
       .load_dest = true,
@@ -4353,7 +4353,7 @@ static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)

 static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff >> shift);
+    uint64_t mask = dup_const(MO_UW, 0xffff >> shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shri_i64(t, a, shift);
@@ -4405,7 +4405,7 @@ const GVecGen2i sri_op[4] = {
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_shr32_ins_i32,
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
@@ -4433,7 +4433,7 @@ static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)

 static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff << shift);
+    uint64_t mask = dup_const(MO_UW, 0xffff << shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shli_i64(t, a, shift);
@@ -4483,7 +4483,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_shl32_ins_i32,
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
@@ -4579,7 +4579,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_mla32_i32,
       .fniv = gen_mla_vec,
       .load_dest = true,
@@ -4603,7 +4603,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_mls32_i32,
       .fniv = gen_mls_vec,
       .load_dest = true,
@@ -4649,7 +4649,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_helper_neon_tst_u16,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_cmtst_i32,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
@@ -4686,7 +4686,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_h,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_s,
       .write_aofs = true,
@@ -4724,7 +4724,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_h,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_s,
       .opt_opc = vecop_list_sqadd,
@@ -4762,7 +4762,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_h,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_s,
       .opt_opc = vecop_list_uqsub,
@@ -4800,7 +4800,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_h,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_s,
       .opt_opc = vecop_list_sqsub,
@@ -6876,7 +6876,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     size = MO_UB;
                     element = (insn >> 17) & 7;
                 } else if (insn & (1 << 17)) {
-                    size = MO_16;
+                    size = MO_UW;
                     element = (insn >> 18) & 3;
                 } else {
                     size = MO_32;
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 0e45300..0535bae 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -323,7 +323,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 {
     if (CODE64(s)) {
-        return ot == MO_16 ? MO_16 : MO_64;
+        return ot == MO_UW ? MO_UW : MO_64;
     } else {
         return ot;
     }
@@ -332,7 +332,7 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 /* Select the size of the stack pointer.  */
 static inline TCGMemOp mo_stacksize(DisasContext *s)
 {
-    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
@@ -356,7 +356,7 @@ static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
    Used for decoding operand size of port opcodes.  */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_UB;
+    return b & 1 ? (ot == MO_UW ? MO_UW : MO_32) : MO_UB;
 }

 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
@@ -369,7 +369,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
             tcg_gen_deposit_tl(cpu_regs[reg - 4], cpu_regs[reg - 4], t0, 8, 8);
         }
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16);
         break;
     case MO_32:
@@ -473,7 +473,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
             return;
         }
         break;
-    case MO_16:
+    case MO_UW:
         /* 16 bit address */
         tcg_gen_ext16u_tl(s->A0, a0);
         a0 = s->A0;
@@ -530,7 +530,7 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
             tcg_gen_ext8u_tl(dst, src);
         }
         return dst;
-    case MO_16:
+    case MO_UW:
         if (sign) {
             tcg_gen_ext16s_tl(dst, src);
         } else {
@@ -583,7 +583,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     case MO_UB:
         gen_helper_inb(v, cpu_env, n);
         break;
-    case MO_16:
+    case MO_UW:
         gen_helper_inw(v, cpu_env, n);
         break;
     case MO_32:
@@ -600,7 +600,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     case MO_UB:
         gen_helper_outb(cpu_env, v, n);
         break;
-    case MO_16:
+    case MO_UW:
         gen_helper_outw(cpu_env, v, n);
         break;
     case MO_32:
@@ -622,7 +622,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
         case MO_UB:
             gen_helper_check_iob(cpu_env, s->tmp2_i32);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_check_iow(cpu_env, s->tmp2_i32);
             break;
         case MO_32:
@@ -1562,7 +1562,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
         tcg_gen_ext8u_tl(s->T0, s->T0);
         tcg_gen_muli_tl(s->T0, s->T0, 0x01010101);
         goto do_long;
-    case MO_16:
+    case MO_UW:
         /* Replicate the 16-bit input so that a 32-bit rotate works.  */
         tcg_gen_deposit_tl(s->T0, s->T0, s->T0, 16, 16);
         goto do_long;
@@ -1664,7 +1664,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
         case MO_UB:
             mask = 7;
             goto do_shifts;
-        case MO_16:
+        case MO_UW:
             mask = 15;
         do_shifts:
             shift = op2 & mask;
@@ -1722,7 +1722,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UB:
             gen_helper_rcrb(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_rcrw(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_32:
@@ -1741,7 +1741,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UB:
             gen_helper_rclb(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_rclw(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_32:
@@ -1778,7 +1778,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_gen_andi_tl(count, count_in, mask);

     switch (ot) {
-    case MO_16:
+    case MO_UW:
         /* Note: we implement the Intel behaviour for shift count > 16.
            This means "shrdw C, B, A" shifts A:B:A >> C.  Build the B:A
            portion by constructing it as a 32-bit value.  */
@@ -1817,7 +1817,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
             tcg_gen_shl_tl(s->T1, s->T1, s->tmp4);
         } else {
             tcg_gen_shl_tl(s->tmp0, s->T0, s->tmp0);
-            if (ot == MO_16) {
+            if (ot == MO_UW) {
                 /* Only needed if count > 16, for Intel behaviour.  */
                 tcg_gen_subfi_tl(s->tmp4, 33, count);
                 tcg_gen_shr_tl(s->tmp4, s->T1, s->tmp4);
@@ -2026,7 +2026,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s,
         }
         break;

-    case MO_16:
+    case MO_UW:
         if (mod == 0) {
             if (rm == 6) {
                 base = -1;
@@ -2187,7 +2187,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     case MO_UB:
         ret = x86_ldub_code(env, s);
         break;
-    case MO_16:
+    case MO_UW:
         ret = x86_lduw_code(env, s);
         break;
     case MO_32:
@@ -2400,12 +2400,12 @@ static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)

 static inline void gen_stack_A0(DisasContext *s)
 {
-    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_16, cpu_regs[R_ESP], R_SS, -1);
+    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_UW, cpu_regs[R_ESP], R_SS, -1);
 }

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2421,7 +2421,7 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s)
 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
     TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -3613,7 +3613,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
         case 0xc4: /* pinsrw */
         case 0x1c4:
             s->rip_offset = 1;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             val = x86_ldub_code(env, s);
             if (b1) {
                 val &= 7;
@@ -3786,7 +3786,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if ((b & 0xff) == 0xf0) {
                     ot = MO_UB;
                 } else if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
                 } else {
                     ot = MO_64;
                 }
@@ -3815,7 +3815,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     goto illegal_op;
                 }
                 if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
                 } else {
                     ot = MO_64;
                 }
@@ -4630,7 +4630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
            data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
            over 0x66 if both are present.  */
-        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_16 : MO_32);
+        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_32);
         /* In 64-bit mode, 0x67 selects 32-bit addressing.  */
         aflag = (prefixes & PREFIX_ADR ? MO_32 : MO_64);
     } else {
@@ -4638,13 +4638,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (s->code32 ^ ((prefixes & PREFIX_DATA) != 0)) {
             dflag = MO_32;
         } else {
-            dflag = MO_16;
+            dflag = MO_UW;
         }
         /* In 16/32-bit mode, 0x67 selects the opposite addressing.  */
         if (s->code32 ^ ((prefixes & PREFIX_ADR) != 0)) {
             aflag = MO_32;
         }  else {
-            aflag = MO_16;
+            aflag = MO_UW;
         }
     }

@@ -4872,21 +4872,21 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_gen_ext8u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_andi_tl(cpu_cc_src, s->T0, 0xff00);
                 set_cc_op(s, CC_OP_MULB);
                 break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
+            case MO_UW:
+                gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX);
                 tcg_gen_ext16u_tl(s->T0, s->T0);
                 tcg_gen_ext16u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_src, s->T0);
                 set_cc_op(s, CC_OP_MULW);
                 break;
@@ -4921,24 +4921,24 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_gen_ext8s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_ext8s_tl(s->tmp0, s->T0);
                 tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
                 set_cc_op(s, CC_OP_MULB);
                 break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
+            case MO_UW:
+                gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX);
                 tcg_gen_ext16s_tl(s->T0, s->T0);
                 tcg_gen_ext16s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_ext16s_tl(s->tmp0, s->T0);
                 tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
                 tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
                 set_cc_op(s, CC_OP_MULW);
                 break;
             default:
@@ -4972,7 +4972,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             case MO_UB:
                 gen_helper_divb_AL(cpu_env, s->T0);
                 break;
-            case MO_16:
+            case MO_UW:
                 gen_helper_divw_AX(cpu_env, s->T0);
                 break;
             default:
@@ -4991,7 +4991,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             case MO_UB:
                 gen_helper_idivb_AL(cpu_env, s->T0);
                 break;
-            case MO_16:
+            case MO_UW:
                 gen_helper_idivw_AX(cpu_env, s->T0);
                 break;
             default:
@@ -5026,7 +5026,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* operand size for jumps is 64 bit */
                 ot = MO_64;
             } else if (op == 3 || op == 5) {
-                ot = dflag != MO_16 ? MO_32 + (rex_w == 1) : MO_16;
+                ot = dflag != MO_UW ? MO_32 + (rex_w == 1) : MO_UW;
             } else if (op == 6) {
                 /* default push size is 64 bit */
                 ot = mo_pushpop(s, dflag);
@@ -5057,7 +5057,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 2: /* call Ev */
             /* XXX: optimize if memory (no 'and' is necessary) */
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_ext16u_tl(s->T0, s->T0);
             }
             next_eip = s->pc - s->cs_base;
@@ -5070,7 +5070,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 3: /* lcall Ev */
             gen_op_ld_v(s, ot, s->T1, s->A0);
             gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         do_lcall:
             if (s->pe && !s->vm86) {
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -5087,7 +5087,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_jr(s, s->tmp4);
             break;
         case 4: /* jmp Ev */
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_ext16u_tl(s->T0, s->T0);
             }
             gen_op_jmp_v(s->T0);
@@ -5097,7 +5097,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 5: /* ljmp Ev */
             gen_op_ld_v(s, ot, s->T1, s->A0);
             gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         do_ljmp:
             if (s->pe && !s->vm86) {
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -5152,14 +5152,14 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
 #endif
         case MO_32:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
             break;
-        case MO_16:
+        case MO_UW:
             gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
             tcg_gen_ext8s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+            gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
             break;
         default:
             tcg_abort();
@@ -5180,11 +5180,11 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_sari_tl(s->T0, s->T0, 31);
             gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0);
             break;
-        case MO_16:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+        case MO_UW:
+            gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
             tcg_gen_sari_tl(s->T0, s->T0, 15);
-            gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+            gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
             break;
         default:
             tcg_abort();
@@ -5538,7 +5538,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = (modrm >> 3) & 7;
         if (reg >= 6 || reg == R_CS)
             goto illegal_op;
-        gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+        gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
         gen_movl_seg_T0(s, reg);
         /* Note that reg == R_SS in gen_movl_seg_T0 always sets is_jmp.  */
         if (s->base.is_jmp) {
@@ -5558,7 +5558,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (reg >= 6)
             goto illegal_op;
         gen_op_movl_T0_seg(s, reg);
-        ot = mod == 3 ? dflag : MO_16;
+        ot = mod == 3 ? dflag : MO_UW;
         gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
         break;

@@ -5734,7 +5734,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1b5: /* lgs Gv */
         op = R_GS;
     do_lxx:
-        ot = dflag != MO_16 ? MO_32 : MO_16;
+        ot = dflag != MO_UW ? MO_32 : MO_UW;
         modrm = x86_ldub_code(env, s);
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
@@ -5744,7 +5744,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_op_ld_v(s, ot, s->T1, s->A0);
         gen_add_A0_im(s, 1 << ot);
         /* load the segment first to handle exceptions properly */
-        gen_op_ld_v(s, MO_16, s->T0, s->A0);
+        gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         gen_movl_seg_T0(s, op);
         /* then put the data */
         gen_op_mov_reg_v(s, ot, reg, s->T1);
@@ -6287,7 +6287,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 case 0:
                     gen_helper_fnstsw(s->tmp2_i32, cpu_env);
                     tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32);
-                    gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                    gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                     break;
                 default:
                     goto unknown_op;
@@ -6575,14 +6575,14 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         break;
     case 0xe8: /* call im */
         {
-            if (dflag != MO_16) {
+            if (dflag != MO_UW) {
                 tval = (int32_t)insn_get(env, s, MO_32);
             } else {
-                tval = (int16_t)insn_get(env, s, MO_16);
+                tval = (int16_t)insn_get(env, s, MO_UW);
             }
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tval &= 0xffff;
             } else if (!CODE64(s)) {
                 tval &= 0xffffffff;
@@ -6601,20 +6601,20 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             ot = dflag;
             offset = insn_get(env, s, ot);
-            selector = insn_get(env, s, MO_16);
+            selector = insn_get(env, s, MO_UW);

             tcg_gen_movi_tl(s->T0, selector);
             tcg_gen_movi_tl(s->T1, offset);
         }
         goto do_lcall;
     case 0xe9: /* jmp im */
-        if (dflag != MO_16) {
+        if (dflag != MO_UW) {
             tval = (int32_t)insn_get(env, s, MO_32);
         } else {
-            tval = (int16_t)insn_get(env, s, MO_16);
+            tval = (int16_t)insn_get(env, s, MO_UW);
         }
         tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         } else if (!CODE64(s)) {
             tval &= 0xffffffff;
@@ -6630,7 +6630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             ot = dflag;
             offset = insn_get(env, s, ot);
-            selector = insn_get(env, s, MO_16);
+            selector = insn_get(env, s, MO_UW);

             tcg_gen_movi_tl(s->T0, selector);
             tcg_gen_movi_tl(s->T1, offset);
@@ -6639,7 +6639,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0xeb: /* jmp Jb */
         tval = (int8_t)insn_get(env, s, MO_UB);
         tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         }
         gen_jmp(s, tval);
@@ -6648,15 +6648,15 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         tval = (int8_t)insn_get(env, s, MO_UB);
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
-        if (dflag != MO_16) {
+        if (dflag != MO_UW) {
             tval = (int32_t)insn_get(env, s, MO_32);
         } else {
-            tval = (int16_t)insn_get(env, s, MO_16);
+            tval = (int16_t)insn_get(env, s, MO_UW);
         }
     do_jcc:
         next_eip = s->pc - s->cs_base;
         tval += next_eip;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         }
         gen_bnd_jmp(s);
@@ -6697,7 +6697,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         } else {
             ot = gen_pop_T0(s);
             if (s->cpl == 0) {
-                if (dflag != MO_16) {
+                if (dflag != MO_UW) {
                     gen_helper_write_eflags(cpu_env, s->T0,
                                             tcg_const_i32((TF_MASK | AC_MASK |
                                                            ID_MASK | NT_MASK |
@@ -6712,7 +6712,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 }
             } else {
                 if (s->cpl <= s->iopl) {
-                    if (dflag != MO_16) {
+                    if (dflag != MO_UW) {
                         gen_helper_write_eflags(cpu_env, s->T0,
                                                 tcg_const_i32((TF_MASK |
                                                                AC_MASK |
@@ -6729,7 +6729,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                                                               & 0xffff));
                     }
                 } else {
-                    if (dflag != MO_16) {
+                    if (dflag != MO_UW) {
                         gen_helper_write_eflags(cpu_env, s->T0,
                                            tcg_const_i32((TF_MASK | AC_MASK |
                                                           ID_MASK | NT_MASK)));
@@ -7110,7 +7110,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_op_mov_v_reg(s, ot, s->T0, reg);
         gen_lea_modrm(env, s, modrm);
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-        if (ot == MO_16) {
+        if (ot == MO_UW) {
             gen_helper_boundw(cpu_env, s->A0, s->tmp2_i32);
         } else {
             gen_helper_boundl(cpu_env, s->A0, s->tmp2_i32);
@@ -7149,7 +7149,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tval = (int8_t)insn_get(env, s, MO_UB);
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tval &= 0xffff;
             }

@@ -7291,7 +7291,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_READ);
             tcg_gen_ld32u_tl(s->T0, cpu_env,
                              offsetof(CPUX86State, ldt.selector));
-            ot = mod == 3 ? dflag : MO_16;
+            ot = mod == 3 ? dflag : MO_UW;
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
         case 2: /* lldt */
@@ -7301,7 +7301,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 gen_helper_lldt(cpu_env, s->tmp2_i32);
             }
@@ -7312,7 +7312,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_READ);
             tcg_gen_ld32u_tl(s->T0, cpu_env,
                              offsetof(CPUX86State, tr.selector));
-            ot = mod == 3 ? dflag : MO_16;
+            ot = mod == 3 ? dflag : MO_UW;
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
         case 3: /* ltr */
@@ -7322,7 +7322,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 gen_helper_ltr(cpu_env, s->tmp2_i32);
             }
@@ -7331,7 +7331,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 5: /* verw */
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             gen_update_cc_op(s);
             if (op == 4) {
                 gen_helper_verr(cpu_env, s->T0);
@@ -7353,10 +7353,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0,
                              cpu_env, offsetof(CPUX86State, gdt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
+            gen_op_st_v(s, MO_UW, s->T0, s->A0);
             gen_add_A0_im(s, 2);
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
@@ -7408,10 +7408,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_READ);
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
+            gen_op_st_v(s, MO_UW, s->T0, s->A0);
             gen_add_A0_im(s, 2);
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
@@ -7558,10 +7558,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_GDTR_WRITE);
             gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
             gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
@@ -7575,10 +7575,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_WRITE);
             gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
             gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
@@ -7590,9 +7590,9 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, cr[0]));
             if (CODE64(s)) {
                 mod = (modrm >> 6) & 3;
-                ot = (mod != 3 ? MO_16 : s->dflag);
+                ot = (mod != 3 ? MO_UW : s->dflag);
             } else {
-                ot = MO_16;
+                ot = MO_UW;
             }
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
@@ -7619,7 +7619,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 break;
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_WRITE_CR0);
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             gen_helper_lmsw(cpu_env, s->T0);
             gen_jmp_im(s, s->pc - s->cs_base);
             gen_eob(s);
@@ -7720,7 +7720,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             t0 = tcg_temp_local_new();
             t1 = tcg_temp_local_new();
             t2 = tcg_temp_local_new();
-            ot = MO_16;
+            ot = MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = (modrm >> 3) & 7;
             mod = (modrm >> 6) & 3;
@@ -7765,10 +7765,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             TCGv t0;
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            ot = dflag != MO_16 ? MO_32 : MO_16;
+            ot = dflag != MO_UW ? MO_32 : MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = ((modrm >> 3) & 7) | rex_r;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             t0 = tcg_temp_local_new();
             gen_update_cc_op(s);
             if (b == 0x102) {
@@ -7813,7 +7813,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcl */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 gen_bndck(env, s, modrm, TCG_COND_LTU, cpu_bndl[reg]);
@@ -7821,7 +7821,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcu */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 TCGv_i64 notu = tcg_temp_new_i64();
@@ -7830,7 +7830,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_temp_free_i64(notu);
             } else if (prefixes & PREFIX_DATA) {
                 /* bndmov -- from reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
+                if (reg >= 4 || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 if (mod == 3) {
@@ -7865,7 +7865,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
+                    || s->aflag == MO_UW
                     || a.base < -1) {
                     goto illegal_op;
                 }
@@ -7903,7 +7903,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndmk */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
@@ -7931,13 +7931,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcn */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 gen_bndck(env, s, modrm, TCG_COND_GTU, cpu_bndu[reg]);
             } else if (prefixes & PREFIX_DATA) {
                 /* bndmov -- to reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
+                if (reg >= 4 || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 if (mod == 3) {
@@ -7970,7 +7970,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
+                    || s->aflag == MO_UW
                     || a.base < -1) {
                     goto illegal_op;
                 }
@@ -8341,7 +8341,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = ((modrm >> 3) & 7) | rex_r;

         if (s->prefix & PREFIX_DATA) {
-            ot = MO_16;
+            ot = MO_UW;
         } else {
             ot = mo_64_32(dflag);
         }
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 20a9777..525c7fe 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -21087,7 +21087,7 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc,
             imm = sextract32(ctx->opcode, 11, 11);
             imm = (int16_t)(imm << 6) >> 6;
             if (rt != 0) {
-                tcg_gen_movi_tl(cpu_gpr[rt], dup_const(MO_16, imm));
+                tcg_gen_movi_tl(cpu_gpr[rt], dup_const(MO_UW, imm));
             }
         }
         break;
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 4130dd1..71efef4 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -406,29 +406,29 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
 GEN_VXFORM_V(vaddubm, MO_UB, tcg_gen_gvec_add, 0, 0);
 GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10cuq, PPC_NONE, PPC2_ISA300, 0x0000F800)
-GEN_VXFORM_V(vadduhm, MO_16, tcg_gen_gvec_add, 0, 1);
+GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1);
 GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2);
 GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
 GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
-GEN_VXFORM_V(vsubuhm, MO_16, tcg_gen_gvec_sub, 0, 17);
+GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17);
 GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18);
 GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
 GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
-GEN_VXFORM_V(vmaxuh, MO_16, tcg_gen_gvec_umax, 1, 1);
+GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1);
 GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2);
 GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
 GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
-GEN_VXFORM_V(vmaxsh, MO_16, tcg_gen_gvec_smax, 1, 5);
+GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5);
 GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6);
 GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
 GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
-GEN_VXFORM_V(vminuh, MO_16, tcg_gen_gvec_umin, 1, 9);
+GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9);
 GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10);
 GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
 GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
-GEN_VXFORM_V(vminsh, MO_16, tcg_gen_gvec_smin, 1, 13);
+GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13);
 GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14);
 GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
 GEN_VXFORM(vavgub, 1, 16);
@@ -531,18 +531,18 @@ GEN_VXFORM(vmulesb, 4, 12);
 GEN_VXFORM(vmulesh, 4, 13);
 GEN_VXFORM(vmulesw, 4, 14);
 GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4);
-GEN_VXFORM_V(vslh, MO_16, tcg_gen_gvec_shlv, 2, 5);
+GEN_VXFORM_V(vslh, MO_UW, tcg_gen_gvec_shlv, 2, 5);
 GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
 GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
-GEN_VXFORM_V(vsrh, MO_16, tcg_gen_gvec_shrv, 2, 9);
+GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9);
 GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10);
 GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
 GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
-GEN_VXFORM_V(vsrah, MO_16, tcg_gen_gvec_sarv, 2, 13);
+GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13);
 GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14);
 GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
 GEN_VXFORM(vsrv, 2, 28);
@@ -592,18 +592,18 @@ static void glue(gen_, NAME)(DisasContext *ctx)                         \
 GEN_VXFORM_SAT(vaddubs, MO_UB, add, usadd, 0, 8);
 GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10uq, PPC_NONE, PPC2_ISA300, 0x0000F800)
-GEN_VXFORM_SAT(vadduhs, MO_16, add, usadd, 0, 9);
+GEN_VXFORM_SAT(vadduhs, MO_UW, add, usadd, 0, 9);
 GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \
                 vmul10euq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10);
 GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12);
-GEN_VXFORM_SAT(vaddshs, MO_16, add, ssadd, 0, 13);
+GEN_VXFORM_SAT(vaddshs, MO_UW, add, ssadd, 0, 13);
 GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14);
 GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24);
-GEN_VXFORM_SAT(vsubuhs, MO_16, sub, ussub, 0, 25);
+GEN_VXFORM_SAT(vsubuhs, MO_UW, sub, ussub, 0, 25);
 GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
 GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
-GEN_VXFORM_SAT(vsubshs, MO_16, sub, sssub, 0, 29);
+GEN_VXFORM_SAT(vsubshs, MO_UW, sub, sssub, 0, 29);
 GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
 GEN_VXFORM(vaddcuq, 0, 5);
@@ -913,7 +913,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
     }

 GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
-GEN_VXFORM_VSPLT(vsplth, MO_16, 6, 9);
+GEN_VXFORM_VSPLT(vsplth, MO_UW, 6, 9);
 GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
 GEN_VXFORM_UIMM_SPLAT(vextractuh, 6, 9, 14);
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index bb424c8..65da6b3 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -47,7 +47,7 @@
 #define NUM_VEC_ELEMENT_BITS(es) (NUM_VEC_ELEMENT_BYTES(es) * BITS_PER_BYTE)

 #define ES_8    MO_UB
-#define ES_16   MO_16
+#define ES_16   MO_UW
 #define ES_32   MO_32
 #define ES_64   MO_64
 #define ES_128  4
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index b813054..28e1b1d 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -78,7 +78,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
     switch (es) {
     case MO_UB:
         return s390_vec_read_element8(v, enr);
-    case MO_16:
+    case MO_UW:
         return s390_vec_read_element16(v, enr);
     case MO_32:
         return s390_vec_read_element32(v, enr);
@@ -124,7 +124,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
     case MO_UB:
         s390_vec_write_element8(v, enr, data);
         break;
-    case MO_16:
+    case MO_UW:
         s390_vec_write_element16(v, enr, data);
         break;
     case MO_32:
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index e4e0845..3d90c4b 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -430,20 +430,20 @@ typedef enum {
     /* Load/store register.  Described here as 3.3.12, but the helper
        that emits them can transform to 3.3.10 or 3.3.13.  */
     I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
-    I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_16 << 30,
+    I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_UW << 30,
     I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_32 << 30,
     I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,

     I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
-    I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_16 << 30,
+    I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_UW << 30,
     I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_32 << 30,
     I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,

     I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
-    I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_16 << 30,
+    I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UW << 30,

     I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30,
-    I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_16 << 30,
+    I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UW << 30,
     I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30,

     I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_32 << 30,
@@ -870,7 +870,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,

     /*
      * Test all bytes 0x00 or 0xff second.  This can match cases that
-     * might otherwise take 2 or 3 insns for MO_16 or MO_32 below.
+     * might otherwise take 2 or 3 insns for MO_UW or MO_32 below.
      */
     for (i = imm8 = 0; i < 8; i++) {
         uint8_t byte = v64 >> (i * 8);
@@ -889,7 +889,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
      * cannot find an expansion there's no point checking a larger
      * width because we already know by replication it cannot match.
      */
-    if (v64 == dup_const(MO_16, v64)) {
+    if (v64 == dup_const(MO_UW, v64)) {
         uint16_t v16 = v64;

         if (is_shimm16(v16, &cmode, &imm8)) {
@@ -1733,7 +1733,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
         if (bswap) {
             tcg_out_ldst_r(s, I3312_LDRH, data_r, addr_r, otype, off_r);
             tcg_out_rev16(s, data_r, data_r);
-            tcg_out_sxt(s, ext, MO_16, data_r, data_r);
+            tcg_out_sxt(s, ext, MO_UW, data_r, data_r);
         } else {
             tcg_out_ldst_r(s, (ext ? I3312_LDRSHX : I3312_LDRSHW),
                            data_r, addr_r, otype, off_r);
@@ -1775,7 +1775,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
     case MO_UB:
         tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap && data_r != TCG_REG_XZR) {
             tcg_out_rev16(s, TCG_REG_TMP, data_r);
             data_r = TCG_REG_TMP;
@@ -2190,7 +2190,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext16s_i64:
     case INDEX_op_ext16s_i32:
-        tcg_out_sxt(s, ext, MO_16, a0, a1);
+        tcg_out_sxt(s, ext, MO_UW, a0, a1);
         break;
     case INDEX_op_ext_i32_i64:
     case INDEX_op_ext32s_i64:
@@ -2202,7 +2202,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext16u_i64:
     case INDEX_op_ext16u_i32:
-        tcg_out_uxt(s, MO_16, a0, a1);
+        tcg_out_uxt(s, MO_UW, a0, a1);
         break;
     case INDEX_op_extu_i32_i64:
     case INDEX_op_ext32u_i64:
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 542ffa8..0bd400e 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1432,7 +1432,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     case MO_UB:
         argreg = tcg_out_arg_reg8(s, argreg, datalo);
         break;
-    case MO_16:
+    case MO_UW:
         argreg = tcg_out_arg_reg16(s, argreg, datalo);
         break;
     case MO_32:
@@ -1624,7 +1624,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     case MO_UB:
         tcg_out_st8_r(s, cond, datalo, addrlo, addend);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_bswap16st(s, cond, TCG_REG_R0, datalo);
             tcg_out_st16_r(s, cond, TCG_REG_R0, addrlo, addend);
@@ -1669,7 +1669,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
     case MO_UB:
         tcg_out_st8_12(s, COND_AL, datalo, addrlo, 0);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_bswap16st(s, COND_AL, TCG_REG_R0, datalo);
             tcg_out_st16_8(s, COND_AL, TCG_REG_R0, addrlo, 0);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 0d68ba4..31c3664 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -893,7 +893,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
             tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a);
             a = r;
             /* FALLTHRU */
-        case MO_16:
+        case MO_UW:
             tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a);
             a = r;
             /* FALLTHRU */
@@ -927,7 +927,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
         case MO_32:
             tcg_out_vex_modrm_offset(s, OPC_VBROADCASTSS, r, 0, base, offset);
             break;
-        case MO_16:
+        case MO_UW:
             tcg_out_vex_modrm_offset(s, OPC_VPINSRW, r, r, base, offset);
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
@@ -2164,7 +2164,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         tcg_out_modrm_sib_offset(s, OPC_MOVB_EvGv + P_REXB_R + seg,
                                  datalo, base, index, 0, ofs);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_mov(s, TCG_TYPE_I32, scratch, datalo);
             tcg_out_rolw_8(s, scratch);
@@ -2747,15 +2747,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         OPC_PMAXUB, OPC_PMAXUW, OPC_PMAXUD, OPC_UD2
     };
     static int const shlv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16.  */
+        /* TODO: AVX512 adds support for MO_UW.  */
         OPC_UD2, OPC_UD2, OPC_VPSLLVD, OPC_VPSLLVQ
     };
     static int const shrv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16.  */
+        /* TODO: AVX512 adds support for MO_UW.  */
         OPC_UD2, OPC_UD2, OPC_VPSRLVD, OPC_VPSRLVQ
     };
     static int const sarv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16, MO_64.  */
+        /* TODO: AVX512 adds support for MO_UW, MO_64.  */
         OPC_UD2, OPC_UD2, OPC_VPSRAVD, OPC_UD2
     };
     static int const shls_insn[4] = {
@@ -2925,7 +2925,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         sub = args[3];
         goto gen_simd_imm8;
     case INDEX_op_x86_blend_vec:
-        if (vece == MO_16) {
+        if (vece == MO_UW) {
             insn = OPC_PBLENDW;
         } else if (vece == MO_32) {
             insn = (have_avx2 ? OPC_VPBLENDD : OPC_BLENDPS);
@@ -3290,9 +3290,9 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)

     case INDEX_op_shls_vec:
     case INDEX_op_shrs_vec:
-        return vece >= MO_16;
+        return vece >= MO_UW;
     case INDEX_op_sars_vec:
-        return vece >= MO_16 && vece <= MO_32;
+        return vece >= MO_UW && vece <= MO_32;

     case INDEX_op_shlv_vec:
     case INDEX_op_shrv_vec:
@@ -3314,7 +3314,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_usadd_vec:
     case INDEX_op_sssub_vec:
     case INDEX_op_ussub_vec:
-        return vece <= MO_16;
+        return vece <= MO_UW;
     case INDEX_op_smin_vec:
     case INDEX_op_smax_vec:
     case INDEX_op_umin_vec:
@@ -3352,13 +3352,13 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
               tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));

     if (shr) {
-        tcg_gen_shri_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_shri_vec(MO_16, t2, t2, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t2, t2, imm + 8);
     } else {
-        tcg_gen_shli_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_shli_vec(MO_16, t2, t2, imm + 8);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        tcg_gen_shri_vec(MO_16, t2, t2, 8);
+        tcg_gen_shli_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_shli_vec(MO_UW, t2, t2, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
+        tcg_gen_shri_vec(MO_UW, t2, t2, 8);
     }

     vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
@@ -3381,8 +3381,8 @@ static void expand_vec_sari(TCGType type, unsigned vece,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
         vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
-        tcg_gen_sari_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_sari_vec(MO_16, t2, t2, imm + 8);
+        tcg_gen_sari_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_sari_vec(MO_UW, t2, t2, imm + 8);
         vec_gen_3(INDEX_op_x86_packss_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2));
         tcg_temp_free_vec(t1);
@@ -3446,8 +3446,8 @@ static void expand_vec_mul(TCGType type, unsigned vece,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t2));
         vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(t2), tcgv_vec_arg(v2));
-        tcg_gen_mul_vec(MO_16, t1, t1, t2);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
+        tcg_gen_mul_vec(MO_UW, t1, t1, t2);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
         vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t1));
         tcg_temp_free_vec(t1);
@@ -3469,10 +3469,10 @@ static void expand_vec_mul(TCGType type, unsigned vece,
                   tcgv_vec_arg(t3), tcgv_vec_arg(v1), tcgv_vec_arg(t4));
         vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t4), tcgv_vec_arg(t4), tcgv_vec_arg(v2));
-        tcg_gen_mul_vec(MO_16, t1, t1, t2);
-        tcg_gen_mul_vec(MO_16, t3, t3, t4);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        tcg_gen_shri_vec(MO_16, t3, t3, 8);
+        tcg_gen_mul_vec(MO_UW, t1, t1, t2);
+        tcg_gen_mul_vec(MO_UW, t3, t3, t4);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
+        tcg_gen_shri_vec(MO_UW, t3, t3, 8);
         vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t3));
         tcg_temp_free_vec(t1);
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index c6d13ea..1780cb1 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1383,7 +1383,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UB:
         i = tcg_out_call_iarg_reg8(s, i, l->datalo_reg);
         break;
-    case MO_16:
+    case MO_UW:
         i = tcg_out_call_iarg_reg16(s, i, l->datalo_reg);
         break;
     case MO_32:
@@ -1570,12 +1570,12 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
         tcg_out_opc_imm(s, OPC_SB, lo, base, 0);
         break;

-    case MO_16 | MO_BSWAP:
+    case MO_UW | MO_BSWAP:
         tcg_out_opc_imm(s, OPC_ANDI, TCG_TMP1, lo, 0xffff);
         tcg_out_bswap16(s, TCG_TMP1, TCG_TMP1);
         lo = TCG_TMP1;
         /* FALLTHRU */
-    case MO_16:
+    case MO_UW:
         tcg_out_opc_imm(s, OPC_SH, lo, base, 0);
         break;

diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 9c60c0f..20bc19d 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1104,7 +1104,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UB:
         tcg_out_ext8u(s, a2, a2);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_ext16u(s, a2, a2);
         break;
     default:
@@ -1219,7 +1219,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_UB:
         tcg_out_opc_store(s, OPC_SB, base, lo, 0);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_opc_store(s, OPC_SH, base, lo, 0);
         break;
     case MO_32:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 479ee2e..85550b5 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -885,7 +885,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
     case MO_UB:
         tcg_out_arithi(s, r, r, 0xff, ARITH_AND);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_arithi(s, r, r, 16, SHIFT_SLL);
         tcg_out_arithi(s, r, r, 16, SHIFT_SRL);
         break;
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 9658c36..da409f5 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -308,7 +308,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
     switch (vece) {
     case MO_UB:
         return 0x0101010101010101ull * (uint8_t)c;
-    case MO_16:
+    case MO_UW:
         return 0x0001000100010001ull * (uint16_t)c;
     case MO_32:
         return 0x0000000100000001ull * (uint32_t)c;
@@ -327,7 +327,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
         tcg_gen_ext8u_i32(out, in);
         tcg_gen_muli_i32(out, out, 0x01010101);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_deposit_i32(out, in, in, 16, 16);
         break;
     case MO_32:
@@ -345,7 +345,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
         tcg_gen_ext8u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0101010101010101ull);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ext16u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0001000100010001ull);
         break;
@@ -558,7 +558,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
                 tcg_gen_extrl_i64_i32(t_32, in_64);
             } else if (vece == MO_UB) {
                 tcg_gen_movi_i32(t_32, in_c & 0xff);
-            } else if (vece == MO_16) {
+            } else if (vece == MO_UW) {
                 tcg_gen_movi_i32(t_32, in_c & 0xffff);
             } else {
                 tcg_gen_movi_i32(t_32, in_c);
@@ -1459,7 +1459,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             case MO_UB:
                 tcg_gen_ld8u_i32(in, cpu_env, aofs);
                 break;
-            case MO_16:
+            case MO_UW:
                 tcg_gen_ld16u_i32(in, cpu_env, aofs);
                 break;
             default:
@@ -1526,7 +1526,7 @@ void tcg_gen_gvec_dup16i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint16_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_16, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UW, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_dup8i(uint32_t dofs, uint32_t oprsz,
@@ -1579,7 +1579,7 @@ void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_add16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_addv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1613,7 +1613,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add16,
           .opt_opc = vecop_list_add,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_add_i32,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add32,
@@ -1644,7 +1644,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds16,
           .opt_opc = vecop_list_add,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_add_i32,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds32,
@@ -1685,7 +1685,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs16,
           .opt_opc = vecop_list_sub,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs32,
@@ -1732,7 +1732,7 @@ void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_sub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_subv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1764,7 +1764,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub16,
           .opt_opc = vecop_list_sub,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub32,
@@ -1795,7 +1795,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul16,
           .opt_opc = vecop_list_mul,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_mul_i32,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul32,
@@ -1824,7 +1824,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls16,
           .opt_opc = vecop_list_mul,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_mul_i32,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls32,
@@ -1862,7 +1862,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd32,
           .opt_opc = vecop_list,
@@ -1888,7 +1888,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub32,
           .opt_opc = vecop_list,
@@ -1930,7 +1930,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_usadd_i32,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd32,
@@ -1974,7 +1974,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_ussub_i32,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub32,
@@ -2002,7 +2002,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_smin_i32,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin32,
@@ -2030,7 +2030,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_umin_i32,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin32,
@@ -2058,7 +2058,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_smax_i32,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax32,
@@ -2086,7 +2086,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_umax_i32,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax32,
@@ -2127,7 +2127,7 @@ void tcg_gen_vec_neg8_i64(TCGv_i64 d, TCGv_i64 b)

 void tcg_gen_vec_neg16_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_negv_mask(d, b, m);
     tcg_temp_free_i64(m);
 }
@@ -2160,7 +2160,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_neg_i32,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg32,
@@ -2206,7 +2206,7 @@ static void tcg_gen_vec_abs8_i64(TCGv_i64 d, TCGv_i64 b)

 static void tcg_gen_vec_abs16_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    gen_absv_mask(d, b, MO_16);
+    gen_absv_mask(d, b, MO_UW);
 }

 void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2223,7 +2223,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_abs_i32,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs32,
@@ -2461,7 +2461,7 @@ void tcg_gen_vec_shl8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_shl16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff << c);
+    uint64_t mask = dup_const(MO_UW, 0xffff << c);
     tcg_gen_shli_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2480,7 +2480,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shli_i32,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl32i,
@@ -2512,7 +2512,7 @@ void tcg_gen_vec_shr8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_shr16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff >> c);
+    uint64_t mask = dup_const(MO_UW, 0xffff >> c);
     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2531,7 +2531,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shri_i32,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr32i,
@@ -2570,8 +2570,8 @@ void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_sar16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t s_mask = dup_const(MO_16, 0x8000 >> c);
-    uint64_t c_mask = dup_const(MO_16, 0xffff >> c);
+    uint64_t s_mask = dup_const(MO_UW, 0x8000 >> c);
+    uint64_t c_mask = dup_const(MO_UW, 0xffff >> c);
     TCGv_i64 s = tcg_temp_new_i64();

     tcg_gen_shri_i64(d, a, c);
@@ -2596,7 +2596,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sari_i32,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar32i,
@@ -2884,7 +2884,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shl_mod_i32,
           .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl32v,
@@ -2947,7 +2947,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shr_mod_i32,
           .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr32v,
@@ -3010,7 +3010,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sar_mod_i32,
           .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar32v,
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index d7ffc9e..b0a4d98 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -270,7 +270,7 @@ void tcg_gen_dup32i_vec(TCGv_vec r, uint32_t a)

 void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a)
 {
-    do_dupi_vec(r, MO_REG, dup_const(MO_16, a));
+    do_dupi_vec(r, MO_REG, dup_const(MO_UW, a));
 }

 void tcg_gen_dup8i_vec(TCGv_vec r, uint32_t a)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 61eda33..21d448c 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2723,7 +2723,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
     case MO_UB:
         op &= ~MO_BSWAP;
         break;
-    case MO_16:
+    case MO_UW:
         break;
     case MO_32:
         if (!is64) {
@@ -2810,7 +2810,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)

     if ((orig_memop ^ memop) & MO_BSWAP) {
         switch (orig_memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_bswap16_i32(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext16s_i32(val, val);
@@ -2837,7 +2837,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         swap = tcg_temp_new_i32();
         switch (memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_ext16u_i32(swap, val);
             tcg_gen_bswap16_i32(swap, swap);
             break;
@@ -2890,7 +2890,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)

     if ((orig_memop ^ memop) & MO_BSWAP) {
         switch (orig_memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_bswap16_i64(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext16s_i64(val, val);
@@ -2928,7 +2928,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         swap = tcg_temp_new_i64();
         switch (memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_ext16u_i64(swap, val);
             tcg_gen_bswap16_i64(swap, swap);
             break;
@@ -3025,8 +3025,8 @@ typedef void (*gen_atomic_op_i64)(TCGv_i64, TCGv_env, TCGv, TCGv_i64);

 static void * const table_cmpxchg[16] = {
     [MO_UB] = gen_helper_atomic_cmpxchgb,
-    [MO_16 | MO_LE] = gen_helper_atomic_cmpxchgw_le,
-    [MO_16 | MO_BE] = gen_helper_atomic_cmpxchgw_be,
+    [MO_UW | MO_LE] = gen_helper_atomic_cmpxchgw_le,
+    [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be,
     [MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le,
     [MO_32 | MO_BE] = gen_helper_atomic_cmpxchgl_be,
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
@@ -3249,8 +3249,8 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 #define GEN_ATOMIC_HELPER(NAME, OP, NEW)                                \
 static void * const table_##NAME[16] = {                                \
     [MO_UB] = gen_helper_atomic_##NAME##b,                               \
-    [MO_16 | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
-    [MO_16 | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
+    [MO_UW | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
+    [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
     [MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
     [MO_32 | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 5636d6b..a378887 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -1303,7 +1303,7 @@ uint64_t dup_const(unsigned vece, uint64_t c);
 #define dup_const(VECE, C)                                         \
     (__builtin_constant_p(VECE)                                    \
      ?   ((VECE) == MO_UB ? 0x0101010101010101ull * (uint8_t)(C)   \
-        : (VECE) == MO_16 ? 0x0001000100010001ull * (uint16_t)(C)  \
+        : (VECE) == MO_UW ? 0x0001000100010001ull * (uint16_t)(C)  \
         : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C)  \
         : dup_const(VECE, C))                                      \
      : dup_const(VECE, C))
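
As a hand-worked sanity check of the replicating multiplies above (an
editorial example, not part of the patch): 0xabcd fits in 16 bits, so
the MO_UW arm evaluates to

    0x0001000100010001ull * (uint16_t)0xabcd == 0xabcdabcdabcdabcdull

that is, the constant duplicated across all four 16-bit lanes, with no
carries crossing lane boundaries.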
--
1.8.3.1

* [Qemu-riscv] [Qemu-devel] [PATCH v2 02/20] tcg: Replace MO_16 with MO_UW alias
@ 2019-07-22 15:40   ` tony.nguyen
  0 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:40 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, david, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, mst, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, claudio.fontana, qemu-s390x, qemu-ppc,
	amarkovic, pbonzini, aurelien

Preparation for splitting MO_16 out of TCGMemOp into a new,
accelerator-independent MemOp.

Once MO_16 becomes a value of MemOp rather than TCGMemOp, any remaining
comparison or coercion between the two types would trigger
-Wenum-compare and -Wenum-conversion, so this patch first renames every
use of MO_16 to the MO_UW alias.
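
As an illustration (the trimmed enum bodies and the helper functions
below are hypothetical, written for this example rather than taken from
the QEMU tree), this is the kind of mixed-type use the warnings would
flag once the two types diverge:

    typedef enum TCGMemOp { MO_UB = 0, MO_UW = 1, MO_32 = 2 } TCGMemOp;
    typedef enum MemOp    { MO_8  = 0, MO_16 = 1 }            MemOp;

    static TCGMemOp coerce(MemOp size)
    {
        return size;        /* -Wenum-conversion: implicit MemOp -> TCGMemOp */
    }

    static int is_halfword(TCGMemOp op, MemOp size)
    {
        return op == size;  /* -Wenum-compare: operands of distinct enum types */
    }

    int main(void)
    {
        return is_halfword(coerce(MO_16), MO_16);
    }

Building this with a compiler that supports both flags (recent GCC or
Clang) reproduces both diagnostics; renaming MO_16 to its TCGMemOp
alias MO_UW, as this patch does, keeps each comparison within a single
enum type.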

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |   4 +-
 target/arm/translate-a64.c          |  90 ++++++++--------
 target/arm/translate-sve.c          |  40 ++++----
 target/arm/translate-vfp.inc.c      |   2 +-
 target/arm/translate.c              |  32 +++---
 target/i386/translate.c             | 200 ++++++++++++++++++------------------
 target/mips/translate.c             |   2 +-
 target/ppc/translate/vmx-impl.inc.c |  28 ++---
 target/s390x/translate_vx.inc.c     |   2 +-
 target/s390x/vec.h                  |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  20 ++--
 tcg/arm/tcg-target.inc.c            |   6 +-
 tcg/i386/tcg-target.inc.c           |  48 ++++-----
 tcg/mips/tcg-target.inc.c           |   6 +-
 tcg/riscv/tcg-target.inc.c          |   4 +-
 tcg/sparc/tcg-target.inc.c          |   2 +-
 tcg/tcg-op-gvec.c                   |  72 ++++++-------
 tcg/tcg-op-vec.c                    |   2 +-
 tcg/tcg-op.c                        |  18 ++--
 tcg/tcg.h                           |   2 +-
 20 files changed, 292 insertions(+), 292 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index 4c7e11f..f6bef3d 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1546,7 +1546,7 @@ void HELPER(sve_cpy_m_h)(void *vd, void *vn, void *vg,
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;

-    mm = dup_const(MO_16, mm);
+    mm = dup_const(MO_UW, mm);
     for (i = 0; i < opr_sz; i += 1) {
         uint64_t nn = n[i];
         uint64_t pp = expand_pred_h(pg[H1(i)]);
@@ -1600,7 +1600,7 @@ void HELPER(sve_cpy_z_h)(void *vd, void *vg, uint64_t val, uint32_t desc)
     uint64_t *d = vd;
     uint8_t *pg = vg;

-    val = dup_const(MO_16, val);
+    val = dup_const(MO_UW, val);
     for (i = 0; i < opr_sz; i += 1) {
         d[i] = val & expand_pred_h(pg[H1(i)]);
     }
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index f840b43..3acfccb 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -492,7 +492,7 @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg)
 {
     TCGv_i32 v = tcg_temp_new_i32();

-    tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_16));
+    tcg_gen_ld16u_i32(v, cpu_env, fp_reg_offset(s, reg, MO_UW));
     return v;
 }

@@ -996,7 +996,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_UB:
         tcg_gen_ld8u_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ld16u_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1005,7 +1005,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_SB:
         tcg_gen_ld8s_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16|MO_SIGN:
+    case MO_SW:
         tcg_gen_ld16s_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32|MO_SIGN:
@@ -1028,13 +1028,13 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
     case MO_UB:
         tcg_gen_ld8u_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ld16u_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_SB:
         tcg_gen_ld8s_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_16|MO_SIGN:
+    case MO_SW:
         tcg_gen_ld16s_i32(tcg_dest, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1055,7 +1055,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
     case MO_UB:
         tcg_gen_st8_i64(tcg_src, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i64(tcg_src, cpu_env, vect_off);
         break;
     case MO_32:
@@ -1077,7 +1077,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
     case MO_UB:
         tcg_gen_st8_i32(tcg_src, cpu_env, vect_off);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i32(tcg_src, cpu_env, vect_off);
         break;
     case MO_32:
@@ -5269,7 +5269,7 @@ static void handle_fp_compare(DisasContext *s, int size,
                               bool cmp_with_zero, bool signal_all_nans)
 {
     TCGv_i64 tcg_flags = tcg_temp_new_i64();
-    TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
+    TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);

     if (size == MO_64) {
         TCGv_i64 tcg_vn, tcg_vm;
@@ -5306,7 +5306,7 @@ static void handle_fp_compare(DisasContext *s, int size,
                 gen_helper_vfp_cmps_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             }
             break;
-        case MO_16:
+        case MO_UW:
             if (signal_all_nans) {
                 gen_helper_vfp_cmpeh_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             } else {
@@ -5360,7 +5360,7 @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
         size = MO_64;
         break;
     case 3:
-        size = MO_16;
+        size = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -5411,7 +5411,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
         size = MO_64;
         break;
     case 3:
-        size = MO_16;
+        size = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -5477,7 +5477,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
         sz = MO_64;
         break;
     case 3:
-        sz = MO_16;
+        sz = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -6282,7 +6282,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
         sz = MO_64;
         break;
     case 3:
-        sz = MO_16;
+        sz = MO_UW;
         if (dc_isar_feature(aa64_fp16, s)) {
             break;
         }
@@ -6593,7 +6593,7 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
             break;
         case 3:
             /* 16 bit */
-            tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_16));
+            tcg_gen_ld16u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UW));
             break;
         default:
             g_assert_not_reached();
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        TCGMemOp msize = esize == 16 ? MO_UW : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -7204,7 +7204,7 @@ static void disas_simd_across_lanes(DisasContext *s, uint32_t insn)
          * Note that correct NaN propagation requires that we do these
          * operations in exactly the order specified by the pseudocode.
          */
-        TCGv_ptr fpst = get_fpstatus_ptr(size == MO_16);
+        TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);
         int fpopcode = opcode | is_min << 4 | is_u << 5;
         int vmap = (1 << elements) - 1;
         TCGv_i32 tcg_res32 = do_reduction_op(s, fpopcode, rn, esize,
@@ -7591,7 +7591,7 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
             } else {
                 if (o2) {
                     /* FMOV (vector, immediate) - half-precision */
-                    imm = vfp_expand_imm(MO_16, abcdefgh);
+                    imm = vfp_expand_imm(MO_UW, abcdefgh);
                     /* now duplicate across the lanes */
                     imm = bitfield_replicate(imm, 16);
                 } else {
@@ -7699,7 +7699,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
                 unallocated_encoding(s);
                 return;
             } else {
-                size = MO_16;
+                size = MO_UW;
             }
         } else {
             size = extract32(size, 0, 1) ? MO_64 : MO_32;
@@ -7709,7 +7709,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
             return;
         }

-        fpst = get_fpstatus_ptr(size == MO_16);
+        fpst = get_fpstatus_ptr(size == MO_UW);
         break;
     default:
         unallocated_encoding(s);
@@ -7760,7 +7760,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
         read_vec_element_i32(s, tcg_op1, rn, 0, size);
         read_vec_element_i32(s, tcg_op2, rn, 1, size);

-        if (size == MO_16) {
+        if (size == MO_UW) {
             switch (opcode) {
             case 0xc: /* FMAXNMP */
                 gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
@@ -8222,7 +8222,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
                                    int elements, int is_signed,
                                    int fracbits, int size)
 {
-    TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
+    TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_UW);
     TCGv_i32 tcg_shift = NULL;

     TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
@@ -8281,7 +8281,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
                     }
                 }
                 break;
-            case MO_16:
+            case MO_UW:
                 if (fracbits) {
                     if (is_signed) {
                         gen_helper_vfp_sltoh(tcg_float, tcg_int32,
@@ -8339,7 +8339,7 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
     } else if (immh & 4) {
         size = MO_32;
     } else if (immh & 2) {
-        size = MO_16;
+        size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
@@ -8384,7 +8384,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     } else if (immh & 0x4) {
         size = MO_32;
     } else if (immh & 0x2) {
-        size = MO_16;
+        size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
             unallocated_encoding(s);
             return;
@@ -8403,7 +8403,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     assert(!(is_scalar && is_q));

     tcg_rmode = tcg_const_i32(arm_rmode_to_sf(FPROUNDING_ZERO));
-    tcg_fpstatus = get_fpstatus_ptr(size == MO_16);
+    tcg_fpstatus = get_fpstatus_ptr(size == MO_UW);
     gen_helper_set_rmode(tcg_rmode, tcg_rmode, tcg_fpstatus);
     fracbits = (16 << size) - immhb;
     tcg_shift = tcg_const_i32(fracbits);
@@ -8429,7 +8429,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
         int maxpass = is_scalar ? 1 : ((8 << is_q) >> size);

         switch (size) {
-        case MO_16:
+        case MO_UW:
             if (is_u) {
                 fn = gen_helper_vfp_touhh;
             } else {
@@ -9388,7 +9388,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         return;
     }

-    fpst = get_fpstatus_ptr(size == MO_16);
+    fpst = get_fpstatus_ptr(size == MO_UW);

     if (is_double) {
         TCGv_i64 tcg_op = tcg_temp_new_i64();
@@ -9440,7 +9440,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         bool swap = false;
         int pass, maxpasses;

-        if (size == MO_16) {
+        if (size == MO_UW) {
             switch (opcode) {
             case 0x2e: /* FCMLT (zero) */
                 swap = true;
@@ -11422,8 +11422,8 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             int passreg = pass < (maxpass / 2) ? rn : rm;
             int passelt = (pass << 1) & (maxpass - 1);

-            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_16);
-            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_16);
+            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_UW);
+            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_UW);
             tcg_res[pass] = tcg_temp_new_i32();

             switch (fpopcode) {
@@ -11450,7 +11450,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
         }

         for (pass = 0; pass < maxpass; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UW);
             tcg_temp_free_i32(tcg_res[pass]);
         }

@@ -11463,15 +11463,15 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op2 = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_16);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_16);
+            read_vec_element_i32(s, tcg_op1, rn, pass, MO_UW);
+            read_vec_element_i32(s, tcg_op2, rm, pass, MO_UW);

             switch (fpopcode) {
             case 0x0: /* FMAXNM */
                 gen_helper_advsimd_maxnumh(tcg_res, tcg_op1, tcg_op2, fpst);
                 break;
             case 0x1: /* FMLA */
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
                 gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
                                            fpst);
                 break;
@@ -11496,7 +11496,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
             case 0x9: /* FMLS */
                 /* As usual for ARM, separate negation for fused multiply-add */
                 tcg_gen_xori_i32(tcg_op1, tcg_op1, 0x8000);
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
                 gen_helper_advsimd_muladdh(tcg_res, tcg_op1, tcg_op2, tcg_res,
                                            fpst);
                 break;
@@ -11537,7 +11537,7 @@ static void disas_simd_three_reg_same_fp16(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UW);
             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op1);
             tcg_temp_free_i32(tcg_op2);
@@ -11727,7 +11727,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
         for (pass = 0; pass < 4; pass++) {
             tcg_res[pass] = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_16);
+            read_vec_element_i32(s, tcg_res[pass], rn, srcelt + pass, MO_UW);
             gen_helper_vfp_fcvt_f16_to_f32(tcg_res[pass], tcg_res[pass],
                                            fpst, ahp);
         }
@@ -11768,7 +11768,7 @@ static void handle_rev(DisasContext *s, int opcode, bool u,

             read_vec_element(s, tcg_tmp, rn, i, grp_size);
             switch (grp_size) {
-            case MO_16:
+            case MO_UW:
                 tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp);
                 break;
             case MO_32:
@@ -12499,7 +12499,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
         if (!fp_access_check(s)) {
             return;
         }
-        handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_16);
+        handle_simd_intfp_conv(s, rd, rn, elements, !u, 0, MO_UW);
         return;
     }
     break;
@@ -12508,7 +12508,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
     case 0x2e: /* FCMLT (zero) */
     case 0x6c: /* FCMGE (zero) */
     case 0x6d: /* FCMLE (zero) */
-        handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_16, rn, rd);
+        handle_2misc_fcmp_zero(s, fpop, is_scalar, 0, is_q, MO_UW, rn, rd);
         return;
     case 0x3d: /* FRECPE */
     case 0x3f: /* FRECPX */
@@ -12668,7 +12668,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op, rn, pass, MO_16);
+            read_vec_element_i32(s, tcg_op, rn, pass, MO_UW);

             switch (fpop) {
             case 0x1a: /* FCVTNS */
@@ -12715,7 +12715,7 @@ static void disas_simd_two_reg_misc_fp16(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_16);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UW);

             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op);
@@ -12839,7 +12839,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        size = MO_16;
+        size = MO_UW;
         /* is_fp, but we pass cpu_env not fp_status.  */
         break;
     default:
@@ -12852,7 +12852,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         /* convert insn encoded size to TCGMemOp size */
         switch (size) {
         case 0: /* half-precision */
-            size = MO_16;
+            size = MO_UW;
             is_fp16 = true;
             break;
         case MO_32: /* single precision */
@@ -12899,7 +12899,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     /* Given TCGMemOp size, adjust register and indexing.  */
     switch (size) {
-    case MO_16:
+    case MO_UW:
         index = h << 2 | l << 1 | m;
         break;
     case MO_32:
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index ec5fb11..2bc1bd1 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1679,7 +1679,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
         tcg_temp_free_i32(t32);
         break;

-    case MO_16:
+    case MO_UW:
         t32 = tcg_temp_new_i32();
         tcg_gen_extrl_i64_i32(t32, val);
         if (d) {
@@ -3314,7 +3314,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_h,
           .opt_opc = vecop_list,
-          .vece = MO_16,
+          .vece = MO_UW,
           .scalar_first = true },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
@@ -3468,7 +3468,7 @@ static bool trans_FMLA_zzxz(DisasContext *s, arg_FMLA_zzxz *a)

     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3494,7 +3494,7 @@ static bool trans_FMUL_zzx(DisasContext *s, arg_FMUL_zzx *a)

     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3526,7 +3526,7 @@ static void do_reduce(DisasContext *s, arg_rpr_esz *a,

     tcg_gen_addi_ptr(t_zn, cpu_env, vec_full_reg_offset(s, a->rn));
     tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
-    status = get_fpstatus_ptr(a->esz == MO_16);
+    status = get_fpstatus_ptr(a->esz == MO_UW);

     fn(temp, t_zn, t_pg, status, t_desc);
     tcg_temp_free_ptr(t_zn);
@@ -3568,7 +3568,7 @@ DO_VPZ(FMAXV, fmaxv)
 static void do_zz_fp(DisasContext *s, arg_rr_esz *a, gen_helper_gvec_2_ptr *fn)
 {
     unsigned vsz = vec_full_reg_size(s);
-    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

     tcg_gen_gvec_2_ptr(vec_full_reg_offset(s, a->rd),
                        vec_full_reg_offset(s, a->rn),
@@ -3616,7 +3616,7 @@ static void do_ppz_fp(DisasContext *s, arg_rpr_esz *a,
                       gen_helper_gvec_3_ptr *fn)
 {
     unsigned vsz = vec_full_reg_size(s);
-    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+    TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

     tcg_gen_gvec_3_ptr(pred_full_reg_offset(s, a->rd),
                        vec_full_reg_offset(s, a->rn),
@@ -3668,7 +3668,7 @@ static bool trans_FTMAD(DisasContext *s, arg_FTMAD *a)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3708,7 +3708,7 @@ static bool trans_FADDA(DisasContext *s, arg_rprr_esz *a)
     t_pg = tcg_temp_new_ptr();
     tcg_gen_addi_ptr(t_rm, cpu_env, vec_full_reg_offset(s, a->rm));
     tcg_gen_addi_ptr(t_pg, cpu_env, pred_full_reg_offset(s, a->pg));
-    t_fpst = get_fpstatus_ptr(a->esz == MO_16);
+    t_fpst = get_fpstatus_ptr(a->esz == MO_UW);
     t_desc = tcg_const_i32(simd_desc(vsz, vsz, 0));

     fns[a->esz - 1](t_val, t_val, t_rm, t_pg, t_fpst, t_desc);
@@ -3735,7 +3735,7 @@ static bool do_zzz_fp(DisasContext *s, arg_rrr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3777,7 +3777,7 @@ static bool do_zpzz_fp(DisasContext *s, arg_rprr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3844,7 +3844,7 @@ static void do_fp_imm(DisasContext *s, arg_rpri_esz *a, uint64_t imm,
                       gen_helper_sve_fp2scalar *fn)
 {
     TCGv_i64 temp = tcg_const_i64(imm);
-    do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_16, temp, fn);
+    do_fp_scalar(s, a->rd, a->rn, a->pg, a->esz == MO_UW, temp, fn);
     tcg_temp_free_i64(temp);
 }

@@ -3893,7 +3893,7 @@ static bool do_fp_cmp(DisasContext *s, arg_rprr_esz *a,
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(pred_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -3937,7 +3937,7 @@ static bool trans_FCADD(DisasContext *s, arg_FCADD *a)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_4_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -4044,7 +4044,7 @@ static bool trans_FCMLA_zzxz(DisasContext *s, arg_FCMLA_zzxz *a)
     tcg_debug_assert(a->rd == a->ra);
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);
         tcg_gen_gvec_3_ptr(vec_full_reg_offset(s, a->rd),
                            vec_full_reg_offset(s, a->rn),
                            vec_full_reg_offset(s, a->rm),
@@ -4186,7 +4186,7 @@ static bool trans_FRINTI(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16,
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW,
                       frint_fns[a->esz - 1]);
 }

@@ -4200,7 +4200,7 @@ static bool trans_FRINTX(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
@@ -4211,7 +4211,7 @@ static bool do_frint_mode(DisasContext *s, arg_rpr_esz *a, int mode)
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
         TCGv_i32 tmode = tcg_const_i32(mode);
-        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_16);
+        TCGv_ptr status = get_fpstatus_ptr(a->esz == MO_UW);

         gen_helper_set_rmode(tmode, tmode, status);

@@ -4262,7 +4262,7 @@ static bool trans_FRECPX(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a)
@@ -4275,7 +4275,7 @@ static bool trans_FSQRT(DisasContext *s, arg_rpr_esz *a)
     if (a->esz == 0) {
         return false;
     }
-    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_16, fns[a->esz - 1]);
+    return do_zpz_ptr(s, a->rd, a->rn, a->pg, a->esz == MO_UW, fns[a->esz - 1]);
 }

 static bool trans_SCVTF_hh(DisasContext *s, arg_rpr_esz *a)
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 092eb5e..549874c 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -52,7 +52,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8)
             (extract32(imm8, 0, 6) << 3);
         imm <<= 16;
         break;
-    case MO_16:
+    case MO_UW:
         imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
             (extract32(imm8, 6, 1) ? 0x3000 : 0x4000) |
             (extract32(imm8, 0, 6) << 6);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 39266cf..8d10922 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1477,7 +1477,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     case MO_UB:
         tcg_gen_st8_i32(var, cpu_env, offset);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i32(var, cpu_env, offset);
         break;
     case MO_32:
@@ -1496,7 +1496,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     case MO_UB:
         tcg_gen_st8_i64(var, cpu_env, offset);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_st16_i64(var, cpu_env, offset);
         break;
     case MO_32:
@@ -4267,7 +4267,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_ssra32_i32,
       .fniv = gen_ssra_vec,
       .load_dest = true,
@@ -4325,7 +4325,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_16, },
+      .vece = MO_UW, },
     { .fni4 = gen_usra32_i32,
       .fniv = gen_usra_vec,
       .load_dest = true,
@@ -4353,7 +4353,7 @@ static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)

 static void gen_shr16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff >> shift);
+    uint64_t mask = dup_const(MO_UW, 0xffff >> shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shri_i64(t, a, shift);
@@ -4405,7 +4405,7 @@ const GVecGen2i sri_op[4] = {
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_shr32_ins_i32,
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
@@ -4433,7 +4433,7 @@ static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)

 static void gen_shl16_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff << shift);
+    uint64_t mask = dup_const(MO_UW, 0xffff << shift);
     TCGv_i64 t = tcg_temp_new_i64();

     tcg_gen_shli_i64(t, a, shift);
@@ -4483,7 +4483,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_shl32_ins_i32,
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
@@ -4579,7 +4579,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_mla32_i32,
       .fniv = gen_mla_vec,
       .load_dest = true,
@@ -4603,7 +4603,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_mls32_i32,
       .fniv = gen_mls_vec,
       .load_dest = true,
@@ -4649,7 +4649,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_helper_neon_tst_u16,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fni4 = gen_cmtst_i32,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
@@ -4686,7 +4686,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_h,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_s,
       .write_aofs = true,
@@ -4724,7 +4724,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_h,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_s,
       .opt_opc = vecop_list_sqadd,
@@ -4762,7 +4762,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_h,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_s,
       .opt_opc = vecop_list_uqsub,
@@ -4800,7 +4800,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_h,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_16 },
+      .vece = MO_UW },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_s,
       .opt_opc = vecop_list_sqsub,
@@ -6876,7 +6876,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     size = MO_UB;
                     element = (insn >> 17) & 7;
                 } else if (insn & (1 << 17)) {
-                    size = MO_16;
+                    size = MO_UW;
                     element = (insn >> 18) & 3;
                 } else {
                     size = MO_32;
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 0e45300..0535bae 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -323,7 +323,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 {
     if (CODE64(s)) {
-        return ot == MO_16 ? MO_16 : MO_64;
+        return ot == MO_UW ? MO_UW : MO_64;
     } else {
         return ot;
     }
@@ -332,7 +332,7 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 /* Select the size of the stack pointer.  */
 static inline TCGMemOp mo_stacksize(DisasContext *s)
 {
-    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
@@ -356,7 +356,7 @@ static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
    Used for decoding operand size of port opcodes.  */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_UB;
+    return b & 1 ? (ot == MO_UW ? MO_UW : MO_32) : MO_UB;
 }

 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
@@ -369,7 +369,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
             tcg_gen_deposit_tl(cpu_regs[reg - 4], cpu_regs[reg - 4], t0, 8, 8);
         }
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16);
         break;
     case MO_32:
@@ -473,7 +473,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
             return;
         }
         break;
-    case MO_16:
+    case MO_UW:
         /* 16 bit address */
         tcg_gen_ext16u_tl(s->A0, a0);
         a0 = s->A0;
@@ -530,7 +530,7 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
             tcg_gen_ext8u_tl(dst, src);
         }
         return dst;
-    case MO_16:
+    case MO_UW:
         if (sign) {
             tcg_gen_ext16s_tl(dst, src);
         } else {
@@ -583,7 +583,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     case MO_UB:
         gen_helper_inb(v, cpu_env, n);
         break;
-    case MO_16:
+    case MO_UW:
         gen_helper_inw(v, cpu_env, n);
         break;
     case MO_32:
@@ -600,7 +600,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     case MO_UB:
         gen_helper_outb(cpu_env, v, n);
         break;
-    case MO_16:
+    case MO_UW:
         gen_helper_outw(cpu_env, v, n);
         break;
     case MO_32:
@@ -622,7 +622,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
         case MO_UB:
             gen_helper_check_iob(cpu_env, s->tmp2_i32);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_check_iow(cpu_env, s->tmp2_i32);
             break;
         case MO_32:
@@ -1562,7 +1562,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
         tcg_gen_ext8u_tl(s->T0, s->T0);
         tcg_gen_muli_tl(s->T0, s->T0, 0x01010101);
         goto do_long;
-    case MO_16:
+    case MO_UW:
         /* Replicate the 16-bit input so that a 32-bit rotate works.  */
         tcg_gen_deposit_tl(s->T0, s->T0, s->T0, 16, 16);
         goto do_long;
@@ -1664,7 +1664,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
         case MO_UB:
             mask = 7;
             goto do_shifts;
-        case MO_16:
+        case MO_UW:
             mask = 15;
         do_shifts:
             shift = op2 & mask;
@@ -1722,7 +1722,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UB:
             gen_helper_rcrb(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_rcrw(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_32:
@@ -1741,7 +1741,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UB:
             gen_helper_rclb(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_16:
+        case MO_UW:
             gen_helper_rclw(s->T0, cpu_env, s->T0, s->T1);
             break;
         case MO_32:
@@ -1778,7 +1778,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_gen_andi_tl(count, count_in, mask);

     switch (ot) {
-    case MO_16:
+    case MO_UW:
         /* Note: we implement the Intel behaviour for shift count > 16.
            This means "shrdw C, B, A" shifts A:B:A >> C.  Build the B:A
            portion by constructing it as a 32-bit value.  */
@@ -1817,7 +1817,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
             tcg_gen_shl_tl(s->T1, s->T1, s->tmp4);
         } else {
             tcg_gen_shl_tl(s->tmp0, s->T0, s->tmp0);
-            if (ot == MO_16) {
+            if (ot == MO_UW) {
                 /* Only needed if count > 16, for Intel behaviour.  */
                 tcg_gen_subfi_tl(s->tmp4, 33, count);
                 tcg_gen_shr_tl(s->tmp4, s->T1, s->tmp4);
@@ -2026,7 +2026,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s,
         }
         break;

-    case MO_16:
+    case MO_UW:
         if (mod == 0) {
             if (rm == 6) {
                 base = -1;
@@ -2187,7 +2187,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     case MO_UB:
         ret = x86_ldub_code(env, s);
         break;
-    case MO_16:
+    case MO_UW:
         ret = x86_lduw_code(env, s);
         break;
     case MO_32:
@@ -2400,12 +2400,12 @@ static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)

 static inline void gen_stack_A0(DisasContext *s)
 {
-    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_16, cpu_regs[R_ESP], R_SS, -1);
+    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_UW, cpu_regs[R_ESP], R_SS, -1);
 }

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2421,7 +2421,7 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s)
 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
     TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -3613,7 +3613,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
         case 0xc4: /* pinsrw */
         case 0x1c4:
             s->rip_offset = 1;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             val = x86_ldub_code(env, s);
             if (b1) {
                 val &= 7;
@@ -3786,7 +3786,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if ((b & 0xff) == 0xf0) {
                     ot = MO_UB;
                 } else if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
                 } else {
                     ot = MO_64;
                 }
@@ -3815,7 +3815,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     goto illegal_op;
                 }
                 if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_16 : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
                 } else {
                     ot = MO_64;
                 }
@@ -4630,7 +4630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
            data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
            over 0x66 if both are present.  */
-        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_16 : MO_32);
+        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_32);
         /* In 64-bit mode, 0x67 selects 32-bit addressing.  */
         aflag = (prefixes & PREFIX_ADR ? MO_32 : MO_64);
     } else {
@@ -4638,13 +4638,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (s->code32 ^ ((prefixes & PREFIX_DATA) != 0)) {
             dflag = MO_32;
         } else {
-            dflag = MO_16;
+            dflag = MO_UW;
         }
         /* In 16/32-bit mode, 0x67 selects the opposite addressing.  */
         if (s->code32 ^ ((prefixes & PREFIX_ADR) != 0)) {
             aflag = MO_32;
         }  else {
-            aflag = MO_16;
+            aflag = MO_UW;
         }
     }

@@ -4872,21 +4872,21 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_gen_ext8u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_andi_tl(cpu_cc_src, s->T0, 0xff00);
                 set_cc_op(s, CC_OP_MULB);
                 break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
+            case MO_UW:
+                gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX);
                 tcg_gen_ext16u_tl(s->T0, s->T0);
                 tcg_gen_ext16u_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_src, s->T0);
                 set_cc_op(s, CC_OP_MULW);
                 break;
@@ -4921,24 +4921,24 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_gen_ext8s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_ext8s_tl(s->tmp0, s->T0);
                 tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
                 set_cc_op(s, CC_OP_MULB);
                 break;
-            case MO_16:
-                gen_op_mov_v_reg(s, MO_16, s->T1, R_EAX);
+            case MO_UW:
+                gen_op_mov_v_reg(s, MO_UW, s->T1, R_EAX);
                 tcg_gen_ext16s_tl(s->T0, s->T0);
                 tcg_gen_ext16s_tl(s->T1, s->T1);
                 /* XXX: use 32 bit mul which could be faster */
                 tcg_gen_mul_tl(s->T0, s->T0, s->T1);
-                gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                 tcg_gen_mov_tl(cpu_cc_dst, s->T0);
                 tcg_gen_ext16s_tl(s->tmp0, s->T0);
                 tcg_gen_sub_tl(cpu_cc_src, s->T0, s->tmp0);
                 tcg_gen_shri_tl(s->T0, s->T0, 16);
-                gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+                gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
                 set_cc_op(s, CC_OP_MULW);
                 break;
             default:
@@ -4972,7 +4972,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             case MO_UB:
                 gen_helper_divb_AL(cpu_env, s->T0);
                 break;
-            case MO_16:
+            case MO_UW:
                 gen_helper_divw_AX(cpu_env, s->T0);
                 break;
             default:
@@ -4991,7 +4991,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             case MO_UB:
                 gen_helper_idivb_AL(cpu_env, s->T0);
                 break;
-            case MO_16:
+            case MO_UW:
                 gen_helper_idivw_AX(cpu_env, s->T0);
                 break;
             default:
@@ -5026,7 +5026,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* operand size for jumps is 64 bit */
                 ot = MO_64;
             } else if (op == 3 || op == 5) {
-                ot = dflag != MO_16 ? MO_32 + (rex_w == 1) : MO_16;
+                ot = dflag != MO_UW ? MO_32 + (rex_w == 1) : MO_UW;
             } else if (op == 6) {
                 /* default push size is 64 bit */
                 ot = mo_pushpop(s, dflag);
@@ -5057,7 +5057,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
         case 2: /* call Ev */
             /* XXX: optimize if memory (no 'and' is necessary) */
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_ext16u_tl(s->T0, s->T0);
             }
             next_eip = s->pc - s->cs_base;
@@ -5070,7 +5070,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 3: /* lcall Ev */
             gen_op_ld_v(s, ot, s->T1, s->A0);
             gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         do_lcall:
             if (s->pe && !s->vm86) {
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -5087,7 +5087,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_jr(s, s->tmp4);
             break;
         case 4: /* jmp Ev */
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_ext16u_tl(s->T0, s->T0);
             }
             gen_op_jmp_v(s->T0);
@@ -5097,7 +5097,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 5: /* ljmp Ev */
             gen_op_ld_v(s, ot, s->T1, s->A0);
             gen_add_A0_im(s, 1 << ot);
-            gen_op_ld_v(s, MO_16, s->T0, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         do_ljmp:
             if (s->pe && !s->vm86) {
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -5152,14 +5152,14 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             break;
 #endif
         case MO_32:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
             break;
-        case MO_16:
+        case MO_UW:
             gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
             tcg_gen_ext8s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+            gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
             break;
         default:
             tcg_abort();
@@ -5180,11 +5180,11 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_sari_tl(s->T0, s->T0, 31);
             gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0);
             break;
-        case MO_16:
-            gen_op_mov_v_reg(s, MO_16, s->T0, R_EAX);
+        case MO_UW:
+            gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
             tcg_gen_sari_tl(s->T0, s->T0, 15);
-            gen_op_mov_reg_v(s, MO_16, R_EDX, s->T0);
+            gen_op_mov_reg_v(s, MO_UW, R_EDX, s->T0);
             break;
         default:
             tcg_abort();
@@ -5538,7 +5538,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = (modrm >> 3) & 7;
         if (reg >= 6 || reg == R_CS)
             goto illegal_op;
-        gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+        gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
         gen_movl_seg_T0(s, reg);
         /* Note that reg == R_SS in gen_movl_seg_T0 always sets is_jmp.  */
         if (s->base.is_jmp) {
@@ -5558,7 +5558,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (reg >= 6)
             goto illegal_op;
         gen_op_movl_T0_seg(s, reg);
-        ot = mod == 3 ? dflag : MO_16;
+        ot = mod == 3 ? dflag : MO_UW;
         gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
         break;

@@ -5734,7 +5734,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1b5: /* lgs Gv */
         op = R_GS;
     do_lxx:
-        ot = dflag != MO_16 ? MO_32 : MO_16;
+        ot = dflag != MO_UW ? MO_32 : MO_UW;
         modrm = x86_ldub_code(env, s);
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
@@ -5744,7 +5744,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_op_ld_v(s, ot, s->T1, s->A0);
         gen_add_A0_im(s, 1 << ot);
         /* load the segment first to handle exceptions properly */
-        gen_op_ld_v(s, MO_16, s->T0, s->A0);
+        gen_op_ld_v(s, MO_UW, s->T0, s->A0);
         gen_movl_seg_T0(s, op);
         /* then put the data */
         gen_op_mov_reg_v(s, ot, reg, s->T1);
@@ -6287,7 +6287,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 case 0:
                     gen_helper_fnstsw(s->tmp2_i32, cpu_env);
                     tcg_gen_extu_i32_tl(s->T0, s->tmp2_i32);
-                    gen_op_mov_reg_v(s, MO_16, R_EAX, s->T0);
+                    gen_op_mov_reg_v(s, MO_UW, R_EAX, s->T0);
                     break;
                 default:
                     goto unknown_op;
@@ -6575,14 +6575,14 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         break;
     case 0xe8: /* call im */
         {
-            if (dflag != MO_16) {
+            if (dflag != MO_UW) {
                 tval = (int32_t)insn_get(env, s, MO_32);
             } else {
-                tval = (int16_t)insn_get(env, s, MO_16);
+                tval = (int16_t)insn_get(env, s, MO_UW);
             }
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tval &= 0xffff;
             } else if (!CODE64(s)) {
                 tval &= 0xffffffff;
@@ -6601,20 +6601,20 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             ot = dflag;
             offset = insn_get(env, s, ot);
-            selector = insn_get(env, s, MO_16);
+            selector = insn_get(env, s, MO_UW);

             tcg_gen_movi_tl(s->T0, selector);
             tcg_gen_movi_tl(s->T1, offset);
         }
         goto do_lcall;
     case 0xe9: /* jmp im */
-        if (dflag != MO_16) {
+        if (dflag != MO_UW) {
             tval = (int32_t)insn_get(env, s, MO_32);
         } else {
-            tval = (int16_t)insn_get(env, s, MO_16);
+            tval = (int16_t)insn_get(env, s, MO_UW);
         }
         tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         } else if (!CODE64(s)) {
             tval &= 0xffffffff;
@@ -6630,7 +6630,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             ot = dflag;
             offset = insn_get(env, s, ot);
-            selector = insn_get(env, s, MO_16);
+            selector = insn_get(env, s, MO_UW);

             tcg_gen_movi_tl(s->T0, selector);
             tcg_gen_movi_tl(s->T1, offset);
@@ -6639,7 +6639,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0xeb: /* jmp Jb */
         tval = (int8_t)insn_get(env, s, MO_UB);
         tval += s->pc - s->cs_base;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         }
         gen_jmp(s, tval);
@@ -6648,15 +6648,15 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         tval = (int8_t)insn_get(env, s, MO_UB);
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
-        if (dflag != MO_16) {
+        if (dflag != MO_UW) {
             tval = (int32_t)insn_get(env, s, MO_32);
         } else {
-            tval = (int16_t)insn_get(env, s, MO_16);
+            tval = (int16_t)insn_get(env, s, MO_UW);
         }
     do_jcc:
         next_eip = s->pc - s->cs_base;
         tval += next_eip;
-        if (dflag == MO_16) {
+        if (dflag == MO_UW) {
             tval &= 0xffff;
         }
         gen_bnd_jmp(s);
@@ -6697,7 +6697,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         } else {
             ot = gen_pop_T0(s);
             if (s->cpl == 0) {
-                if (dflag != MO_16) {
+                if (dflag != MO_UW) {
                     gen_helper_write_eflags(cpu_env, s->T0,
                                             tcg_const_i32((TF_MASK | AC_MASK |
                                                            ID_MASK | NT_MASK |
@@ -6712,7 +6712,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 }
             } else {
                 if (s->cpl <= s->iopl) {
-                    if (dflag != MO_16) {
+                    if (dflag != MO_UW) {
                         gen_helper_write_eflags(cpu_env, s->T0,
                                                 tcg_const_i32((TF_MASK |
                                                                AC_MASK |
@@ -6729,7 +6729,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                                                               & 0xffff));
                     }
                 } else {
-                    if (dflag != MO_16) {
+                    if (dflag != MO_UW) {
                         gen_helper_write_eflags(cpu_env, s->T0,
                                            tcg_const_i32((TF_MASK | AC_MASK |
                                                           ID_MASK | NT_MASK)));
@@ -7110,7 +7110,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         gen_op_mov_v_reg(s, ot, s->T0, reg);
         gen_lea_modrm(env, s, modrm);
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
-        if (ot == MO_16) {
+        if (ot == MO_UW) {
             gen_helper_boundw(cpu_env, s->A0, s->tmp2_i32);
         } else {
             gen_helper_boundl(cpu_env, s->A0, s->tmp2_i32);
@@ -7149,7 +7149,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tval = (int8_t)insn_get(env, s, MO_UB);
             next_eip = s->pc - s->cs_base;
             tval += next_eip;
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tval &= 0xffff;
             }

@@ -7291,7 +7291,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_READ);
             tcg_gen_ld32u_tl(s->T0, cpu_env,
                              offsetof(CPUX86State, ldt.selector));
-            ot = mod == 3 ? dflag : MO_16;
+            ot = mod == 3 ? dflag : MO_UW;
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
         case 2: /* lldt */
@@ -7301,7 +7301,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_svm_check_intercept(s, pc_start, SVM_EXIT_LDTR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 gen_helper_lldt(cpu_env, s->tmp2_i32);
             }
@@ -7312,7 +7312,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_READ);
             tcg_gen_ld32u_tl(s->T0, cpu_env,
                              offsetof(CPUX86State, tr.selector));
-            ot = mod == 3 ? dflag : MO_16;
+            ot = mod == 3 ? dflag : MO_UW;
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
         case 3: /* ltr */
@@ -7322,7 +7322,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_exception(s, EXCP0D_GPF, pc_start - s->cs_base);
             } else {
                 gen_svm_check_intercept(s, pc_start, SVM_EXIT_TR_WRITE);
-                gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 gen_helper_ltr(cpu_env, s->tmp2_i32);
             }
@@ -7331,7 +7331,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         case 5: /* verw */
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             gen_update_cc_op(s);
             if (op == 4) {
                 gen_helper_verr(cpu_env, s->T0);
@@ -7353,10 +7353,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0,
                              cpu_env, offsetof(CPUX86State, gdt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
+            gen_op_st_v(s, MO_UW, s->T0, s->A0);
             gen_add_A0_im(s, 2);
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
@@ -7408,10 +7408,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_READ);
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.limit));
-            gen_op_st_v(s, MO_16, s->T0, s->A0);
+            gen_op_st_v(s, MO_UW, s->T0, s->A0);
             gen_add_A0_im(s, 2);
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
@@ -7558,10 +7558,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_GDTR_WRITE);
             gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
             gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, gdt.base));
@@ -7575,10 +7575,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_IDTR_WRITE);
             gen_lea_modrm(env, s, modrm);
-            gen_op_ld_v(s, MO_16, s->T1, s->A0);
+            gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
             gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
-            if (dflag == MO_16) {
+            if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
             tcg_gen_st_tl(s->T0, cpu_env, offsetof(CPUX86State, idt.base));
@@ -7590,9 +7590,9 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_ld_tl(s->T0, cpu_env, offsetof(CPUX86State, cr[0]));
             if (CODE64(s)) {
                 mod = (modrm >> 6) & 3;
-                ot = (mod != 3 ? MO_16 : s->dflag);
+                ot = (mod != 3 ? MO_UW : s->dflag);
             } else {
-                ot = MO_16;
+                ot = MO_UW;
             }
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 1);
             break;
@@ -7619,7 +7619,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 break;
             }
             gen_svm_check_intercept(s, pc_start, SVM_EXIT_WRITE_CR0);
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             gen_helper_lmsw(cpu_env, s->T0);
             gen_jmp_im(s, s->pc - s->cs_base);
             gen_eob(s);
@@ -7720,7 +7720,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             t0 = tcg_temp_local_new();
             t1 = tcg_temp_local_new();
             t2 = tcg_temp_local_new();
-            ot = MO_16;
+            ot = MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = (modrm >> 3) & 7;
             mod = (modrm >> 6) & 3;
@@ -7765,10 +7765,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             TCGv t0;
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            ot = dflag != MO_16 ? MO_32 : MO_16;
+            ot = dflag != MO_UW ? MO_32 : MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = ((modrm >> 3) & 7) | rex_r;
-            gen_ldst_modrm(env, s, modrm, MO_16, OR_TMP0, 0);
+            gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
             t0 = tcg_temp_local_new();
             gen_update_cc_op(s);
             if (b == 0x102) {
@@ -7813,7 +7813,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcl */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 gen_bndck(env, s, modrm, TCG_COND_LTU, cpu_bndl[reg]);
@@ -7821,7 +7821,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcu */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 TCGv_i64 notu = tcg_temp_new_i64();
@@ -7830,7 +7830,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 tcg_temp_free_i64(notu);
             } else if (prefixes & PREFIX_DATA) {
                 /* bndmov -- from reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
+                if (reg >= 4 || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 if (mod == 3) {
@@ -7865,7 +7865,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
+                    || s->aflag == MO_UW
                     || a.base < -1) {
                     goto illegal_op;
                 }
@@ -7903,7 +7903,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndmk */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
@@ -7931,13 +7931,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* bndcn */
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16) {
+                    || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 gen_bndck(env, s, modrm, TCG_COND_GTU, cpu_bndu[reg]);
             } else if (prefixes & PREFIX_DATA) {
                 /* bndmov -- to reg/mem */
-                if (reg >= 4 || s->aflag == MO_16) {
+                if (reg >= 4 || s->aflag == MO_UW) {
                     goto illegal_op;
                 }
                 if (mod == 3) {
@@ -7970,7 +7970,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 AddressParts a = gen_lea_modrm_0(env, s, modrm);
                 if (reg >= 4
                     || (prefixes & PREFIX_LOCK)
-                    || s->aflag == MO_16
+                    || s->aflag == MO_UW
                     || a.base < -1) {
                     goto illegal_op;
                 }
@@ -8341,7 +8341,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = ((modrm >> 3) & 7) | rex_r;

         if (s->prefix & PREFIX_DATA) {
-            ot = MO_16;
+            ot = MO_UW;
         } else {
             ot = mo_64_32(dflag);
         }
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 20a9777..525c7fe 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -21087,7 +21087,7 @@ static void gen_pool32a5_nanomips_insn(DisasContext *ctx, int opc,
             imm = sextract32(ctx->opcode, 11, 11);
             imm = (int16_t)(imm << 6) >> 6;
             if (rt != 0) {
-                tcg_gen_movi_tl(cpu_gpr[rt], dup_const(MO_16, imm));
+                tcg_gen_movi_tl(cpu_gpr[rt], dup_const(MO_UW, imm));
             }
         }
         break;
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 4130dd1..71efef4 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -406,29 +406,29 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
 GEN_VXFORM_V(vaddubm, MO_UB, tcg_gen_gvec_add, 0, 0);
 GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10cuq, PPC_NONE, PPC2_ISA300, 0x0000F800)
-GEN_VXFORM_V(vadduhm, MO_16, tcg_gen_gvec_add, 0, 1);
+GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1);
 GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2);
 GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
 GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
-GEN_VXFORM_V(vsubuhm, MO_16, tcg_gen_gvec_sub, 0, 17);
+GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17);
 GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18);
 GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
 GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
-GEN_VXFORM_V(vmaxuh, MO_16, tcg_gen_gvec_umax, 1, 1);
+GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1);
 GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2);
 GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
 GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
-GEN_VXFORM_V(vmaxsh, MO_16, tcg_gen_gvec_smax, 1, 5);
+GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5);
 GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6);
 GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
 GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
-GEN_VXFORM_V(vminuh, MO_16, tcg_gen_gvec_umin, 1, 9);
+GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9);
 GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10);
 GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
 GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
-GEN_VXFORM_V(vminsh, MO_16, tcg_gen_gvec_smin, 1, 13);
+GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13);
 GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14);
 GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
 GEN_VXFORM(vavgub, 1, 16);
@@ -531,18 +531,18 @@ GEN_VXFORM(vmulesb, 4, 12);
 GEN_VXFORM(vmulesh, 4, 13);
 GEN_VXFORM(vmulesw, 4, 14);
 GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4);
-GEN_VXFORM_V(vslh, MO_16, tcg_gen_gvec_shlv, 2, 5);
+GEN_VXFORM_V(vslh, MO_UW, tcg_gen_gvec_shlv, 2, 5);
 GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
 GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
-GEN_VXFORM_V(vsrh, MO_16, tcg_gen_gvec_shrv, 2, 9);
+GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9);
 GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10);
 GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
 GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
-GEN_VXFORM_V(vsrah, MO_16, tcg_gen_gvec_sarv, 2, 13);
+GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13);
 GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14);
 GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
 GEN_VXFORM(vsrv, 2, 28);
@@ -592,18 +592,18 @@ static void glue(gen_, NAME)(DisasContext *ctx)                         \
 GEN_VXFORM_SAT(vaddubs, MO_UB, add, usadd, 0, 8);
 GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0,       \
                     vmul10uq, PPC_NONE, PPC2_ISA300, 0x0000F800)
-GEN_VXFORM_SAT(vadduhs, MO_16, add, usadd, 0, 9);
+GEN_VXFORM_SAT(vadduhs, MO_UW, add, usadd, 0, 9);
 GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \
                 vmul10euq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10);
 GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12);
-GEN_VXFORM_SAT(vaddshs, MO_16, add, ssadd, 0, 13);
+GEN_VXFORM_SAT(vaddshs, MO_UW, add, ssadd, 0, 13);
 GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14);
 GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24);
-GEN_VXFORM_SAT(vsubuhs, MO_16, sub, ussub, 0, 25);
+GEN_VXFORM_SAT(vsubuhs, MO_UW, sub, ussub, 0, 25);
 GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
 GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
-GEN_VXFORM_SAT(vsubshs, MO_16, sub, sssub, 0, 29);
+GEN_VXFORM_SAT(vsubshs, MO_UW, sub, sssub, 0, 29);
 GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
 GEN_VXFORM(vaddcuq, 0, 5);
@@ -913,7 +913,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
     }

 GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
-GEN_VXFORM_VSPLT(vsplth, MO_16, 6, 9);
+GEN_VXFORM_VSPLT(vsplth, MO_UW, 6, 9);
 GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
 GEN_VXFORM_UIMM_SPLAT(vextractuh, 6, 9, 14);
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index bb424c8..65da6b3 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -47,7 +47,7 @@
 #define NUM_VEC_ELEMENT_BITS(es) (NUM_VEC_ELEMENT_BYTES(es) * BITS_PER_BYTE)

 #define ES_8    MO_UB
-#define ES_16   MO_16
+#define ES_16   MO_UW
 #define ES_32   MO_32
 #define ES_64   MO_64
 #define ES_128  4
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index b813054..28e1b1d 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -78,7 +78,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
     switch (es) {
     case MO_UB:
         return s390_vec_read_element8(v, enr);
-    case MO_16:
+    case MO_UW:
         return s390_vec_read_element16(v, enr);
     case MO_32:
         return s390_vec_read_element32(v, enr);
@@ -124,7 +124,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
     case MO_UB:
         s390_vec_write_element8(v, enr, data);
         break;
-    case MO_16:
+    case MO_UW:
         s390_vec_write_element16(v, enr, data);
         break;
     case MO_32:
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index e4e0845..3d90c4b 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -430,20 +430,20 @@ typedef enum {
     /* Load/store register.  Described here as 3.3.12, but the helper
        that emits them can transform to 3.3.10 or 3.3.13.  */
     I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
-    I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_16 << 30,
+    I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_UW << 30,
     I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_32 << 30,
     I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,

     I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
-    I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_16 << 30,
+    I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_UW << 30,
     I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_32 << 30,
     I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,

     I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
-    I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_16 << 30,
+    I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UW << 30,

     I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30,
-    I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_16 << 30,
+    I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UW << 30,
     I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30,

     I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_32 << 30,
@@ -870,7 +870,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,

     /*
      * Test all bytes 0x00 or 0xff second.  This can match cases that
-     * might otherwise take 2 or 3 insns for MO_16 or MO_32 below.
+     * might otherwise take 2 or 3 insns for MO_UW or MO_32 below.
      */
     for (i = imm8 = 0; i < 8; i++) {
         uint8_t byte = v64 >> (i * 8);
@@ -889,7 +889,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
      * cannot find an expansion there's no point checking a larger
      * width because we already know by replication it cannot match.
      */
-    if (v64 == dup_const(MO_16, v64)) {
+    if (v64 == dup_const(MO_UW, v64)) {
         uint16_t v16 = v64;

         if (is_shimm16(v16, &cmode, &imm8)) {
@@ -1733,7 +1733,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
         if (bswap) {
             tcg_out_ldst_r(s, I3312_LDRH, data_r, addr_r, otype, off_r);
             tcg_out_rev16(s, data_r, data_r);
-            tcg_out_sxt(s, ext, MO_16, data_r, data_r);
+            tcg_out_sxt(s, ext, MO_UW, data_r, data_r);
         } else {
             tcg_out_ldst_r(s, (ext ? I3312_LDRSHX : I3312_LDRSHW),
                            data_r, addr_r, otype, off_r);
@@ -1775,7 +1775,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
     case MO_UB:
         tcg_out_ldst_r(s, I3312_STRB, data_r, addr_r, otype, off_r);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap && data_r != TCG_REG_XZR) {
             tcg_out_rev16(s, TCG_REG_TMP, data_r);
             data_r = TCG_REG_TMP;
@@ -2190,7 +2190,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext16s_i64:
     case INDEX_op_ext16s_i32:
-        tcg_out_sxt(s, ext, MO_16, a0, a1);
+        tcg_out_sxt(s, ext, MO_UW, a0, a1);
         break;
     case INDEX_op_ext_i32_i64:
     case INDEX_op_ext32s_i64:
@@ -2202,7 +2202,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext16u_i64:
     case INDEX_op_ext16u_i32:
-        tcg_out_uxt(s, MO_16, a0, a1);
+        tcg_out_uxt(s, MO_UW, a0, a1);
         break;
     case INDEX_op_extu_i32_i64:
     case INDEX_op_ext32u_i64:
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 542ffa8..0bd400e 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1432,7 +1432,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     case MO_UB:
         argreg = tcg_out_arg_reg8(s, argreg, datalo);
         break;
-    case MO_16:
+    case MO_UW:
         argreg = tcg_out_arg_reg16(s, argreg, datalo);
         break;
     case MO_32:
@@ -1624,7 +1624,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     case MO_UB:
         tcg_out_st8_r(s, cond, datalo, addrlo, addend);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_bswap16st(s, cond, TCG_REG_R0, datalo);
             tcg_out_st16_r(s, cond, TCG_REG_R0, addrlo, addend);
@@ -1669,7 +1669,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
     case MO_UB:
         tcg_out_st8_12(s, COND_AL, datalo, addrlo, 0);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_bswap16st(s, COND_AL, TCG_REG_R0, datalo);
             tcg_out_st16_8(s, COND_AL, TCG_REG_R0, addrlo, 0);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 0d68ba4..31c3664 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -893,7 +893,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
             tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a);
             a = r;
             /* FALLTHRU */
-        case MO_16:
+        case MO_UW:
             tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a);
             a = r;
             /* FALLTHRU */
@@ -927,7 +927,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
         case MO_32:
             tcg_out_vex_modrm_offset(s, OPC_VBROADCASTSS, r, 0, base, offset);
             break;
-        case MO_16:
+        case MO_UW:
             tcg_out_vex_modrm_offset(s, OPC_VPINSRW, r, r, base, offset);
             tcg_out8(s, 0); /* imm8 */
             tcg_out_dup_vec(s, type, vece, r, r);
@@ -2164,7 +2164,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         tcg_out_modrm_sib_offset(s, OPC_MOVB_EvGv + P_REXB_R + seg,
                                  datalo, base, index, 0, ofs);
         break;
-    case MO_16:
+    case MO_UW:
         if (bswap) {
             tcg_out_mov(s, TCG_TYPE_I32, scratch, datalo);
             tcg_out_rolw_8(s, scratch);
@@ -2747,15 +2747,15 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         OPC_PMAXUB, OPC_PMAXUW, OPC_PMAXUD, OPC_UD2
     };
     static int const shlv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16.  */
+        /* TODO: AVX512 adds support for MO_UW.  */
         OPC_UD2, OPC_UD2, OPC_VPSLLVD, OPC_VPSLLVQ
     };
     static int const shrv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16.  */
+        /* TODO: AVX512 adds support for MO_UW.  */
         OPC_UD2, OPC_UD2, OPC_VPSRLVD, OPC_VPSRLVQ
     };
     static int const sarv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_16, MO_64.  */
+        /* TODO: AVX512 adds support for MO_UW, MO_64.  */
         OPC_UD2, OPC_UD2, OPC_VPSRAVD, OPC_UD2
     };
     static int const shls_insn[4] = {
@@ -2925,7 +2925,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         sub = args[3];
         goto gen_simd_imm8;
     case INDEX_op_x86_blend_vec:
-        if (vece == MO_16) {
+        if (vece == MO_UW) {
             insn = OPC_PBLENDW;
         } else if (vece == MO_32) {
             insn = (have_avx2 ? OPC_VPBLENDD : OPC_BLENDPS);
@@ -3290,9 +3290,9 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)

     case INDEX_op_shls_vec:
     case INDEX_op_shrs_vec:
-        return vece >= MO_16;
+        return vece >= MO_UW;
     case INDEX_op_sars_vec:
-        return vece >= MO_16 && vece <= MO_32;
+        return vece >= MO_UW && vece <= MO_32;

     case INDEX_op_shlv_vec:
     case INDEX_op_shrv_vec:
@@ -3314,7 +3314,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_usadd_vec:
     case INDEX_op_sssub_vec:
     case INDEX_op_ussub_vec:
-        return vece <= MO_16;
+        return vece <= MO_UW;
     case INDEX_op_smin_vec:
     case INDEX_op_smax_vec:
     case INDEX_op_umin_vec:
@@ -3352,13 +3352,13 @@ static void expand_vec_shi(TCGType type, unsigned vece, bool shr,
               tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));

     if (shr) {
-        tcg_gen_shri_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_shri_vec(MO_16, t2, t2, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t2, t2, imm + 8);
     } else {
-        tcg_gen_shli_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_shli_vec(MO_16, t2, t2, imm + 8);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        tcg_gen_shri_vec(MO_16, t2, t2, 8);
+        tcg_gen_shli_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_shli_vec(MO_UW, t2, t2, imm + 8);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
+        tcg_gen_shri_vec(MO_UW, t2, t2, 8);
     }

     vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
@@ -3381,8 +3381,8 @@ static void expand_vec_sari(TCGType type, unsigned vece,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
         vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(v1), tcgv_vec_arg(v1));
-        tcg_gen_sari_vec(MO_16, t1, t1, imm + 8);
-        tcg_gen_sari_vec(MO_16, t2, t2, imm + 8);
+        tcg_gen_sari_vec(MO_UW, t1, t1, imm + 8);
+        tcg_gen_sari_vec(MO_UW, t2, t2, imm + 8);
         vec_gen_3(INDEX_op_x86_packss_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t2));
         tcg_temp_free_vec(t1);
@@ -3446,8 +3446,8 @@ static void expand_vec_mul(TCGType type, unsigned vece,
                   tcgv_vec_arg(t1), tcgv_vec_arg(v1), tcgv_vec_arg(t2));
         vec_gen_3(INDEX_op_x86_punpckl_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(t2), tcgv_vec_arg(t2), tcgv_vec_arg(v2));
-        tcg_gen_mul_vec(MO_16, t1, t1, t2);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
+        tcg_gen_mul_vec(MO_UW, t1, t1, t2);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
         vec_gen_3(INDEX_op_x86_packus_vec, TCG_TYPE_V128, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t1));
         tcg_temp_free_vec(t1);
@@ -3469,10 +3469,10 @@ static void expand_vec_mul(TCGType type, unsigned vece,
                   tcgv_vec_arg(t3), tcgv_vec_arg(v1), tcgv_vec_arg(t4));
         vec_gen_3(INDEX_op_x86_punpckh_vec, type, MO_UB,
                   tcgv_vec_arg(t4), tcgv_vec_arg(t4), tcgv_vec_arg(v2));
-        tcg_gen_mul_vec(MO_16, t1, t1, t2);
-        tcg_gen_mul_vec(MO_16, t3, t3, t4);
-        tcg_gen_shri_vec(MO_16, t1, t1, 8);
-        tcg_gen_shri_vec(MO_16, t3, t3, 8);
+        tcg_gen_mul_vec(MO_UW, t1, t1, t2);
+        tcg_gen_mul_vec(MO_UW, t3, t3, t4);
+        tcg_gen_shri_vec(MO_UW, t1, t1, 8);
+        tcg_gen_shri_vec(MO_UW, t3, t3, 8);
         vec_gen_3(INDEX_op_x86_packus_vec, type, MO_UB,
                   tcgv_vec_arg(v0), tcgv_vec_arg(t1), tcgv_vec_arg(t3));
         tcg_temp_free_vec(t1);
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index c6d13ea..1780cb1 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1383,7 +1383,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UB:
         i = tcg_out_call_iarg_reg8(s, i, l->datalo_reg);
         break;
-    case MO_16:
+    case MO_UW:
         i = tcg_out_call_iarg_reg16(s, i, l->datalo_reg);
         break;
     case MO_32:
@@ -1570,12 +1570,12 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
         tcg_out_opc_imm(s, OPC_SB, lo, base, 0);
         break;

-    case MO_16 | MO_BSWAP:
+    case MO_UW | MO_BSWAP:
         tcg_out_opc_imm(s, OPC_ANDI, TCG_TMP1, lo, 0xffff);
         tcg_out_bswap16(s, TCG_TMP1, TCG_TMP1);
         lo = TCG_TMP1;
         /* FALLTHRU */
-    case MO_16:
+    case MO_UW:
         tcg_out_opc_imm(s, OPC_SH, lo, base, 0);
         break;

diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 9c60c0f..20bc19d 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1104,7 +1104,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UB:
         tcg_out_ext8u(s, a2, a2);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_ext16u(s, a2, a2);
         break;
     default:
@@ -1219,7 +1219,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_UB:
         tcg_out_opc_store(s, OPC_SB, base, lo, 0);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_opc_store(s, OPC_SH, base, lo, 0);
         break;
     case MO_32:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 479ee2e..85550b5 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -885,7 +885,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
     case MO_UB:
         tcg_out_arithi(s, r, r, 0xff, ARITH_AND);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_out_arithi(s, r, r, 16, SHIFT_SLL);
         tcg_out_arithi(s, r, r, 16, SHIFT_SRL);
         break;
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index 9658c36..da409f5 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -308,7 +308,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
     switch (vece) {
     case MO_UB:
         return 0x0101010101010101ull * (uint8_t)c;
-    case MO_16:
+    case MO_UW:
         return 0x0001000100010001ull * (uint16_t)c;
     case MO_32:
         return 0x0000000100000001ull * (uint32_t)c;
@@ -327,7 +327,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
         tcg_gen_ext8u_i32(out, in);
         tcg_gen_muli_i32(out, out, 0x01010101);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_deposit_i32(out, in, in, 16, 16);
         break;
     case MO_32:
@@ -345,7 +345,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
         tcg_gen_ext8u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0101010101010101ull);
         break;
-    case MO_16:
+    case MO_UW:
         tcg_gen_ext16u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0001000100010001ull);
         break;
@@ -558,7 +558,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
                 tcg_gen_extrl_i64_i32(t_32, in_64);
             } else if (vece == MO_UB) {
                 tcg_gen_movi_i32(t_32, in_c & 0xff);
-            } else if (vece == MO_16) {
+            } else if (vece == MO_UW) {
                 tcg_gen_movi_i32(t_32, in_c & 0xffff);
             } else {
                 tcg_gen_movi_i32(t_32, in_c);
@@ -1459,7 +1459,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             case MO_UB:
                 tcg_gen_ld8u_i32(in, cpu_env, aofs);
                 break;
-            case MO_16:
+            case MO_UW:
                 tcg_gen_ld16u_i32(in, cpu_env, aofs);
                 break;
             default:
@@ -1526,7 +1526,7 @@ void tcg_gen_gvec_dup16i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint16_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_16, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UW, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_dup8i(uint32_t dofs, uint32_t oprsz,
@@ -1579,7 +1579,7 @@ void tcg_gen_vec_add8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_add16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_addv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1613,7 +1613,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add16,
           .opt_opc = vecop_list_add,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_add_i32,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add32,
@@ -1644,7 +1644,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds16,
           .opt_opc = vecop_list_add,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_add_i32,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds32,
@@ -1685,7 +1685,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs16,
           .opt_opc = vecop_list_sub,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs32,
@@ -1732,7 +1732,7 @@ void tcg_gen_vec_sub8_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)

 void tcg_gen_vec_sub16_i64(TCGv_i64 d, TCGv_i64 a, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_subv_mask(d, a, b, m);
     tcg_temp_free_i64(m);
 }
@@ -1764,7 +1764,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub16,
           .opt_opc = vecop_list_sub,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sub_i32,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub32,
@@ -1795,7 +1795,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul16,
           .opt_opc = vecop_list_mul,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_mul_i32,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul32,
@@ -1824,7 +1824,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls16,
           .opt_opc = vecop_list_mul,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_mul_i32,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls32,
@@ -1862,7 +1862,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd32,
           .opt_opc = vecop_list,
@@ -1888,7 +1888,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub32,
           .opt_opc = vecop_list,
@@ -1930,7 +1930,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_usadd_i32,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd32,
@@ -1974,7 +1974,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_ussub_i32,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub32,
@@ -2002,7 +2002,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_smin_i32,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin32,
@@ -2030,7 +2030,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_umin_i32,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin32,
@@ -2058,7 +2058,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_smax_i32,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax32,
@@ -2086,7 +2086,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_umax_i32,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax32,
@@ -2127,7 +2127,7 @@ void tcg_gen_vec_neg8_i64(TCGv_i64 d, TCGv_i64 b)

 void tcg_gen_vec_neg16_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    TCGv_i64 m = tcg_const_i64(dup_const(MO_16, 0x8000));
+    TCGv_i64 m = tcg_const_i64(dup_const(MO_UW, 0x8000));
     gen_negv_mask(d, b, m);
     tcg_temp_free_i64(m);
 }
@@ -2160,7 +2160,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_neg_i32,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg32,
@@ -2206,7 +2206,7 @@ static void tcg_gen_vec_abs8_i64(TCGv_i64 d, TCGv_i64 b)

 static void tcg_gen_vec_abs16_i64(TCGv_i64 d, TCGv_i64 b)
 {
-    gen_absv_mask(d, b, MO_16);
+    gen_absv_mask(d, b, MO_UW);
 }

 void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2223,7 +2223,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs16,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_abs_i32,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs32,
@@ -2461,7 +2461,7 @@ void tcg_gen_vec_shl8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_shl16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff << c);
+    uint64_t mask = dup_const(MO_UW, 0xffff << c);
     tcg_gen_shli_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2480,7 +2480,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shli_i32,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl32i,
@@ -2512,7 +2512,7 @@ void tcg_gen_vec_shr8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_shr16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t mask = dup_const(MO_16, 0xffff >> c);
+    uint64_t mask = dup_const(MO_UW, 0xffff >> c);
     tcg_gen_shri_i64(d, a, c);
     tcg_gen_andi_i64(d, d, mask);
 }
@@ -2531,7 +2531,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shri_i32,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr32i,
@@ -2570,8 +2570,8 @@ void tcg_gen_vec_sar8i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)

 void tcg_gen_vec_sar16i_i64(TCGv_i64 d, TCGv_i64 a, int64_t c)
 {
-    uint64_t s_mask = dup_const(MO_16, 0x8000 >> c);
-    uint64_t c_mask = dup_const(MO_16, 0xffff >> c);
+    uint64_t s_mask = dup_const(MO_UW, 0x8000 >> c);
+    uint64_t c_mask = dup_const(MO_UW, 0xffff >> c);
     TCGv_i64 s = tcg_temp_new_i64();

     tcg_gen_shri_i64(d, a, c);
@@ -2596,7 +2596,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar16i,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sari_i32,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar32i,
@@ -2884,7 +2884,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shl_mod_i32,
           .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl32v,
@@ -2947,7 +2947,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_shr_mod_i32,
           .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr32v,
@@ -3010,7 +3010,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar16v,
           .opt_opc = vecop_list,
-          .vece = MO_16 },
+          .vece = MO_UW },
         { .fni4 = tcg_gen_sar_mod_i32,
           .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar32v,
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index d7ffc9e..b0a4d98 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -270,7 +270,7 @@ void tcg_gen_dup32i_vec(TCGv_vec r, uint32_t a)

 void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a)
 {
-    do_dupi_vec(r, MO_REG, dup_const(MO_16, a));
+    do_dupi_vec(r, MO_REG, dup_const(MO_UW, a));
 }

 void tcg_gen_dup8i_vec(TCGv_vec r, uint32_t a)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 61eda33..21d448c 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2723,7 +2723,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
     case MO_UB:
         op &= ~MO_BSWAP;
         break;
-    case MO_16:
+    case MO_UW:
         break;
     case MO_32:
         if (!is64) {
@@ -2810,7 +2810,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)

     if ((orig_memop ^ memop) & MO_BSWAP) {
         switch (orig_memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_bswap16_i32(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext16s_i32(val, val);
@@ -2837,7 +2837,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         swap = tcg_temp_new_i32();
         switch (memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_ext16u_i32(swap, val);
             tcg_gen_bswap16_i32(swap, swap);
             break;
@@ -2890,7 +2890,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)

     if ((orig_memop ^ memop) & MO_BSWAP) {
         switch (orig_memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_bswap16_i64(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext16s_i64(val, val);
@@ -2928,7 +2928,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         swap = tcg_temp_new_i64();
         switch (memop & MO_SIZE) {
-        case MO_16:
+        case MO_UW:
             tcg_gen_ext16u_i64(swap, val);
             tcg_gen_bswap16_i64(swap, swap);
             break;
@@ -3025,8 +3025,8 @@ typedef void (*gen_atomic_op_i64)(TCGv_i64, TCGv_env, TCGv, TCGv_i64);

 static void * const table_cmpxchg[16] = {
     [MO_UB] = gen_helper_atomic_cmpxchgb,
-    [MO_16 | MO_LE] = gen_helper_atomic_cmpxchgw_le,
-    [MO_16 | MO_BE] = gen_helper_atomic_cmpxchgw_be,
+    [MO_UW | MO_LE] = gen_helper_atomic_cmpxchgw_le,
+    [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be,
     [MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le,
     [MO_32 | MO_BE] = gen_helper_atomic_cmpxchgl_be,
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
@@ -3249,8 +3249,8 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 #define GEN_ATOMIC_HELPER(NAME, OP, NEW)                                \
 static void * const table_##NAME[16] = {                                \
     [MO_UB] = gen_helper_atomic_##NAME##b,                               \
-    [MO_16 | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
-    [MO_16 | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
+    [MO_UW | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
+    [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
     [MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
     [MO_32 | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 5636d6b..a378887 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -1303,7 +1303,7 @@ uint64_t dup_const(unsigned vece, uint64_t c);
 #define dup_const(VECE, C)                                         \
     (__builtin_constant_p(VECE)                                    \
      ?   ((VECE) == MO_UB ? 0x0101010101010101ull * (uint8_t)(C)   \
-        : (VECE) == MO_16 ? 0x0001000100010001ull * (uint16_t)(C)  \
+        : (VECE) == MO_UW ? 0x0001000100010001ull * (uint16_t)(C)  \
         : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C)  \
         : dup_const(VECE, C))                                      \
      : dup_const(VECE, C))
--
1.8.3.1
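
A note on the dup_const() hunk in tcg.h at the end of this patch: the
macro replicates a scalar constant across every element of a 64-bit
value. Below is a minimal standalone sketch of that behaviour, assuming
stand-in values for the MO_* constants rather than the real QEMU enum:

    /* Sketch only: MO_UB/MO_UW/MO_UL are illustrative stand-ins. */
    #include <stdint.h>
    #include <stdio.h>

    enum { MO_UB = 0, MO_UW = 1, MO_UL = 2 };  /* log2 of element size */

    static uint64_t dup_const_sketch(unsigned vece, uint64_t c)
    {
        switch (vece) {
        case MO_UB:
            return 0x0101010101010101ull * (uint8_t)c;  /* 8 x byte */
        case MO_UW:
            return 0x0001000100010001ull * (uint16_t)c; /* 4 x halfword */
        case MO_UL:
            return 0x0000000100000001ull * (uint32_t)c; /* 2 x word */
        default:
            return c;
        }
    }

    int main(void)
    {
        /* The MO_UW masks above, e.g. dup_const(MO_UW, 0x8000). */
        printf("%016llx\n",
               (unsigned long long)dup_const_sketch(MO_UW, 0x8000));
        return 0;  /* prints 8000800080008000 */
    }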





* [Qemu-devel] [PATCH v2 03/20] tcg: Replace MO_32 with MO_UL alias
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:41   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:41 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Preparation for splitting MO_32 out of TCGMemOp into the new
accelerator-independent MemOp.

As MO_32 will be a value of MemOp, existing TCGMemOp comparisons and
coercions will trigger -Wenum-compare and -Wenum-conversion.
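
As a standalone illustration of those warnings (the enum names below are
stand-ins, not the actual TCGMemOp/MemOp definitions), here is a sketch
of the pattern that starts to trip once the two types coexist:

    /* Two distinct enum types sharing values, as TCGMemOp and MemOp
     * will during the split; names here are illustrative only. */
    typedef enum { OLD_8, OLD_16, OLD_32 } OldOp;  /* cf. TCGMemOp */
    typedef enum { NEW_8, NEW_16, NEW_32 } NewOp;  /* cf. MemOp */

    int same_size(OldOp a, NewOp b)
    {
        return a == b;      /* -Wenum-compare (gcc/clang) */
    }

    OldOp coerce(NewOp b)
    {
        return b;           /* -Wenum-conversion (clang; newer gcc) */
    }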

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |   6 +-
 target/arm/translate-a64.c          | 148 +++++++++++++++++------------------
 target/arm/translate-sve.c          |  12 +--
 target/arm/translate-vfp.inc.c      |   4 +-
 target/arm/translate.c              |  34 ++++----
 target/i386/translate.c             | 150 ++++++++++++++++++------------------
 target/ppc/translate/vmx-impl.inc.c |  28 +++----
 target/ppc/translate/vsx-impl.inc.c |   4 +-
 target/s390x/translate.c            |   4 +-
 target/s390x/translate_vx.inc.c     |   2 +-
 target/s390x/vec.h                  |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  20 ++---
 tcg/arm/tcg-target.inc.c            |   6 +-
 tcg/i386/tcg-target.inc.c           |  28 +++----
 tcg/mips/tcg-target.inc.c           |   6 +-
 tcg/ppc/tcg-target.inc.c            |   2 +-
 tcg/riscv/tcg-target.inc.c          |   2 +-
 tcg/sparc/tcg-target.inc.c          |   2 +-
 tcg/tcg-op-gvec.c                   |  64 +++++++--------
 tcg/tcg-op-vec.c                    |   6 +-
 tcg/tcg-op.c                        |  18 ++---
 tcg/tcg.h                           |   2 +-
 22 files changed, 276 insertions(+), 276 deletions(-)
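
The MO_* aliases used throughout these hunks follow the usual TCG
encoding: the low bits hold log2 of the access size, and MO_SIGN is
OR'd in for the sign-extending variants -- hence MO_SL standing in for
MO_32 | MO_SIGN in the translate-a64.c hunk below. A sketch of that
layout, with values reproduced from memory and so to be treated as
illustrative rather than authoritative:

    enum {
        MO_8    = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3, /* log2(bytes) */
        MO_SIGN = 4,                                  /* sign-extend */

        MO_UB = MO_8,          MO_SB = MO_SIGN | MO_8,
        MO_UW = MO_16,         MO_SW = MO_SIGN | MO_16,
        MO_UL = MO_32,         MO_SL = MO_SIGN | MO_32,
    };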

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index f6bef3d..fa705c4 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1561,7 +1561,7 @@ void HELPER(sve_cpy_m_s)(void *vd, void *vn, void *vg,
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;

-    mm = dup_const(MO_32, mm);
+    mm = dup_const(MO_UL, mm);
     for (i = 0; i < opr_sz; i += 1) {
         uint64_t nn = n[i];
         uint64_t pp = expand_pred_s(pg[H1(i)]);
@@ -1612,7 +1612,7 @@ void HELPER(sve_cpy_z_s)(void *vd, void *vg, uint64_t val, uint32_t desc)
     uint64_t *d = vd;
     uint8_t *pg = vg;

-    val = dup_const(MO_32, val);
+    val = dup_const(MO_UL, val);
     for (i = 0; i < opr_sz; i += 1) {
         d[i] = val & expand_pred_s(pg[H1(i)]);
     }
@@ -5123,7 +5123,7 @@ static inline void sve_ldff1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
     target_ulong addr;

     /* Skip to the first true predicate.  */
-    reg_off = find_next_active(vg, 0, reg_max, MO_32);
+    reg_off = find_next_active(vg, 0, reg_max, MO_UL);
     if (likely(reg_off < reg_max)) {
         /* Perform one normal read, which will fault or not.  */
         set_helper_retaddr(ra);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 3acfccb..0b92e6d 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -484,7 +484,7 @@ static TCGv_i32 read_fp_sreg(DisasContext *s, int reg)
 {
     TCGv_i32 v = tcg_temp_new_i32();

-    tcg_gen_ld_i32(v, cpu_env, fp_reg_offset(s, reg, MO_32));
+    tcg_gen_ld_i32(v, cpu_env, fp_reg_offset(s, reg, MO_UL));
     return v;
 }

@@ -999,7 +999,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_UW:
         tcg_gen_ld16u_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_ld32u_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_SB:
@@ -1008,7 +1008,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_SW:
         tcg_gen_ld16s_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_32|MO_SIGN:
+    case MO_SL:
         tcg_gen_ld32s_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_64:
@@ -1037,8 +1037,8 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
     case MO_SW:
         tcg_gen_ld16s_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_32:
-    case MO_32|MO_SIGN:
+    case MO_UL:
+    case MO_SL:
         tcg_gen_ld_i32(tcg_dest, cpu_env, vect_off);
         break;
     default:
@@ -1058,7 +1058,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
     case MO_UW:
         tcg_gen_st16_i64(tcg_src, cpu_env, vect_off);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st32_i64(tcg_src, cpu_env, vect_off);
         break;
     case MO_64:
@@ -1080,7 +1080,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
     case MO_UW:
         tcg_gen_st16_i32(tcg_src, cpu_env, vect_off);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st_i32(tcg_src, cpu_env, vect_off);
         break;
     default:
@@ -5299,7 +5299,7 @@ static void handle_fp_compare(DisasContext *s, int size,
         }

         switch (size) {
-        case MO_32:
+        case MO_UL:
             if (signal_all_nans) {
                 gen_helper_vfp_cmpes_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             } else {
@@ -5354,7 +5354,7 @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)

     switch (type) {
     case 0:
-        size = MO_32;
+        size = MO_UL;
         break;
     case 1:
         size = MO_64;
@@ -5405,7 +5405,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)

     switch (type) {
     case 0:
-        size = MO_32;
+        size = MO_UL;
         break;
     case 1:
         size = MO_64;
@@ -5471,7 +5471,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)

     switch (type) {
     case 0:
-        sz = MO_32;
+        sz = MO_UL;
         break;
     case 1:
         sz = MO_64;
@@ -6276,7 +6276,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)

     switch (type) {
     case 0:
-        sz = MO_32;
+        sz = MO_UL;
         break;
     case 1:
         sz = MO_64;
@@ -6581,7 +6581,7 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
         switch (type) {
         case 0:
             /* 32 bit */
-            tcg_gen_ld32u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_32));
+            tcg_gen_ld32u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UL));
             break;
         case 1:
             /* 64 bit */
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_UW : MO_32;
+        TCGMemOp msize = esize == 16 ? MO_UW : MO_UL;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -7702,7 +7702,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
                 size = MO_UW;
             }
         } else {
-            size = extract32(size, 0, 1) ? MO_64 : MO_32;
+            size = extract32(size, 0, 1) ? MO_64 : MO_UL;
         }

         if (!fp_access_check(s)) {
@@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
             }
         };
         NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
-        TCGMemOp memop = scalar ? size : MO_32;
+        TCGMemOp memop = scalar ? size : MO_UL;
         int maxpass = scalar ? 1 : is_q ? 4 : 2;

         for (pass = 0; pass < maxpass; pass++) {
@@ -8204,7 +8204,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
                 }
                 write_fp_sreg(s, rd, tcg_op);
             } else {
-                write_vec_element_i32(s, tcg_op, rd, pass, MO_32);
+                write_vec_element_i32(s, tcg_op, rd, pass, MO_UL);
             }

             tcg_temp_free_i32(tcg_op);
@@ -8264,7 +8264,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
             read_vec_element_i32(s, tcg_int32, rn, pass, mop);

             switch (size) {
-            case MO_32:
+            case MO_UL:
                 if (fracbits) {
                     if (is_signed) {
                         gen_helper_vfp_sltos(tcg_float, tcg_int32,
@@ -8337,7 +8337,7 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
             return;
         }
     } else if (immh & 4) {
-        size = MO_32;
+        size = MO_UL;
     } else if (immh & 2) {
         size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
@@ -8382,7 +8382,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
             return;
         }
     } else if (immh & 0x4) {
-        size = MO_32;
+        size = MO_UL;
     } else if (immh & 0x2) {
         size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
@@ -8436,7 +8436,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
                 fn = gen_helper_vfp_toshh;
             }
             break;
-        case MO_32:
+        case MO_UL:
             if (is_u) {
                 fn = gen_helper_vfp_touls;
             } else {
@@ -8588,8 +8588,8 @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_op2 = tcg_temp_new_i64();
         TCGv_i64 tcg_res = tcg_temp_new_i64();

-        read_vec_element(s, tcg_op1, rn, 0, MO_32 | MO_SIGN);
-        read_vec_element(s, tcg_op2, rm, 0, MO_32 | MO_SIGN);
+        read_vec_element(s, tcg_op1, rn, 0, MO_SL);
+        read_vec_element(s, tcg_op2, rm, 0, MO_SL);

         tcg_gen_mul_i64(tcg_res, tcg_op1, tcg_op2);
         gen_helper_neon_addl_saturate_s64(tcg_res, cpu_env, tcg_res, tcg_res);
@@ -8631,7 +8631,7 @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
         case 0x9: /* SQDMLAL, SQDMLAL2 */
         {
             TCGv_i64 tcg_op3 = tcg_temp_new_i64();
-            read_vec_element(s, tcg_op3, rd, 0, MO_32);
+            read_vec_element(s, tcg_op3, rd, 0, MO_UL);
             gen_helper_neon_addl_saturate_s32(tcg_res, cpu_env,
                                               tcg_res, tcg_op3);
             tcg_temp_free_i64(tcg_op3);
@@ -8831,8 +8831,8 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             TCGv_i32 tcg_op2 = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
+            read_vec_element_i32(s, tcg_op1, rn, pass, MO_UL);
+            read_vec_element_i32(s, tcg_op2, rm, pass, MO_UL);

             switch (fpopcode) {
             case 0x39: /* FMLS */
@@ -8840,7 +8840,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 gen_helper_vfp_negs(tcg_op1, tcg_op1);
                 /* fall through */
             case 0x19: /* FMLA */
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
                 gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2,
                                        tcg_res, fpst);
                 break;
@@ -8908,7 +8908,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 write_vec_element(s, tcg_tmp, rd, pass, MO_64);
                 tcg_temp_free_i64(tcg_tmp);
             } else {
-                write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+                write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
             }

             tcg_temp_free_i32(tcg_res);
@@ -9557,7 +9557,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
         }

         for (pass = 0; pass < maxpasses; pass++) {
-            read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
+            read_vec_element_i32(s, tcg_op, rn, pass, MO_UL);

             switch (opcode) {
             case 0x3c: /* URECPE */
@@ -9579,7 +9579,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
             if (is_scalar) {
                 write_fp_sreg(s, rd, tcg_res);
             } else {
-                write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+                write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
             }
         }
         tcg_temp_free_i32(tcg_res);
@@ -9693,7 +9693,7 @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
     }

     for (pass = 0; pass < 2; pass++) {
-        write_vec_element_i32(s, tcg_res[pass], rd, destelt + pass, MO_32);
+        write_vec_element_i32(s, tcg_res[pass], rd, destelt + pass, MO_UL);
         tcg_temp_free_i32(tcg_res[pass]);
     }
     clear_vec_high(s, is_q, rd);
@@ -9740,8 +9740,8 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
                 read_vec_element_i32(s, tcg_rn, rn, pass, size);
                 read_vec_element_i32(s, tcg_rd, rd, pass, size);
             } else {
-                read_vec_element_i32(s, tcg_rn, rn, pass, MO_32);
-                read_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
+                read_vec_element_i32(s, tcg_rn, rn, pass, MO_UL);
+                read_vec_element_i32(s, tcg_rd, rd, pass, MO_UL);
             }

             if (is_u) { /* USQADD */
@@ -9779,7 +9779,7 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
                 write_vec_element(s, tcg_zero, rd, 0, MO_64);
                 tcg_temp_free_i64(tcg_zero);
             }
-            write_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_rd, rd, pass, MO_UL);
         }
         tcg_temp_free_i32(tcg_rd);
         tcg_temp_free_i32(tcg_rn);
@@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_passres;
-            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
+            TCGMemOp memop = is_u ? MO_UL : MO_SL;

             int elt = pass + is_q * 2;

@@ -10426,8 +10426,8 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_passres;
             int elt = pass + is_q * 2;

-            read_vec_element_i32(s, tcg_op1, rn, elt, MO_32);
-            read_vec_element_i32(s, tcg_op2, rm, elt, MO_32);
+            read_vec_element_i32(s, tcg_op1, rn, elt, MO_UL);
+            read_vec_element_i32(s, tcg_op2, rm, elt, MO_UL);

             if (accop == 0) {
                 tcg_passres = tcg_res[pass];
@@ -10547,7 +10547,7 @@ static void handle_3rd_wide(DisasContext *s, int is_q, int is_u, int size,
         NeonGenWidenFn *widenfn = widenfns[size][is_u];

         read_vec_element(s, tcg_op1, rn, pass, MO_64);
-        read_vec_element_i32(s, tcg_op2, rm, part + pass, MO_32);
+        read_vec_element_i32(s, tcg_op2, rm, part + pass, MO_UL);
         widenfn(tcg_op2_wide, tcg_op2);
         tcg_temp_free_i32(tcg_op2);
         tcg_res[pass] = tcg_temp_new_i64();
@@ -10603,7 +10603,7 @@ static void handle_3rd_narrowing(DisasContext *s, int is_q, int is_u, int size,
     }

     for (pass = 0; pass < 2; pass++) {
-        write_vec_element_i32(s, tcg_res[pass], rd, pass + part, MO_32);
+        write_vec_element_i32(s, tcg_res[pass], rd, pass + part, MO_UL);
         tcg_temp_free_i32(tcg_res[pass]);
     }
     clear_vec_high(s, is_q, rd);
@@ -10860,8 +10860,8 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
             int passreg = pass < (maxpass / 2) ? rn : rm;
             int passelt = (is_q && (pass & 1)) ? 2 : 0;

-            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_32);
-            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_32);
+            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_UL);
+            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_UL);
             tcg_res[pass] = tcg_temp_new_i32();

             switch (opcode) {
@@ -10925,7 +10925,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
         }

         for (pass = 0; pass < maxpass; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UL);
             tcg_temp_free_i32(tcg_res[pass]);
         }
         clear_vec_high(s, is_q, rd);
@@ -10971,7 +10971,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
+        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_UL,
                                rn, rm, rd);
         return;
     case 0x1b: /* FMULX */
@@ -11174,8 +11174,8 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
             NeonGenTwoOpFn *genfn = NULL;
             NeonGenTwoOpEnvFn *genenvfn = NULL;

-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
+            read_vec_element_i32(s, tcg_op1, rn, pass, MO_UL);
+            read_vec_element_i32(s, tcg_op2, rm, pass, MO_UL);

             switch (opcode) {
             case 0x0: /* SHADD, UHADD */
@@ -11292,11 +11292,11 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
                     tcg_gen_add_i32,
                 };

-                read_vec_element_i32(s, tcg_op1, rd, pass, MO_32);
+                read_vec_element_i32(s, tcg_op1, rd, pass, MO_UL);
                 fns[size](tcg_res, tcg_op1, tcg_res);
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);

             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op1);
@@ -11578,7 +11578,7 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
         break;
     case 0x02: /* SDOT (vector) */
     case 0x12: /* UDOT (vector) */
-        if (size != MO_32) {
+        if (size != MO_UL) {
             unallocated_encoding(s);
             return;
         }
@@ -11709,7 +11709,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             tcg_res[pass] = tcg_temp_new_i64();

-            read_vec_element_i32(s, tcg_op, rn, srcelt + pass, MO_32);
+            read_vec_element_i32(s, tcg_op, rn, srcelt + pass, MO_UL);
             gen_helper_vfp_fcvtds(tcg_res[pass], tcg_op, cpu_env);
             tcg_temp_free_i32(tcg_op);
         }
@@ -11732,7 +11732,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
                                            fpst, ahp);
         }
         for (pass = 0; pass < 4; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UL);
             tcg_temp_free_i32(tcg_res[pass]);
         }

@@ -11771,7 +11771,7 @@ static void handle_rev(DisasContext *s, int opcode, bool u,
             case MO_UW:
                 tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp);
                 break;
-            case MO_32:
+            case MO_UL:
                 tcg_gen_bswap32_i64(tcg_tmp, tcg_tmp);
                 break;
             case MO_64:
@@ -11900,7 +11900,7 @@ static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
         NeonGenWidenFn *widenfn = widenfns[size];
         TCGv_i32 tcg_op = tcg_temp_new_i32();

-        read_vec_element_i32(s, tcg_op, rn, part + pass, MO_32);
+        read_vec_element_i32(s, tcg_op, rn, part + pass, MO_UL);
         tcg_res[pass] = tcg_temp_new_i64();
         widenfn(tcg_res[pass], tcg_op);
         tcg_gen_shli_i64(tcg_res[pass], tcg_res[pass], 8 << size);
@@ -12251,7 +12251,7 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_res = tcg_temp_new_i32();
             TCGCond cond;

-            read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
+            read_vec_element_i32(s, tcg_op, rn, pass, MO_UL);

             if (size == 2) {
                 /* Special cases for 32 bit elements */
@@ -12418,7 +12418,7 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
                 }
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);

             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op);
@@ -12816,7 +12816,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         break;
     case 0x0e: /* SDOT */
     case 0x1e: /* UDOT */
-        if (is_scalar || size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
+        if (is_scalar || size != MO_UL || !dc_isar_feature(aa64_dp, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -12835,7 +12835,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case 0x04: /* FMLSL */
     case 0x18: /* FMLAL2 */
     case 0x1c: /* FMLSL2 */
-        if (is_scalar || size != MO_32 || !dc_isar_feature(aa64_fhm, s)) {
+        if (is_scalar || size != MO_UL || !dc_isar_feature(aa64_fhm, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -12855,7 +12855,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             size = MO_UW;
             is_fp16 = true;
             break;
-        case MO_32: /* single precision */
+        case MO_UL: /* single precision */
         case MO_64: /* double precision */
             break;
         default:
@@ -12868,7 +12868,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         /* Each indexable element is a complex pair.  */
         size += 1;
         switch (size) {
-        case MO_32:
+        case MO_UL:
             if (h && !is_q) {
                 unallocated_encoding(s);
                 return;
@@ -12902,7 +12902,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case MO_UW:
         index = h << 2 | l << 1 | m;
         break;
-    case MO_32:
+    case MO_UL:
         index = h << 1 | l;
         rm |= m << 4;
         break;
@@ -13038,7 +13038,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_32);
+            read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_UL);

             switch (16 * u + opcode) {
             case 0x08: /* MUL */
@@ -13060,7 +13060,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 if (opcode == 0x8) {
                     break;
                 }
-                read_vec_element_i32(s, tcg_op, rd, pass, MO_32);
+                read_vec_element_i32(s, tcg_op, rd, pass, MO_UL);
                 genfn = fns[size - 1][is_sub];
                 genfn(tcg_res, tcg_op, tcg_res);
                 break;
@@ -13068,7 +13068,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             case 0x05: /* FMLS */
             case 0x01: /* FMLA */
                 read_vec_element_i32(s, tcg_res, rd, pass,
-                                     is_scalar ? size : MO_32);
+                                     is_scalar ? size : MO_UL);
                 switch (size) {
                 case 1:
                     if (opcode == 0x5) {
@@ -13153,7 +13153,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 break;
             case 0x1d: /* SQRDMLAH */
                 read_vec_element_i32(s, tcg_res, rd, pass,
-                                     is_scalar ? size : MO_32);
+                                     is_scalar ? size : MO_UL);
                 if (size == 1) {
                     gen_helper_neon_qrdmlah_s16(tcg_res, cpu_env,
                                                 tcg_op, tcg_idx, tcg_res);
@@ -13164,7 +13164,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 break;
             case 0x1f: /* SQRDMLSH */
                 read_vec_element_i32(s, tcg_res, rd, pass,
-                                     is_scalar ? size : MO_32);
+                                     is_scalar ? size : MO_UL);
                 if (size == 1) {
                     gen_helper_neon_qrdmlsh_s16(tcg_res, cpu_env,
                                                 tcg_op, tcg_idx, tcg_res);
@@ -13180,7 +13180,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             if (is_scalar) {
                 write_fp_sreg(s, rd, tcg_res);
             } else {
-                write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+                write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
             }

             tcg_temp_free_i32(tcg_op);
@@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_res[2];
         int pass;
         bool satop = extract32(opcode, 0, 1);
-        TCGMemOp memop = MO_32;
+        TCGMemOp memop = MO_UL;

         if (satop || !u) {
             memop |= MO_SIGN;
@@ -13288,7 +13288,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                     read_vec_element_i32(s, tcg_op, rn, pass, size);
                 } else {
                     read_vec_element_i32(s, tcg_op, rn,
-                                         pass + (is_q * 2), MO_32);
+                                         pass + (is_q * 2), MO_UL);
                 }

                 tcg_res[pass] = tcg_temp_new_i64();
@@ -13780,19 +13780,19 @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
         tcg_res = tcg_temp_new_i32();
         tcg_zero = tcg_const_i32(0);

-        read_vec_element_i32(s, tcg_op1, rn, 3, MO_32);
-        read_vec_element_i32(s, tcg_op2, rm, 3, MO_32);
-        read_vec_element_i32(s, tcg_op3, ra, 3, MO_32);
+        read_vec_element_i32(s, tcg_op1, rn, 3, MO_UL);
+        read_vec_element_i32(s, tcg_op2, rm, 3, MO_UL);
+        read_vec_element_i32(s, tcg_op3, ra, 3, MO_UL);

         tcg_gen_rotri_i32(tcg_res, tcg_op1, 20);
         tcg_gen_add_i32(tcg_res, tcg_res, tcg_op2);
         tcg_gen_add_i32(tcg_res, tcg_res, tcg_op3);
         tcg_gen_rotri_i32(tcg_res, tcg_res, 25);

-        write_vec_element_i32(s, tcg_zero, rd, 0, MO_32);
-        write_vec_element_i32(s, tcg_zero, rd, 1, MO_32);
-        write_vec_element_i32(s, tcg_zero, rd, 2, MO_32);
-        write_vec_element_i32(s, tcg_res, rd, 3, MO_32);
+        write_vec_element_i32(s, tcg_zero, rd, 0, MO_UL);
+        write_vec_element_i32(s, tcg_zero, rd, 1, MO_UL);
+        write_vec_element_i32(s, tcg_zero, rd, 2, MO_UL);
+        write_vec_element_i32(s, tcg_res, rd, 3, MO_UL);

         tcg_temp_free_i32(tcg_op1);
         tcg_temp_free_i32(tcg_op2);
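
A quick reference while reading these mechanical hunks: MO_UL is a
straight alias for MO_32, so the rename is value-preserving.  A sketch
of the relevant slice of the TCGMemOp enum in tcg/tcg.h, as of this
point in the series, with unrelated members elided:

    typedef enum TCGMemOp {
        MO_8    = 0,
        MO_16   = 1,
        MO_32   = 2,
        MO_64   = 3,
        MO_SIZE = 3,               /* mask for the size bits above */
        MO_SIGN = 4,               /* sign-extend, else zero-extend */
        /* ... */
        MO_UB   = MO_8,            /* unsigned 8-bit */
        MO_UW   = MO_16,           /* unsigned 16-bit */
        MO_UL   = MO_32,           /* unsigned 32-bit */
        MO_SL   = MO_SIGN | MO_32, /* sign-extended 32-bit */
        /* ... */
    } TCGMemOp;

In particular "MO_32 | (is_u ? 0 : MO_SIGN)" and "is_u ? MO_UL : MO_SL"
encode the same value, which is why the handle_3rd_widening hunk above
is safe.
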
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 2bc1bd1..f7c891d 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1693,7 +1693,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
         tcg_temp_free_i32(t32);
         break;

-    case MO_32:
+    case MO_UL:
         t64 = tcg_temp_new_i64();
         if (d) {
             tcg_gen_neg_i64(t64, val);
@@ -3320,7 +3320,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_s,
           .opt_opc = vecop_list,
-          .vece = MO_32,
+          .vece = MO_UL,
           .scalar_first = true },
         { .fni8 = tcg_gen_sub_i64,
           .fniv = tcg_gen_sub_vec,
@@ -5258,7 +5258,7 @@ static bool trans_LD1_zprz(DisasContext *s, arg_LD1_zprz *a)
     }

     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = gather_load_fn32[be][a->ff][a->xs][a->u][a->msz];
         break;
     case MO_64:
@@ -5286,7 +5286,7 @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
     }

     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = gather_load_fn32[be][a->ff][0][a->u][a->msz];
         break;
     case MO_64:
@@ -5364,7 +5364,7 @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a)
         return true;
     }
     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = scatter_store_fn32[be][a->xs][a->msz];
         break;
     case MO_64:
@@ -5392,7 +5392,7 @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
     }

     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = scatter_store_fn32[be][0][a->msz];
         break;
     case MO_64:
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 549874c..5e0cd63 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -46,7 +46,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8)
             extract32(imm8, 0, 6);
         imm <<= 48;
         break;
-    case MO_32:
+    case MO_UL:
         imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
             (extract32(imm8, 6, 1) ? 0x3e00 : 0x4000) |
             (extract32(imm8, 0, 6) << 3);
@@ -1901,7 +1901,7 @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
         }
     }

-    fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm));
+    fd = tcg_const_i32(vfp_expand_imm(MO_UL, a->imm));

     for (;;) {
         neon_store_reg32(fd, vd);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 8d10922..5510ecd 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1085,7 +1085,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
     tcg_gen_extu_i32_tl(addr, a32);

     /* Not needed for user-mode BE32, where we use MO_BE instead.  */
-    if (!IS_USER_ONLY && s->sctlr_b && (op & MO_SIZE) < MO_32) {
+    if (!IS_USER_ONLY && s->sctlr_b && (op & MO_SIZE) < MO_UL) {
         tcg_gen_xori_tl(addr, addr, 4 - (1 << (op & MO_SIZE)));
     }
     return addr;
@@ -1480,7 +1480,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     case MO_UW:
         tcg_gen_st16_i32(var, cpu_env, offset);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st_i32(var, cpu_env, offset);
         break;
     default:
@@ -1499,7 +1499,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     case MO_UW:
         tcg_gen_st16_i64(var, cpu_env, offset);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st32_i64(var, cpu_env, offset);
         break;
     case MO_64:
@@ -4272,7 +4272,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_ssra64_i64,
       .fniv = gen_ssra_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4330,7 +4330,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_32, },
+      .vece = MO_UL, },
     { .fni8 = gen_usra64_i64,
       .fniv = gen_usra_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4410,7 +4410,7 @@ const GVecGen2i sri_op[4] = {
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_shr64_ins_i64,
       .fniv = gen_shr_ins_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4488,7 +4488,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_shl64_ins_i64,
       .fniv = gen_shl_ins_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4584,7 +4584,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_mla64_i64,
       .fniv = gen_mla_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4608,7 +4608,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_mls64_i64,
       .fniv = gen_mls_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4653,7 +4653,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_cmtst_i32,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_cmtst_i64,
       .fniv = gen_cmtst_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4691,7 +4691,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_s,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_d,
       .write_aofs = true,
@@ -4729,7 +4729,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_s,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_d,
       .opt_opc = vecop_list_sqadd,
@@ -4767,7 +4767,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_s,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_d,
       .opt_opc = vecop_list_uqsub,
@@ -4805,7 +4805,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_s,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_d,
       .opt_opc = vecop_list_sqsub,
@@ -5798,10 +5798,10 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     /* The immediate value has already been inverted,
                      * so BIC becomes AND.
                      */
-                    tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
+                    tcg_gen_gvec_andi(MO_UL, reg_ofs, reg_ofs, imm,
                                       vec_size, vec_size);
                 } else {
-                    tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
+                    tcg_gen_gvec_ori(MO_UL, reg_ofs, reg_ofs, imm,
                                      vec_size, vec_size);
                 }
             } else {
@@ -6879,7 +6879,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     size = MO_UW;
                     element = (insn >> 18) & 3;
                 } else {
-                    size = MO_32;
+                    size = MO_UL;
                     element = (insn >> 19) & 1;
                 }
                 tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
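
The gen_aa32_addr hunk above keeps the BE32 sub-word address fixup
behind a size comparison.  Sizes stay log2-encoded, so
"(op & MO_SIZE) < MO_UL" still selects exactly the 8- and 16-bit
accesses.  A plain-C sketch of the address transformation that the
emitted tcg_gen_xori_tl performs at run time -- be32_fixup is a
hypothetical helper, not in the patch:

    static target_ulong be32_fixup(target_ulong addr, TCGMemOp op)
    {
        if ((op & MO_SIZE) < MO_UL) {          /* byte or halfword only */
            addr ^= 4 - (1 << (op & MO_SIZE)); /* XOR 3 for bytes, 2 for halfwords */
        }
        return addr;
    }
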
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 0535bae..0e863d4 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -332,16 +332,16 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 /* Select the size of the stack pointer.  */
 static inline TCGMemOp mo_stacksize(DisasContext *s)
 {
-    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
+    return CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
 static inline TCGMemOp mo_64_32(TCGMemOp ot)
 {
 #ifdef TARGET_X86_64
-    return ot == MO_64 ? MO_64 : MO_32;
+    return ot == MO_64 ? MO_64 : MO_UL;
 #else
-    return MO_32;
+    return MO_UL;
 #endif
 }

@@ -356,7 +356,7 @@ static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
    Used for decoding operand size of port opcodes.  */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_UW ? MO_UW : MO_32) : MO_UB;
+    return b & 1 ? (ot == MO_UW ? MO_UW : MO_UL) : MO_UB;
 }

 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
@@ -372,7 +372,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
     case MO_UW:
         tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16);
         break;
-    case MO_32:
+    case MO_UL:
         /* For x86_64, this sets the higher half of register to zero.
            For i386, this is equivalent to a mov. */
         tcg_gen_ext32u_tl(cpu_regs[reg], t0);
@@ -463,7 +463,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
         }
         break;
 #endif
-    case MO_32:
+    case MO_UL:
         /* 32 bit address */
         if (ovr_seg < 0 && s->addseg) {
             ovr_seg = def_seg;
@@ -538,7 +538,7 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
         }
         return dst;
 #ifdef TARGET_X86_64
-    case MO_32:
+    case MO_UL:
         if (sign) {
             tcg_gen_ext32s_tl(dst, src);
         } else {
@@ -586,7 +586,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     case MO_UW:
         gen_helper_inw(v, cpu_env, n);
         break;
-    case MO_32:
+    case MO_UL:
         gen_helper_inl(v, cpu_env, n);
         break;
     default:
@@ -603,7 +603,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     case MO_UW:
         gen_helper_outw(cpu_env, v, n);
         break;
-    case MO_32:
+    case MO_UL:
         gen_helper_outl(cpu_env, v, n);
         break;
     default:
@@ -625,7 +625,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
         case MO_UW:
             gen_helper_check_iow(cpu_env, s->tmp2_i32);
             break;
-        case MO_32:
+        case MO_UL:
             gen_helper_check_iol(cpu_env, s->tmp2_i32);
             break;
         default:
@@ -1077,7 +1077,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)

 static inline void gen_stos(DisasContext *s, TCGMemOp ot)
 {
-    gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+    gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
     gen_string_movl_A0_EDI(s);
     gen_op_st_v(s, ot, s->T0, s->A0);
     gen_op_movl_T0_Dshift(s, ot);
@@ -1568,7 +1568,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
         goto do_long;
     do_long:
 #ifdef TARGET_X86_64
-    case MO_32:
+    case MO_UL:
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
         tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1);
         if (is_right) {
@@ -1644,7 +1644,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
     if (op2 != 0) {
         switch (ot) {
 #ifdef TARGET_X86_64
-        case MO_32:
+        case MO_UL:
             tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
             if (is_right) {
                 tcg_gen_rotri_i32(s->tmp2_i32, s->tmp2_i32, op2);
@@ -1725,7 +1725,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UW:
             gen_helper_rcrw(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_32:
+        case MO_UL:
             gen_helper_rcrl(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
@@ -1744,7 +1744,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UW:
             gen_helper_rclw(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_32:
+        case MO_UL:
             gen_helper_rcll(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
@@ -1791,7 +1791,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         }
         /* FALLTHRU */
 #ifdef TARGET_X86_64
-    case MO_32:
+    case MO_UL:
         /* Concatenate the two 32-bit values and use a 64-bit shift.  */
         tcg_gen_subi_tl(s->tmp0, count, 1);
         if (is_right) {
@@ -1984,7 +1984,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s,

     switch (s->aflag) {
     case MO_64:
-    case MO_32:
+    case MO_UL:
         havesib = 0;
         if (rm == 4) {
             int code = x86_ldub_code(env, s);
@@ -2190,7 +2190,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     case MO_UW:
         ret = x86_lduw_code(env, s);
         break;
-    case MO_32:
+    case MO_UL:
 #ifdef TARGET_X86_64
     case MO_64:
 #endif
@@ -2204,7 +2204,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)

 static inline int insn_const_size(TCGMemOp ot)
 {
-    if (ot <= MO_32) {
+    if (ot <= MO_UL) {
         return 1 << ot;
     } else {
         return 4;
@@ -2400,12 +2400,12 @@ static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)

 static inline void gen_stack_A0(DisasContext *s)
 {
-    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_UW, cpu_regs[R_ESP], R_SS, -1);
+    gen_lea_v_seg(s, s->ss32 ? MO_UL : MO_UW, cpu_regs[R_ESP], R_SS, -1);
 }

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
+    TCGMemOp s_ot = s->ss32 ? MO_UL : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2421,7 +2421,7 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
+    TCGMemOp s_ot = s->ss32 ? MO_UL : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s)
 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
     TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
+    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -3145,7 +3145,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else {
                 tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State,
                     xmm_regs[reg].ZMM_L(0)));
-                gen_op_st_v(s, MO_32, s->T0, s->A0);
+                gen_op_st_v(s, MO_UL, s->T0, s->A0);
             }
             break;
         case 0x6e: /* movd mm, ea */
@@ -3157,7 +3157,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else
 #endif
             {
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 0);
                 tcg_gen_addi_ptr(s->ptr0, cpu_env,
                                  offsetof(CPUX86State,fpregs[reg].mmx));
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -3174,7 +3174,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else
 #endif
             {
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 0);
                 tcg_gen_addi_ptr(s->ptr0, cpu_env,
                                  offsetof(CPUX86State,xmm_regs[reg]));
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -3211,7 +3211,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
         case 0x210: /* movss xmm, ea */
             if (mod != 3) {
                 gen_lea_modrm(env, s, modrm);
-                gen_op_ld_v(s, MO_32, s->T0, s->A0);
+                gen_op_ld_v(s, MO_UL, s->T0, s->A0);
                 tcg_gen_st32_tl(s->T0, cpu_env,
                                 offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)));
                 tcg_gen_movi_tl(s->T0, 0);
@@ -3346,7 +3346,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             {
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State,fpregs[reg].mmx.MMX_L(0)));
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 1);
             }
             break;
         case 0x17e: /* movd ea, xmm */
@@ -3360,7 +3360,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             {
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State,xmm_regs[reg].ZMM_L(0)));
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 1);
             }
             break;
         case 0x27e: /* movq xmm, ea */
@@ -3405,7 +3405,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 gen_lea_modrm(env, s, modrm);
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)));
-                gen_op_st_v(s, MO_32, s->T0, s->A0);
+                gen_op_st_v(s, MO_UL, s->T0, s->A0);
             } else {
                 rm = (modrm & 7) | REX_B(s);
                 gen_op_movl(s, offsetof(CPUX86State, xmm_regs[rm].ZMM_L(0)),
@@ -3530,7 +3530,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
             op1_offset = offsetof(CPUX86State,xmm_regs[reg]);
             tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset);
-            if (ot == MO_32) {
+            if (ot == MO_UL) {
                 SSEFunc_0_epi sse_fn_epi = sse_op_table3ai[(b >> 8) & 1];
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 sse_fn_epi(cpu_env, s->ptr0, s->tmp2_i32);
@@ -3584,7 +3584,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if ((b >> 8) & 1) {
                     gen_ldq_env_A0(s, offsetof(CPUX86State, xmm_t0.ZMM_Q(0)));
                 } else {
-                    gen_op_ld_v(s, MO_32, s->T0, s->A0);
+                    gen_op_ld_v(s, MO_UL, s->T0, s->A0);
                     tcg_gen_st32_tl(s->T0, cpu_env,
                                     offsetof(CPUX86State, xmm_t0.ZMM_L(0)));
                 }
@@ -3594,7 +3594,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 op2_offset = offsetof(CPUX86State,xmm_regs[rm]);
             }
             tcg_gen_addi_ptr(s->ptr0, cpu_env, op2_offset);
-            if (ot == MO_32) {
+            if (ot == MO_UL) {
                 SSEFunc_i_ep sse_fn_i_ep =
                     sse_op_table3bi[((b >> 7) & 2) | (b & 1)];
                 sse_fn_i_ep(s->tmp2_i32, cpu_env, s->ptr0);
@@ -3786,7 +3786,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if ((b & 0xff) == 0xf0) {
                     ot = MO_UB;
                 } else if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
                 } else {
                     ot = MO_64;
                 }
@@ -3815,7 +3815,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     goto illegal_op;
                 }
                 if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
                 } else {
                     ot = MO_64;
                 }
@@ -4026,7 +4026,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,

                     switch (ot) {
 #ifdef TARGET_X86_64
-                    case MO_32:
+                    case MO_UL:
                         /* If we know TL is 64-bit, and we want a 32-bit
                            result, just do everything in 64-bit arithmetic.  */
                         tcg_gen_ext32u_i64(cpu_regs[reg], cpu_regs[reg]);
@@ -4172,7 +4172,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     }
                     break;
                 case 0x16:
-                    if (ot == MO_32) { /* pextrd */
+                    if (ot == MO_UL) { /* pextrd */
                         tcg_gen_ld_i32(s->tmp2_i32, cpu_env,
                                         offsetof(CPUX86State,
                                                 xmm_regs[reg].ZMM_L(val & 3)));
@@ -4210,7 +4210,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     break;
                 case 0x20: /* pinsrb */
                     if (mod == 3) {
-                        gen_op_mov_v_reg(s, MO_32, s->T0, rm);
+                        gen_op_mov_v_reg(s, MO_UL, s->T0, rm);
                     } else {
                         tcg_gen_qemu_ld_tl(s->T0, s->A0,
                                            s->mem_index, MO_UB);
@@ -4248,7 +4248,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                                                 xmm_regs[reg].ZMM_L(3)));
                     break;
                 case 0x22:
-                    if (ot == MO_32) { /* pinsrd */
+                    if (ot == MO_UL) { /* pinsrd */
                         if (mod == 3) {
                             tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[rm]);
                         } else {
@@ -4393,7 +4393,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 switch (sz) {
                 case 2:
                     /* 32 bit access */
-                    gen_op_ld_v(s, MO_32, s->T0, s->A0);
+                    gen_op_ld_v(s, MO_UL, s->T0, s->A0);
                     tcg_gen_st32_tl(s->T0, cpu_env,
                                     offsetof(CPUX86State,xmm_t0.ZMM_L(0)));
                     break;
@@ -4630,19 +4630,19 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
            data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
            over 0x66 if both are present.  */
-        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_32);
+        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_UL);
         /* In 64-bit mode, 0x67 selects 32-bit addressing.  */
-        aflag = (prefixes & PREFIX_ADR ? MO_32 : MO_64);
+        aflag = (prefixes & PREFIX_ADR ? MO_UL : MO_64);
     } else {
         /* In 16/32-bit mode, 0x66 selects the opposite data size.  */
         if (s->code32 ^ ((prefixes & PREFIX_DATA) != 0)) {
-            dflag = MO_32;
+            dflag = MO_UL;
         } else {
             dflag = MO_UW;
         }
         /* In 16/32-bit mode, 0x67 selects the opposite addressing.  */
         if (s->code32 ^ ((prefixes & PREFIX_ADR) != 0)) {
-            aflag = MO_32;
+            aflag = MO_UL;
         }  else {
             aflag = MO_UW;
         }
@@ -4891,7 +4891,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 set_cc_op(s, CC_OP_MULW);
                 break;
             default:
-            case MO_32:
+            case MO_UL:
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]);
                 tcg_gen_mulu2_i32(s->tmp2_i32, s->tmp3_i32,
@@ -4942,7 +4942,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 set_cc_op(s, CC_OP_MULW);
                 break;
             default:
-            case MO_32:
+            case MO_UL:
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]);
                 tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32,
@@ -4976,7 +4976,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_helper_divw_AX(cpu_env, s->T0);
                 break;
             default:
-            case MO_32:
+            case MO_UL:
                 gen_helper_divl_EAX(cpu_env, s->T0);
                 break;
 #ifdef TARGET_X86_64
@@ -4995,7 +4995,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_helper_idivw_AX(cpu_env, s->T0);
                 break;
             default:
-            case MO_32:
+            case MO_UL:
                 gen_helper_idivl_EAX(cpu_env, s->T0);
                 break;
 #ifdef TARGET_X86_64
@@ -5026,7 +5026,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* operand size for jumps is 64 bit */
                 ot = MO_64;
             } else if (op == 3 || op == 5) {
-                ot = dflag != MO_UW ? MO_32 + (rex_w == 1) : MO_UW;
+                ot = dflag != MO_UW ? MO_UL + (rex_w == 1) : MO_UW;
             } else if (op == 6) {
                 /* default push size is 64 bit */
                 ot = mo_pushpop(s, dflag);
@@ -5146,15 +5146,15 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         switch (dflag) {
 #ifdef TARGET_X86_64
         case MO_64:
-            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
             tcg_gen_ext32s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_64, R_EAX, s->T0);
             break;
 #endif
-        case MO_32:
+        case MO_UL:
             gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
+            gen_op_mov_reg_v(s, MO_UL, R_EAX, s->T0);
             break;
         case MO_UW:
             gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
@@ -5174,11 +5174,11 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_op_mov_reg_v(s, MO_64, R_EDX, s->T0);
             break;
 #endif
-        case MO_32:
-            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+        case MO_UL:
+            gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
             tcg_gen_ext32s_tl(s->T0, s->T0);
             tcg_gen_sari_tl(s->T0, s->T0, 31);
-            gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0);
+            gen_op_mov_reg_v(s, MO_UL, R_EDX, s->T0);
             break;
         case MO_UW:
             gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
@@ -5219,7 +5219,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_sub_tl(cpu_cc_src, cpu_cc_src, s->T1);
             break;
 #endif
-        case MO_32:
+        case MO_UL:
             tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
             tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1);
             tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32,
@@ -5394,7 +5394,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /**************************/
         /* push/pop */
     case 0x50 ... 0x57: /* push */
-        gen_op_mov_v_reg(s, MO_32, s->T0, (b & 7) | REX_B(s));
+        gen_op_mov_v_reg(s, MO_UL, s->T0, (b & 7) | REX_B(s));
         gen_push_v(s, s->T0);
         break;
     case 0x58 ... 0x5f: /* pop */
@@ -5734,7 +5734,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1b5: /* lgs Gv */
         op = R_GS;
     do_lxx:
-        ot = dflag != MO_UW ? MO_32 : MO_UW;
+        ot = dflag != MO_UW ? MO_UL : MO_UW;
         modrm = x86_ldub_code(env, s);
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
@@ -6576,7 +6576,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0xe8: /* call im */
         {
             if (dflag != MO_UW) {
-                tval = (int32_t)insn_get(env, s, MO_32);
+                tval = (int32_t)insn_get(env, s, MO_UL);
             } else {
                 tval = (int16_t)insn_get(env, s, MO_UW);
             }
@@ -6609,7 +6609,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         goto do_lcall;
     case 0xe9: /* jmp im */
         if (dflag != MO_UW) {
-            tval = (int32_t)insn_get(env, s, MO_32);
+            tval = (int32_t)insn_get(env, s, MO_UL);
         } else {
             tval = (int16_t)insn_get(env, s, MO_UW);
         }
@@ -6649,7 +6649,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
         if (dflag != MO_UW) {
-            tval = (int32_t)insn_get(env, s, MO_32);
+            tval = (int32_t)insn_get(env, s, MO_UL);
         } else {
             tval = (int16_t)insn_get(env, s, MO_UW);
         }
@@ -6827,7 +6827,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
         rm = (modrm & 7) | REX_B(s);
-        gen_op_mov_v_reg(s, MO_32, s->T1, reg);
+        gen_op_mov_v_reg(s, MO_UL, s->T1, reg);
         if (mod != 3) {
             AddressParts a = gen_lea_modrm_0(env, s, modrm);
             /* specific case: we need to add a displacement */
@@ -7126,10 +7126,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         } else
 #endif
         {
-            gen_op_mov_v_reg(s, MO_32, s->T0, reg);
+            gen_op_mov_v_reg(s, MO_UL, s->T0, reg);
             tcg_gen_ext32u_tl(s->T0, s->T0);
             tcg_gen_bswap32_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_32, reg, s->T0);
+            gen_op_mov_reg_v(s, MO_UL, reg, s->T0);
         }
         break;
     case 0xd6: /* salc */
@@ -7359,7 +7359,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
-            gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_st_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             break;

         case 0xc8: /* monitor */
@@ -7414,7 +7414,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
-            gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_st_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             break;

         case 0xd0: /* xgetbv */
@@ -7560,7 +7560,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
-            gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_ld_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
@@ -7577,7 +7577,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
-            gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_ld_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
@@ -7698,7 +7698,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             rm = (modrm & 7) | REX_B(s);

             if (mod == 3) {
-                gen_op_mov_v_reg(s, MO_32, s->T0, rm);
+                gen_op_mov_v_reg(s, MO_UL, s->T0, rm);
                 /* sign extend */
                 if (d_ot == MO_64) {
                     tcg_gen_ext32s_tl(s->T0, s->T0);
@@ -7706,7 +7706,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_op_mov_reg_v(s, d_ot, reg, s->T0);
             } else {
                 gen_lea_modrm(env, s, modrm);
-                gen_op_ld_v(s, MO_32 | MO_SIGN, s->T0, s->A0);
+                gen_op_ld_v(s, MO_SL, s->T0, s->A0);
                 gen_op_mov_reg_v(s, d_ot, reg, s->T0);
             }
         } else
@@ -7765,7 +7765,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             TCGv t0;
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            ot = dflag != MO_UW ? MO_32 : MO_UW;
+            ot = dflag != MO_UW ? MO_UL : MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = ((modrm >> 3) & 7) | rex_r;
             gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
@@ -8016,7 +8016,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (CODE64(s))
                 ot = MO_64;
             else
-                ot = MO_32;
+                ot = MO_UL;
             if ((prefixes & PREFIX_LOCK) && (reg == 0) &&
                 (s->cpuid_ext3_features & CPUID_EXT3_CR8LEG)) {
                 reg = 8;
@@ -8073,7 +8073,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (CODE64(s))
                 ot = MO_64;
             else
-                ot = MO_32;
+                ot = MO_UL;
             if (reg >= 8) {
                 goto illegal_op;
             }
@@ -8168,7 +8168,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, mxcsr));
-            gen_op_st_v(s, MO_32, s->T0, s->A0);
+            gen_op_st_v(s, MO_UL, s->T0, s->A0);
             break;

         CASE_MODRM_MEM_OP(4): /* xsave */
@@ -8268,7 +8268,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                     dst = treg, src = base;
                 }

-                if (s->dflag == MO_32) {
+                if (s->dflag == MO_UL) {
                     tcg_gen_ext32u_tl(dst, src);
                 } else {
                     tcg_gen_mov_tl(dst, src);
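
The i386 hunks also lean on the numeric ordering behind the aliases,
e.g. "ot <= MO_UL" in insn_const_size() and "MO_UL + (rex_w == 1)" to
step from a 32-bit to a 64-bit operand size.  A compile-time check one
could keep next to the enum -- a sketch, not part of this patch:

    QEMU_BUILD_BUG_ON(MO_UB != 0 || MO_UW != 1 || MO_UL != 2 || MO_64 != 3);
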
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 71efef4..8aa767e 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -409,27 +409,27 @@ GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0,       \
 GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1);
 GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
-GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2);
+GEN_VXFORM_V(vadduwm, MO_UL, tcg_gen_gvec_add, 0, 2);
 GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
 GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
 GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17);
-GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18);
+GEN_VXFORM_V(vsubuwm, MO_UL, tcg_gen_gvec_sub, 0, 18);
 GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
 GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
 GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1);
-GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2);
+GEN_VXFORM_V(vmaxuw, MO_UL, tcg_gen_gvec_umax, 1, 2);
 GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
 GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
 GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5);
-GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6);
+GEN_VXFORM_V(vmaxsw, MO_UL, tcg_gen_gvec_smax, 1, 6);
 GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
 GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
 GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9);
-GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10);
+GEN_VXFORM_V(vminuw, MO_UL, tcg_gen_gvec_umin, 1, 10);
 GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
 GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
 GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13);
-GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14);
+GEN_VXFORM_V(vminsw, MO_UL, tcg_gen_gvec_smin, 1, 14);
 GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
 GEN_VXFORM(vavgub, 1, 16);
 GEN_VXFORM(vabsdub, 1, 16);
@@ -532,18 +532,18 @@ GEN_VXFORM(vmulesh, 4, 13);
 GEN_VXFORM(vmulesw, 4, 14);
 GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4);
 GEN_VXFORM_V(vslh, MO_UW, tcg_gen_gvec_shlv, 2, 5);
-GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6);
+GEN_VXFORM_V(vslw, MO_UL, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
 GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
 GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9);
-GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10);
+GEN_VXFORM_V(vsrw, MO_UL, tcg_gen_gvec_shrv, 2, 10);
 GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
 GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
 GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13);
-GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14);
+GEN_VXFORM_V(vsraw, MO_UL, tcg_gen_gvec_sarv, 2, 14);
 GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
 GEN_VXFORM(vsrv, 2, 28);
 GEN_VXFORM(vslv, 2, 29);
@@ -595,16 +595,16 @@ GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0,       \
 GEN_VXFORM_SAT(vadduhs, MO_UW, add, usadd, 0, 9);
 GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \
                 vmul10euq, PPC_NONE, PPC2_ISA300)
-GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10);
+GEN_VXFORM_SAT(vadduws, MO_UL, add, usadd, 0, 10);
 GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12);
 GEN_VXFORM_SAT(vaddshs, MO_UW, add, ssadd, 0, 13);
-GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14);
+GEN_VXFORM_SAT(vaddsws, MO_UL, add, ssadd, 0, 14);
 GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24);
 GEN_VXFORM_SAT(vsubuhs, MO_UW, sub, ussub, 0, 25);
-GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
+GEN_VXFORM_SAT(vsubuws, MO_UL, sub, ussub, 0, 26);
 GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
 GEN_VXFORM_SAT(vsubshs, MO_UW, sub, sssub, 0, 29);
-GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
+GEN_VXFORM_SAT(vsubsws, MO_UL, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
 GEN_VXFORM(vaddcuq, 0, 5);
 GEN_VXFORM3(vaddeuqm, 30, 0);
@@ -914,7 +914,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \

 GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
 GEN_VXFORM_VSPLT(vsplth, MO_UW, 6, 9);
-GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
+GEN_VXFORM_VSPLT(vspltw, MO_UL, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
 GEN_VXFORM_UIMM_SPLAT(vextractuh, 6, 9, 14);
 GEN_VXFORM_UIMM_SPLAT(vextractuw, 6, 10, 12);
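
For the GEN_VXFORM_V and GEN_VXFORM_SAT expansions above, the MO_*
argument is the gvec element size (vece): lanes are 1 << vece bytes
wide, so MO_UL selects 32-bit lanes.  A direct call with the same
meaning as the vadduwm expansion -- a sketch assuming the
avr_full_offset() helper these macros use:

    /* Add two 128-bit VMX registers as four 32-bit lanes. */
    tcg_gen_gvec_add(MO_UL, avr_full_offset(rd),
                     avr_full_offset(ra), avr_full_offset(rb), 16, 16);
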
diff --git a/target/ppc/translate/vsx-impl.inc.c b/target/ppc/translate/vsx-impl.inc.c
index 3922686..212817e 100644
--- a/target/ppc/translate/vsx-impl.inc.c
+++ b/target/ppc/translate/vsx-impl.inc.c
@@ -1553,12 +1553,12 @@ static void gen_xxspltw(DisasContext *ctx)

     tofs = vsr_full_offset(rt);
     bofs = vsr_full_offset(rb);
-    bofs += uim << MO_32;
+    bofs += uim << MO_UL;
 #ifndef HOST_WORDS_BIG_ENDIAN
     bofs ^= 8 | 4;
 #endif

-    tcg_gen_gvec_dup_mem(MO_32, tofs, bofs, 16, 16);
+    tcg_gen_gvec_dup_mem(MO_UL, tofs, bofs, 16, 16);
 }

 #define pattern(x) (((x) & 0xff) * (~(uint64_t)0 / 0xff))
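
In gen_xxspltw above, the size encoding doubles as a shift count: the
byte offset of 32-bit word uim inside the vector register is
uim << MO_UL.  For example:

    int off = 3 << MO_UL;   /* 3 << 2 == 12: byte offset of word 3 */
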
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index 415747f..9e646f1 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -196,7 +196,7 @@ static inline int freg64_offset(uint8_t reg)
 static inline int freg32_offset(uint8_t reg)
 {
     g_assert(reg < 16);
-    return vec_reg_offset(reg, 0, MO_32);
+    return vec_reg_offset(reg, 0, MO_UL);
 }

 static TCGv_i64 load_reg(int reg)
@@ -2283,7 +2283,7 @@ static DisasJumpType op_csp(DisasContext *s, DisasOps *o)

     /* Write back the output now, so that it happens before the
        following branch, so that we don't need local temps.  */
-    if ((mop & MO_SIZE) == MO_32) {
+    if ((mop & MO_SIZE) == MO_UL) {
         tcg_gen_deposit_i64(o->out, o->out, old, 0, 32);
     } else {
         tcg_gen_mov_i64(o->out, old);
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 65da6b3..75d788c 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -48,7 +48,7 @@

 #define ES_8    MO_UB
 #define ES_16   MO_UW
-#define ES_32   MO_32
+#define ES_32   MO_UL
 #define ES_64   MO_64
 #define ES_128  4

diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index 28e1b1d..f67392c 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -80,7 +80,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
         return s390_vec_read_element8(v, enr);
     case MO_UW:
         return s390_vec_read_element16(v, enr);
-    case MO_32:
+    case MO_UL:
         return s390_vec_read_element32(v, enr);
     case MO_64:
         return s390_vec_read_element64(v, enr);
@@ -127,7 +127,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
     case MO_UW:
         s390_vec_write_element16(v, enr, data);
         break;
-    case MO_32:
+    case MO_UL:
         s390_vec_write_element32(v, enr, data);
         break;
     case MO_64:
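
The tcg_out_dupi_vec hunk below tests whether a 64-bit constant is a
replicated 32-bit value.  dup_const() builds the replication from the
vece encoding; with MO_UL it duplicates the low 32 bits:

    uint64_t v = dup_const(MO_UL, 0xdeadbeef);
    /* v == 0xdeadbeefdeadbeefull, so "v64 == dup_const(MO_UL, v64)"
       holds exactly when both halves of v64 match. */
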
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 3d90c4b..dc4fd21 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -431,12 +431,12 @@ typedef enum {
        that emits them can transform to 3.3.10 or 3.3.13.  */
     I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
     I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_UW << 30,
-    I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_32 << 30,
+    I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_UL << 30,
     I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,

     I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
     I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_UW << 30,
-    I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_32 << 30,
+    I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_UL << 30,
     I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,

     I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
@@ -444,10 +444,10 @@ typedef enum {

     I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30,
     I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UW << 30,
-    I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30,
+    I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UL << 30,

-    I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_32 << 30,
-    I3312_STRVS     = 0x3c000000 | LDST_ST << 22 | MO_32 << 30,
+    I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_UL << 30,
+    I3312_STRVS     = 0x3c000000 | LDST_ST << 22 | MO_UL << 30,

     I3312_LDRVD     = 0x3c000000 | LDST_LD << 22 | MO_64 << 30,
     I3312_STRVD     = 0x3c000000 | LDST_ST << 22 | MO_64 << 30,
@@ -870,7 +870,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,

     /*
      * Test all bytes 0x00 or 0xff second.  This can match cases that
-     * might otherwise take 2 or 3 insns for MO_UW or MO_32 below.
+     * might otherwise take 2 or 3 insns for MO_UW or MO_UL below.
      */
     for (i = imm8 = 0; i < 8; i++) {
         uint8_t byte = v64 >> (i * 8);
@@ -908,7 +908,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
         tcg_out_insn(s, 3606, MOVI, q, rd, 0, 0x8, v16 & 0xff);
         tcg_out_insn(s, 3606, ORR, q, rd, 0, 0xa, v16 >> 8);
         return;
-    } else if (v64 == dup_const(MO_32, v64)) {
+    } else if (v64 == dup_const(MO_UL, v64)) {
         uint32_t v32 = v64;
         uint32_t n32 = ~v32;

@@ -1749,7 +1749,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
         if (bswap) {
             tcg_out_ldst_r(s, I3312_LDRW, data_r, addr_r, otype, off_r);
             tcg_out_rev32(s, data_r, data_r);
-            tcg_out_sxt(s, TCG_TYPE_I64, MO_32, data_r, data_r);
+            tcg_out_sxt(s, TCG_TYPE_I64, MO_UL, data_r, data_r);
         } else {
             tcg_out_ldst_r(s, I3312_LDRSWX, data_r, addr_r, otype, off_r);
         }
@@ -1782,7 +1782,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
         }
         tcg_out_ldst_r(s, I3312_STRH, data_r, addr_r, otype, off_r);
         break;
-    case MO_32:
+    case MO_UL:
         if (bswap && data_r != TCG_REG_XZR) {
             tcg_out_rev32(s, TCG_REG_TMP, data_r);
             data_r = TCG_REG_TMP;
@@ -2194,7 +2194,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext_i32_i64:
     case INDEX_op_ext32s_i64:
-        tcg_out_sxt(s, TCG_TYPE_I64, MO_32, a0, a1);
+        tcg_out_sxt(s, TCG_TYPE_I64, MO_UL, a0, a1);
         break;
     case INDEX_op_ext8u_i64:
     case INDEX_op_ext8u_i32:
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 0bd400e..05560a2 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1435,7 +1435,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     case MO_UW:
         argreg = tcg_out_arg_reg16(s, argreg, datalo);
         break;
-    case MO_32:
+    case MO_UL:
     default:
         argreg = tcg_out_arg_reg32(s, argreg, datalo);
         break;
@@ -1632,7 +1632,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
             tcg_out_st16_r(s, cond, datalo, addrlo, addend);
         }
         break;
-    case MO_32:
+    case MO_UL:
     default:
         if (bswap) {
             tcg_out_bswap32(s, cond, TCG_REG_R0, datalo);
@@ -1677,7 +1677,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
             tcg_out_st16_8(s, COND_AL, datalo, addrlo, 0);
         }
         break;
-    case MO_32:
+    case MO_UL:
     default:
         if (bswap) {
             tcg_out_bswap32(s, COND_AL, TCG_REG_R0, datalo);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 31c3664..93e4c63 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -897,7 +897,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
             tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a);
             a = r;
             /* FALLTHRU */
-        case MO_32:
+        case MO_UL:
             tcg_out_vex_modrm(s, OPC_PSHUFD, r, 0, a);
             /* imm8 operand: all output lanes selected from input lane 0.  */
             tcg_out8(s, 0);
@@ -924,7 +924,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
         case MO_64:
             tcg_out_vex_modrm_offset(s, OPC_MOVDDUP, r, 0, base, offset);
             break;
-        case MO_32:
+        case MO_UL:
             tcg_out_vex_modrm_offset(s, OPC_VBROADCASTSS, r, 0, base, offset);
             break;
         case MO_UW:
@@ -2173,7 +2173,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         tcg_out_modrm_sib_offset(s, movop + P_DATA16 + seg, datalo,
                                  base, index, 0, ofs);
         break;
-    case MO_32:
+    case MO_UL:
         if (bswap) {
             tcg_out_mov(s, TCG_TYPE_I32, scratch, datalo);
             tcg_out_bswap32(s, scratch);
@@ -2927,7 +2927,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
     case INDEX_op_x86_blend_vec:
         if (vece == MO_UW) {
             insn = OPC_PBLENDW;
-        } else if (vece == MO_32) {
+        } else if (vece == MO_UL) {
             insn = (have_avx2 ? OPC_VPBLENDD : OPC_BLENDPS);
         } else {
             g_assert_not_reached();
@@ -3292,13 +3292,13 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_shrs_vec:
         return vece >= MO_UW;
     case INDEX_op_sars_vec:
-        return vece >= MO_UW && vece <= MO_32;
+        return vece >= MO_UW && vece <= MO_UL;

     case INDEX_op_shlv_vec:
     case INDEX_op_shrv_vec:
-        return have_avx2 && vece >= MO_32;
+        return have_avx2 && vece >= MO_UL;
     case INDEX_op_sarv_vec:
-        return have_avx2 && vece == MO_32;
+        return have_avx2 && vece == MO_UL;

     case INDEX_op_mul_vec:
         if (vece == MO_UB) {
@@ -3320,7 +3320,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_umin_vec:
     case INDEX_op_umax_vec:
     case INDEX_op_abs_vec:
-        return vece <= MO_32;
+        return vece <= MO_UL;

     default:
         return 0;
@@ -3396,9 +3396,9 @@ static void expand_vec_sari(TCGType type, unsigned vece,
              * shift (note that the ISA says shift of 32 is valid).
              */
             t1 = tcg_temp_new_vec(type);
-            tcg_gen_sari_vec(MO_32, t1, v1, imm);
+            tcg_gen_sari_vec(MO_UL, t1, v1, imm);
             tcg_gen_shri_vec(MO_64, v0, v1, imm);
-            vec_gen_4(INDEX_op_x86_blend_vec, type, MO_32,
+            vec_gen_4(INDEX_op_x86_blend_vec, type, MO_UL,
                       tcgv_vec_arg(v0), tcgv_vec_arg(v0),
                       tcgv_vec_arg(t1), 0xaa);
             tcg_temp_free_vec(t1);
@@ -3515,28 +3515,28 @@ static bool expand_vec_cmp_noinv(TCGType type, unsigned vece, TCGv_vec v0,
         fixup = NEED_SWAP | NEED_INV;
         break;
     case TCG_COND_LEU:
-        if (vece <= MO_32) {
+        if (vece <= MO_UL) {
             fixup = NEED_UMIN;
         } else {
             fixup = NEED_BIAS | NEED_INV;
         }
         break;
     case TCG_COND_GTU:
-        if (vece <= MO_32) {
+        if (vece <= MO_UL) {
             fixup = NEED_UMIN | NEED_INV;
         } else {
             fixup = NEED_BIAS;
         }
         break;
     case TCG_COND_GEU:
-        if (vece <= MO_32) {
+        if (vece <= MO_UL) {
             fixup = NEED_UMAX;
         } else {
             fixup = NEED_BIAS | NEED_SWAP | NEED_INV;
         }
         break;
     case TCG_COND_LTU:
-        if (vece <= MO_32) {
+        if (vece <= MO_UL) {
             fixup = NEED_UMAX | NEED_INV;
         } else {
             fixup = NEED_BIAS | NEED_SWAP;
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 1780cb1..a78fe87 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1386,7 +1386,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UW:
         i = tcg_out_call_iarg_reg16(s, i, l->datalo_reg);
         break;
-    case MO_32:
+    case MO_UL:
         i = tcg_out_call_iarg_reg(s, i, l->datalo_reg);
         break;
     case MO_64:
@@ -1579,11 +1579,11 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
         tcg_out_opc_imm(s, OPC_SH, lo, base, 0);
         break;

-    case MO_32 | MO_BSWAP:
+    case MO_UL | MO_BSWAP:
         tcg_out_bswap32(s, TCG_TMP3, lo);
         lo = TCG_TMP3;
         /* FALLTHRU */
-    case MO_32:
+    case MO_UL:
         tcg_out_opc_imm(s, OPC_SW, lo, base, 0);
         break;

diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 852b894..835336a 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1714,7 +1714,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 #endif
             tcg_out_mov(s, TCG_TYPE_I32, arg++, hi);
             /* FALLTHRU */
-        case MO_32:
+        case MO_UL:
             tcg_out_mov(s, TCG_TYPE_I32, arg++, lo);
             break;
         default:
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 20bc19d..1905986 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1222,7 +1222,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_UW:
         tcg_out_opc_store(s, OPC_SH, base, lo, 0);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_out_opc_store(s, OPC_SW, base, lo, 0);
         break;
     case MO_64:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 85550b5..ac0d3a3 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -889,7 +889,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
         tcg_out_arithi(s, r, r, 16, SHIFT_SLL);
         tcg_out_arithi(s, r, r, 16, SHIFT_SRL);
         break;
-    case MO_32:
+    case MO_UL:
         if (SPARC64) {
             tcg_out_arith(s, r, r, 0, SHIFT_SRL);
         }
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index da409f5..e63622c 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -310,7 +310,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
         return 0x0101010101010101ull * (uint8_t)c;
     case MO_UW:
         return 0x0001000100010001ull * (uint16_t)c;
-    case MO_32:
+    case MO_UL:
         return 0x0000000100000001ull * (uint32_t)c;
     case MO_64:
         return c;
@@ -330,7 +330,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
     case MO_UW:
         tcg_gen_deposit_i32(out, in, in, 16, 16);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_mov_i32(out, in);
         break;
     default:
@@ -349,7 +349,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
         tcg_gen_ext16u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0001000100010001ull);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_deposit_i64(out, in, in, 32, 32);
         break;
     case MO_64:
@@ -443,7 +443,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     TCGv_ptr t_ptr;
     uint32_t i;

-    assert(vece <= (in_32 ? MO_32 : MO_64));
+    assert(vece <= (in_32 ? MO_UL : MO_64));
     assert(in_32 == NULL || in_64 == NULL);

     /* If we're storing 0, expand oprsz to maxsz.  */
@@ -485,7 +485,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
                use a 64-bit operation unless the 32-bit operation would
                be simple enough.  */
             if (TCG_TARGET_REG_BITS == 64
-                && (vece != MO_32 || !check_size_impl(oprsz, 4))) {
+                && (vece != MO_UL || !check_size_impl(oprsz, 4))) {
                 t_64 = tcg_temp_new_i64();
                 tcg_gen_extu_i32_i64(t_64, in_32);
                 gen_dup_i64(vece, t_64, t_64);
@@ -1430,7 +1430,7 @@ void tcg_gen_gvec_dup_i32(unsigned vece, uint32_t dofs, uint32_t oprsz,
                           uint32_t maxsz, TCGv_i32 in)
 {
     check_size_align(oprsz, maxsz, dofs);
-    tcg_debug_assert(vece <= MO_32);
+    tcg_debug_assert(vece <= MO_UL);
     do_dup(vece, dofs, oprsz, maxsz, in, NULL, 0);
 }

@@ -1453,7 +1453,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             tcg_gen_dup_mem_vec(vece, t_vec, cpu_env, aofs);
             do_dup_store(type, dofs, oprsz, maxsz, t_vec);
             tcg_temp_free_vec(t_vec);
-        } else if (vece <= MO_32) {
+        } else if (vece <= MO_UL) {
             TCGv_i32 in = tcg_temp_new_i32();
             switch (vece) {
             case MO_UB:
@@ -1519,7 +1519,7 @@ void tcg_gen_gvec_dup32i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint32_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_32, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UL, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_dup16i(uint32_t dofs, uint32_t oprsz,
@@ -1618,7 +1618,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add32,
           .opt_opc = vecop_list_add,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_add_i64,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add64,
@@ -1649,7 +1649,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds32,
           .opt_opc = vecop_list_add,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_add_i64,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds64,
@@ -1690,7 +1690,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs32,
           .opt_opc = vecop_list_sub,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_sub_i64,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs64,
@@ -1769,7 +1769,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub32,
           .opt_opc = vecop_list_sub,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_sub_i64,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub64,
@@ -1800,7 +1800,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul32,
           .opt_opc = vecop_list_mul,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_mul_i64,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul64,
@@ -1829,7 +1829,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls32,
           .opt_opc = vecop_list_mul,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_mul_i64,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls64,
@@ -1866,7 +1866,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd64,
           .opt_opc = vecop_list,
@@ -1892,7 +1892,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub64,
           .opt_opc = vecop_list,
@@ -1935,7 +1935,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_usadd_i64,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd64,
@@ -1979,7 +1979,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_ussub_i64,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub64,
@@ -2007,7 +2007,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_smin_i64,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin64,
@@ -2035,7 +2035,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_umin_i64,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin64,
@@ -2063,7 +2063,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_smax_i64,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax64,
@@ -2091,7 +2091,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_umax_i64,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax64,
@@ -2165,7 +2165,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_neg_i64,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg64,
@@ -2228,7 +2228,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_abs_i64,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs64,
@@ -2485,7 +2485,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl32i,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_shli_i64,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl64i,
@@ -2536,7 +2536,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr32i,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_shri_i64,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr64i,
@@ -2601,7 +2601,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar32i,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_sari_i64,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar64i,
@@ -2736,7 +2736,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     }

     /* Otherwise fall back to integral... */
-    if (vece == MO_32 && check_size_impl(oprsz, 4)) {
+    if (vece == MO_UL && check_size_impl(oprsz, 4)) {
         expand_2s_i32(dofs, aofs, oprsz, shift, false, g->fni4);
     } else if (vece == MO_64 && check_size_impl(oprsz, 8)) {
         TCGv_i64 sh64 = tcg_temp_new_i64();
@@ -2889,7 +2889,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl32v,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_shl_mod_i64,
           .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl64v,
@@ -2952,7 +2952,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr32v,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_shr_mod_i64,
           .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr64v,
@@ -3015,7 +3015,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar32v,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_sar_mod_i64,
           .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar64v,
@@ -3168,7 +3168,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
     case 0:
         if (vece == MO_64 && check_size_impl(oprsz, 8)) {
             expand_cmp_i64(dofs, aofs, bofs, oprsz, cond);
-        } else if (vece == MO_32 && check_size_impl(oprsz, 4)) {
+        } else if (vece == MO_UL && check_size_impl(oprsz, 4)) {
             expand_cmp_i32(dofs, aofs, bofs, oprsz, cond);
         } else {
             gen_helper_gvec_3 * const *fn = fns[cond];
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index b0a4d98..ff723ab 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -216,7 +216,7 @@ void tcg_gen_mov_vec(TCGv_vec r, TCGv_vec a)
     }
 }

-#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_64 : MO_32)
+#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_64 : MO_UL)

 static void do_dupi_vec(TCGv_vec r, unsigned vece, TCGArg a)
 {
@@ -253,7 +253,7 @@ TCGv_vec tcg_const_ones_vec_matching(TCGv_vec m)
 void tcg_gen_dup64i_vec(TCGv_vec r, uint64_t a)
 {
     if (TCG_TARGET_REG_BITS == 32 && a == deposit64(a, 32, 32, a)) {
-        do_dupi_vec(r, MO_32, a);
+        do_dupi_vec(r, MO_UL, a);
     } else if (TCG_TARGET_REG_BITS == 64 || a == (uint64_t)(int32_t)a) {
         do_dupi_vec(r, MO_64, a);
     } else {
@@ -265,7 +265,7 @@ void tcg_gen_dup64i_vec(TCGv_vec r, uint64_t a)

 void tcg_gen_dup32i_vec(TCGv_vec r, uint32_t a)
 {
-    do_dupi_vec(r, MO_REG, dup_const(MO_32, a));
+    do_dupi_vec(r, MO_REG, dup_const(MO_UL, a));
 }

 void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 21d448c..447683d 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2725,7 +2725,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
         break;
     case MO_UW:
         break;
-    case MO_32:
+    case MO_UL:
         if (!is64) {
             op &= ~MO_SIGN;
         }
@@ -2816,7 +2816,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
                 tcg_gen_ext16s_i32(val, val);
             }
             break;
-        case MO_32:
+        case MO_UL:
             tcg_gen_bswap32_i32(val, val);
             break;
         default:
@@ -2841,7 +2841,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
             tcg_gen_ext16u_i32(swap, val);
             tcg_gen_bswap16_i32(swap, swap);
             break;
-        case MO_32:
+        case MO_UL:
             tcg_gen_bswap32_i32(swap, val);
             break;
         default:
@@ -2896,7 +2896,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
                 tcg_gen_ext16s_i64(val, val);
             }
             break;
-        case MO_32:
+        case MO_UL:
             tcg_gen_bswap32_i64(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext32s_i64(val, val);
@@ -2932,7 +2932,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
             tcg_gen_ext16u_i64(swap, val);
             tcg_gen_bswap16_i64(swap, swap);
             break;
-        case MO_32:
+        case MO_UL:
             tcg_gen_ext32u_i64(swap, val);
             tcg_gen_bswap32_i64(swap, swap);
             break;
@@ -3027,8 +3027,8 @@ static void * const table_cmpxchg[16] = {
     [MO_UB] = gen_helper_atomic_cmpxchgb,
     [MO_UW | MO_LE] = gen_helper_atomic_cmpxchgw_le,
     [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be,
-    [MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le,
-    [MO_32 | MO_BE] = gen_helper_atomic_cmpxchgl_be,
+    [MO_UL | MO_LE] = gen_helper_atomic_cmpxchgl_le,
+    [MO_UL | MO_BE] = gen_helper_atomic_cmpxchgl_be,
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_cmpxchgq_be)
 };
@@ -3251,8 +3251,8 @@ static void * const table_##NAME[16] = {                                \
     [MO_UB] = gen_helper_atomic_##NAME##b,                               \
     [MO_UW | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
     [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
-    [MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
-    [MO_32 | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
+    [MO_UL | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
+    [MO_UL | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index a378887..4b6ee89 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -1304,7 +1304,7 @@ uint64_t dup_const(unsigned vece, uint64_t c);
     (__builtin_constant_p(VECE)                                    \
      ?   ((VECE) == MO_UB ? 0x0101010101010101ull * (uint8_t)(C)   \
         : (VECE) == MO_UW ? 0x0001000100010001ull * (uint16_t)(C)  \
-        : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C)  \
+        : (VECE) == MO_UL ? 0x0000000100000001ull * (uint32_t)(C)  \
         : dup_const(VECE, C))                                      \
      : dup_const(VECE, C))

--
1.8.3.1





* [Qemu-riscv] [Qemu-devel] [PATCH v2 03/20] tcg: Replace MO_32 with MO_UL alias
@ 2019-07-22 15:41   ` tony.nguyen
  0 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:41 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, david, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, mst, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, claudio.fontana, qemu-s390x, qemu-ppc,
	amarkovic, pbonzini, aurelien

Preparation for splitting MO_32 out of TCGMemOp into the new accelerator-independent MemOp.

As MO_32 will become a value of MemOp, existing TCGMemOp comparisons and
coercions against it would trigger -Wenum-compare and -Wenum-conversion;
replace those uses with the equivalent TCGMemOp alias MO_UL ahead of the split.
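
A minimal sketch of the two warnings in question (not QEMU code; the enum
names and values below are hypothetical stand-ins for the state after the
split, and GCC reports them under -Wall/-Wextra):

    /* Hypothetical stand-ins for the two enums after the split. */
    typedef enum MemOp    { MO_32 = 2 } MemOp;      /* accelerator-independent */
    typedef enum TCGMemOp { MO_UL = 2 } TCGMemOp;   /* TCG-specific */

    int is_32bit(TCGMemOp op)
    {
        /* Comparing a TCGMemOp with a MemOp value: -Wenum-compare */
        return op == MO_32;
    }

    void force_32bit(TCGMemOp *op)
    {
        /* Implicitly converting MemOp to TCGMemOp: -Wenum-conversion */
        *op = MO_32;
    }

With MO_UL substituted for MO_32, both functions stay entirely within
TCGMemOp and compile without either warning.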

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |   6 +-
 target/arm/translate-a64.c          | 148 +++++++++++++++++------------------
 target/arm/translate-sve.c          |  12 +--
 target/arm/translate-vfp.inc.c      |   4 +-
 target/arm/translate.c              |  34 ++++----
 target/i386/translate.c             | 150 ++++++++++++++++++------------------
 target/ppc/translate/vmx-impl.inc.c |  28 +++----
 target/ppc/translate/vsx-impl.inc.c |   4 +-
 target/s390x/translate.c            |   4 +-
 target/s390x/translate_vx.inc.c     |   2 +-
 target/s390x/vec.h                  |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  20 ++---
 tcg/arm/tcg-target.inc.c            |   6 +-
 tcg/i386/tcg-target.inc.c           |  28 +++----
 tcg/mips/tcg-target.inc.c           |   6 +-
 tcg/ppc/tcg-target.inc.c            |   2 +-
 tcg/riscv/tcg-target.inc.c          |   2 +-
 tcg/sparc/tcg-target.inc.c          |   2 +-
 tcg/tcg-op-gvec.c                   |  64 +++++++--------
 tcg/tcg-op-vec.c                    |   6 +-
 tcg/tcg-op.c                        |  18 ++---
 tcg/tcg.h                           |   2 +-
 22 files changed, 276 insertions(+), 276 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index f6bef3d..fa705c4 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -1561,7 +1561,7 @@ void HELPER(sve_cpy_m_s)(void *vd, void *vn, void *vg,
     uint64_t *d = vd, *n = vn;
     uint8_t *pg = vg;

-    mm = dup_const(MO_32, mm);
+    mm = dup_const(MO_UL, mm);
     for (i = 0; i < opr_sz; i += 1) {
         uint64_t nn = n[i];
         uint64_t pp = expand_pred_s(pg[H1(i)]);
@@ -1612,7 +1612,7 @@ void HELPER(sve_cpy_z_s)(void *vd, void *vg, uint64_t val, uint32_t desc)
     uint64_t *d = vd;
     uint8_t *pg = vg;

-    val = dup_const(MO_32, val);
+    val = dup_const(MO_UL, val);
     for (i = 0; i < opr_sz; i += 1) {
         d[i] = val & expand_pred_s(pg[H1(i)]);
     }
@@ -5123,7 +5123,7 @@ static inline void sve_ldff1_zs(CPUARMState *env, void *vd, void *vg, void *vm,
     target_ulong addr;

     /* Skip to the first true predicate.  */
-    reg_off = find_next_active(vg, 0, reg_max, MO_32);
+    reg_off = find_next_active(vg, 0, reg_max, MO_UL);
     if (likely(reg_off < reg_max)) {
         /* Perform one normal read, which will fault or not.  */
         set_helper_retaddr(ra);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 3acfccb..0b92e6d 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -484,7 +484,7 @@ static TCGv_i32 read_fp_sreg(DisasContext *s, int reg)
 {
     TCGv_i32 v = tcg_temp_new_i32();

-    tcg_gen_ld_i32(v, cpu_env, fp_reg_offset(s, reg, MO_32));
+    tcg_gen_ld_i32(v, cpu_env, fp_reg_offset(s, reg, MO_UL));
     return v;
 }

@@ -999,7 +999,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_UW:
         tcg_gen_ld16u_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_ld32u_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_SB:
@@ -1008,7 +1008,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_SW:
         tcg_gen_ld16s_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_32|MO_SIGN:
+    case MO_SL:
         tcg_gen_ld32s_i64(tcg_dest, cpu_env, vect_off);
         break;
     case MO_64:
@@ -1037,8 +1037,8 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
     case MO_SW:
         tcg_gen_ld16s_i32(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_32:
-    case MO_32|MO_SIGN:
+    case MO_UL:
+    case MO_SL:
         tcg_gen_ld_i32(tcg_dest, cpu_env, vect_off);
         break;
     default:
@@ -1058,7 +1058,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
     case MO_UW:
         tcg_gen_st16_i64(tcg_src, cpu_env, vect_off);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st32_i64(tcg_src, cpu_env, vect_off);
         break;
     case MO_64:
@@ -1080,7 +1080,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
     case MO_UW:
         tcg_gen_st16_i32(tcg_src, cpu_env, vect_off);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st_i32(tcg_src, cpu_env, vect_off);
         break;
     default:
@@ -5299,7 +5299,7 @@ static void handle_fp_compare(DisasContext *s, int size,
         }

         switch (size) {
-        case MO_32:
+        case MO_UL:
             if (signal_all_nans) {
                 gen_helper_vfp_cmpes_a64(tcg_flags, tcg_vn, tcg_vm, fpst);
             } else {
@@ -5354,7 +5354,7 @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)

     switch (type) {
     case 0:
-        size = MO_32;
+        size = MO_UL;
         break;
     case 1:
         size = MO_64;
@@ -5405,7 +5405,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)

     switch (type) {
     case 0:
-        size = MO_32;
+        size = MO_UL;
         break;
     case 1:
         size = MO_64;
@@ -5471,7 +5471,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)

     switch (type) {
     case 0:
-        sz = MO_32;
+        sz = MO_UL;
         break;
     case 1:
         sz = MO_64;
@@ -6276,7 +6276,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)

     switch (type) {
     case 0:
-        sz = MO_32;
+        sz = MO_UL;
         break;
     case 1:
         sz = MO_64;
@@ -6581,7 +6581,7 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
         switch (type) {
         case 0:
             /* 32 bit */
-            tcg_gen_ld32u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_32));
+            tcg_gen_ld32u_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UL));
             break;
         case 1:
             /* 64 bit */
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_UW : MO_32;
+        TCGMemOp msize = esize == 16 ? MO_UW : MO_UL;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -7702,7 +7702,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
                 size = MO_UW;
             }
         } else {
-            size = extract32(size, 0, 1) ? MO_64 : MO_32;
+            size = extract32(size, 0, 1) ? MO_64 : MO_UL;
         }

         if (!fp_access_check(s)) {
@@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
             }
         };
         NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
-        TCGMemOp memop = scalar ? size : MO_32;
+        TCGMemOp memop = scalar ? size : MO_UL;
         int maxpass = scalar ? 1 : is_q ? 4 : 2;

         for (pass = 0; pass < maxpass; pass++) {
@@ -8204,7 +8204,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
                 }
                 write_fp_sreg(s, rd, tcg_op);
             } else {
-                write_vec_element_i32(s, tcg_op, rd, pass, MO_32);
+                write_vec_element_i32(s, tcg_op, rd, pass, MO_UL);
             }

             tcg_temp_free_i32(tcg_op);
@@ -8264,7 +8264,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
             read_vec_element_i32(s, tcg_int32, rn, pass, mop);

             switch (size) {
-            case MO_32:
+            case MO_UL:
                 if (fracbits) {
                     if (is_signed) {
                         gen_helper_vfp_sltos(tcg_float, tcg_int32,
@@ -8337,7 +8337,7 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
             return;
         }
     } else if (immh & 4) {
-        size = MO_32;
+        size = MO_UL;
     } else if (immh & 2) {
         size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
@@ -8382,7 +8382,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
             return;
         }
     } else if (immh & 0x4) {
-        size = MO_32;
+        size = MO_UL;
     } else if (immh & 0x2) {
         size = MO_UW;
         if (!dc_isar_feature(aa64_fp16, s)) {
@@ -8436,7 +8436,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
                 fn = gen_helper_vfp_toshh;
             }
             break;
-        case MO_32:
+        case MO_UL:
             if (is_u) {
                 fn = gen_helper_vfp_touls;
             } else {
@@ -8588,8 +8588,8 @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_op2 = tcg_temp_new_i64();
         TCGv_i64 tcg_res = tcg_temp_new_i64();

-        read_vec_element(s, tcg_op1, rn, 0, MO_32 | MO_SIGN);
-        read_vec_element(s, tcg_op2, rm, 0, MO_32 | MO_SIGN);
+        read_vec_element(s, tcg_op1, rn, 0, MO_SL);
+        read_vec_element(s, tcg_op2, rm, 0, MO_SL);

         tcg_gen_mul_i64(tcg_res, tcg_op1, tcg_op2);
         gen_helper_neon_addl_saturate_s64(tcg_res, cpu_env, tcg_res, tcg_res);
@@ -8631,7 +8631,7 @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
         case 0x9: /* SQDMLAL, SQDMLAL2 */
         {
             TCGv_i64 tcg_op3 = tcg_temp_new_i64();
-            read_vec_element(s, tcg_op3, rd, 0, MO_32);
+            read_vec_element(s, tcg_op3, rd, 0, MO_UL);
             gen_helper_neon_addl_saturate_s32(tcg_res, cpu_env,
                                               tcg_res, tcg_op3);
             tcg_temp_free_i64(tcg_op3);
@@ -8831,8 +8831,8 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             TCGv_i32 tcg_op2 = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
+            read_vec_element_i32(s, tcg_op1, rn, pass, MO_UL);
+            read_vec_element_i32(s, tcg_op2, rm, pass, MO_UL);

             switch (fpopcode) {
             case 0x39: /* FMLS */
@@ -8840,7 +8840,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 gen_helper_vfp_negs(tcg_op1, tcg_op1);
                 /* fall through */
             case 0x19: /* FMLA */
-                read_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+                read_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
                 gen_helper_vfp_muladds(tcg_res, tcg_op1, tcg_op2,
                                        tcg_res, fpst);
                 break;
@@ -8908,7 +8908,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 write_vec_element(s, tcg_tmp, rd, pass, MO_64);
                 tcg_temp_free_i64(tcg_tmp);
             } else {
-                write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+                write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
             }

             tcg_temp_free_i32(tcg_res);
@@ -9557,7 +9557,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
         }

         for (pass = 0; pass < maxpasses; pass++) {
-            read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
+            read_vec_element_i32(s, tcg_op, rn, pass, MO_UL);

             switch (opcode) {
             case 0x3c: /* URECPE */
@@ -9579,7 +9579,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
             if (is_scalar) {
                 write_fp_sreg(s, rd, tcg_res);
             } else {
-                write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+                write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
             }
         }
         tcg_temp_free_i32(tcg_res);
@@ -9693,7 +9693,7 @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
     }

     for (pass = 0; pass < 2; pass++) {
-        write_vec_element_i32(s, tcg_res[pass], rd, destelt + pass, MO_32);
+        write_vec_element_i32(s, tcg_res[pass], rd, destelt + pass, MO_UL);
         tcg_temp_free_i32(tcg_res[pass]);
     }
     clear_vec_high(s, is_q, rd);
@@ -9740,8 +9740,8 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
                 read_vec_element_i32(s, tcg_rn, rn, pass, size);
                 read_vec_element_i32(s, tcg_rd, rd, pass, size);
             } else {
-                read_vec_element_i32(s, tcg_rn, rn, pass, MO_32);
-                read_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
+                read_vec_element_i32(s, tcg_rn, rn, pass, MO_UL);
+                read_vec_element_i32(s, tcg_rd, rd, pass, MO_UL);
             }

             if (is_u) { /* USQADD */
@@ -9779,7 +9779,7 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
                 write_vec_element(s, tcg_zero, rd, 0, MO_64);
                 tcg_temp_free_i64(tcg_zero);
             }
-            write_vec_element_i32(s, tcg_rd, rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_rd, rd, pass, MO_UL);
         }
         tcg_temp_free_i32(tcg_rd);
         tcg_temp_free_i32(tcg_rn);
@@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_passres;
-            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
+            TCGMemOp memop = is_u ? MO_UL : MO_SL;

             int elt = pass + is_q * 2;

@@ -10426,8 +10426,8 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_passres;
             int elt = pass + is_q * 2;

-            read_vec_element_i32(s, tcg_op1, rn, elt, MO_32);
-            read_vec_element_i32(s, tcg_op2, rm, elt, MO_32);
+            read_vec_element_i32(s, tcg_op1, rn, elt, MO_UL);
+            read_vec_element_i32(s, tcg_op2, rm, elt, MO_UL);

             if (accop == 0) {
                 tcg_passres = tcg_res[pass];
@@ -10547,7 +10547,7 @@ static void handle_3rd_wide(DisasContext *s, int is_q, int is_u, int size,
         NeonGenWidenFn *widenfn = widenfns[size][is_u];

         read_vec_element(s, tcg_op1, rn, pass, MO_64);
-        read_vec_element_i32(s, tcg_op2, rm, part + pass, MO_32);
+        read_vec_element_i32(s, tcg_op2, rm, part + pass, MO_UL);
         widenfn(tcg_op2_wide, tcg_op2);
         tcg_temp_free_i32(tcg_op2);
         tcg_res[pass] = tcg_temp_new_i64();
@@ -10603,7 +10603,7 @@ static void handle_3rd_narrowing(DisasContext *s, int is_q, int is_u, int size,
     }

     for (pass = 0; pass < 2; pass++) {
-        write_vec_element_i32(s, tcg_res[pass], rd, pass + part, MO_32);
+        write_vec_element_i32(s, tcg_res[pass], rd, pass + part, MO_UL);
         tcg_temp_free_i32(tcg_res[pass]);
     }
     clear_vec_high(s, is_q, rd);
@@ -10860,8 +10860,8 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
             int passreg = pass < (maxpass / 2) ? rn : rm;
             int passelt = (is_q && (pass & 1)) ? 2 : 0;

-            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_32);
-            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_32);
+            read_vec_element_i32(s, tcg_op1, passreg, passelt, MO_UL);
+            read_vec_element_i32(s, tcg_op2, passreg, passelt + 1, MO_UL);
             tcg_res[pass] = tcg_temp_new_i32();

             switch (opcode) {
@@ -10925,7 +10925,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
         }

         for (pass = 0; pass < maxpass; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UL);
             tcg_temp_free_i32(tcg_res[pass]);
         }
         clear_vec_high(s, is_q, rd);
@@ -10971,7 +10971,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_32,
+        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_UL,
                                rn, rm, rd);
         return;
     case 0x1b: /* FMULX */
@@ -11174,8 +11174,8 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
             NeonGenTwoOpFn *genfn = NULL;
             NeonGenTwoOpEnvFn *genenvfn = NULL;

-            read_vec_element_i32(s, tcg_op1, rn, pass, MO_32);
-            read_vec_element_i32(s, tcg_op2, rm, pass, MO_32);
+            read_vec_element_i32(s, tcg_op1, rn, pass, MO_UL);
+            read_vec_element_i32(s, tcg_op2, rm, pass, MO_UL);

             switch (opcode) {
             case 0x0: /* SHADD, UHADD */
@@ -11292,11 +11292,11 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
                     tcg_gen_add_i32,
                 };

-                read_vec_element_i32(s, tcg_op1, rd, pass, MO_32);
+                read_vec_element_i32(s, tcg_op1, rd, pass, MO_UL);
                 fns[size](tcg_res, tcg_op1, tcg_res);
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);

             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op1);
@@ -11578,7 +11578,7 @@ static void disas_simd_three_reg_same_extra(DisasContext *s, uint32_t insn)
         break;
     case 0x02: /* SDOT (vector) */
     case 0x12: /* UDOT (vector) */
-        if (size != MO_32) {
+        if (size != MO_UL) {
             unallocated_encoding(s);
             return;
         }
@@ -11709,7 +11709,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             tcg_res[pass] = tcg_temp_new_i64();

-            read_vec_element_i32(s, tcg_op, rn, srcelt + pass, MO_32);
+            read_vec_element_i32(s, tcg_op, rn, srcelt + pass, MO_UL);
             gen_helper_vfp_fcvtds(tcg_res[pass], tcg_op, cpu_env);
             tcg_temp_free_i32(tcg_op);
         }
@@ -11732,7 +11732,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
                                            fpst, ahp);
         }
         for (pass = 0; pass < 4; pass++) {
-            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res[pass], rd, pass, MO_UL);
             tcg_temp_free_i32(tcg_res[pass]);
         }

@@ -11771,7 +11771,7 @@ static void handle_rev(DisasContext *s, int opcode, bool u,
             case MO_UW:
                 tcg_gen_bswap16_i64(tcg_tmp, tcg_tmp);
                 break;
-            case MO_32:
+            case MO_UL:
                 tcg_gen_bswap32_i64(tcg_tmp, tcg_tmp);
                 break;
             case MO_64:
@@ -11900,7 +11900,7 @@ static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
         NeonGenWidenFn *widenfn = widenfns[size];
         TCGv_i32 tcg_op = tcg_temp_new_i32();

-        read_vec_element_i32(s, tcg_op, rn, part + pass, MO_32);
+        read_vec_element_i32(s, tcg_op, rn, part + pass, MO_UL);
         tcg_res[pass] = tcg_temp_new_i64();
         widenfn(tcg_res[pass], tcg_op);
         tcg_gen_shli_i64(tcg_res[pass], tcg_res[pass], 8 << size);
@@ -12251,7 +12251,7 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_res = tcg_temp_new_i32();
             TCGCond cond;

-            read_vec_element_i32(s, tcg_op, rn, pass, MO_32);
+            read_vec_element_i32(s, tcg_op, rn, pass, MO_UL);

             if (size == 2) {
                 /* Special cases for 32 bit elements */
@@ -12418,7 +12418,7 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
                 }
             }

-            write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+            write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);

             tcg_temp_free_i32(tcg_res);
             tcg_temp_free_i32(tcg_op);
@@ -12816,7 +12816,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         break;
     case 0x0e: /* SDOT */
     case 0x1e: /* UDOT */
-        if (is_scalar || size != MO_32 || !dc_isar_feature(aa64_dp, s)) {
+        if (is_scalar || size != MO_UL || !dc_isar_feature(aa64_dp, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -12835,7 +12835,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case 0x04: /* FMLSL */
     case 0x18: /* FMLAL2 */
     case 0x1c: /* FMLSL2 */
-        if (is_scalar || size != MO_32 || !dc_isar_feature(aa64_fhm, s)) {
+        if (is_scalar || size != MO_UL || !dc_isar_feature(aa64_fhm, s)) {
             unallocated_encoding(s);
             return;
         }
@@ -12855,7 +12855,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             size = MO_UW;
             is_fp16 = true;
             break;
-        case MO_32: /* single precision */
+        case MO_UL: /* single precision */
         case MO_64: /* double precision */
             break;
         default:
@@ -12868,7 +12868,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         /* Each indexable element is a complex pair.  */
         size += 1;
         switch (size) {
-        case MO_32:
+        case MO_UL:
             if (h && !is_q) {
                 unallocated_encoding(s);
                 return;
@@ -12902,7 +12902,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     case MO_UW:
         index = h << 2 | l << 1 | m;
         break;
-    case MO_32:
+    case MO_UL:
         index = h << 1 | l;
         rm |= m << 4;
         break;
@@ -13038,7 +13038,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             TCGv_i32 tcg_op = tcg_temp_new_i32();
             TCGv_i32 tcg_res = tcg_temp_new_i32();

-            read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_32);
+            read_vec_element_i32(s, tcg_op, rn, pass, is_scalar ? size : MO_UL);

             switch (16 * u + opcode) {
             case 0x08: /* MUL */
@@ -13060,7 +13060,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 if (opcode == 0x8) {
                     break;
                 }
-                read_vec_element_i32(s, tcg_op, rd, pass, MO_32);
+                read_vec_element_i32(s, tcg_op, rd, pass, MO_UL);
                 genfn = fns[size - 1][is_sub];
                 genfn(tcg_res, tcg_op, tcg_res);
                 break;
@@ -13068,7 +13068,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             case 0x05: /* FMLS */
             case 0x01: /* FMLA */
                 read_vec_element_i32(s, tcg_res, rd, pass,
-                                     is_scalar ? size : MO_32);
+                                     is_scalar ? size : MO_UL);
                 switch (size) {
                 case 1:
                     if (opcode == 0x5) {
@@ -13153,7 +13153,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 break;
             case 0x1d: /* SQRDMLAH */
                 read_vec_element_i32(s, tcg_res, rd, pass,
-                                     is_scalar ? size : MO_32);
+                                     is_scalar ? size : MO_UL);
                 if (size == 1) {
                     gen_helper_neon_qrdmlah_s16(tcg_res, cpu_env,
                                                 tcg_op, tcg_idx, tcg_res);
@@ -13164,7 +13164,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 break;
             case 0x1f: /* SQRDMLSH */
                 read_vec_element_i32(s, tcg_res, rd, pass,
-                                     is_scalar ? size : MO_32);
+                                     is_scalar ? size : MO_UL);
                 if (size == 1) {
                     gen_helper_neon_qrdmlsh_s16(tcg_res, cpu_env,
                                                 tcg_op, tcg_idx, tcg_res);
@@ -13180,7 +13180,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             if (is_scalar) {
                 write_fp_sreg(s, rd, tcg_res);
             } else {
-                write_vec_element_i32(s, tcg_res, rd, pass, MO_32);
+                write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
             }

             tcg_temp_free_i32(tcg_op);
@@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_res[2];
         int pass;
         bool satop = extract32(opcode, 0, 1);
-        TCGMemOp memop = MO_32;
+        TCGMemOp memop = MO_UL;

         if (satop || !u) {
             memop |= MO_SIGN;
@@ -13288,7 +13288,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                     read_vec_element_i32(s, tcg_op, rn, pass, size);
                 } else {
                     read_vec_element_i32(s, tcg_op, rn,
-                                         pass + (is_q * 2), MO_32);
+                                         pass + (is_q * 2), MO_UL);
                 }

                 tcg_res[pass] = tcg_temp_new_i64();
@@ -13780,19 +13780,19 @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
         tcg_res = tcg_temp_new_i32();
         tcg_zero = tcg_const_i32(0);

-        read_vec_element_i32(s, tcg_op1, rn, 3, MO_32);
-        read_vec_element_i32(s, tcg_op2, rm, 3, MO_32);
-        read_vec_element_i32(s, tcg_op3, ra, 3, MO_32);
+        read_vec_element_i32(s, tcg_op1, rn, 3, MO_UL);
+        read_vec_element_i32(s, tcg_op2, rm, 3, MO_UL);
+        read_vec_element_i32(s, tcg_op3, ra, 3, MO_UL);

         tcg_gen_rotri_i32(tcg_res, tcg_op1, 20);
         tcg_gen_add_i32(tcg_res, tcg_res, tcg_op2);
         tcg_gen_add_i32(tcg_res, tcg_res, tcg_op3);
         tcg_gen_rotri_i32(tcg_res, tcg_res, 25);

-        write_vec_element_i32(s, tcg_zero, rd, 0, MO_32);
-        write_vec_element_i32(s, tcg_zero, rd, 1, MO_32);
-        write_vec_element_i32(s, tcg_zero, rd, 2, MO_32);
-        write_vec_element_i32(s, tcg_res, rd, 3, MO_32);
+        write_vec_element_i32(s, tcg_zero, rd, 0, MO_UL);
+        write_vec_element_i32(s, tcg_zero, rd, 1, MO_UL);
+        write_vec_element_i32(s, tcg_zero, rd, 2, MO_UL);
+        write_vec_element_i32(s, tcg_res, rd, 3, MO_UL);

         tcg_temp_free_i32(tcg_op1);
         tcg_temp_free_i32(tcg_op2);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index 2bc1bd1..f7c891d 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1693,7 +1693,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
         tcg_temp_free_i32(t32);
         break;

-    case MO_32:
+    case MO_UL:
         t64 = tcg_temp_new_i64();
         if (d) {
             tcg_gen_neg_i64(t64, val);
@@ -3320,7 +3320,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_sve_subri_s,
           .opt_opc = vecop_list,
-          .vece = MO_32,
+          .vece = MO_UL,
           .scalar_first = true },
         { .fni8 = tcg_gen_sub_i64,
           .fniv = tcg_gen_sub_vec,
@@ -5258,7 +5258,7 @@ static bool trans_LD1_zprz(DisasContext *s, arg_LD1_zprz *a)
     }

     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = gather_load_fn32[be][a->ff][a->xs][a->u][a->msz];
         break;
     case MO_64:
@@ -5286,7 +5286,7 @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
     }

     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = gather_load_fn32[be][a->ff][0][a->u][a->msz];
         break;
     case MO_64:
@@ -5364,7 +5364,7 @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a)
         return true;
     }
     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = scatter_store_fn32[be][a->xs][a->msz];
         break;
     case MO_64:
@@ -5392,7 +5392,7 @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
     }

     switch (a->esz) {
-    case MO_32:
+    case MO_UL:
         fn = scatter_store_fn32[be][0][a->msz];
         break;
     case MO_64:
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 549874c..5e0cd63 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -46,7 +46,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8)
             extract32(imm8, 0, 6);
         imm <<= 48;
         break;
-    case MO_32:
+    case MO_UL:
         imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
             (extract32(imm8, 6, 1) ? 0x3e00 : 0x4000) |
             (extract32(imm8, 0, 6) << 3);
@@ -1901,7 +1901,7 @@ static bool trans_VMOV_imm_sp(DisasContext *s, arg_VMOV_imm_sp *a)
         }
     }

-    fd = tcg_const_i32(vfp_expand_imm(MO_32, a->imm));
+    fd = tcg_const_i32(vfp_expand_imm(MO_UL, a->imm));

     for (;;) {
         neon_store_reg32(fd, vd);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 8d10922..5510ecd 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1085,7 +1085,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
     tcg_gen_extu_i32_tl(addr, a32);

     /* Not needed for user-mode BE32, where we use MO_BE instead.  */
-    if (!IS_USER_ONLY && s->sctlr_b && (op & MO_SIZE) < MO_32) {
+    if (!IS_USER_ONLY && s->sctlr_b && (op & MO_SIZE) < MO_UL) {
         tcg_gen_xori_tl(addr, addr, 4 - (1 << (op & MO_SIZE)));
     }
     return addr;
@@ -1480,7 +1480,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     case MO_UW:
         tcg_gen_st16_i32(var, cpu_env, offset);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st_i32(var, cpu_env, offset);
         break;
     default:
@@ -1499,7 +1499,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     case MO_UW:
         tcg_gen_st16_i64(var, cpu_env, offset);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_st32_i64(var, cpu_env, offset);
         break;
     case MO_64:
@@ -4272,7 +4272,7 @@ const GVecGen2i ssra_op[4] = {
       .fniv = gen_ssra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_ssra,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_ssra64_i64,
       .fniv = gen_ssra_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4330,7 +4330,7 @@ const GVecGen2i usra_op[4] = {
       .fniv = gen_usra_vec,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_32, },
+      .vece = MO_UL, },
     { .fni8 = gen_usra64_i64,
       .fniv = gen_usra_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4410,7 +4410,7 @@ const GVecGen2i sri_op[4] = {
       .fniv = gen_shr_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_shr64_ins_i64,
       .fniv = gen_shr_ins_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4488,7 +4488,7 @@ const GVecGen2i sli_op[4] = {
       .fniv = gen_shl_ins_vec,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_shl64_ins_i64,
       .fniv = gen_shl_ins_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4584,7 +4584,7 @@ const GVecGen3 mla_op[4] = {
       .fniv = gen_mla_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_mla64_i64,
       .fniv = gen_mla_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4608,7 +4608,7 @@ const GVecGen3 mls_op[4] = {
       .fniv = gen_mls_vec,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_mls64_i64,
       .fniv = gen_mls_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4653,7 +4653,7 @@ const GVecGen3 cmtst_op[4] = {
     { .fni4 = gen_cmtst_i32,
       .fniv = gen_cmtst_vec,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fni8 = gen_cmtst_i64,
       .fniv = gen_cmtst_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
@@ -4691,7 +4691,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_s,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_uqadd_vec,
       .fno = gen_helper_gvec_uqadd_d,
       .write_aofs = true,
@@ -4729,7 +4729,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_s,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_sqadd_vec,
       .fno = gen_helper_gvec_sqadd_d,
       .opt_opc = vecop_list_sqadd,
@@ -4767,7 +4767,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_s,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_uqsub_vec,
       .fno = gen_helper_gvec_uqsub_d,
       .opt_opc = vecop_list_uqsub,
@@ -4805,7 +4805,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_s,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_32 },
+      .vece = MO_UL },
     { .fniv = gen_sqsub_vec,
       .fno = gen_helper_gvec_sqsub_d,
       .opt_opc = vecop_list_sqsub,
@@ -5798,10 +5798,10 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     /* The immediate value has already been inverted,
                      * so BIC becomes AND.
                      */
-                    tcg_gen_gvec_andi(MO_32, reg_ofs, reg_ofs, imm,
+                    tcg_gen_gvec_andi(MO_UL, reg_ofs, reg_ofs, imm,
                                       vec_size, vec_size);
                 } else {
-                    tcg_gen_gvec_ori(MO_32, reg_ofs, reg_ofs, imm,
+                    tcg_gen_gvec_ori(MO_UL, reg_ofs, reg_ofs, imm,
                                      vec_size, vec_size);
                 }
             } else {
@@ -6879,7 +6879,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
                     size = MO_UW;
                     element = (insn >> 18) & 3;
                 } else {
-                    size = MO_32;
+                    size = MO_UL;
                     element = (insn >> 19) & 1;
                 }
                 tcg_gen_gvec_dup_mem(size, neon_reg_offset(rd, 0),
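The four-entry op tables above (ssra_op, usra_op, mla_op, ...) are
indexed by element size, so after the rename a 32-bit lane expansion is
selected with vece == MO_UL.  A sketch of a typical call site, assuming
the standard tcg_gen_gvec_2i() signature:

    /* shift-right-accumulate on 32-bit lanes */
    tcg_gen_gvec_2i(rd_ofs, rm_ofs, vec_size, vec_size,
                    shift, &ssra_op[MO_UL]);
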
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 0535bae..0e863d4 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -332,16 +332,16 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 /* Select the size of the stack pointer.  */
 static inline TCGMemOp mo_stacksize(DisasContext *s)
 {
-    return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
+    return CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
 static inline TCGMemOp mo_64_32(TCGMemOp ot)
 {
 #ifdef TARGET_X86_64
-    return ot == MO_64 ? MO_64 : MO_32;
+    return ot == MO_64 ? MO_64 : MO_UL;
 #else
-    return MO_32;
+    return MO_UL;
 #endif
 }

@@ -356,7 +356,7 @@ static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
    Used for decoding operand size of port opcodes.  */
 static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
 {
-    return b & 1 ? (ot == MO_UW ? MO_UW : MO_32) : MO_UB;
+    return b & 1 ? (ot == MO_UW ? MO_UW : MO_UL) : MO_UB;
 }

 static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
@@ -372,7 +372,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
     case MO_UW:
         tcg_gen_deposit_tl(cpu_regs[reg], cpu_regs[reg], t0, 0, 16);
         break;
-    case MO_32:
+    case MO_UL:
         /* For x86_64, this sets the higher half of register to zero.
            For i386, this is equivalent to a mov. */
         tcg_gen_ext32u_tl(cpu_regs[reg], t0);
@@ -463,7 +463,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
         }
         break;
 #endif
-    case MO_32:
+    case MO_UL:
         /* 32 bit address */
         if (ovr_seg < 0 && s->addseg) {
             ovr_seg = def_seg;
@@ -538,7 +538,7 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
         }
         return dst;
 #ifdef TARGET_X86_64
-    case MO_32:
+    case MO_UL:
         if (sign) {
             tcg_gen_ext32s_tl(dst, src);
         } else {
@@ -586,7 +586,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     case MO_UW:
         gen_helper_inw(v, cpu_env, n);
         break;
-    case MO_32:
+    case MO_UL:
         gen_helper_inl(v, cpu_env, n);
         break;
     default:
@@ -603,7 +603,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     case MO_UW:
         gen_helper_outw(cpu_env, v, n);
         break;
-    case MO_32:
+    case MO_UL:
         gen_helper_outl(cpu_env, v, n);
         break;
     default:
@@ -625,7 +625,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
         case MO_UW:
             gen_helper_check_iow(cpu_env, s->tmp2_i32);
             break;
-        case MO_32:
+        case MO_UL:
             gen_helper_check_iol(cpu_env, s->tmp2_i32);
             break;
         default:
@@ -1077,7 +1077,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)

 static inline void gen_stos(DisasContext *s, TCGMemOp ot)
 {
-    gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+    gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
     gen_string_movl_A0_EDI(s);
     gen_op_st_v(s, ot, s->T0, s->A0);
     gen_op_movl_T0_Dshift(s, ot);
@@ -1568,7 +1568,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
         goto do_long;
     do_long:
 #ifdef TARGET_X86_64
-    case MO_32:
+    case MO_UL:
         tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
         tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1);
         if (is_right) {
@@ -1644,7 +1644,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
     if (op2 != 0) {
         switch (ot) {
 #ifdef TARGET_X86_64
-        case MO_32:
+        case MO_UL:
             tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
             if (is_right) {
                 tcg_gen_rotri_i32(s->tmp2_i32, s->tmp2_i32, op2);
@@ -1725,7 +1725,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UW:
             gen_helper_rcrw(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_32:
+        case MO_UL:
             gen_helper_rcrl(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
@@ -1744,7 +1744,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         case MO_UW:
             gen_helper_rclw(s->T0, cpu_env, s->T0, s->T1);
             break;
-        case MO_32:
+        case MO_UL:
             gen_helper_rcll(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
@@ -1791,7 +1791,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
         }
         /* FALLTHRU */
 #ifdef TARGET_X86_64
-    case MO_32:
+    case MO_UL:
         /* Concatenate the two 32-bit values and use a 64-bit shift.  */
         tcg_gen_subi_tl(s->tmp0, count, 1);
         if (is_right) {
@@ -1984,7 +1984,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s,

     switch (s->aflag) {
     case MO_64:
-    case MO_32:
+    case MO_UL:
         havesib = 0;
         if (rm == 4) {
             int code = x86_ldub_code(env, s);
@@ -2190,7 +2190,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     case MO_UW:
         ret = x86_lduw_code(env, s);
         break;
-    case MO_32:
+    case MO_UL:
 #ifdef TARGET_X86_64
     case MO_64:
 #endif
@@ -2204,7 +2204,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)

 static inline int insn_const_size(TCGMemOp ot)
 {
-    if (ot <= MO_32) {
+    if (ot <= MO_UL) {
         return 1 << ot;
     } else {
         return 4;
@@ -2400,12 +2400,12 @@ static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)

 static inline void gen_stack_A0(DisasContext *s)
 {
-    gen_lea_v_seg(s, s->ss32 ? MO_32 : MO_UW, cpu_regs[R_ESP], R_SS, -1);
+    gen_lea_v_seg(s, s->ss32 ? MO_UL : MO_UW, cpu_regs[R_ESP], R_SS, -1);
 }

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
+    TCGMemOp s_ot = s->ss32 ? MO_UL : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2421,7 +2421,7 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_UW;
+    TCGMemOp s_ot = s->ss32 ? MO_UL : MO_UW;
     TCGMemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;
@@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s)
 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
     TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_UW;
+    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -3145,7 +3145,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else {
                 tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State,
                     xmm_regs[reg].ZMM_L(0)));
-                gen_op_st_v(s, MO_32, s->T0, s->A0);
+                gen_op_st_v(s, MO_UL, s->T0, s->A0);
             }
             break;
         case 0x6e: /* movd mm, ea */
@@ -3157,7 +3157,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else
 #endif
             {
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 0);
                 tcg_gen_addi_ptr(s->ptr0, cpu_env,
                                  offsetof(CPUX86State,fpregs[reg].mmx));
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -3174,7 +3174,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             } else
 #endif
             {
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 0);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 0);
                 tcg_gen_addi_ptr(s->ptr0, cpu_env,
                                  offsetof(CPUX86State,xmm_regs[reg]));
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -3211,7 +3211,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
         case 0x210: /* movss xmm, ea */
             if (mod != 3) {
                 gen_lea_modrm(env, s, modrm);
-                gen_op_ld_v(s, MO_32, s->T0, s->A0);
+                gen_op_ld_v(s, MO_UL, s->T0, s->A0);
                 tcg_gen_st32_tl(s->T0, cpu_env,
                                 offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)));
                 tcg_gen_movi_tl(s->T0, 0);
@@ -3346,7 +3346,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             {
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State,fpregs[reg].mmx.MMX_L(0)));
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 1);
             }
             break;
         case 0x17e: /* movd ea, xmm */
@@ -3360,7 +3360,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             {
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State,xmm_regs[reg].ZMM_L(0)));
-                gen_ldst_modrm(env, s, modrm, MO_32, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UL, OR_TMP0, 1);
             }
             break;
         case 0x27e: /* movq xmm, ea */
@@ -3405,7 +3405,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 gen_lea_modrm(env, s, modrm);
                 tcg_gen_ld32u_tl(s->T0, cpu_env,
                                  offsetof(CPUX86State, xmm_regs[reg].ZMM_L(0)));
-                gen_op_st_v(s, MO_32, s->T0, s->A0);
+                gen_op_st_v(s, MO_UL, s->T0, s->A0);
             } else {
                 rm = (modrm & 7) | REX_B(s);
                 gen_op_movl(s, offsetof(CPUX86State, xmm_regs[rm].ZMM_L(0)),
@@ -3530,7 +3530,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
             op1_offset = offsetof(CPUX86State,xmm_regs[reg]);
             tcg_gen_addi_ptr(s->ptr0, cpu_env, op1_offset);
-            if (ot == MO_32) {
+            if (ot == MO_UL) {
                 SSEFunc_0_epi sse_fn_epi = sse_op_table3ai[(b >> 8) & 1];
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 sse_fn_epi(cpu_env, s->ptr0, s->tmp2_i32);
@@ -3584,7 +3584,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if ((b >> 8) & 1) {
                     gen_ldq_env_A0(s, offsetof(CPUX86State, xmm_t0.ZMM_Q(0)));
                 } else {
-                    gen_op_ld_v(s, MO_32, s->T0, s->A0);
+                    gen_op_ld_v(s, MO_UL, s->T0, s->A0);
                     tcg_gen_st32_tl(s->T0, cpu_env,
                                     offsetof(CPUX86State, xmm_t0.ZMM_L(0)));
                 }
@@ -3594,7 +3594,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 op2_offset = offsetof(CPUX86State,xmm_regs[rm]);
             }
             tcg_gen_addi_ptr(s->ptr0, cpu_env, op2_offset);
-            if (ot == MO_32) {
+            if (ot == MO_UL) {
                 SSEFunc_i_ep sse_fn_i_ep =
                     sse_op_table3bi[((b >> 7) & 2) | (b & 1)];
                 sse_fn_i_ep(s->tmp2_i32, cpu_env, s->ptr0);
@@ -3786,7 +3786,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if ((b & 0xff) == 0xf0) {
                     ot = MO_UB;
                 } else if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
                 } else {
                     ot = MO_64;
                 }
@@ -3815,7 +3815,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     goto illegal_op;
                 }
                 if (s->dflag != MO_64) {
-                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_32);
+                    ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
                 } else {
                     ot = MO_64;
                 }
@@ -4026,7 +4026,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,

                     switch (ot) {
 #ifdef TARGET_X86_64
-                    case MO_32:
+                    case MO_UL:
                         /* If we know TL is 64-bit, and we want a 32-bit
                            result, just do everything in 64-bit arithmetic.  */
                         tcg_gen_ext32u_i64(cpu_regs[reg], cpu_regs[reg]);
@@ -4172,7 +4172,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     }
                     break;
                 case 0x16:
-                    if (ot == MO_32) { /* pextrd */
+                    if (ot == MO_UL) { /* pextrd */
                         tcg_gen_ld_i32(s->tmp2_i32, cpu_env,
                                         offsetof(CPUX86State,
                                                 xmm_regs[reg].ZMM_L(val & 3)));
@@ -4210,7 +4210,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     break;
                 case 0x20: /* pinsrb */
                     if (mod == 3) {
-                        gen_op_mov_v_reg(s, MO_32, s->T0, rm);
+                        gen_op_mov_v_reg(s, MO_UL, s->T0, rm);
                     } else {
                         tcg_gen_qemu_ld_tl(s->T0, s->A0,
                                            s->mem_index, MO_UB);
@@ -4248,7 +4248,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                                                 xmm_regs[reg].ZMM_L(3)));
                     break;
                 case 0x22:
-                    if (ot == MO_32) { /* pinsrd */
+                    if (ot == MO_UL) { /* pinsrd */
                         if (mod == 3) {
                             tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[rm]);
                         } else {
@@ -4393,7 +4393,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 switch (sz) {
                 case 2:
                     /* 32 bit access */
-                    gen_op_ld_v(s, MO_32, s->T0, s->A0);
+                    gen_op_ld_v(s, MO_UL, s->T0, s->A0);
                     tcg_gen_st32_tl(s->T0, cpu_env,
                                     offsetof(CPUX86State,xmm_t0.ZMM_L(0)));
                     break;
@@ -4630,19 +4630,19 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
            data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
            over 0x66 if both are present.  */
-        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_32);
+        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_UL);
         /* In 64-bit mode, 0x67 selects 32-bit addressing.  */
-        aflag = (prefixes & PREFIX_ADR ? MO_32 : MO_64);
+        aflag = (prefixes & PREFIX_ADR ? MO_UL : MO_64);
     } else {
         /* In 16/32-bit mode, 0x66 selects the opposite data size.  */
         if (s->code32 ^ ((prefixes & PREFIX_DATA) != 0)) {
-            dflag = MO_32;
+            dflag = MO_UL;
         } else {
             dflag = MO_UW;
         }
         /* In 16/32-bit mode, 0x67 selects the opposite addressing.  */
         if (s->code32 ^ ((prefixes & PREFIX_ADR) != 0)) {
-            aflag = MO_32;
+            aflag = MO_UL;
         }  else {
             aflag = MO_UW;
         }
@@ -4891,7 +4891,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 set_cc_op(s, CC_OP_MULW);
                 break;
             default:
-            case MO_32:
+            case MO_UL:
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]);
                 tcg_gen_mulu2_i32(s->tmp2_i32, s->tmp3_i32,
@@ -4942,7 +4942,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 set_cc_op(s, CC_OP_MULW);
                 break;
             default:
-            case MO_32:
+            case MO_UL:
                 tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
                 tcg_gen_trunc_tl_i32(s->tmp3_i32, cpu_regs[R_EAX]);
                 tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32,
@@ -4976,7 +4976,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_helper_divw_AX(cpu_env, s->T0);
                 break;
             default:
-            case MO_32:
+            case MO_UL:
                 gen_helper_divl_EAX(cpu_env, s->T0);
                 break;
 #ifdef TARGET_X86_64
@@ -4995,7 +4995,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_helper_idivw_AX(cpu_env, s->T0);
                 break;
             default:
-            case MO_32:
+            case MO_UL:
                 gen_helper_idivl_EAX(cpu_env, s->T0);
                 break;
 #ifdef TARGET_X86_64
@@ -5026,7 +5026,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 /* operand size for jumps is 64 bit */
                 ot = MO_64;
             } else if (op == 3 || op == 5) {
-                ot = dflag != MO_UW ? MO_32 + (rex_w == 1) : MO_UW;
+                ot = dflag != MO_UW ? MO_UL + (rex_w == 1) : MO_UW;
             } else if (op == 6) {
                 /* default push size is 64 bit */
                 ot = mo_pushpop(s, dflag);
@@ -5146,15 +5146,15 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         switch (dflag) {
 #ifdef TARGET_X86_64
         case MO_64:
-            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+            gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
             tcg_gen_ext32s_tl(s->T0, s->T0);
             gen_op_mov_reg_v(s, MO_64, R_EAX, s->T0);
             break;
 #endif
-        case MO_32:
+        case MO_UL:
             gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
             tcg_gen_ext16s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_32, R_EAX, s->T0);
+            gen_op_mov_reg_v(s, MO_UL, R_EAX, s->T0);
             break;
         case MO_UW:
             gen_op_mov_v_reg(s, MO_UB, s->T0, R_EAX);
@@ -5174,11 +5174,11 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_op_mov_reg_v(s, MO_64, R_EDX, s->T0);
             break;
 #endif
-        case MO_32:
-            gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
+        case MO_UL:
+            gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
             tcg_gen_ext32s_tl(s->T0, s->T0);
             tcg_gen_sari_tl(s->T0, s->T0, 31);
-            gen_op_mov_reg_v(s, MO_32, R_EDX, s->T0);
+            gen_op_mov_reg_v(s, MO_UL, R_EDX, s->T0);
             break;
         case MO_UW:
             gen_op_mov_v_reg(s, MO_UW, s->T0, R_EAX);
@@ -5219,7 +5219,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             tcg_gen_sub_tl(cpu_cc_src, cpu_cc_src, s->T1);
             break;
 #endif
-        case MO_32:
+        case MO_UL:
             tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
             tcg_gen_trunc_tl_i32(s->tmp3_i32, s->T1);
             tcg_gen_muls2_i32(s->tmp2_i32, s->tmp3_i32,
@@ -5394,7 +5394,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /**************************/
         /* push/pop */
     case 0x50 ... 0x57: /* push */
-        gen_op_mov_v_reg(s, MO_32, s->T0, (b & 7) | REX_B(s));
+        gen_op_mov_v_reg(s, MO_UL, s->T0, (b & 7) | REX_B(s));
         gen_push_v(s, s->T0);
         break;
     case 0x58 ... 0x5f: /* pop */
@@ -5734,7 +5734,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1b5: /* lgs Gv */
         op = R_GS;
     do_lxx:
-        ot = dflag != MO_UW ? MO_32 : MO_UW;
+        ot = dflag != MO_UW ? MO_UL : MO_UW;
         modrm = x86_ldub_code(env, s);
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
@@ -6576,7 +6576,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0xe8: /* call im */
         {
             if (dflag != MO_UW) {
-                tval = (int32_t)insn_get(env, s, MO_32);
+                tval = (int32_t)insn_get(env, s, MO_UL);
             } else {
                 tval = (int16_t)insn_get(env, s, MO_UW);
             }
@@ -6609,7 +6609,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         goto do_lcall;
     case 0xe9: /* jmp im */
         if (dflag != MO_UW) {
-            tval = (int32_t)insn_get(env, s, MO_32);
+            tval = (int32_t)insn_get(env, s, MO_UL);
         } else {
             tval = (int16_t)insn_get(env, s, MO_UW);
         }
@@ -6649,7 +6649,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         goto do_jcc;
     case 0x180 ... 0x18f: /* jcc Jv */
         if (dflag != MO_UW) {
-            tval = (int32_t)insn_get(env, s, MO_32);
+            tval = (int32_t)insn_get(env, s, MO_UL);
         } else {
             tval = (int16_t)insn_get(env, s, MO_UW);
         }
@@ -6827,7 +6827,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         reg = ((modrm >> 3) & 7) | rex_r;
         mod = (modrm >> 6) & 3;
         rm = (modrm & 7) | REX_B(s);
-        gen_op_mov_v_reg(s, MO_32, s->T1, reg);
+        gen_op_mov_v_reg(s, MO_UL, s->T1, reg);
         if (mod != 3) {
             AddressParts a = gen_lea_modrm_0(env, s, modrm);
             /* specific case: we need to add a displacement */
@@ -7126,10 +7126,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         } else
 #endif
         {
-            gen_op_mov_v_reg(s, MO_32, s->T0, reg);
+            gen_op_mov_v_reg(s, MO_UL, s->T0, reg);
             tcg_gen_ext32u_tl(s->T0, s->T0);
             tcg_gen_bswap32_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_32, reg, s->T0);
+            gen_op_mov_reg_v(s, MO_UL, reg, s->T0);
         }
         break;
     case 0xd6: /* salc */
@@ -7359,7 +7359,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
-            gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_st_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             break;

         case 0xc8: /* monitor */
@@ -7414,7 +7414,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
-            gen_op_st_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_st_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             break;

         case 0xd0: /* xgetbv */
@@ -7560,7 +7560,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
-            gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_ld_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
@@ -7577,7 +7577,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             gen_lea_modrm(env, s, modrm);
             gen_op_ld_v(s, MO_UW, s->T1, s->A0);
             gen_add_A0_im(s, 2);
-            gen_op_ld_v(s, CODE64(s) + MO_32, s->T0, s->A0);
+            gen_op_ld_v(s, CODE64(s) + MO_UL, s->T0, s->A0);
             if (dflag == MO_UW) {
                 tcg_gen_andi_tl(s->T0, s->T0, 0xffffff);
             }
@@ -7698,7 +7698,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             rm = (modrm & 7) | REX_B(s);

             if (mod == 3) {
-                gen_op_mov_v_reg(s, MO_32, s->T0, rm);
+                gen_op_mov_v_reg(s, MO_UL, s->T0, rm);
                 /* sign extend */
                 if (d_ot == MO_64) {
                     tcg_gen_ext32s_tl(s->T0, s->T0);
@@ -7706,7 +7706,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_op_mov_reg_v(s, d_ot, reg, s->T0);
             } else {
                 gen_lea_modrm(env, s, modrm);
-                gen_op_ld_v(s, MO_32 | MO_SIGN, s->T0, s->A0);
+                gen_op_ld_v(s, MO_SL, s->T0, s->A0);
                 gen_op_mov_reg_v(s, d_ot, reg, s->T0);
             }
         } else
@@ -7765,7 +7765,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             TCGv t0;
             if (!s->pe || s->vm86)
                 goto illegal_op;
-            ot = dflag != MO_UW ? MO_32 : MO_UW;
+            ot = dflag != MO_UW ? MO_UL : MO_UW;
             modrm = x86_ldub_code(env, s);
             reg = ((modrm >> 3) & 7) | rex_r;
             gen_ldst_modrm(env, s, modrm, MO_UW, OR_TMP0, 0);
@@ -8016,7 +8016,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (CODE64(s))
                 ot = MO_64;
             else
-                ot = MO_32;
+                ot = MO_UL;
             if ((prefixes & PREFIX_LOCK) && (reg == 0) &&
                 (s->cpuid_ext3_features & CPUID_EXT3_CR8LEG)) {
                 reg = 8;
@@ -8073,7 +8073,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (CODE64(s))
                 ot = MO_64;
             else
-                ot = MO_32;
+                ot = MO_UL;
             if (reg >= 8) {
                 goto illegal_op;
             }
@@ -8168,7 +8168,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             }
             gen_lea_modrm(env, s, modrm);
             tcg_gen_ld32u_tl(s->T0, cpu_env, offsetof(CPUX86State, mxcsr));
-            gen_op_st_v(s, MO_32, s->T0, s->A0);
+            gen_op_st_v(s, MO_UL, s->T0, s->A0);
             break;

         CASE_MODRM_MEM_OP(4): /* xsave */
@@ -8268,7 +8268,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                     dst = treg, src = base;
                 }

-                if (s->dflag == MO_32) {
+                if (s->dflag == MO_UL) {
                     tcg_gen_ext32u_tl(dst, src);
                 } else {
                     tcg_gen_mov_tl(dst, src);
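For reviewers of the i386 changes: the operand-size decode near the top
of disas_insn() now reduces to the following (a summary of the code
above, not new behaviour):

    mode    rex.w  0x66 data prefix   dflag
    64-bit  1      (ignored)          MO_64
    64-bit  0      yes                MO_UW
    64-bit  0      no                 MO_UL
    32-bit  -      yes / no           MO_UW / MO_UL
    16-bit  -      yes / no           MO_UL / MO_UW

aflag follows the same pattern with the 0x67 prefix, except that 64-bit
mode selects between MO_64 and MO_UL.
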
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 71efef4..8aa767e 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -409,27 +409,27 @@ GEN_VXFORM_DUAL_EXT(vaddubm, PPC_ALTIVEC, PPC_NONE, 0,       \
 GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1);
 GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
-GEN_VXFORM_V(vadduwm, MO_32, tcg_gen_gvec_add, 0, 2);
+GEN_VXFORM_V(vadduwm, MO_UL, tcg_gen_gvec_add, 0, 2);
 GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
 GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
 GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17);
-GEN_VXFORM_V(vsubuwm, MO_32, tcg_gen_gvec_sub, 0, 18);
+GEN_VXFORM_V(vsubuwm, MO_UL, tcg_gen_gvec_sub, 0, 18);
 GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
 GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
 GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1);
-GEN_VXFORM_V(vmaxuw, MO_32, tcg_gen_gvec_umax, 1, 2);
+GEN_VXFORM_V(vmaxuw, MO_UL, tcg_gen_gvec_umax, 1, 2);
 GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
 GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
 GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5);
-GEN_VXFORM_V(vmaxsw, MO_32, tcg_gen_gvec_smax, 1, 6);
+GEN_VXFORM_V(vmaxsw, MO_UL, tcg_gen_gvec_smax, 1, 6);
 GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
 GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
 GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9);
-GEN_VXFORM_V(vminuw, MO_32, tcg_gen_gvec_umin, 1, 10);
+GEN_VXFORM_V(vminuw, MO_UL, tcg_gen_gvec_umin, 1, 10);
 GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
 GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
 GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13);
-GEN_VXFORM_V(vminsw, MO_32, tcg_gen_gvec_smin, 1, 14);
+GEN_VXFORM_V(vminsw, MO_UL, tcg_gen_gvec_smin, 1, 14);
 GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
 GEN_VXFORM(vavgub, 1, 16);
 GEN_VXFORM(vabsdub, 1, 16);
@@ -532,18 +532,18 @@ GEN_VXFORM(vmulesh, 4, 13);
 GEN_VXFORM(vmulesw, 4, 14);
 GEN_VXFORM_V(vslb, MO_UB, tcg_gen_gvec_shlv, 2, 4);
 GEN_VXFORM_V(vslh, MO_UW, tcg_gen_gvec_shlv, 2, 5);
-GEN_VXFORM_V(vslw, MO_32, tcg_gen_gvec_shlv, 2, 6);
+GEN_VXFORM_V(vslw, MO_UL, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
 GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
 GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9);
-GEN_VXFORM_V(vsrw, MO_32, tcg_gen_gvec_shrv, 2, 10);
+GEN_VXFORM_V(vsrw, MO_UL, tcg_gen_gvec_shrv, 2, 10);
 GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
 GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
 GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13);
-GEN_VXFORM_V(vsraw, MO_32, tcg_gen_gvec_sarv, 2, 14);
+GEN_VXFORM_V(vsraw, MO_UL, tcg_gen_gvec_sarv, 2, 14);
 GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
 GEN_VXFORM(vsrv, 2, 28);
 GEN_VXFORM(vslv, 2, 29);
@@ -595,16 +595,16 @@ GEN_VXFORM_DUAL_EXT(vaddubs, PPC_ALTIVEC, PPC_NONE, 0,       \
 GEN_VXFORM_SAT(vadduhs, MO_UW, add, usadd, 0, 9);
 GEN_VXFORM_DUAL(vadduhs, PPC_ALTIVEC, PPC_NONE, \
                 vmul10euq, PPC_NONE, PPC2_ISA300)
-GEN_VXFORM_SAT(vadduws, MO_32, add, usadd, 0, 10);
+GEN_VXFORM_SAT(vadduws, MO_UL, add, usadd, 0, 10);
 GEN_VXFORM_SAT(vaddsbs, MO_UB, add, ssadd, 0, 12);
 GEN_VXFORM_SAT(vaddshs, MO_UW, add, ssadd, 0, 13);
-GEN_VXFORM_SAT(vaddsws, MO_32, add, ssadd, 0, 14);
+GEN_VXFORM_SAT(vaddsws, MO_UL, add, ssadd, 0, 14);
 GEN_VXFORM_SAT(vsububs, MO_UB, sub, ussub, 0, 24);
 GEN_VXFORM_SAT(vsubuhs, MO_UW, sub, ussub, 0, 25);
-GEN_VXFORM_SAT(vsubuws, MO_32, sub, ussub, 0, 26);
+GEN_VXFORM_SAT(vsubuws, MO_UL, sub, ussub, 0, 26);
 GEN_VXFORM_SAT(vsubsbs, MO_UB, sub, sssub, 0, 28);
 GEN_VXFORM_SAT(vsubshs, MO_UW, sub, sssub, 0, 29);
-GEN_VXFORM_SAT(vsubsws, MO_32, sub, sssub, 0, 30);
+GEN_VXFORM_SAT(vsubsws, MO_UL, sub, sssub, 0, 30);
 GEN_VXFORM(vadduqm, 0, 4);
 GEN_VXFORM(vaddcuq, 0, 5);
 GEN_VXFORM3(vaddeuqm, 30, 0);
@@ -914,7 +914,7 @@ static void glue(gen_, name)(DisasContext *ctx)                         \

 GEN_VXFORM_VSPLT(vspltb, MO_UB, 6, 8);
 GEN_VXFORM_VSPLT(vsplth, MO_UW, 6, 9);
-GEN_VXFORM_VSPLT(vspltw, MO_32, 6, 10);
+GEN_VXFORM_VSPLT(vspltw, MO_UL, 6, 10);
 GEN_VXFORM_UIMM_SPLAT(vextractub, 6, 8, 15);
 GEN_VXFORM_UIMM_SPLAT(vextractuh, 6, 9, 14);
 GEN_VXFORM_UIMM_SPLAT(vextractuw, 6, 10, 12);
diff --git a/target/ppc/translate/vsx-impl.inc.c b/target/ppc/translate/vsx-impl.inc.c
index 3922686..212817e 100644
--- a/target/ppc/translate/vsx-impl.inc.c
+++ b/target/ppc/translate/vsx-impl.inc.c
@@ -1553,12 +1553,12 @@ static void gen_xxspltw(DisasContext *ctx)

     tofs = vsr_full_offset(rt);
     bofs = vsr_full_offset(rb);
-    bofs += uim << MO_32;
+    bofs += uim << MO_UL;
 #ifndef HOST_WORDS_BIG_ENDIAN
     bofs ^= 8 | 4;
 #endif

-    tcg_gen_gvec_dup_mem(MO_32, tofs, bofs, 16, 16);
+    tcg_gen_gvec_dup_mem(MO_UL, tofs, bofs, 16, 16);
 }

 #define pattern(x) (((x) & 0xff) * (~(uint64_t)0 / 0xff))
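Note that gen_xxspltw() above uses the MemOp as a plain log2 byte
count: "uim << MO_UL" is uim * 4, the byte offset of 32-bit lane uim
(lane 3 -> byte offset 12).  This is another place where the rename
must preserve the numeric value 2.
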
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index 415747f..9e646f1 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -196,7 +196,7 @@ static inline int freg64_offset(uint8_t reg)
 static inline int freg32_offset(uint8_t reg)
 {
     g_assert(reg < 16);
-    return vec_reg_offset(reg, 0, MO_32);
+    return vec_reg_offset(reg, 0, MO_UL);
 }

 static TCGv_i64 load_reg(int reg)
@@ -2283,7 +2283,7 @@ static DisasJumpType op_csp(DisasContext *s, DisasOps *o)

     /* Write back the output now, so that it happens before the
        following branch, so that we don't need local temps.  */
-    if ((mop & MO_SIZE) == MO_32) {
+    if ((mop & MO_SIZE) == MO_UL) {
         tcg_gen_deposit_i64(o->out, o->out, old, 0, 32);
     } else {
         tcg_gen_mov_i64(o->out, old);
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 65da6b3..75d788c 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -48,7 +48,7 @@

 #define ES_8    MO_UB
 #define ES_16   MO_UW
-#define ES_32   MO_32
+#define ES_32   MO_UL
 #define ES_64   MO_64
 #define ES_128  4

diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index 28e1b1d..f67392c 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -80,7 +80,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
         return s390_vec_read_element8(v, enr);
     case MO_UW:
         return s390_vec_read_element16(v, enr);
-    case MO_32:
+    case MO_UL:
         return s390_vec_read_element32(v, enr);
     case MO_64:
         return s390_vec_read_element64(v, enr);
@@ -127,7 +127,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
     case MO_UW:
         s390_vec_write_element16(v, enr, data);
         break;
-    case MO_32:
+    case MO_UL:
         s390_vec_write_element32(v, enr, data);
         break;
     case MO_64:
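The s390x ES_* values are likewise log2 element sizes, so the usual
derived quantities are unchanged by the rename -- a sketch:

    int bytes = 1 << es;        /* ES_32 (== MO_UL == 2) -> 4 bytes   */
    int elems = 16 >> es;       /* four 32-bit lanes per 128-bit reg  */

with ES_128 == 4 continuing the sequence for whole-vector operands,
and s390_vec_read/write_element() dispatching on the same values.
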
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 3d90c4b..dc4fd21 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -431,12 +431,12 @@ typedef enum {
        that emits them can transform to 3.3.10 or 3.3.13.  */
     I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
     I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_UW << 30,
-    I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_32 << 30,
+    I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_UL << 30,
     I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,

     I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
     I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_UW << 30,
-    I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_32 << 30,
+    I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_UL << 30,
     I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,

     I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
@@ -444,10 +444,10 @@ typedef enum {

     I3312_LDRSBX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UB << 30,
     I3312_LDRSHX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UW << 30,
-    I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_32 << 30,
+    I3312_LDRSWX    = 0x38000000 | LDST_LD_S_X << 22 | MO_UL << 30,

-    I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_32 << 30,
-    I3312_STRVS     = 0x3c000000 | LDST_ST << 22 | MO_32 << 30,
+    I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_UL << 30,
+    I3312_STRVS     = 0x3c000000 | LDST_ST << 22 | MO_UL << 30,

     I3312_LDRVD     = 0x3c000000 | LDST_LD << 22 | MO_64 << 30,
     I3312_STRVD     = 0x3c000000 | LDST_ST << 22 | MO_64 << 30,
@@ -870,7 +870,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,

     /*
      * Test all bytes 0x00 or 0xff second.  This can match cases that
-     * might otherwise take 2 or 3 insns for MO_UW or MO_32 below.
+     * might otherwise take 2 or 3 insns for MO_UW or MO_UL below.
      */
     for (i = imm8 = 0; i < 8; i++) {
         uint8_t byte = v64 >> (i * 8);
@@ -908,7 +908,7 @@ static void tcg_out_dupi_vec(TCGContext *s, TCGType type,
         tcg_out_insn(s, 3606, MOVI, q, rd, 0, 0x8, v16 & 0xff);
         tcg_out_insn(s, 3606, ORR, q, rd, 0, 0xa, v16 >> 8);
         return;
-    } else if (v64 == dup_const(MO_32, v64)) {
+    } else if (v64 == dup_const(MO_UL, v64)) {
         uint32_t v32 = v64;
         uint32_t n32 = ~v32;

@@ -1749,7 +1749,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
         if (bswap) {
             tcg_out_ldst_r(s, I3312_LDRW, data_r, addr_r, otype, off_r);
             tcg_out_rev32(s, data_r, data_r);
-            tcg_out_sxt(s, TCG_TYPE_I64, MO_32, data_r, data_r);
+            tcg_out_sxt(s, TCG_TYPE_I64, MO_UL, data_r, data_r);
         } else {
             tcg_out_ldst_r(s, I3312_LDRSWX, data_r, addr_r, otype, off_r);
         }
@@ -1782,7 +1782,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
         }
         tcg_out_ldst_r(s, I3312_STRH, data_r, addr_r, otype, off_r);
         break;
-    case MO_32:
+    case MO_UL:
         if (bswap && data_r != TCG_REG_XZR) {
             tcg_out_rev32(s, TCG_REG_TMP, data_r);
             data_r = TCG_REG_TMP;
@@ -2194,7 +2194,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         break;
     case INDEX_op_ext_i32_i64:
     case INDEX_op_ext32s_i64:
-        tcg_out_sxt(s, TCG_TYPE_I64, MO_32, a0, a1);
+        tcg_out_sxt(s, TCG_TYPE_I64, MO_UL, a0, a1);
         break;
     case INDEX_op_ext8u_i64:
     case INDEX_op_ext8u_i32:
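On aarch64 the substitution also lands inside instruction encodings:
the load/store size field is insn[31:30], and because MO_UB..MO_64 are
0..3 the MemOp value is used directly, e.g.

    I3312_LDRW = 0x38000000 | LDST_LD << 22 | MO_UL << 30

where size == 2 selects the 32-bit LDR variant.
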
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 0bd400e..05560a2 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1435,7 +1435,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     case MO_UW:
         argreg = tcg_out_arg_reg16(s, argreg, datalo);
         break;
-    case MO_32:
+    case MO_UL:
     default:
         argreg = tcg_out_arg_reg32(s, argreg, datalo);
         break;
@@ -1632,7 +1632,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
             tcg_out_st16_r(s, cond, datalo, addrlo, addend);
         }
         break;
-    case MO_32:
+    case MO_UL:
     default:
         if (bswap) {
             tcg_out_bswap32(s, cond, TCG_REG_R0, datalo);
@@ -1677,7 +1677,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
             tcg_out_st16_8(s, COND_AL, datalo, addrlo, 0);
         }
         break;
-    case MO_32:
+    case MO_UL:
     default:
         if (bswap) {
             tcg_out_bswap32(s, COND_AL, TCG_REG_R0, datalo);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 31c3664..93e4c63 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -897,7 +897,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
             tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a);
             a = r;
             /* FALLTHRU */
-        case MO_32:
+        case MO_UL:
             tcg_out_vex_modrm(s, OPC_PSHUFD, r, 0, a);
             /* imm8 operand: all output lanes selected from input lane 0.  */
             tcg_out8(s, 0);
@@ -924,7 +924,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
         case MO_64:
             tcg_out_vex_modrm_offset(s, OPC_MOVDDUP, r, 0, base, offset);
             break;
-        case MO_32:
+        case MO_UL:
             tcg_out_vex_modrm_offset(s, OPC_VBROADCASTSS, r, 0, base, offset);
             break;
         case MO_UW:
@@ -2173,7 +2173,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         tcg_out_modrm_sib_offset(s, movop + P_DATA16 + seg, datalo,
                                  base, index, 0, ofs);
         break;
-    case MO_32:
+    case MO_UL:
         if (bswap) {
             tcg_out_mov(s, TCG_TYPE_I32, scratch, datalo);
             tcg_out_bswap32(s, scratch);
@@ -2927,7 +2927,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
     case INDEX_op_x86_blend_vec:
         if (vece == MO_UW) {
             insn = OPC_PBLENDW;
-        } else if (vece == MO_32) {
+        } else if (vece == MO_UL) {
             insn = (have_avx2 ? OPC_VPBLENDD : OPC_BLENDPS);
         } else {
             g_assert_not_reached();
@@ -3292,13 +3292,13 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_shrs_vec:
         return vece >= MO_UW;
     case INDEX_op_sars_vec:
-        return vece >= MO_UW && vece <= MO_32;
+        return vece >= MO_UW && vece <= MO_UL;

     case INDEX_op_shlv_vec:
     case INDEX_op_shrv_vec:
-        return have_avx2 && vece >= MO_32;
+        return have_avx2 && vece >= MO_UL;
     case INDEX_op_sarv_vec:
-        return have_avx2 && vece == MO_32;
+        return have_avx2 && vece == MO_UL;

     case INDEX_op_mul_vec:
         if (vece == MO_UB) {
@@ -3320,7 +3320,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_umin_vec:
     case INDEX_op_umax_vec:
     case INDEX_op_abs_vec:
-        return vece <= MO_32;
+        return vece <= MO_UL;

     default:
         return 0;
@@ -3396,9 +3396,9 @@ static void expand_vec_sari(TCGType type, unsigned vece,
              * shift (note that the ISA says shift of 32 is valid).
              */
             t1 = tcg_temp_new_vec(type);
-            tcg_gen_sari_vec(MO_32, t1, v1, imm);
+            tcg_gen_sari_vec(MO_UL, t1, v1, imm);
             tcg_gen_shri_vec(MO_64, v0, v1, imm);
-            vec_gen_4(INDEX_op_x86_blend_vec, type, MO_32,
+            vec_gen_4(INDEX_op_x86_blend_vec, type, MO_UL,
                       tcgv_vec_arg(v0), tcgv_vec_arg(v0),
                       tcgv_vec_arg(t1), 0xaa);
             tcg_temp_free_vec(t1);
@@ -3515,28 +3515,28 @@ static bool expand_vec_cmp_noinv(TCGType type, unsigned vece, TCGv_vec v0,
         fixup = NEED_SWAP | NEED_INV;
         break;
     case TCG_COND_LEU:
-        if (vece <= MO_32) {
+        if (vece <= MO_UL) {
             fixup = NEED_UMIN;
         } else {
             fixup = NEED_BIAS | NEED_INV;
         }
         break;
     case TCG_COND_GTU:
-        if (vece <= MO_32) {
+        if (vece <= MO_UL) {
             fixup = NEED_UMIN | NEED_INV;
         } else {
             fixup = NEED_BIAS;
         }
         break;
     case TCG_COND_GEU:
-        if (vece <= MO_32) {
+        if (vece <= MO_UL) {
             fixup = NEED_UMAX;
         } else {
             fixup = NEED_BIAS | NEED_SWAP | NEED_INV;
         }
         break;
     case TCG_COND_LTU:
-        if (vece <= MO_32) {
+        if (vece <= MO_UL) {
             fixup = NEED_UMAX | NEED_INV;
         } else {
             fixup = NEED_BIAS | NEED_SWAP;
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 1780cb1..a78fe87 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1386,7 +1386,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UW:
         i = tcg_out_call_iarg_reg16(s, i, l->datalo_reg);
         break;
-    case MO_32:
+    case MO_UL:
         i = tcg_out_call_iarg_reg(s, i, l->datalo_reg);
         break;
     case MO_64:
@@ -1579,11 +1579,11 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
         tcg_out_opc_imm(s, OPC_SH, lo, base, 0);
         break;

-    case MO_32 | MO_BSWAP:
+    case MO_UL | MO_BSWAP:
         tcg_out_bswap32(s, TCG_TMP3, lo);
         lo = TCG_TMP3;
         /* FALLTHRU */
-    case MO_32:
+    case MO_UL:
         tcg_out_opc_imm(s, OPC_SW, lo, base, 0);
         break;

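The MIPS backend switches on the byte-swap flag ORed into the size, so
composed case labels change as well -- a sketch of the dispatch shape,
assuming the usual "opc & (MO_BSWAP | MO_SIZE)" switch key:

    switch (opc & (MO_BSWAP | MO_SIZE)) {
    case MO_UL | MO_BSWAP:      /* 32-bit store, byte-swapped */
        ...
    case MO_UL:                 /* 32-bit store, host order   */
        ...
    }
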
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 852b894..835336a 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1714,7 +1714,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 #endif
             tcg_out_mov(s, TCG_TYPE_I32, arg++, hi);
             /* FALLTHRU */
-        case MO_32:
+        case MO_UL:
             tcg_out_mov(s, TCG_TYPE_I32, arg++, lo);
             break;
         default:
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 20bc19d..1905986 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1222,7 +1222,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_UW:
         tcg_out_opc_store(s, OPC_SH, base, lo, 0);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_out_opc_store(s, OPC_SW, base, lo, 0);
         break;
     case MO_64:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 85550b5..ac0d3a3 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -889,7 +889,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
         tcg_out_arithi(s, r, r, 16, SHIFT_SLL);
         tcg_out_arithi(s, r, r, 16, SHIFT_SRL);
         break;
-    case MO_32:
+    case MO_UL:
         if (SPARC64) {
             tcg_out_arith(s, r, r, 0, SHIFT_SRL);
         }
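The dup_const() hunk just below makes the lane replication explicit; a
worked example for the 32-bit case:

    dup_const(MO_UL, 0xdeadbeef)
      == 0x0000000100000001ull * (uint32_t)0xdeadbeef
      == 0xdeadbeefdeadbeefull

i.e. the constant is splatted into both 32-bit halves of the host
64-bit value.
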
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index da409f5..e63622c 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -310,7 +310,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
         return 0x0101010101010101ull * (uint8_t)c;
     case MO_UW:
         return 0x0001000100010001ull * (uint16_t)c;
-    case MO_32:
+    case MO_UL:
         return 0x0000000100000001ull * (uint32_t)c;
     case MO_64:
         return c;
@@ -330,7 +330,7 @@ static void gen_dup_i32(unsigned vece, TCGv_i32 out, TCGv_i32 in)
     case MO_UW:
         tcg_gen_deposit_i32(out, in, in, 16, 16);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_mov_i32(out, in);
         break;
     default:
@@ -349,7 +349,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
         tcg_gen_ext16u_i64(out, in);
         tcg_gen_muli_i64(out, out, 0x0001000100010001ull);
         break;
-    case MO_32:
+    case MO_UL:
         tcg_gen_deposit_i64(out, in, in, 32, 32);
         break;
     case MO_64:
@@ -443,7 +443,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     TCGv_ptr t_ptr;
     uint32_t i;

-    assert(vece <= (in_32 ? MO_32 : MO_64));
+    assert(vece <= (in_32 ? MO_UL : MO_64));
     assert(in_32 == NULL || in_64 == NULL);

     /* If we're storing 0, expand oprsz to maxsz.  */
@@ -485,7 +485,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
                use a 64-bit operation unless the 32-bit operation would
                be simple enough.  */
             if (TCG_TARGET_REG_BITS == 64
-                && (vece != MO_32 || !check_size_impl(oprsz, 4))) {
+                && (vece != MO_UL || !check_size_impl(oprsz, 4))) {
                 t_64 = tcg_temp_new_i64();
                 tcg_gen_extu_i32_i64(t_64, in_32);
                 gen_dup_i64(vece, t_64, t_64);
@@ -1430,7 +1430,7 @@ void tcg_gen_gvec_dup_i32(unsigned vece, uint32_t dofs, uint32_t oprsz,
                           uint32_t maxsz, TCGv_i32 in)
 {
     check_size_align(oprsz, maxsz, dofs);
-    tcg_debug_assert(vece <= MO_32);
+    tcg_debug_assert(vece <= MO_UL);
     do_dup(vece, dofs, oprsz, maxsz, in, NULL, 0);
 }

@@ -1453,7 +1453,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
             tcg_gen_dup_mem_vec(vece, t_vec, cpu_env, aofs);
             do_dup_store(type, dofs, oprsz, maxsz, t_vec);
             tcg_temp_free_vec(t_vec);
-        } else if (vece <= MO_32) {
+        } else if (vece <= MO_UL) {
             TCGv_i32 in = tcg_temp_new_i32();
             switch (vece) {
             case MO_UB:
@@ -1519,7 +1519,7 @@ void tcg_gen_gvec_dup32i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint32_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_32, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UL, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_dup16i(uint32_t dofs, uint32_t oprsz,
@@ -1618,7 +1618,7 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add32,
           .opt_opc = vecop_list_add,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_add_i64,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_add64,
@@ -1649,7 +1649,7 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds32,
           .opt_opc = vecop_list_add,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_add_i64,
           .fniv = tcg_gen_add_vec,
           .fno = gen_helper_gvec_adds64,
@@ -1690,7 +1690,7 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs32,
           .opt_opc = vecop_list_sub,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_sub_i64,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_subs64,
@@ -1769,7 +1769,7 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub32,
           .opt_opc = vecop_list_sub,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_sub_i64,
           .fniv = tcg_gen_sub_vec,
           .fno = gen_helper_gvec_sub64,
@@ -1800,7 +1800,7 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul32,
           .opt_opc = vecop_list_mul,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_mul_i64,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_mul64,
@@ -1829,7 +1829,7 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls32,
           .opt_opc = vecop_list_mul,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_mul_i64,
           .fniv = tcg_gen_mul_vec,
           .fno = gen_helper_gvec_muls64,
@@ -1866,7 +1866,7 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd64,
           .opt_opc = vecop_list,
@@ -1892,7 +1892,7 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub64,
           .opt_opc = vecop_list,
@@ -1935,7 +1935,7 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_usadd_i64,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd64,
@@ -1979,7 +1979,7 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_ussub_i64,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub64,
@@ -2007,7 +2007,7 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_smin_i64,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin64,
@@ -2035,7 +2035,7 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_umin_i64,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin64,
@@ -2063,7 +2063,7 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_smax_i64,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax64,
@@ -2091,7 +2091,7 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_umax_i64,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax64,
@@ -2165,7 +2165,7 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_neg_i64,
           .fniv = tcg_gen_neg_vec,
           .fno = gen_helper_gvec_neg64,
@@ -2228,7 +2228,7 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs32,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_abs_i64,
           .fniv = tcg_gen_abs_vec,
           .fno = gen_helper_gvec_abs64,
@@ -2485,7 +2485,7 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl32i,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_shli_i64,
           .fniv = tcg_gen_shli_vec,
           .fno = gen_helper_gvec_shl64i,
@@ -2536,7 +2536,7 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr32i,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_shri_i64,
           .fniv = tcg_gen_shri_vec,
           .fno = gen_helper_gvec_shr64i,
@@ -2601,7 +2601,7 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar32i,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_sari_i64,
           .fniv = tcg_gen_sari_vec,
           .fno = gen_helper_gvec_sar64i,
@@ -2736,7 +2736,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     }

     /* Otherwise fall back to integral... */
-    if (vece == MO_32 && check_size_impl(oprsz, 4)) {
+    if (vece == MO_UL && check_size_impl(oprsz, 4)) {
         expand_2s_i32(dofs, aofs, oprsz, shift, false, g->fni4);
     } else if (vece == MO_64 && check_size_impl(oprsz, 8)) {
         TCGv_i64 sh64 = tcg_temp_new_i64();
@@ -2889,7 +2889,7 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl32v,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_shl_mod_i64,
           .fniv = tcg_gen_shlv_mod_vec,
           .fno = gen_helper_gvec_shl64v,
@@ -2952,7 +2952,7 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr32v,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_shr_mod_i64,
           .fniv = tcg_gen_shrv_mod_vec,
           .fno = gen_helper_gvec_shr64v,
@@ -3015,7 +3015,7 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar32v,
           .opt_opc = vecop_list,
-          .vece = MO_32 },
+          .vece = MO_UL },
         { .fni8 = tcg_gen_sar_mod_i64,
           .fniv = tcg_gen_sarv_mod_vec,
           .fno = gen_helper_gvec_sar64v,
@@ -3168,7 +3168,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
     case 0:
         if (vece == MO_64 && check_size_impl(oprsz, 8)) {
             expand_cmp_i64(dofs, aofs, bofs, oprsz, cond);
-        } else if (vece == MO_32 && check_size_impl(oprsz, 4)) {
+        } else if (vece == MO_UL && check_size_impl(oprsz, 4)) {
             expand_cmp_i32(dofs, aofs, bofs, oprsz, cond);
         } else {
             gen_helper_gvec_3 * const *fn = fns[cond];
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index b0a4d98..ff723ab 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -216,7 +216,7 @@ void tcg_gen_mov_vec(TCGv_vec r, TCGv_vec a)
     }
 }

-#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_64 : MO_32)
+#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_64 : MO_UL)

 static void do_dupi_vec(TCGv_vec r, unsigned vece, TCGArg a)
 {
@@ -253,7 +253,7 @@ TCGv_vec tcg_const_ones_vec_matching(TCGv_vec m)
 void tcg_gen_dup64i_vec(TCGv_vec r, uint64_t a)
 {
     if (TCG_TARGET_REG_BITS == 32 && a == deposit64(a, 32, 32, a)) {
-        do_dupi_vec(r, MO_32, a);
+        do_dupi_vec(r, MO_UL, a);
     } else if (TCG_TARGET_REG_BITS == 64 || a == (uint64_t)(int32_t)a) {
         do_dupi_vec(r, MO_64, a);
     } else {
@@ -265,7 +265,7 @@ void tcg_gen_dup64i_vec(TCGv_vec r, uint64_t a)

 void tcg_gen_dup32i_vec(TCGv_vec r, uint32_t a)
 {
-    do_dupi_vec(r, MO_REG, dup_const(MO_32, a));
+    do_dupi_vec(r, MO_REG, dup_const(MO_UL, a));
 }

 void tcg_gen_dup16i_vec(TCGv_vec r, uint32_t a)
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 21d448c..447683d 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2725,7 +2725,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
         break;
     case MO_UW:
         break;
-    case MO_32:
+    case MO_UL:
         if (!is64) {
             op &= ~MO_SIGN;
         }
@@ -2816,7 +2816,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
                 tcg_gen_ext16s_i32(val, val);
             }
             break;
-        case MO_32:
+        case MO_UL:
             tcg_gen_bswap32_i32(val, val);
             break;
         default:
@@ -2841,7 +2841,7 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
             tcg_gen_ext16u_i32(swap, val);
             tcg_gen_bswap16_i32(swap, swap);
             break;
-        case MO_32:
+        case MO_UL:
             tcg_gen_bswap32_i32(swap, val);
             break;
         default:
@@ -2896,7 +2896,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
                 tcg_gen_ext16s_i64(val, val);
             }
             break;
-        case MO_32:
+        case MO_UL:
             tcg_gen_bswap32_i64(val, val);
             if (orig_memop & MO_SIGN) {
                 tcg_gen_ext32s_i64(val, val);
@@ -2932,7 +2932,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
             tcg_gen_ext16u_i64(swap, val);
             tcg_gen_bswap16_i64(swap, swap);
             break;
-        case MO_32:
+        case MO_UL:
             tcg_gen_ext32u_i64(swap, val);
             tcg_gen_bswap32_i64(swap, swap);
             break;
@@ -3027,8 +3027,8 @@ static void * const table_cmpxchg[16] = {
     [MO_UB] = gen_helper_atomic_cmpxchgb,
     [MO_UW | MO_LE] = gen_helper_atomic_cmpxchgw_le,
     [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be,
-    [MO_32 | MO_LE] = gen_helper_atomic_cmpxchgl_le,
-    [MO_32 | MO_BE] = gen_helper_atomic_cmpxchgl_be,
+    [MO_UL | MO_LE] = gen_helper_atomic_cmpxchgl_le,
+    [MO_UL | MO_BE] = gen_helper_atomic_cmpxchgl_be,
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_cmpxchgq_be)
 };
@@ -3251,8 +3251,8 @@ static void * const table_##NAME[16] = {                                \
     [MO_UB] = gen_helper_atomic_##NAME##b,                               \
     [MO_UW | MO_LE] = gen_helper_atomic_##NAME##w_le,                   \
     [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
-    [MO_32 | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
-    [MO_32 | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
+    [MO_UL | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
+    [MO_UL | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
     WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index a378887..4b6ee89 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -1304,7 +1304,7 @@ uint64_t dup_const(unsigned vece, uint64_t c);
     (__builtin_constant_p(VECE)                                    \
      ?   ((VECE) == MO_UB ? 0x0101010101010101ull * (uint8_t)(C)   \
         : (VECE) == MO_UW ? 0x0001000100010001ull * (uint16_t)(C)  \
-        : (VECE) == MO_32 ? 0x0000000100000001ull * (uint32_t)(C)  \
+        : (VECE) == MO_UL ? 0x0000000100000001ull * (uint32_t)(C)  \
         : dup_const(VECE, C))                                      \
      : dup_const(VECE, C))

--
1.8.3.1
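
For readers following the tcg.h hunk above: the dup_const multipliers
work because multiplying a zero-extended N-bit value by a constant with
a 1 in each N-bit lane replicates that value across every lane of a
64-bit word. A minimal standalone C sketch (illustration only, not part
of the patch):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        /* 0x01 in every byte lane: the multiply broadcasts the byte. */
        uint64_t b = 0x0101010101010101ull * (uint8_t)0xab;
        assert(b == 0xababababababababull);

        /* 0x0001 in every 16-bit lane does the same for halfwords. */
        uint64_t h = 0x0001000100010001ull * (uint16_t)0x1234;
        assert(h == 0x1234123412341234ull);
        return 0;
    }

No carries occur because each source value fits within its lane, so the
product is exactly the value repeated in every lane.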


^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 04/20] tcg: Replace MO_64 with MO_UQ alias
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:42   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:42 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Preparation for splitting MO_64 out of TCGMemOp into a new
accelerator-independent MemOp.

As MO_64 will become a value of MemOp, existing TCGMemOp comparisons and
coercions will trigger -Wenum-compare and -Wenum-conversion.
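
To illustrate the two diagnostics this rename sidesteps, here is a
cut-down sketch (hypothetical enum definitions, not the real QEMU
declarations). Compiled with gcc -Wall -Wenum-conversion, both
functions warn:

    /* Simplified stand-ins for the split types. */
    typedef enum TCGMemOp { MO_UB, MO_UW, MO_UL } TCGMemOp;
    typedef enum MemOp { MO_64 = 3 } MemOp;

    int is_64bit(TCGMemOp op)
    {
        return op == MO_64;   /* -Wenum-compare: comparison between
                                 different enumeration types */
    }

    TCGMemOp coerce(MemOp m)
    {
        return m;             /* -Wenum-conversion: implicit conversion
                                 from 'enum MemOp' to 'enum TCGMemOp' */
    }

Renaming the TCGMemOp values first keeps the two value sets disjoint,
so no such cross-type expression survives the split.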

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |   2 +-
 target/arm/translate-a64.c          | 270 ++++++++++++++++++------------------
 target/arm/translate-sve.c          |  18 +--
 target/arm/translate-vfp.inc.c      |   4 +-
 target/arm/translate.c              |  30 ++--
 target/i386/translate.c             | 122 ++++++++--------
 target/mips/translate.c             |   2 +-
 target/ppc/translate.c              |  28 ++--
 target/ppc/translate/fp-impl.inc.c  |   4 +-
 target/ppc/translate/vmx-impl.inc.c |  34 ++---
 target/ppc/translate/vsx-impl.inc.c |  18 +--
 target/s390x/translate.c            |   4 +-
 target/s390x/translate_vx.inc.c     |   6 +-
 target/s390x/vec.h                  |   4 +-
 target/sparc/translate.c            |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  20 +--
 tcg/arm/tcg-target.inc.c            |  12 +-
 tcg/i386/tcg-target.inc.c           |  42 +++---
 tcg/mips/tcg-target.inc.c           |  12 +-
 tcg/ppc/tcg-target.inc.c            |  18 +--
 tcg/riscv/tcg-target.inc.c          |   6 +-
 tcg/s390/tcg-target.inc.c           |  10 +-
 tcg/sparc/tcg-target.inc.c          |   8 +-
 tcg/tcg-op-gvec.c                   | 132 +++++++++---------
 tcg/tcg-op-vec.c                    |  14 +-
 tcg/tcg-op.c                        |  24 ++--
 tcg/tcg.h                           |   9 +-
 27 files changed, 430 insertions(+), 427 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index fa705c4..1cfd746 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -5165,7 +5165,7 @@ static inline void sve_ldff1_zd(CPUARMState *env, void *vd, void *vg, void *vm,
     target_ulong addr;

     /* Skip to the first true predicate.  */
-    reg_off = find_next_active(vg, 0, reg_max, MO_64);
+    reg_off = find_next_active(vg, 0, reg_max, MO_UQ);
     if (likely(reg_off < reg_max)) {
         /* Perform one normal read, which will fault or not.  */
         set_helper_retaddr(ra);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 0b92e6d..3f9d103 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -463,7 +463,7 @@ static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
 /* Offset of the high half of the 128 bit vector Qn */
 static inline int fp_reg_hi_offset(DisasContext *s, int regno)
 {
-    return vec_reg_offset(s, regno, 1, MO_64);
+    return vec_reg_offset(s, regno, 1, MO_UQ);
 }

 /* Convenience accessors for reading and writing single and double
@@ -476,7 +476,7 @@ static TCGv_i64 read_fp_dreg(DisasContext *s, int reg)
 {
     TCGv_i64 v = tcg_temp_new_i64();

-    tcg_gen_ld_i64(v, cpu_env, fp_reg_offset(s, reg, MO_64));
+    tcg_gen_ld_i64(v, cpu_env, fp_reg_offset(s, reg, MO_UQ));
     return v;
 }

@@ -501,7 +501,7 @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg)
  */
 static void clear_vec_high(DisasContext *s, bool is_q, int rd)
 {
-    unsigned ofs = fp_reg_offset(s, rd, MO_64);
+    unsigned ofs = fp_reg_offset(s, rd, MO_UQ);
     unsigned vsz = vec_full_reg_size(s);

     if (!is_q) {
@@ -516,7 +516,7 @@ static void clear_vec_high(DisasContext *s, bool is_q, int rd)

 void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v)
 {
-    unsigned ofs = fp_reg_offset(s, reg, MO_64);
+    unsigned ofs = fp_reg_offset(s, reg, MO_UQ);

     tcg_gen_st_i64(v, cpu_env, ofs);
     clear_vec_high(s, false, reg);
@@ -918,7 +918,7 @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
 {
     /* This writes the bottom N bits of a 128 bit wide vector to memory */
     TCGv_i64 tmp = tcg_temp_new_i64();
-    tcg_gen_ld_i64(tmp, cpu_env, fp_reg_offset(s, srcidx, MO_64));
+    tcg_gen_ld_i64(tmp, cpu_env, fp_reg_offset(s, srcidx, MO_UQ));
     if (size < 4) {
         tcg_gen_qemu_st_i64(tmp, tcg_addr, get_mem_index(s),
                             s->be_data + size);
@@ -928,10 +928,10 @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)

         tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
         tcg_gen_qemu_st_i64(tmp, be ? tcg_hiaddr : tcg_addr, get_mem_index(s),
-                            s->be_data | MO_Q);
+                            s->be_data | MO_UQ);
         tcg_gen_ld_i64(tmp, cpu_env, fp_reg_hi_offset(s, srcidx));
         tcg_gen_qemu_st_i64(tmp, be ? tcg_addr : tcg_hiaddr, get_mem_index(s),
-                            s->be_data | MO_Q);
+                            s->be_data | MO_UQ);
         tcg_temp_free_i64(tcg_hiaddr);
     }

@@ -960,13 +960,13 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)

         tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
         tcg_gen_qemu_ld_i64(tmplo, be ? tcg_hiaddr : tcg_addr, get_mem_index(s),
-                            s->be_data | MO_Q);
+                            s->be_data | MO_UQ);
         tcg_gen_qemu_ld_i64(tmphi, be ? tcg_addr : tcg_hiaddr, get_mem_index(s),
-                            s->be_data | MO_Q);
+                            s->be_data | MO_UQ);
         tcg_temp_free_i64(tcg_hiaddr);
     }

-    tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_64));
+    tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_UQ));
     tcg_gen_st_i64(tmphi, cpu_env, fp_reg_hi_offset(s, destidx));

     tcg_temp_free_i64(tmplo);
@@ -1011,8 +1011,8 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_SL:
         tcg_gen_ld32s_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_64:
-    case MO_64|MO_SIGN:
+    case MO_UQ:
+    case MO_SQ:
         tcg_gen_ld_i64(tcg_dest, cpu_env, vect_off);
         break;
     default:
@@ -1061,7 +1061,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
     case MO_UL:
         tcg_gen_st32_i64(tcg_src, cpu_env, vect_off);
         break;
-    case MO_64:
+    case MO_UQ:
         tcg_gen_st_i64(tcg_src, cpu_env, vect_off);
         break;
     default:
@@ -2207,7 +2207,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
         g_assert(size >= 2);
         if (size == 2) {
             /* The pair must be single-copy atomic for the doubleword.  */
-            memop |= MO_64 | MO_ALIGN;
+            memop |= MO_UQ | MO_ALIGN;
             tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
             if (s->be_data == MO_LE) {
                 tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
@@ -2219,7 +2219,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
         } else {
             /* The pair must be single-copy atomic for *each* doubleword, not
                the entire quadword, however it must be quadword aligned.  */
-            memop |= MO_64;
+            memop |= MO_UQ;
             tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx,
                                 memop | MO_ALIGN_16);

@@ -2271,7 +2271,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
             tcg_gen_atomic_cmpxchg_i64(tmp, cpu_exclusive_addr,
                                        cpu_exclusive_val, tmp,
                                        get_mem_index(s),
-                                       MO_64 | MO_ALIGN | s->be_data);
+                                       MO_UQ | MO_ALIGN | s->be_data);
             tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, cpu_exclusive_val);
         } else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
             if (!HAVE_CMPXCHG128) {
@@ -2355,7 +2355,7 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
         }

         tcg_gen_atomic_cmpxchg_i64(cmp, clean_addr, cmp, val, memidx,
-                                   MO_64 | MO_ALIGN | s->be_data);
+                                   MO_UQ | MO_ALIGN | s->be_data);
         tcg_temp_free_i64(val);

         if (s->be_data == MO_LE) {
@@ -2389,9 +2389,9 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,

         /* Load the two words, in memory order.  */
         tcg_gen_qemu_ld_i64(d1, clean_addr, memidx,
-                            MO_64 | MO_ALIGN_16 | s->be_data);
+                            MO_UQ | MO_ALIGN_16 | s->be_data);
         tcg_gen_addi_i64(a2, clean_addr, 8);
-        tcg_gen_qemu_ld_i64(d2, a2, memidx, MO_64 | s->be_data);
+        tcg_gen_qemu_ld_i64(d2, a2, memidx, MO_UQ | s->be_data);

         /* Compare the two words, also in memory order.  */
         tcg_gen_setcond_i64(TCG_COND_EQ, c1, d1, s1);
@@ -2401,8 +2401,8 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
         /* If compare equal, write back new data, else write back old data.  */
         tcg_gen_movcond_i64(TCG_COND_NE, c1, c2, zero, t1, d1);
         tcg_gen_movcond_i64(TCG_COND_NE, c2, c2, zero, t2, d2);
-        tcg_gen_qemu_st_i64(c1, clean_addr, memidx, MO_64 | s->be_data);
-        tcg_gen_qemu_st_i64(c2, a2, memidx, MO_64 | s->be_data);
+        tcg_gen_qemu_st_i64(c1, clean_addr, memidx, MO_UQ | s->be_data);
+        tcg_gen_qemu_st_i64(c2, a2, memidx, MO_UQ | s->be_data);
         tcg_temp_free_i64(a2);
         tcg_temp_free_i64(c1);
         tcg_temp_free_i64(c2);
@@ -5271,7 +5271,7 @@ static void handle_fp_compare(DisasContext *s, int size,
     TCGv_i64 tcg_flags = tcg_temp_new_i64();
     TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);

-    if (size == MO_64) {
+    if (size == MO_UQ) {
         TCGv_i64 tcg_vn, tcg_vm;

         tcg_vn = read_fp_dreg(s, rn);
@@ -5357,7 +5357,7 @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
         size = MO_UL;
         break;
     case 1:
-        size = MO_64;
+        size = MO_UQ;
         break;
     case 3:
         size = MO_UW;
@@ -5408,7 +5408,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
         size = MO_UL;
         break;
     case 1:
-        size = MO_64;
+        size = MO_UQ;
         break;
     case 3:
         size = MO_UW;
@@ -5474,7 +5474,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
         sz = MO_UL;
         break;
     case 1:
-        sz = MO_64;
+        sz = MO_UQ;
         break;
     case 3:
         sz = MO_UW;
@@ -6279,7 +6279,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
         sz = MO_UL;
         break;
     case 1:
-        sz = MO_64;
+        sz = MO_UQ;
         break;
     case 3:
         sz = MO_UW;
@@ -6585,7 +6585,7 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
             break;
         case 1:
             /* 64 bit */
-            tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_64));
+            tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UQ));
             break;
         case 2:
             /* 64 bits from top half */
@@ -6819,9 +6819,9 @@ static void disas_simd_ext(DisasContext *s, uint32_t insn)
      * extracting 64 bits from a 64:64 concatenation.
      */
     if (!is_q) {
-        read_vec_element(s, tcg_resl, rn, 0, MO_64);
+        read_vec_element(s, tcg_resl, rn, 0, MO_UQ);
         if (pos != 0) {
-            read_vec_element(s, tcg_resh, rm, 0, MO_64);
+            read_vec_element(s, tcg_resh, rm, 0, MO_UQ);
             do_ext64(s, tcg_resh, tcg_resl, pos);
         }
         tcg_gen_movi_i64(tcg_resh, 0);
@@ -6839,22 +6839,22 @@ static void disas_simd_ext(DisasContext *s, uint32_t insn)
             pos -= 64;
         }

-        read_vec_element(s, tcg_resl, elt->reg, elt->elt, MO_64);
+        read_vec_element(s, tcg_resl, elt->reg, elt->elt, MO_UQ);
         elt++;
-        read_vec_element(s, tcg_resh, elt->reg, elt->elt, MO_64);
+        read_vec_element(s, tcg_resh, elt->reg, elt->elt, MO_UQ);
         elt++;
         if (pos != 0) {
             do_ext64(s, tcg_resh, tcg_resl, pos);
             tcg_hh = tcg_temp_new_i64();
-            read_vec_element(s, tcg_hh, elt->reg, elt->elt, MO_64);
+            read_vec_element(s, tcg_hh, elt->reg, elt->elt, MO_UQ);
             do_ext64(s, tcg_hh, tcg_resh, pos);
             tcg_temp_free_i64(tcg_hh);
         }
     }

-    write_vec_element(s, tcg_resl, rd, 0, MO_64);
+    write_vec_element(s, tcg_resl, rd, 0, MO_UQ);
     tcg_temp_free_i64(tcg_resl);
-    write_vec_element(s, tcg_resh, rd, 1, MO_64);
+    write_vec_element(s, tcg_resh, rd, 1, MO_UQ);
     tcg_temp_free_i64(tcg_resh);
 }

@@ -6895,12 +6895,12 @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
     tcg_resh = tcg_temp_new_i64();

     if (is_tblx) {
-        read_vec_element(s, tcg_resl, rd, 0, MO_64);
+        read_vec_element(s, tcg_resl, rd, 0, MO_UQ);
     } else {
         tcg_gen_movi_i64(tcg_resl, 0);
     }
     if (is_tblx && is_q) {
-        read_vec_element(s, tcg_resh, rd, 1, MO_64);
+        read_vec_element(s, tcg_resh, rd, 1, MO_UQ);
     } else {
         tcg_gen_movi_i64(tcg_resh, 0);
     }
@@ -6908,11 +6908,11 @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
     tcg_idx = tcg_temp_new_i64();
     tcg_regno = tcg_const_i32(rn);
     tcg_numregs = tcg_const_i32(len + 1);
-    read_vec_element(s, tcg_idx, rm, 0, MO_64);
+    read_vec_element(s, tcg_idx, rm, 0, MO_UQ);
     gen_helper_simd_tbl(tcg_resl, cpu_env, tcg_resl, tcg_idx,
                         tcg_regno, tcg_numregs);
     if (is_q) {
-        read_vec_element(s, tcg_idx, rm, 1, MO_64);
+        read_vec_element(s, tcg_idx, rm, 1, MO_UQ);
         gen_helper_simd_tbl(tcg_resh, cpu_env, tcg_resh, tcg_idx,
                             tcg_regno, tcg_numregs);
     }
@@ -6920,9 +6920,9 @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
     tcg_temp_free_i32(tcg_regno);
     tcg_temp_free_i32(tcg_numregs);

-    write_vec_element(s, tcg_resl, rd, 0, MO_64);
+    write_vec_element(s, tcg_resl, rd, 0, MO_UQ);
     tcg_temp_free_i64(tcg_resl);
-    write_vec_element(s, tcg_resh, rd, 1, MO_64);
+    write_vec_element(s, tcg_resh, rd, 1, MO_UQ);
     tcg_temp_free_i64(tcg_resh);
 }

@@ -7009,9 +7009,9 @@ static void disas_simd_zip_trn(DisasContext *s, uint32_t insn)

     tcg_temp_free_i64(tcg_res);

-    write_vec_element(s, tcg_resl, rd, 0, MO_64);
+    write_vec_element(s, tcg_resl, rd, 0, MO_UQ);
     tcg_temp_free_i64(tcg_resl);
-    write_vec_element(s, tcg_resh, rd, 1, MO_64);
+    write_vec_element(s, tcg_resh, rd, 1, MO_UQ);
     tcg_temp_free_i64(tcg_resh);
 }

@@ -7625,9 +7625,9 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
     } else {
         /* ORR or BIC, with BIC negation to AND handled above.  */
         if (is_neg) {
-            gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_andi, MO_64);
+            gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_andi, MO_UQ);
         } else {
-            gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_ori, MO_64);
+            gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_ori, MO_UQ);
         }
     }
 }
@@ -7702,7 +7702,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
                 size = MO_UW;
             }
         } else {
-            size = extract32(size, 0, 1) ? MO_64 : MO_UL;
+            size = extract32(size, 0, 1) ? MO_UQ : MO_UL;
         }

         if (!fp_access_check(s)) {
@@ -7716,13 +7716,13 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
         return;
     }

-    if (size == MO_64) {
+    if (size == MO_UQ) {
         TCGv_i64 tcg_op1 = tcg_temp_new_i64();
         TCGv_i64 tcg_op2 = tcg_temp_new_i64();
         TCGv_i64 tcg_res = tcg_temp_new_i64();

-        read_vec_element(s, tcg_op1, rn, 0, MO_64);
-        read_vec_element(s, tcg_op2, rn, 1, MO_64);
+        read_vec_element(s, tcg_op1, rn, 0, MO_UQ);
+        read_vec_element(s, tcg_op2, rn, 1, MO_UQ);

         switch (opcode) {
         case 0x3b: /* ADDP */
@@ -8085,9 +8085,9 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
     }

     if (!is_q) {
-        write_vec_element(s, tcg_final, rd, 0, MO_64);
+        write_vec_element(s, tcg_final, rd, 0, MO_UQ);
     } else {
-        write_vec_element(s, tcg_final, rd, 1, MO_64);
+        write_vec_element(s, tcg_final, rd, 1, MO_UQ);
     }

     if (round) {
@@ -8155,9 +8155,9 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             genfn(tcg_op, cpu_env, tcg_op, tcg_shift);
-            write_vec_element(s, tcg_op, rd, pass, MO_64);
+            write_vec_element(s, tcg_op, rd, pass, MO_UQ);

             tcg_temp_free_i64(tcg_op);
         }
@@ -8228,11 +8228,11 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
     TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
     int pass;

-    if (fracbits || size == MO_64) {
+    if (fracbits || size == MO_UQ) {
         tcg_shift = tcg_const_i32(fracbits);
     }

-    if (size == MO_64) {
+    if (size == MO_UQ) {
         TCGv_i64 tcg_int64 = tcg_temp_new_i64();
         TCGv_i64 tcg_double = tcg_temp_new_i64();

@@ -8249,7 +8249,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
             if (elements == 1) {
                 write_fp_dreg(s, rd, tcg_double);
             } else {
-                write_vec_element(s, tcg_double, rd, pass, MO_64);
+                write_vec_element(s, tcg_double, rd, pass, MO_UQ);
             }
         }

@@ -8331,7 +8331,7 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
     int immhb = immh << 3 | immb;

     if (immh & 8) {
-        size = MO_64;
+        size = MO_UQ;
         if (!is_scalar && !is_q) {
             unallocated_encoding(s);
             return;
@@ -8376,7 +8376,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     TCGv_i32 tcg_rmode, tcg_shift;

     if (immh & 0x8) {
-        size = MO_64;
+        size = MO_UQ;
         if (!is_scalar && !is_q) {
             unallocated_encoding(s);
             return;
@@ -8408,19 +8408,19 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     fracbits = (16 << size) - immhb;
     tcg_shift = tcg_const_i32(fracbits);

-    if (size == MO_64) {
+    if (size == MO_UQ) {
         int maxpass = is_scalar ? 1 : 2;

         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             if (is_u) {
                 gen_helper_vfp_touqd(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
             } else {
                 gen_helper_vfp_tosqd(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
             }
-            write_vec_element(s, tcg_op, rd, pass, MO_64);
+            write_vec_element(s, tcg_op, rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_op);
         }
         clear_vec_high(s, is_q, rd);
@@ -8601,7 +8601,7 @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
             tcg_gen_neg_i64(tcg_res, tcg_res);
             /* fall through */
         case 0x9: /* SQDMLAL, SQDMLAL2 */
-            read_vec_element(s, tcg_op1, rd, 0, MO_64);
+            read_vec_element(s, tcg_op1, rd, 0, MO_UQ);
             gen_helper_neon_addl_saturate_s64(tcg_res, cpu_env,
                                               tcg_res, tcg_op1);
             break;
@@ -8751,8 +8751,8 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_res = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
+            read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

             switch (fpopcode) {
             case 0x39: /* FMLS */
@@ -8760,7 +8760,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 gen_helper_vfp_negd(tcg_op1, tcg_op1);
                 /* fall through */
             case 0x19: /* FMLA */
-                read_vec_element(s, tcg_res, rd, pass, MO_64);
+                read_vec_element(s, tcg_res, rd, pass, MO_UQ);
                 gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2,
                                        tcg_res, fpst);
                 break;
@@ -8820,7 +8820,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 g_assert_not_reached();
             }

-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);

             tcg_temp_free_i64(tcg_res);
             tcg_temp_free_i64(tcg_op1);
@@ -8905,7 +8905,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 TCGv_i64 tcg_tmp = tcg_temp_new_i64();

                 tcg_gen_extu_i32_i64(tcg_tmp, tcg_res);
-                write_vec_element(s, tcg_tmp, rd, pass, MO_64);
+                write_vec_element(s, tcg_tmp, rd, pass, MO_UQ);
                 tcg_temp_free_i64(tcg_tmp);
             } else {
                 write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
@@ -9381,7 +9381,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
                                    bool is_scalar, bool is_u, bool is_q,
                                    int size, int rn, int rd)
 {
-    bool is_double = (size == MO_64);
+    bool is_double = (size == MO_UQ);
     TCGv_ptr fpst;

     if (!fp_access_check(s)) {
@@ -9419,13 +9419,13 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         }

         for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             if (swap) {
                 genfn(tcg_res, tcg_zero, tcg_op, fpst);
             } else {
                 genfn(tcg_res, tcg_op, tcg_zero, fpst);
             }
-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);
         }
         tcg_temp_free_i64(tcg_res);
         tcg_temp_free_i64(tcg_zero);
@@ -9526,7 +9526,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
         int pass;

         for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             switch (opcode) {
             case 0x3d: /* FRECPE */
                 gen_helper_recpe_f64(tcg_res, tcg_op, fpst);
@@ -9540,7 +9540,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
             default:
                 g_assert_not_reached();
             }
-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);
         }
         tcg_temp_free_i64(tcg_res);
         tcg_temp_free_i64(tcg_op);
@@ -9615,7 +9615,7 @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
         if (scalar) {
             read_vec_element(s, tcg_op, rn, pass, size + 1);
         } else {
-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
         }
         tcg_res[pass] = tcg_temp_new_i32();

@@ -9711,15 +9711,15 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
         int pass;

         for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
-            read_vec_element(s, tcg_rn, rn, pass, MO_64);
-            read_vec_element(s, tcg_rd, rd, pass, MO_64);
+            read_vec_element(s, tcg_rn, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_rd, rd, pass, MO_UQ);

             if (is_u) { /* USQADD */
                 gen_helper_neon_uqadd_s64(tcg_rd, cpu_env, tcg_rn, tcg_rd);
             } else { /* SUQADD */
                 gen_helper_neon_sqadd_u64(tcg_rd, cpu_env, tcg_rn, tcg_rd);
             }
-            write_vec_element(s, tcg_rd, rd, pass, MO_64);
+            write_vec_element(s, tcg_rd, rd, pass, MO_UQ);
         }
         tcg_temp_free_i64(tcg_rd);
         tcg_temp_free_i64(tcg_rn);
@@ -9776,7 +9776,7 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,

             if (is_scalar) {
                 TCGv_i64 tcg_zero = tcg_const_i64(0);
-                write_vec_element(s, tcg_zero, rd, 0, MO_64);
+                write_vec_element(s, tcg_zero, rd, 0, MO_UQ);
                 tcg_temp_free_i64(tcg_zero);
             }
             write_vec_element_i32(s, tcg_rd, rd, pass, MO_UL);
@@ -10146,7 +10146,7 @@ static void handle_vec_simd_wshli(DisasContext *s, bool is_q, bool is_u,
      * so if rd == rn we would overwrite parts of our input.
      * So load everything right now and use shifts in the main loop.
      */
-    read_vec_element(s, tcg_rn, rn, is_q ? 1 : 0, MO_64);
+    read_vec_element(s, tcg_rn, rn, is_q ? 1 : 0, MO_UQ);

     for (i = 0; i < elements; i++) {
         tcg_gen_shri_i64(tcg_rd, tcg_rn, i * esize);
@@ -10183,7 +10183,7 @@ static void handle_vec_simd_shrn(DisasContext *s, bool is_q,
     tcg_rn = tcg_temp_new_i64();
     tcg_rd = tcg_temp_new_i64();
     tcg_final = tcg_temp_new_i64();
-    read_vec_element(s, tcg_final, rd, is_q ? 1 : 0, MO_64);
+    read_vec_element(s, tcg_final, rd, is_q ? 1 : 0, MO_UQ);

     if (round) {
         uint64_t round_const = 1ULL << (shift - 1);
@@ -10201,9 +10201,9 @@ static void handle_vec_simd_shrn(DisasContext *s, bool is_q,
     }

     if (!is_q) {
-        write_vec_element(s, tcg_final, rd, 0, MO_64);
+        write_vec_element(s, tcg_final, rd, 0, MO_UQ);
     } else {
-        write_vec_element(s, tcg_final, rd, 1, MO_64);
+        write_vec_element(s, tcg_final, rd, 1, MO_UQ);
     }
     if (round) {
         tcg_temp_free_i64(tcg_round);
@@ -10335,8 +10335,8 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
     }

     if (accop != 0) {
-        read_vec_element(s, tcg_res[0], rd, 0, MO_64);
-        read_vec_element(s, tcg_res[1], rd, 1, MO_64);
+        read_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+        read_vec_element(s, tcg_res[1], rd, 1, MO_UQ);
     }

     /* size == 2 means two 32x32->64 operations; this is worth special
@@ -10522,8 +10522,8 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
         }
     }

-    write_vec_element(s, tcg_res[0], rd, 0, MO_64);
-    write_vec_element(s, tcg_res[1], rd, 1, MO_64);
+    write_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+    write_vec_element(s, tcg_res[1], rd, 1, MO_UQ);
     tcg_temp_free_i64(tcg_res[0]);
     tcg_temp_free_i64(tcg_res[1]);
 }
@@ -10546,7 +10546,7 @@ static void handle_3rd_wide(DisasContext *s, int is_q, int is_u, int size,
         };
         NeonGenWidenFn *widenfn = widenfns[size][is_u];

-        read_vec_element(s, tcg_op1, rn, pass, MO_64);
+        read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
         read_vec_element_i32(s, tcg_op2, rm, part + pass, MO_UL);
         widenfn(tcg_op2_wide, tcg_op2);
         tcg_temp_free_i32(tcg_op2);
@@ -10558,7 +10558,7 @@ static void handle_3rd_wide(DisasContext *s, int is_q, int is_u, int size,
     }

     for (pass = 0; pass < 2; pass++) {
-        write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+        write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
         tcg_temp_free_i64(tcg_res[pass]);
     }
 }
@@ -10589,8 +10589,8 @@ static void handle_3rd_narrowing(DisasContext *s, int is_q, int is_u, int size,
         };
         NeonGenNarrowFn *gennarrow = narrowfns[size][is_u];

-        read_vec_element(s, tcg_op1, rn, pass, MO_64);
-        read_vec_element(s, tcg_op2, rm, pass, MO_64);
+        read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+        read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

         gen_neon_addl(size, (opcode == 6), tcg_wideres, tcg_op1, tcg_op2);

@@ -10621,12 +10621,12 @@ static void handle_pmull_64(DisasContext *s, int is_q, int rd, int rn, int rm)
     TCGv_i64 tcg_op2 = tcg_temp_new_i64();
     TCGv_i64 tcg_res = tcg_temp_new_i64();

-    read_vec_element(s, tcg_op1, rn, is_q, MO_64);
-    read_vec_element(s, tcg_op2, rm, is_q, MO_64);
+    read_vec_element(s, tcg_op1, rn, is_q, MO_UQ);
+    read_vec_element(s, tcg_op2, rm, is_q, MO_UQ);
     gen_helper_neon_pmull_64_lo(tcg_res, tcg_op1, tcg_op2);
-    write_vec_element(s, tcg_res, rd, 0, MO_64);
+    write_vec_element(s, tcg_res, rd, 0, MO_UQ);
     gen_helper_neon_pmull_64_hi(tcg_res, tcg_op1, tcg_op2);
-    write_vec_element(s, tcg_res, rd, 1, MO_64);
+    write_vec_element(s, tcg_res, rd, 1, MO_UQ);

     tcg_temp_free_i64(tcg_op1);
     tcg_temp_free_i64(tcg_op2);
@@ -10814,8 +10814,8 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             int passreg = (pass == 0) ? rn : rm;

-            read_vec_element(s, tcg_op1, passreg, 0, MO_64);
-            read_vec_element(s, tcg_op2, passreg, 1, MO_64);
+            read_vec_element(s, tcg_op1, passreg, 0, MO_UQ);
+            read_vec_element(s, tcg_op2, passreg, 1, MO_UQ);
             tcg_res[pass] = tcg_temp_new_i64();

             switch (opcode) {
@@ -10846,7 +10846,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
         }

         for (pass = 0; pass < 2; pass++) {
-            write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+            write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_res[pass]);
         }
     } else {
@@ -10971,7 +10971,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_UL,
+        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_UQ : MO_UL,
                                rn, rm, rd);
         return;
     case 0x1b: /* FMULX */
@@ -11155,12 +11155,12 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_res = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
+            read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

             handle_3same_64(s, opcode, u, tcg_res, tcg_op1, tcg_op2);

-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);

             tcg_temp_free_i64(tcg_res);
             tcg_temp_free_i64(tcg_op1);
@@ -11714,7 +11714,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
             tcg_temp_free_i32(tcg_op);
         }
         for (pass = 0; pass < 2; pass++) {
-            write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+            write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_res[pass]);
         }
     } else {
@@ -11774,7 +11774,7 @@ static void handle_rev(DisasContext *s, int opcode, bool u,
             case MO_UL:
                 tcg_gen_bswap32_i64(tcg_tmp, tcg_tmp);
                 break;
-            case MO_64:
+            case MO_UQ:
                 tcg_gen_bswap64_i64(tcg_tmp, tcg_tmp);
                 break;
             default:
@@ -11803,8 +11803,8 @@ static void handle_rev(DisasContext *s, int opcode, bool u,
                 tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_rn, off, esize);
             }
         }
-        write_vec_element(s, tcg_rd, rd, 0, MO_64);
-        write_vec_element(s, tcg_rd_hi, rd, 1, MO_64);
+        write_vec_element(s, tcg_rd, rd, 0, MO_UQ);
+        write_vec_element(s, tcg_rd_hi, rd, 1, MO_UQ);

         tcg_temp_free_i64(tcg_rd_hi);
         tcg_temp_free_i64(tcg_rd);
@@ -11839,7 +11839,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
             read_vec_element(s, tcg_op2, rn, pass * 2 + 1, memop);
             tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
             if (accum) {
-                read_vec_element(s, tcg_op1, rd, pass, MO_64);
+                read_vec_element(s, tcg_op1, rd, pass, MO_UQ);
                 tcg_gen_add_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
             }

@@ -11859,11 +11859,11 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,

             tcg_res[pass] = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             genfn(tcg_res[pass], tcg_op);

             if (accum) {
-                read_vec_element(s, tcg_op, rd, pass, MO_64);
+                read_vec_element(s, tcg_op, rd, pass, MO_UQ);
                 if (size == 0) {
                     gen_helper_neon_addl_u16(tcg_res[pass],
                                              tcg_res[pass], tcg_op);
@@ -11879,7 +11879,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
         tcg_res[1] = tcg_const_i64(0);
     }
     for (pass = 0; pass < 2; pass++) {
-        write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+        write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
         tcg_temp_free_i64(tcg_res[pass]);
     }
 }
@@ -11909,7 +11909,7 @@ static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
     }

     for (pass = 0; pass < 2; pass++) {
-        write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+        write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
         tcg_temp_free_i64(tcg_res[pass]);
     }
 }
@@ -12233,12 +12233,12 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
             TCGv_i64 tcg_op = tcg_temp_new_i64();
             TCGv_i64 tcg_res = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);

             handle_2misc_64(s, opcode, u, tcg_res, tcg_op,
                             tcg_rmode, tcg_fpstatus);

-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);

             tcg_temp_free_i64(tcg_res);
             tcg_temp_free_i64(tcg_op);
@@ -12856,7 +12856,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             is_fp16 = true;
             break;
         case MO_UL: /* single precision */
-        case MO_64: /* double precision */
+        case MO_UQ: /* double precision */
             break;
         default:
             unallocated_encoding(s);
@@ -12875,7 +12875,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             }
             is_fp16 = true;
             break;
-        case MO_64:
+        case MO_UQ:
             break;
         default:
             unallocated_encoding(s);
@@ -12886,7 +12886,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     default: /* integer */
         switch (size) {
         case MO_UB:
-        case MO_64:
+        case MO_UQ:
             unallocated_encoding(s);
             return;
         }
@@ -12906,7 +12906,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         index = h << 1 | l;
         rm |= m << 4;
         break;
-    case MO_64:
+    case MO_UQ:
         if (l || !is_q) {
             unallocated_encoding(s);
             return;
@@ -12946,7 +12946,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                                vec_full_reg_offset(s, rn),
                                vec_full_reg_offset(s, rm), fpst,
                                is_q ? 16 : 8, vec_full_reg_size(s), data,
-                               size == MO_64
+                               size == MO_UQ
                                ? gen_helper_gvec_fcmlas_idx
                                : gen_helper_gvec_fcmlah_idx);
             tcg_temp_free_ptr(fpst);
@@ -12976,13 +12976,13 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

         assert(is_fp && is_q && !is_long);

-        read_vec_element(s, tcg_idx, rm, index, MO_64);
+        read_vec_element(s, tcg_idx, rm, index, MO_UQ);

         for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
             TCGv_i64 tcg_op = tcg_temp_new_i64();
             TCGv_i64 tcg_res = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);

             switch (16 * u + opcode) {
             case 0x05: /* FMLS */
@@ -12990,7 +12990,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 gen_helper_vfp_negd(tcg_op, tcg_op);
                 /* fall through */
             case 0x01: /* FMLA */
-                read_vec_element(s, tcg_res, rd, pass, MO_64);
+                read_vec_element(s, tcg_res, rd, pass, MO_UQ);
                 gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, fpst);
                 break;
             case 0x09: /* FMUL */
@@ -13003,7 +13003,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_op);
             tcg_temp_free_i64(tcg_res);
         }
@@ -13241,7 +13241,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 }

                 /* Accumulating op: handle accumulate step */
-                read_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+                read_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);

                 switch (opcode) {
                 case 0x2: /* SMLAL, SMLAL2, UMLAL, UMLAL2 */
@@ -13316,7 +13316,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 }

                 /* Accumulating op: handle accumulate step */
-                read_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+                read_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);

                 switch (opcode) {
                 case 0x2: /* SMLAL, SMLAL2, UMLAL, UMLAL2 */
@@ -13352,7 +13352,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         }

         for (pass = 0; pass < 2; pass++) {
-            write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+            write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_res[pass]);
         }
     }
@@ -13639,14 +13639,14 @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
         tcg_res[1] = tcg_temp_new_i64();

         for (pass = 0; pass < 2; pass++) {
-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
+            read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

             tcg_gen_rotli_i64(tcg_res[pass], tcg_op2, 1);
             tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
         }
-        write_vec_element(s, tcg_res[0], rd, 0, MO_64);
-        write_vec_element(s, tcg_res[1], rd, 1, MO_64);
+        write_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+        write_vec_element(s, tcg_res[1], rd, 1, MO_UQ);

         tcg_temp_free_i64(tcg_op1);
         tcg_temp_free_i64(tcg_op2);
@@ -13750,9 +13750,9 @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
         tcg_res[1] = tcg_temp_new_i64();

         for (pass = 0; pass < 2; pass++) {
-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
-            read_vec_element(s, tcg_op3, ra, pass, MO_64);
+            read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_op2, rm, pass, MO_UQ);
+            read_vec_element(s, tcg_op3, ra, pass, MO_UQ);

             if (op0 == 0) {
                 /* EOR3 */
@@ -13763,8 +13763,8 @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
             }
             tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
         }
-        write_vec_element(s, tcg_res[0], rd, 0, MO_64);
-        write_vec_element(s, tcg_res[1], rd, 1, MO_64);
+        write_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+        write_vec_element(s, tcg_res[1], rd, 1, MO_UQ);

         tcg_temp_free_i64(tcg_op1);
         tcg_temp_free_i64(tcg_op2);
@@ -13832,14 +13832,14 @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
     tcg_res[1] = tcg_temp_new_i64();

     for (pass = 0; pass < 2; pass++) {
-        read_vec_element(s, tcg_op1, rn, pass, MO_64);
-        read_vec_element(s, tcg_op2, rm, pass, MO_64);
+        read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+        read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

         tcg_gen_xor_i64(tcg_res[pass], tcg_op1, tcg_op2);
         tcg_gen_rotri_i64(tcg_res[pass], tcg_res[pass], imm6);
     }
-    write_vec_element(s, tcg_res[0], rd, 0, MO_64);
-    write_vec_element(s, tcg_res[1], rd, 1, MO_64);
+    write_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+    write_vec_element(s, tcg_res[1], rd, 1, MO_UQ);

     tcg_temp_free_i64(tcg_op1);
     tcg_temp_free_i64(tcg_op2);
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index f7c891d..423c461 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1708,7 +1708,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
         tcg_temp_free_i64(t64);
         break;

-    case MO_64:
+    case MO_UQ:
         if (u) {
             if (d) {
                 gen_helper_sve_uqsubi_d(dptr, nptr, val, desc);
@@ -1862,7 +1862,7 @@ static bool do_zz_dbm(DisasContext *s, arg_rr_dbm *a, GVecGen2iFn *gvec_fn)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        gvec_fn(MO_64, vec_full_reg_offset(s, a->rd),
+        gvec_fn(MO_UQ, vec_full_reg_offset(s, a->rd),
                 vec_full_reg_offset(s, a->rn), imm, vsz, vsz);
     }
     return true;
@@ -2076,7 +2076,7 @@ static bool trans_INSR_f(DisasContext *s, arg_rrr_esz *a)
 {
     if (sve_access_check(s)) {
         TCGv_i64 t = tcg_temp_new_i64();
-        tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_64));
+        tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_UQ));
         do_insr_i64(s, a, t);
         tcg_temp_free_i64(t);
     }
@@ -3327,7 +3327,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fno = gen_helper_sve_subri_d,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64,
+          .vece = MO_UQ,
           .scalar_first = true }
     };

@@ -4571,7 +4571,7 @@ static const TCGMemOp dtype_mop[16] = {
     MO_UB, MO_UB, MO_UB, MO_UB,
     MO_SL, MO_UW, MO_UW, MO_UW,
     MO_SW, MO_SW, MO_UL, MO_UL,
-    MO_SB, MO_SB, MO_SB, MO_Q
+    MO_SB, MO_SB, MO_SB, MO_UQ
 };

 #define dtype_msz(x)  (dtype_mop[x] & MO_SIZE)
@@ -5261,7 +5261,7 @@ static bool trans_LD1_zprz(DisasContext *s, arg_LD1_zprz *a)
     case MO_UL:
         fn = gather_load_fn32[be][a->ff][a->xs][a->u][a->msz];
         break;
-    case MO_64:
+    case MO_UQ:
         fn = gather_load_fn64[be][a->ff][a->xs][a->u][a->msz];
         break;
     }
@@ -5289,7 +5289,7 @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
     case MO_UL:
         fn = gather_load_fn32[be][a->ff][0][a->u][a->msz];
         break;
-    case MO_64:
+    case MO_UQ:
         fn = gather_load_fn64[be][a->ff][2][a->u][a->msz];
         break;
     }
@@ -5367,7 +5367,7 @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a)
     case MO_UL:
         fn = scatter_store_fn32[be][a->xs][a->msz];
         break;
-    case MO_64:
+    case MO_UQ:
         fn = scatter_store_fn64[be][a->xs][a->msz];
         break;
     default:
@@ -5395,7 +5395,7 @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
     case MO_UL:
         fn = scatter_store_fn32[be][0][a->msz];
         break;
-    case MO_64:
+    case MO_UQ:
         fn = scatter_store_fn64[be][2][a->msz];
         break;
     }
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 5e0cd63..d71944d 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -40,7 +40,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8)
     uint64_t imm;

     switch (size) {
-    case MO_64:
+    case MO_UQ:
         imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
             (extract32(imm8, 6, 1) ? 0x3fc0 : 0x4000) |
             extract32(imm8, 0, 6);
@@ -1960,7 +1960,7 @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
         }
     }

-    fd = tcg_const_i64(vfp_expand_imm(MO_64, a->imm));
+    fd = tcg_const_i64(vfp_expand_imm(MO_UQ, a->imm));

     for (;;) {
         neon_store_reg64(fd, vd);
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 5510ecd..306ef24 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1171,7 +1171,7 @@ static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
 static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
                                  TCGv_i32 a32, int index)
 {
-    gen_aa32_ld_i64(s, val, a32, index, MO_Q | s->be_data);
+    gen_aa32_ld_i64(s, val, a32, index, MO_UQ | s->be_data);
 }

 static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
@@ -1194,7 +1194,7 @@ static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
 static inline void gen_aa32_st64(DisasContext *s, TCGv_i64 val,
                                  TCGv_i32 a32, int index)
 {
-    gen_aa32_st_i64(s, val, a32, index, MO_Q | s->be_data);
+    gen_aa32_st_i64(s, val, a32, index, MO_UQ | s->be_data);
 }

 DO_GEN_LD(8s, MO_SB)
@@ -1455,7 +1455,7 @@ static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
     case MO_UL:
         tcg_gen_ld32u_i64(var, cpu_env, offset);
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_gen_ld_i64(var, cpu_env, offset);
         break;
     default:
@@ -1502,7 +1502,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     case MO_UL:
         tcg_gen_st32_i64(var, cpu_env, offset);
         break;
-    case MO_64:
+    case MO_UQ:
         tcg_gen_st_i64(var, cpu_env, offset);
         break;
     default:
@@ -4278,7 +4278,7 @@ const GVecGen2i ssra_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .opt_opc = vecop_list_ssra,
       .load_dest = true,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
@@ -4336,7 +4336,7 @@ const GVecGen2i usra_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_64, },
+      .vece = MO_UQ, },
 };

 static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
@@ -4416,7 +4416,7 @@ const GVecGen2i sri_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
@@ -4494,7 +4494,7 @@ const GVecGen2i sli_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
@@ -4590,7 +4590,7 @@ const GVecGen3 mla_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 const GVecGen3 mls_op[4] = {
@@ -4614,7 +4614,7 @@ const GVecGen3 mls_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 /* CMTST : test is "if (X & Y != 0)". */
@@ -4658,7 +4658,7 @@ const GVecGen3 cmtst_op[4] = {
       .fniv = gen_cmtst_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
@@ -4696,7 +4696,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_d,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
@@ -4734,7 +4734,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_d,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
@@ -4772,7 +4772,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_d,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
@@ -4810,7 +4810,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_d,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 /* Translate a NEON data processing instruction.  Return nonzero if the
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 0e863d4..8d62b37 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -323,7 +323,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 {
     if (CODE64(s)) {
-        return ot == MO_UW ? MO_UW : MO_64;
+        return ot == MO_UW ? MO_UW : MO_UQ;
     } else {
         return ot;
     }
@@ -332,14 +332,14 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 /* Select the size of the stack pointer.  */
 static inline TCGMemOp mo_stacksize(DisasContext *s)
 {
-    return CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
+    return CODE64(s) ? MO_UQ : s->ss32 ? MO_UL : MO_UW;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
 static inline TCGMemOp mo_64_32(TCGMemOp ot)
 {
 #ifdef TARGET_X86_64
-    return ot == MO_64 ? MO_64 : MO_UL;
+    return ot == MO_UQ ? MO_UQ : MO_UL;
 #else
     return MO_UL;
 #endif
@@ -378,7 +378,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
         tcg_gen_ext32u_tl(cpu_regs[reg], t0);
         break;
 #ifdef TARGET_X86_64
-    case MO_64:
+    case MO_UQ:
         tcg_gen_mov_tl(cpu_regs[reg], t0);
         break;
 #endif
@@ -456,7 +456,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
 {
     switch (aflag) {
 #ifdef TARGET_X86_64
-    case MO_64:
+    case MO_UQ:
         if (ovr_seg < 0) {
             tcg_gen_mov_tl(s->A0, a0);
             return;
@@ -492,7 +492,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
     if (ovr_seg >= 0) {
         TCGv seg = cpu_seg_base[ovr_seg];

-        if (aflag == MO_64) {
+        if (aflag == MO_UQ) {
             tcg_gen_add_tl(s->A0, a0, seg);
         } else if (CODE64(s)) {
             tcg_gen_ext32u_tl(s->A0, a0);
@@ -1469,7 +1469,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
 static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
                             int is_right, int is_arith)
 {
-    target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
+    target_ulong mask = (ot == MO_UQ ? 0x3f : 0x1f);

     /* load */
     if (op1 == OR_TMP0) {
@@ -1505,7 +1505,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
                             int is_right, int is_arith)
 {
-    int mask = (ot == MO_64 ? 0x3f : 0x1f);
+    int mask = (ot == MO_UQ ? 0x3f : 0x1f);

     /* load */
     if (op1 == OR_TMP0)
@@ -1544,7 +1544,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,

 static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
 {
-    target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
+    target_ulong mask = (ot == MO_UQ ? 0x3f : 0x1f);
     TCGv_i32 t0, t1;

     /* load */
@@ -1630,7 +1630,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
 static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
                           int is_right)
 {
-    int mask = (ot == MO_64 ? 0x3f : 0x1f);
+    int mask = (ot == MO_UQ ? 0x3f : 0x1f);
     int shift;

     /* load */
@@ -1729,7 +1729,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
             gen_helper_rcrl(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
-        case MO_64:
+        case MO_UQ:
             gen_helper_rcrq(s->T0, cpu_env, s->T0, s->T1);
             break;
 #endif
@@ -1748,7 +1748,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
             gen_helper_rcll(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
-        case MO_64:
+        case MO_UQ:
             gen_helper_rclq(s->T0, cpu_env, s->T0, s->T1);
             break;
 #endif
@@ -1764,7 +1764,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
                              bool is_right, TCGv count_in)
 {
-    target_ulong mask = (ot == MO_64 ? 63 : 31);
+    target_ulong mask = (ot == MO_UQ ? 63 : 31);
     TCGv count;

     /* load */
@@ -1983,7 +1983,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s,
     }

     switch (s->aflag) {
-    case MO_64:
+    case MO_UQ:
     case MO_UL:
         havesib = 0;
         if (rm == 4) {
@@ -2192,7 +2192,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
         break;
     case MO_UL:
 #ifdef TARGET_X86_64
-    case MO_64:
+    case MO_UQ:
 #endif
         ret = x86_ldl_code(env, s);
         break;
@@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s)
 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
     TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
+    TCGMemOp a_ot = CODE64(s) ? MO_UQ : s->ss32 ? MO_UL : MO_UW;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -3150,8 +3150,8 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             break;
         case 0x6e: /* movd mm, ea */
 #ifdef TARGET_X86_64
-            if (s->dflag == MO_64) {
-                gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0);
+            if (s->dflag == MO_UQ) {
+                gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 0);
                 tcg_gen_st_tl(s->T0, cpu_env,
                               offsetof(CPUX86State, fpregs[reg].mmx));
             } else
@@ -3166,8 +3166,8 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             break;
         case 0x16e: /* movd xmm, ea */
 #ifdef TARGET_X86_64
-            if (s->dflag == MO_64) {
-                gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0);
+            if (s->dflag == MO_UQ) {
+                gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 0);
                 tcg_gen_addi_ptr(s->ptr0, cpu_env,
                                  offsetof(CPUX86State,xmm_regs[reg]));
                 gen_helper_movq_mm_T0_xmm(s->ptr0, s->T0);
@@ -3337,10 +3337,10 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             break;
         case 0x7e: /* movd ea, mm */
 #ifdef TARGET_X86_64
-            if (s->dflag == MO_64) {
+            if (s->dflag == MO_UQ) {
                 tcg_gen_ld_i64(s->T0, cpu_env,
                                offsetof(CPUX86State,fpregs[reg].mmx));
-                gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 1);
             } else
 #endif
             {
@@ -3351,10 +3351,10 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             break;
         case 0x17e: /* movd ea, xmm */
 #ifdef TARGET_X86_64
-            if (s->dflag == MO_64) {
+            if (s->dflag == MO_UQ) {
                 tcg_gen_ld_i64(s->T0, cpu_env,
                                offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0)));
-                gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 1);
             } else
 #endif
             {
@@ -3785,10 +3785,10 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 }
                 if ((b & 0xff) == 0xf0) {
                     ot = MO_UB;
-                } else if (s->dflag != MO_64) {
+                } else if (s->dflag != MO_UQ) {
                     ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
                 } else {
-                    ot = MO_64;
+                    ot = MO_UQ;
                 }

                 tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[reg]);
@@ -3814,10 +3814,10 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if (!(s->cpuid_ext_features & CPUID_EXT_MOVBE)) {
                     goto illegal_op;
                 }
-                if (s->dflag != MO_64) {
+                if (s->dflag != MO_UQ) {
                     ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
                 } else {
-                    ot = MO_64;
+                    ot = MO_UQ;
                 }

                 gen_lea_modrm(env, s, modrm);
@@ -3861,7 +3861,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     tcg_gen_ext8u_tl(s->A0, cpu_regs[s->vex_v]);
                     tcg_gen_shr_tl(s->T0, s->T0, s->A0);

-                    bound = tcg_const_tl(ot == MO_64 ? 63 : 31);
+                    bound = tcg_const_tl(ot == MO_UQ ? 63 : 31);
                     zero = tcg_const_tl(0);
                     tcg_gen_movcond_tl(TCG_COND_LEU, s->T0, s->A0, bound,
                                        s->T0, zero);
@@ -3894,7 +3894,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
                 tcg_gen_ext8u_tl(s->T1, cpu_regs[s->vex_v]);
                 {
-                    TCGv bound = tcg_const_tl(ot == MO_64 ? 63 : 31);
+                    TCGv bound = tcg_const_tl(ot == MO_UQ ? 63 : 31);
                     /* Note that since we're using BMILG (in order to get O
                        cleared) we need to store the inverse into C.  */
                     tcg_gen_setcond_tl(TCG_COND_LT, cpu_cc_src,
@@ -3929,7 +3929,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp3_i32);
                     break;
 #ifdef TARGET_X86_64
-                case MO_64:
+                case MO_UQ:
                     tcg_gen_mulu2_i64(s->T0, s->T1,
                                       s->T0, cpu_regs[R_EDX]);
                     tcg_gen_mov_i64(cpu_regs[s->vex_v], s->T0);
@@ -3949,7 +3949,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
                 /* Note that by zero-extending the mask operand, we
                    automatically handle zero-extending the result.  */
-                if (ot == MO_64) {
+                if (ot == MO_UQ) {
                     tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]);
                 } else {
                     tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]);
@@ -3967,7 +3967,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
                 /* Note that by zero-extending the mask operand, we
                    automatically handle zero-extending the result.  */
-                if (ot == MO_64) {
+                if (ot == MO_UQ) {
                     tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]);
                 } else {
                     tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]);
@@ -4063,7 +4063,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 }
                 ot = mo_64_32(s->dflag);
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-                if (ot == MO_64) {
+                if (ot == MO_UQ) {
                     tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 63);
                 } else {
                     tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 31);
@@ -4071,12 +4071,12 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if (b == 0x1f7) {
                     tcg_gen_shl_tl(s->T0, s->T0, s->T1);
                 } else if (b == 0x2f7) {
-                    if (ot != MO_64) {
+                    if (ot != MO_UQ) {
                         tcg_gen_ext32s_tl(s->T0, s->T0);
                     }
                     tcg_gen_sar_tl(s->T0, s->T0, s->T1);
                 } else {
-                    if (ot != MO_64) {
+                    if (ot != MO_UQ) {
                         tcg_gen_ext32u_tl(s->T0, s->T0);
                     }
                     tcg_gen_shr_tl(s->T0, s->T0, s->T1);
@@ -4302,7 +4302,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             if ((b & 0xfc) == 0x60) { /* pcmpXstrX */
                 set_cc_op(s, CC_OP_EFLAGS);

-                if (s->dflag == MO_64) {
+                if (s->dflag == MO_UQ) {
                     /* The helper must use entire 64-bit gp registers */
                     val |= 1 << 8;
                 }
@@ -4329,7 +4329,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 ot = mo_64_32(s->dflag);
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
                 b = x86_ldub_code(env, s);
-                if (ot == MO_64) {
+                if (ot == MO_UQ) {
                     tcg_gen_rotri_tl(s->T0, s->T0, b & 63);
                 } else {
                     tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -4630,9 +4630,9 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
            data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
            over 0x66 if both are present.  */
-        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_UL);
+        dflag = (rex_w > 0 ? MO_UQ : prefixes & PREFIX_DATA ? MO_UW : MO_UL);
         /* In 64-bit mode, 0x67 selects 32-bit addressing.  */
-        aflag = (prefixes & PREFIX_ADR ? MO_UL : MO_64);
+        aflag = (prefixes & PREFIX_ADR ? MO_UL : MO_UQ);
     } else {
         /* In 16/32-bit mode, 0x66 selects the opposite data size.  */
         if (s->code32 ^ ((prefixes & PREFIX_DATA) != 0)) {
@@ -4903,7 +4903,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 set_cc_op(s, CC_OP_MULL);
                 break;
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 tcg_gen_mulu2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX],
                                   s->T0, cpu_regs[R_EAX]);
                 tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]);
@@ -4956,7 +4956,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 set_cc_op(s, CC_OP_MULL);
                 break;
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 tcg_gen_muls2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX],
                                   s->T0, cpu_regs[R_EAX]);
                 tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]);
@@ -4980,7 +4980,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_helper_divl_EAX(cpu_env, s->T0);
                 break;
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 gen_helper_divq_EAX(cpu_env, s->T0);
                 break;
 #endif
@@ -4999,7 +4999,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_helper_idivl_EAX(cpu_env, s->T0);
                 break;
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 gen_helper_idivq_EAX(cpu_env, s->T0);
                 break;
 #endif
@@ -5024,7 +5024,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (CODE64(s)) {
             if (op == 2 || op == 4) {
                 /* operand size for jumps is 64 bit */
-                ot = MO_64;
+                ot = MO_UQ;
             } else if (op == 3 || op == 5) {
                 ot = dflag != MO_UW ? MO_UL + (rex_w == 1) : MO_UW;
             } else if (op == 6) {
@@ -5145,10 +5145,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x98: /* CWDE/CBW */
         switch (dflag) {
 #ifdef TARGET_X86_64
-        case MO_64:
+        case MO_UQ:
             gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
             tcg_gen_ext32s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_64, R_EAX, s->T0);
+            gen_op_mov_reg_v(s, MO_UQ, R_EAX, s->T0);
             break;
 #endif
         case MO_UL:
@@ -5168,10 +5168,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x99: /* CDQ/CWD */
         switch (dflag) {
 #ifdef TARGET_X86_64
-        case MO_64:
-            gen_op_mov_v_reg(s, MO_64, s->T0, R_EAX);
+        case MO_UQ:
+            gen_op_mov_v_reg(s, MO_UQ, s->T0, R_EAX);
             tcg_gen_sari_tl(s->T0, s->T0, 63);
-            gen_op_mov_reg_v(s, MO_64, R_EDX, s->T0);
+            gen_op_mov_reg_v(s, MO_UQ, R_EDX, s->T0);
             break;
 #endif
         case MO_UL:
@@ -5212,7 +5212,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         }
         switch (ot) {
 #ifdef TARGET_X86_64
-        case MO_64:
+        case MO_UQ:
             tcg_gen_muls2_i64(cpu_regs[reg], s->T1, s->T0, s->T1);
             tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[reg]);
             tcg_gen_sari_tl(cpu_cc_src, cpu_cc_dst, 63);
@@ -5338,7 +5338,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             }
 #ifdef TARGET_X86_64
-            if (dflag == MO_64) {
+            if (dflag == MO_UQ) {
                 if (!(s->cpuid_ext_features & CPUID_EXT_CX16)) {
                     goto illegal_op;
                 }
@@ -5636,7 +5636,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             ot = mo_b_d(b, dflag);
             switch (s->aflag) {
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 offset_addr = x86_ldq_code(env, s);
                 break;
 #endif
@@ -5671,13 +5671,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         break;
     case 0xb8 ... 0xbf: /* mov R, Iv */
 #ifdef TARGET_X86_64
-        if (dflag == MO_64) {
+        if (dflag == MO_UQ) {
             uint64_t tmp;
             /* 64 bit case */
             tmp = x86_ldq_code(env, s);
             reg = (b & 7) | REX_B(s);
             tcg_gen_movi_tl(s->T0, tmp);
-            gen_op_mov_reg_v(s, MO_64, reg, s->T0);
+            gen_op_mov_reg_v(s, MO_UQ, reg, s->T0);
         } else
 #endif
         {
@@ -7119,10 +7119,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1c8 ... 0x1cf: /* bswap reg */
         reg = (b & 7) | REX_B(s);
 #ifdef TARGET_X86_64
-        if (dflag == MO_64) {
-            gen_op_mov_v_reg(s, MO_64, s->T0, reg);
+        if (dflag == MO_UQ) {
+            gen_op_mov_v_reg(s, MO_UQ, s->T0, reg);
             tcg_gen_bswap64_i64(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_64, reg, s->T0);
+            gen_op_mov_reg_v(s, MO_UQ, reg, s->T0);
         } else
 #endif
         {
@@ -7700,7 +7700,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (mod == 3) {
                 gen_op_mov_v_reg(s, MO_UL, s->T0, rm);
                 /* sign extend */
-                if (d_ot == MO_64) {
+                if (d_ot == MO_UQ) {
                     tcg_gen_ext32s_tl(s->T0, s->T0);
                 }
                 gen_op_mov_reg_v(s, d_ot, reg, s->T0);
@@ -8014,7 +8014,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             rm = (modrm & 7) | REX_B(s);
             reg = ((modrm >> 3) & 7) | rex_r;
             if (CODE64(s))
-                ot = MO_64;
+                ot = MO_UQ;
             else
                 ot = MO_UL;
             if ((prefixes & PREFIX_LOCK) && (reg == 0) &&
@@ -8071,7 +8071,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             rm = (modrm & 7) | REX_B(s);
             reg = ((modrm >> 3) & 7) | rex_r;
             if (CODE64(s))
-                ot = MO_64;
+                ot = MO_UQ;
             else
                 ot = MO_UL;
             if (reg >= 8) {
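
One pattern worth calling out in the hunks above: the recurring `ot == MO_UQ ? 0x3f : 0x1f`. x86 masks shift and rotate counts modulo 64 for 64-bit operands and modulo 32 for everything else, so the translator picks the mask from the operand size. A minimal check, illustrative only:

  #include <assert.h>

  /* Shift-count mask keyed on operand size, as in gen_shift_rm_T1(). */
  static unsigned shift_mask(int is_uq)
  {
      return is_uq ? 0x3f : 0x1f;
  }

  int main(void)
  {
      /* SHL r32 by 33 really shifts by 1; SHL r64 by 65 also by 1. */
      assert((33 & shift_mask(0)) == 1);
      assert((65 & shift_mask(1)) == 1);
      return 0;
  }
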
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 525c7fe..1023f68 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -3766,7 +3766,7 @@ static void gen_scwp(DisasContext *ctx, uint32_t base, int16_t offset,

     tcg_gen_ld_i64(llval, cpu_env, offsetof(CPUMIPSState, llval_wp));
     tcg_gen_atomic_cmpxchg_i64(val, taddr, llval, tval,
-                               eva ? MIPS_HFLAG_UM : ctx->mem_idx, MO_64);
+                               eva ? MIPS_HFLAG_UM : ctx->mem_idx, MO_UQ);
     if (reg1 != 0) {
         tcg_gen_movi_tl(cpu_gpr[reg1], 1);
     }
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 4a5de28..f39dd94 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -2470,10 +2470,10 @@ GEN_QEMU_LOAD_64(ld8u,  DEF_MEMOP(MO_UB))
 GEN_QEMU_LOAD_64(ld16u, DEF_MEMOP(MO_UW))
 GEN_QEMU_LOAD_64(ld32u, DEF_MEMOP(MO_UL))
 GEN_QEMU_LOAD_64(ld32s, DEF_MEMOP(MO_SL))
-GEN_QEMU_LOAD_64(ld64,  DEF_MEMOP(MO_Q))
+GEN_QEMU_LOAD_64(ld64,  DEF_MEMOP(MO_UQ))

 #if defined(TARGET_PPC64)
-GEN_QEMU_LOAD_64(ld64ur, BSWAP_MEMOP(MO_Q))
+GEN_QEMU_LOAD_64(ld64ur, BSWAP_MEMOP(MO_UQ))
 #endif

 #define GEN_QEMU_STORE_TL(stop, op)                                     \
@@ -2502,10 +2502,10 @@ static void glue(gen_qemu_, glue(stop, _i64))(DisasContext *ctx,  \
 GEN_QEMU_STORE_64(st8,  DEF_MEMOP(MO_UB))
 GEN_QEMU_STORE_64(st16, DEF_MEMOP(MO_UW))
 GEN_QEMU_STORE_64(st32, DEF_MEMOP(MO_UL))
-GEN_QEMU_STORE_64(st64, DEF_MEMOP(MO_Q))
+GEN_QEMU_STORE_64(st64, DEF_MEMOP(MO_UQ))

 #if defined(TARGET_PPC64)
-GEN_QEMU_STORE_64(st64r, BSWAP_MEMOP(MO_Q))
+GEN_QEMU_STORE_64(st64r, BSWAP_MEMOP(MO_UQ))
 #endif

 #define GEN_LD(name, ldop, opc, type)                                         \
@@ -2605,7 +2605,7 @@ GEN_LDEPX(lb, DEF_MEMOP(MO_UB), 0x1F, 0x02)
 GEN_LDEPX(lh, DEF_MEMOP(MO_UW), 0x1F, 0x08)
 GEN_LDEPX(lw, DEF_MEMOP(MO_UL), 0x1F, 0x00)
 #if defined(TARGET_PPC64)
-GEN_LDEPX(ld, DEF_MEMOP(MO_Q), 0x1D, 0x00)
+GEN_LDEPX(ld, DEF_MEMOP(MO_UQ), 0x1D, 0x00)
 #endif

 #if defined(TARGET_PPC64)
@@ -2808,7 +2808,7 @@ GEN_STEPX(stb, DEF_MEMOP(MO_UB), 0x1F, 0x06)
 GEN_STEPX(sth, DEF_MEMOP(MO_UW), 0x1F, 0x0C)
 GEN_STEPX(stw, DEF_MEMOP(MO_UL), 0x1F, 0x04)
 #if defined(TARGET_PPC64)
-GEN_STEPX(std, DEF_MEMOP(MO_Q), 0x1d, 0x04)
+GEN_STEPX(std, DEF_MEMOP(MO_UQ), 0x1d, 0x04)
 #endif

 #if defined(TARGET_PPC64)
@@ -3244,7 +3244,7 @@ static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
             TCGv t1 = tcg_temp_new();

             tcg_gen_qemu_ld_tl(t0, EA, ctx->mem_idx, memop);
-            if ((memop & MO_SIZE) == MO_64 || TARGET_LONG_BITS == 32) {
+            if ((memop & MO_SIZE) == MO_UQ || TARGET_LONG_BITS == 32) {
                 tcg_gen_mov_tl(t1, src);
             } else {
                 tcg_gen_ext32u_tl(t1, src);
@@ -3302,7 +3302,7 @@ static void gen_lwat(DisasContext *ctx)
 #ifdef TARGET_PPC64
 static void gen_ldat(DisasContext *ctx)
 {
-    gen_ld_atomic(ctx, DEF_MEMOP(MO_Q));
+    gen_ld_atomic(ctx, DEF_MEMOP(MO_UQ));
 }
 #endif

@@ -3385,7 +3385,7 @@ static void gen_stwat(DisasContext *ctx)
 #ifdef TARGET_PPC64
 static void gen_stdat(DisasContext *ctx)
 {
-    gen_st_atomic(ctx, DEF_MEMOP(MO_Q));
+    gen_st_atomic(ctx, DEF_MEMOP(MO_UQ));
 }
 #endif

@@ -3437,9 +3437,9 @@ STCX(stwcx_, DEF_MEMOP(MO_UL))

 #if defined(TARGET_PPC64)
 /* ldarx */
-LARX(ldarx, DEF_MEMOP(MO_Q))
+LARX(ldarx, DEF_MEMOP(MO_UQ))
 /* stdcx. */
-STCX(stdcx_, DEF_MEMOP(MO_Q))
+STCX(stdcx_, DEF_MEMOP(MO_UQ))

 /* lqarx */
 static void gen_lqarx(DisasContext *ctx)
@@ -3520,7 +3520,7 @@ static void gen_stqcx_(DisasContext *ctx)

     if (tb_cflags(ctx->base.tb) & CF_PARALLEL) {
         if (HAVE_CMPXCHG128) {
-            TCGv_i32 oi = tcg_const_i32(DEF_MEMOP(MO_Q) | MO_ALIGN_16);
+            TCGv_i32 oi = tcg_const_i32(DEF_MEMOP(MO_UQ) | MO_ALIGN_16);
             if (ctx->le_mode) {
                 gen_helper_stqcx_le_parallel(cpu_crf[0], cpu_env,
                                              EA, lo, hi, oi);
@@ -7366,7 +7366,7 @@ GEN_LDEPX(lb, DEF_MEMOP(MO_UB), 0x1F, 0x02)
 GEN_LDEPX(lh, DEF_MEMOP(MO_UW), 0x1F, 0x08)
 GEN_LDEPX(lw, DEF_MEMOP(MO_UL), 0x1F, 0x00)
 #if defined(TARGET_PPC64)
-GEN_LDEPX(ld, DEF_MEMOP(MO_Q), 0x1D, 0x00)
+GEN_LDEPX(ld, DEF_MEMOP(MO_UQ), 0x1D, 0x00)
 #endif

 #undef GEN_ST
@@ -7412,7 +7412,7 @@ GEN_STEPX(stb, DEF_MEMOP(MO_UB), 0x1F, 0x06)
 GEN_STEPX(sth, DEF_MEMOP(MO_UW), 0x1F, 0x0C)
 GEN_STEPX(stw, DEF_MEMOP(MO_UL), 0x1F, 0x04)
 #if defined(TARGET_PPC64)
-GEN_STEPX(std, DEF_MEMOP(MO_Q), 0x1D, 0x04)
+GEN_STEPX(std, DEF_MEMOP(MO_UQ), 0x1D, 0x04)
 #endif

 #undef GEN_CRLOGIC
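
A note on `(memop & MO_SIZE) == MO_UQ` in the gen_ld_atomic() hunk above: MO_SIZE masks out everything but the two size bits, so the comparison asks "is this a 64-bit access?" regardless of sign, byte-swap or alignment flags. Sketch below; flag values are as I read them in tcg/tcg.h at the time of this series, illustrative only:

  #include <assert.h>

  enum {
      MO_UQ    = 3,
      MO_SIZE  = 3,          /* mask covering the two size bits */
      MO_SIGN  = 1 << 2,
      MO_BSWAP = 1 << 3,
  };

  int main(void)
  {
      int memop = MO_UQ | MO_BSWAP;         /* byte-swapped 64-bit access */
      assert((memop & MO_SIZE) == MO_UQ);   /* size test ignores the flags */
      return 0;
  }
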
diff --git a/target/ppc/translate/fp-impl.inc.c b/target/ppc/translate/fp-impl.inc.c
index 9dcff94..3fd54ac 100644
--- a/target/ppc/translate/fp-impl.inc.c
+++ b/target/ppc/translate/fp-impl.inc.c
@@ -855,7 +855,7 @@ static void gen_lfdepx(DisasContext *ctx)
     EA = tcg_temp_new();
     t0 = tcg_temp_new_i64();
     gen_addr_reg_index(ctx, EA);
-    tcg_gen_qemu_ld_i64(t0, EA, PPC_TLB_EPID_LOAD, DEF_MEMOP(MO_Q));
+    tcg_gen_qemu_ld_i64(t0, EA, PPC_TLB_EPID_LOAD, DEF_MEMOP(MO_UQ));
     set_fpr(rD(ctx->opcode), t0);
     tcg_temp_free(EA);
     tcg_temp_free_i64(t0);
@@ -1091,7 +1091,7 @@ static void gen_stfdepx(DisasContext *ctx)
     t0 = tcg_temp_new_i64();
     gen_addr_reg_index(ctx, EA);
     get_fpr(t0, rD(ctx->opcode));
-    tcg_gen_qemu_st_i64(t0, EA, PPC_TLB_EPID_STORE, DEF_MEMOP(MO_Q));
+    tcg_gen_qemu_st_i64(t0, EA, PPC_TLB_EPID_STORE, DEF_MEMOP(MO_UQ));
     tcg_temp_free(EA);
     tcg_temp_free_i64(t0);
 }
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 8aa767e..867dc52 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -290,14 +290,14 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
 }

 /* Logical operations */
-GEN_VXFORM_V(vand, MO_64, tcg_gen_gvec_and, 2, 16);
-GEN_VXFORM_V(vandc, MO_64, tcg_gen_gvec_andc, 2, 17);
-GEN_VXFORM_V(vor, MO_64, tcg_gen_gvec_or, 2, 18);
-GEN_VXFORM_V(vxor, MO_64, tcg_gen_gvec_xor, 2, 19);
-GEN_VXFORM_V(vnor, MO_64, tcg_gen_gvec_nor, 2, 20);
-GEN_VXFORM_V(veqv, MO_64, tcg_gen_gvec_eqv, 2, 26);
-GEN_VXFORM_V(vnand, MO_64, tcg_gen_gvec_nand, 2, 22);
-GEN_VXFORM_V(vorc, MO_64, tcg_gen_gvec_orc, 2, 21);
+GEN_VXFORM_V(vand, MO_UQ, tcg_gen_gvec_and, 2, 16);
+GEN_VXFORM_V(vandc, MO_UQ, tcg_gen_gvec_andc, 2, 17);
+GEN_VXFORM_V(vor, MO_UQ, tcg_gen_gvec_or, 2, 18);
+GEN_VXFORM_V(vxor, MO_UQ, tcg_gen_gvec_xor, 2, 19);
+GEN_VXFORM_V(vnor, MO_UQ, tcg_gen_gvec_nor, 2, 20);
+GEN_VXFORM_V(veqv, MO_UQ, tcg_gen_gvec_eqv, 2, 26);
+GEN_VXFORM_V(vnand, MO_UQ, tcg_gen_gvec_nand, 2, 22);
+GEN_VXFORM_V(vorc, MO_UQ, tcg_gen_gvec_orc, 2, 21);

 #define GEN_VXFORM(name, opc2, opc3)                                    \
 static void glue(gen_, name)(DisasContext *ctx)                         \
@@ -410,27 +410,27 @@ GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1);
 GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vadduwm, MO_UL, tcg_gen_gvec_add, 0, 2);
-GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
+GEN_VXFORM_V(vaddudm, MO_UQ, tcg_gen_gvec_add, 0, 3);
 GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
 GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17);
 GEN_VXFORM_V(vsubuwm, MO_UL, tcg_gen_gvec_sub, 0, 18);
-GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
+GEN_VXFORM_V(vsubudm, MO_UQ, tcg_gen_gvec_sub, 0, 19);
 GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
 GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1);
 GEN_VXFORM_V(vmaxuw, MO_UL, tcg_gen_gvec_umax, 1, 2);
-GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
+GEN_VXFORM_V(vmaxud, MO_UQ, tcg_gen_gvec_umax, 1, 3);
 GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
 GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5);
 GEN_VXFORM_V(vmaxsw, MO_UL, tcg_gen_gvec_smax, 1, 6);
-GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
+GEN_VXFORM_V(vmaxsd, MO_UQ, tcg_gen_gvec_smax, 1, 7);
 GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
 GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9);
 GEN_VXFORM_V(vminuw, MO_UL, tcg_gen_gvec_umin, 1, 10);
-GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
+GEN_VXFORM_V(vminud, MO_UQ, tcg_gen_gvec_umin, 1, 11);
 GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
 GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13);
 GEN_VXFORM_V(vminsw, MO_UL, tcg_gen_gvec_smin, 1, 14);
-GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
+GEN_VXFORM_V(vminsd, MO_UQ, tcg_gen_gvec_smin, 1, 15);
 GEN_VXFORM(vavgub, 1, 16);
 GEN_VXFORM(vabsdub, 1, 16);
 GEN_VXFORM_DUAL(vavgub, PPC_ALTIVEC, PPC_NONE, \
@@ -536,15 +536,15 @@ GEN_VXFORM_V(vslw, MO_UL, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
-GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
+GEN_VXFORM_V(vsld, MO_UQ, tcg_gen_gvec_shlv, 2, 23);
 GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
 GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9);
 GEN_VXFORM_V(vsrw, MO_UL, tcg_gen_gvec_shrv, 2, 10);
-GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
+GEN_VXFORM_V(vsrd, MO_UQ, tcg_gen_gvec_shrv, 2, 27);
 GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
 GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13);
 GEN_VXFORM_V(vsraw, MO_UL, tcg_gen_gvec_sarv, 2, 14);
-GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
+GEN_VXFORM_V(vsrad, MO_UQ, tcg_gen_gvec_sarv, 2, 15);
 GEN_VXFORM(vsrv, 2, 28);
 GEN_VXFORM(vslv, 2, 29);
 GEN_VXFORM(vslo, 6, 16);
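
Why MO_UQ is the right vece for vand, vxor and friends above: for pure bitwise operations the lane width is immaterial -- AND-ing a 16-byte vector byte by byte or in two 64-bit chunks produces identical bytes -- so the widest host-friendly lane is used. A quick self-check, illustrative only:

  #include <assert.h>
  #include <stdint.h>
  #include <string.h>

  int main(void)
  {
      uint8_t a[16], b[16], r8[16], r64[16];
      uint64_t al[2], bl[2], rl[2];

      for (int i = 0; i < 16; i++) {
          a[i] = 0xa5 ^ i;
          b[i] = 0x3c + i;
          r8[i] = a[i] & b[i];          /* 16 x 8-bit lanes */
      }
      memcpy(al, a, 16);
      memcpy(bl, b, 16);
      rl[0] = al[0] & bl[0];            /* 2 x 64-bit lanes */
      rl[1] = al[1] & bl[1];
      memcpy(r64, rl, 16);

      assert(memcmp(r8, r64, 16) == 0); /* same result either way */
      return 0;
  }
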
diff --git a/target/ppc/translate/vsx-impl.inc.c b/target/ppc/translate/vsx-impl.inc.c
index 212817e..d607974 100644
--- a/target/ppc/translate/vsx-impl.inc.c
+++ b/target/ppc/translate/vsx-impl.inc.c
@@ -1475,14 +1475,14 @@ static void glue(gen_, name)(DisasContext *ctx)                      \
                vsr_full_offset(xB(ctx->opcode)), 16, 16);            \
     }

-VSX_LOGICAL(xxland, MO_64, tcg_gen_gvec_and)
-VSX_LOGICAL(xxlandc, MO_64, tcg_gen_gvec_andc)
-VSX_LOGICAL(xxlor, MO_64, tcg_gen_gvec_or)
-VSX_LOGICAL(xxlxor, MO_64, tcg_gen_gvec_xor)
-VSX_LOGICAL(xxlnor, MO_64, tcg_gen_gvec_nor)
-VSX_LOGICAL(xxleqv, MO_64, tcg_gen_gvec_eqv)
-VSX_LOGICAL(xxlnand, MO_64, tcg_gen_gvec_nand)
-VSX_LOGICAL(xxlorc, MO_64, tcg_gen_gvec_orc)
+VSX_LOGICAL(xxland, MO_UQ, tcg_gen_gvec_and)
+VSX_LOGICAL(xxlandc, MO_UQ, tcg_gen_gvec_andc)
+VSX_LOGICAL(xxlor, MO_UQ, tcg_gen_gvec_or)
+VSX_LOGICAL(xxlxor, MO_UQ, tcg_gen_gvec_xor)
+VSX_LOGICAL(xxlnor, MO_UQ, tcg_gen_gvec_nor)
+VSX_LOGICAL(xxleqv, MO_UQ, tcg_gen_gvec_eqv)
+VSX_LOGICAL(xxlnand, MO_UQ, tcg_gen_gvec_nand)
+VSX_LOGICAL(xxlorc, MO_UQ, tcg_gen_gvec_orc)

 #define VSX_XXMRG(name, high)                               \
 static void glue(gen_, name)(DisasContext *ctx)             \
@@ -1535,7 +1535,7 @@ static void gen_xxsel(DisasContext *ctx)
         gen_exception(ctx, POWERPC_EXCP_VSXU);
         return;
     }
-    tcg_gen_gvec_bitsel(MO_64, vsr_full_offset(rt), vsr_full_offset(rc),
+    tcg_gen_gvec_bitsel(MO_UQ, vsr_full_offset(rt), vsr_full_offset(rc),
                         vsr_full_offset(rb), vsr_full_offset(ra), 16, 16);
 }

diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index 9e646f1..5c72db1 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -180,7 +180,7 @@ static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
      * the two 8 byte elements have to be loaded separately. Let's force all
      * 16 byte operations to handle it in a special way.
      */
-    g_assert(es <= MO_64);
+    g_assert(es <= MO_UQ);
 #ifndef HOST_WORDS_BIGENDIAN
     offs ^= (8 - bytes);
 #endif
@@ -190,7 +190,7 @@ static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
 static inline int freg64_offset(uint8_t reg)
 {
     g_assert(reg < 16);
-    return vec_reg_offset(reg, 0, MO_64);
+    return vec_reg_offset(reg, 0, MO_UQ);
 }

 static inline int freg32_offset(uint8_t reg)
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 75d788c..6252262 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -30,8 +30,8 @@
  * Sizes:
  *  On s390x, the operand size (oprsz) and the maximum size (maxsz) are
  *  always 16 (128 bit). What gvec code calls "vece", s390x calls "es",
- *  a.k.a. "element size". These values nicely map to MO_UB ... MO_64. Only
- *  128 bit element size has to be treated in a special way (MO_64 + 1).
+ *  a.k.a. "element size". These values nicely map to MO_UB ... MO_UQ. Only
+ *  128 bit element size has to be treated in a special way (MO_UQ + 1).
  *  We will use ES_* instead of MO_* for this reason in this file.
  *
  * CC handling:
@@ -49,7 +49,7 @@
 #define ES_8    MO_UB
 #define ES_16   MO_UW
 #define ES_32   MO_UL
-#define ES_64   MO_64
+#define ES_64   MO_UQ
 #define ES_128  4

 /* Floating-Point Format */
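
To make the ES_* mapping above concrete: the element size in bytes is 1 << es, and ES_128 deliberately sits one past MO_UQ because TCGMemOp has no 128-bit size encoding. Illustrative only:

  #include <stdio.h>

  enum { ES_8, ES_16, ES_32, ES_64, ES_128 };   /* 0 .. 4; ES_64 == MO_UQ */

  int main(void)
  {
      for (int es = ES_8; es <= ES_128; es++) {
          printf("es=%d -> %2d bytes\n", es, 1 << es);   /* 1,2,4,8,16 */
      }
      return 0;
  }
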
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index f67392c..b59da65 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -82,7 +82,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
         return s390_vec_read_element16(v, enr);
     case MO_UL:
         return s390_vec_read_element32(v, enr);
-    case MO_64:
+    case MO_UQ:
         return s390_vec_read_element64(v, enr);
     default:
         g_assert_not_reached();
@@ -130,7 +130,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
     case MO_UL:
         s390_vec_write_element32(v, enr, data);
         break;
-    case MO_64:
+    case MO_UQ:
         s390_vec_write_element64(v, enr, data);
         break;
     default:
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index 091bab5..499622b 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -2840,7 +2840,7 @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
     default:
         {
             TCGv_i32 r_asi = tcg_const_i32(da.asi);
-            TCGv_i32 r_mop = tcg_const_i32(MO_Q);
+            TCGv_i32 r_mop = tcg_const_i32(MO_UQ);

             save_state(dc);
             gen_helper_ld_asi(t64, cpu_env, addr, r_asi, r_mop);
@@ -2896,7 +2896,7 @@ static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
     default:
         {
             TCGv_i32 r_asi = tcg_const_i32(da.asi);
-            TCGv_i32 r_mop = tcg_const_i32(MO_Q);
+            TCGv_i32 r_mop = tcg_const_i32(MO_UQ);

             save_state(dc);
             gen_helper_st_asi(cpu_env, addr, t64, r_asi, r_mop);
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index dc4fd21..d14afa9 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -432,12 +432,12 @@ typedef enum {
     I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
     I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_UW << 30,
     I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_UL << 30,
-    I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,
+    I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_UQ << 30,

     I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
     I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_UW << 30,
     I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_UL << 30,
-    I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,
+    I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_UQ << 30,

     I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
     I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UW << 30,
@@ -449,8 +449,8 @@ typedef enum {
     I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_UL << 30,
     I3312_STRVS     = 0x3c000000 | LDST_ST << 22 | MO_UL << 30,

-    I3312_LDRVD     = 0x3c000000 | LDST_LD << 22 | MO_64 << 30,
-    I3312_STRVD     = 0x3c000000 | LDST_ST << 22 | MO_64 << 30,
+    I3312_LDRVD     = 0x3c000000 | LDST_LD << 22 | MO_UQ << 30,
+    I3312_STRVD     = 0x3c000000 | LDST_ST << 22 | MO_UQ << 30,

     I3312_LDRVQ     = 0x3c000000 | 3 << 22 | 0 << 30,
     I3312_STRVQ     = 0x3c000000 | 2 << 22 | 0 << 30,
@@ -1595,7 +1595,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     if (opc & MO_SIGN) {
         tcg_out_sxt(s, lb->type, size, lb->datalo_reg, TCG_REG_X0);
     } else {
-        tcg_out_mov(s, size == MO_64, lb->datalo_reg, TCG_REG_X0);
+        tcg_out_mov(s, size == MO_UQ, lb->datalo_reg, TCG_REG_X0);
     }

     tcg_out_goto(s, lb->raddr);
@@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)

     tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_X0, TCG_AREG0);
     tcg_out_mov(s, TARGET_LONG_BITS == 64, TCG_REG_X1, lb->addrlo_reg);
-    tcg_out_mov(s, size == MO_64, TCG_REG_X2, lb->datalo_reg);
+    tcg_out_mov(s, size == MO_UQ, TCG_REG_X2, lb->datalo_reg);
     tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_X3, oi);
     tcg_out_adr(s, TCG_REG_X4, lb->raddr);
     tcg_out_call(s, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]);
@@ -1754,7 +1754,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
             tcg_out_ldst_r(s, I3312_LDRSWX, data_r, addr_r, otype, off_r);
         }
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_out_ldst_r(s, I3312_LDRX, data_r, addr_r, otype, off_r);
         if (bswap) {
             tcg_out_rev64(s, data_r, data_r);
@@ -1789,7 +1789,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
         }
         tcg_out_ldst_r(s, I3312_STRW, data_r, addr_r, otype, off_r);
         break;
-    case MO_64:
+    case MO_UQ:
         if (bswap && data_r != TCG_REG_XZR) {
             tcg_out_rev64(s, TCG_REG_TMP, data_r);
             data_r = TCG_REG_TMP;
@@ -1838,7 +1838,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
     tcg_out_tlb_read(s, addr_reg, memop, &label_ptr, mem_index, 0);
     tcg_out_qemu_st_direct(s, memop, data_reg,
                            TCG_REG_X1, otype, addr_reg);
-    add_qemu_ldst_label(s, false, oi, (memop & MO_SIZE)== MO_64,
+    add_qemu_ldst_label(s, false, oi, (memop & MO_SIZE) == MO_UQ,
                         data_reg, addr_reg, s->code_ptr, label_ptr);
 #else /* !CONFIG_SOFTMMU */
     if (USE_GUEST_BASE) {
@@ -2506,7 +2506,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_smin_vec:
     case INDEX_op_umax_vec:
     case INDEX_op_umin_vec:
-        return vece < MO_64;
+        return vece < MO_UQ;

     default:
         return 0;
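
The I3312_* changes above work because AArch64 load/store encodings carry the access size in bits [31:30] (00=B, 01=H, 10=W, 11=X), and the TCGMemOp size values MO_UB..MO_UQ (0..3) match that field directly. A sketch, illustrative only and assuming LDST_LD == 1 as in this backend's enum:

  #include <assert.h>
  #include <stdint.h>

  enum { MO_UB, MO_UW, MO_UL, MO_UQ };
  enum { LDST_ST = 0, LDST_LD = 1 };

  int main(void)
  {
      uint32_t ldrx = 0x38000000u | LDST_LD << 22 | (uint32_t)MO_UQ << 30;
      assert(ldrx >> 30 == 3);      /* size field selects the 8-byte form */
      return 0;
  }
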
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 05560a2..70eeb8a 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1389,7 +1389,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     default:
         tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0);
         break;
-    case MO_Q:
+    case MO_UQ:
         if (datalo != TCG_REG_R1) {
             tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0);
             tcg_out_mov_reg(s, COND_AL, datahi, TCG_REG_R1);
@@ -1439,7 +1439,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     default:
         argreg = tcg_out_arg_reg32(s, argreg, datalo);
         break;
-    case MO_64:
+    case MO_UQ:
         argreg = tcg_out_arg_reg64(s, argreg, datalo, datahi);
         break;
     }
@@ -1487,7 +1487,7 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
             tcg_out_bswap32(s, COND_AL, datalo, datalo);
         }
         break;
-    case MO_Q:
+    case MO_UQ:
         {
             TCGReg dl = (bswap ? datahi : datalo);
             TCGReg dh = (bswap ? datalo : datahi);
@@ -1548,7 +1548,7 @@ static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
             tcg_out_bswap32(s, COND_AL, datalo, datalo);
         }
         break;
-    case MO_Q:
+    case MO_UQ:
         {
             TCGReg dl = (bswap ? datahi : datalo);
             TCGReg dh = (bswap ? datalo : datahi);
@@ -1641,7 +1641,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
             tcg_out_st32_r(s, cond, datalo, addrlo, addend);
         }
         break;
-    case MO_64:
+    case MO_UQ:
         /* Avoid strd for user-only emulation, to handle unaligned.  */
         if (bswap) {
             tcg_out_bswap32(s, cond, TCG_REG_R0, datahi);
@@ -1686,7 +1686,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
             tcg_out_st32_12(s, COND_AL, datalo, addrlo, 0);
         }
         break;
-    case MO_64:
+    case MO_UQ:
         /* Avoid strd for user-only emulation, to handle unaligned.  */
         if (bswap) {
             tcg_out_bswap32(s, COND_AL, TCG_REG_R0, datahi);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 93e4c63..3a73334 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -902,7 +902,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
             /* imm8 operand: all output lanes selected from input lane 0.  */
             tcg_out8(s, 0);
             break;
-        case MO_64:
+        case MO_UQ:
             tcg_out_vex_modrm(s, OPC_PUNPCKLQDQ, r, a, a);
             break;
         default:
@@ -921,7 +921,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
                                  r, 0, base, offset);
     } else {
         switch (vece) {
-        case MO_64:
+        case MO_UQ:
             tcg_out_vex_modrm_offset(s, OPC_MOVDDUP, r, 0, base, offset);
             break;
         case MO_UL:
@@ -1868,7 +1868,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UL:
         tcg_out_mov(s, TCG_TYPE_I32, data_reg, TCG_REG_EAX);
         break;
-    case MO_Q:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_mov(s, TCG_TYPE_I64, data_reg, TCG_REG_RAX);
         } else if (data_reg == TCG_REG_EDX) {
@@ -1923,7 +1923,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
         tcg_out_st(s, TCG_TYPE_I32, l->datalo_reg, TCG_REG_ESP, ofs);
         ofs += 4;

-        if (s_bits == MO_64) {
+        if (s_bits == MO_UQ) {
             tcg_out_st(s, TCG_TYPE_I32, l->datahi_reg, TCG_REG_ESP, ofs);
             ofs += 4;
         }
@@ -1937,7 +1937,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     } else {
         tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0], TCG_AREG0);
         /* The second argument is already loaded with addrlo.  */
-        tcg_out_mov(s, (s_bits == MO_64 ? TCG_TYPE_I64 : TCG_TYPE_I32),
+        tcg_out_mov(s, (s_bits == MO_UQ ? TCG_TYPE_I64 : TCG_TYPE_I32),
                     tcg_target_call_iarg_regs[2], l->datalo_reg);
         tcg_out_movi(s, TCG_TYPE_I32, tcg_target_call_iarg_regs[3], oi);

@@ -2060,7 +2060,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         }
         break;
 #endif
-    case MO_Q:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_modrm_sib_offset(s, movop + P_REXW + seg, datalo,
                                      base, index, 0, ofs);
@@ -2181,7 +2181,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         }
         tcg_out_modrm_sib_offset(s, movop + seg, datalo, base, index, 0, ofs);
         break;
-    case MO_64:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             if (bswap) {
                 tcg_out_mov(s, TCG_TYPE_I64, scratch, datalo);
@@ -2755,7 +2755,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         OPC_UD2, OPC_UD2, OPC_VPSRLVD, OPC_VPSRLVQ
     };
     static int const sarv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_UW, MO_64.  */
+        /* TODO: AVX512 adds support for MO_UW, MO_UQ.  */
         OPC_UD2, OPC_UD2, OPC_VPSRAVD, OPC_UD2
     };
     static int const shls_insn[4] = {
@@ -2768,7 +2768,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         OPC_UD2, OPC_PSRAW, OPC_PSRAD, OPC_UD2
     };
     static int const abs_insn[4] = {
-        /* TODO: AVX512 adds support for MO_64.  */
+        /* TODO: AVX512 adds support for MO_UQ.  */
         OPC_PABSB, OPC_PABSW, OPC_PABSD, OPC_UD2
     };

@@ -2898,7 +2898,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         sub = 2;
         goto gen_shift;
     case INDEX_op_sari_vec:
-        tcg_debug_assert(vece != MO_64);
+        tcg_debug_assert(vece != MO_UQ);
         sub = 4;
     gen_shift:
         tcg_debug_assert(vece != MO_UB);
@@ -3281,9 +3281,11 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
         if (vece == MO_UB) {
             return -1;
         }
-        /* We can emulate this for MO_64, but it does not pay off
-           unless we're producing at least 4 values.  */
-        if (vece == MO_64) {
+        /*
+         * We can emulate this for MO_UQ, but it does not pay off
+         * unless we're producing at least 4 values.
+         */
+        if (vece == MO_UQ) {
             return type >= TCG_TYPE_V256 ? -1 : 0;
         }
         return 1;
@@ -3305,7 +3307,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
             /* We can expand the operation for MO_UB.  */
             return -1;
         }
-        if (vece == MO_64) {
+        if (vece == MO_UQ) {
             return 0;
         }
         return 1;
@@ -3389,7 +3391,7 @@ static void expand_vec_sari(TCGType type, unsigned vece,
         tcg_temp_free_vec(t2);
         break;

-    case MO_64:
+    case MO_UQ:
         if (imm <= 32) {
             /* We can emulate a small sign extend by performing an arithmetic
              * 32-bit shift and overwriting the high half of a 64-bit logical
@@ -3397,7 +3399,7 @@ static void expand_vec_sari(TCGType type, unsigned vece,
              */
             t1 = tcg_temp_new_vec(type);
             tcg_gen_sari_vec(MO_UL, t1, v1, imm);
-            tcg_gen_shri_vec(MO_64, v0, v1, imm);
+            tcg_gen_shri_vec(MO_UQ, v0, v1, imm);
             vec_gen_4(INDEX_op_x86_blend_vec, type, MO_UL,
                       tcgv_vec_arg(v0), tcgv_vec_arg(v0),
                       tcgv_vec_arg(t1), 0xaa);
@@ -3407,10 +3409,10 @@ static void expand_vec_sari(TCGType type, unsigned vece,
              * the sign-extend, shift and merge.
              */
             t1 = tcg_const_zeros_vec(type);
-            tcg_gen_cmp_vec(TCG_COND_GT, MO_64, t1, t1, v1);
-            tcg_gen_shri_vec(MO_64, v0, v1, imm);
-            tcg_gen_shli_vec(MO_64, t1, t1, 64 - imm);
-            tcg_gen_or_vec(MO_64, v0, v0, t1);
+            tcg_gen_cmp_vec(TCG_COND_GT, MO_UQ, t1, t1, v1);
+            tcg_gen_shri_vec(MO_UQ, v0, v1, imm);
+            tcg_gen_shli_vec(MO_UQ, t1, t1, 64 - imm);
+            tcg_gen_or_vec(MO_UQ, v0, v0, t1);
             tcg_temp_free_vec(t1);
         }
         break;
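
The expand_vec_sari() change above preserves a neat trick: for a 64-bit arithmetic right shift by less than 32, a 32-bit arithmetic shift supplies the sign-filled high half, a 64-bit logical shift supplies the low half, and the 0xaa blend stitches them together. A scalar model of one lane, illustrative only (assumes the compiler's >> on a negative int64_t is arithmetic, as it is on the hosts QEMU supports):

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
      uint64_t v = 0x8000000012345678ull;       /* negative as int64_t */
      int imm = 13;

      uint32_t hi_s = (uint32_t)((int32_t)(v >> 32) >> imm); /* sari MO_UL */
      uint64_t log  = v >> imm;                              /* shri MO_UQ */
      uint64_t res  = ((uint64_t)hi_s << 32) | (uint32_t)log; /* 0xaa blend */

      assert(res == (uint64_t)((int64_t)v >> imm));
      return 0;
  }
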
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index a78fe87..ef31fc8 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1336,7 +1336,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0], TCG_AREG0);

     v0 = l->datalo_reg;
-    if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_UQ) {
         /* We eliminated V0 from the possible output registers, so it
            cannot be clobbered here.  So we must move V1 first.  */
         if (MIPS_BE) {
@@ -1389,7 +1389,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UL:
         i = tcg_out_call_iarg_reg(s, i, l->datalo_reg);
         break;
-    case MO_64:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 32) {
             i = tcg_out_call_iarg_reg2(s, i, l->datalo_reg, l->datahi_reg);
         } else {
@@ -1470,7 +1470,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_SL:
         tcg_out_opc_imm(s, OPC_LW, lo, base, 0);
         break;
-    case MO_Q | MO_BSWAP:
+    case MO_UQ | MO_BSWAP:
         if (TCG_TARGET_REG_BITS == 64) {
             if (use_mips32r2_instructions) {
                 tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
@@ -1499,7 +1499,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
             tcg_out_mov(s, TCG_TYPE_I32, MIPS_BE ? hi : lo, TCG_TMP3);
         }
         break;
-    case MO_Q:
+    case MO_UQ:
         /* Prefer to load from offset 0 first, but allow for overlap.  */
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
@@ -1587,7 +1587,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
         tcg_out_opc_imm(s, OPC_SW, lo, base, 0);
         break;

-    case MO_64 | MO_BSWAP:
+    case MO_UQ | MO_BSWAP:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_bswap64(s, TCG_TMP3, lo);
             tcg_out_opc_imm(s, OPC_SD, TCG_TMP3, base, 0);
@@ -1605,7 +1605,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
             tcg_out_opc_imm(s, OPC_SW, TCG_TMP3, base, 4);
         }
         break;
-    case MO_64:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_opc_imm(s, OPC_SD, lo, base, 0);
         } else {
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 835336a..13a2437 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1445,24 +1445,24 @@ static const uint32_t qemu_ldx_opc[16] = {
     [MO_UB] = LBZX,
     [MO_UW] = LHZX,
     [MO_UL] = LWZX,
-    [MO_Q]  = LDX,
+    [MO_UQ] = LDX,
     [MO_SW] = LHAX,
     [MO_SL] = LWAX,
     [MO_BSWAP | MO_UB] = LBZX,
     [MO_BSWAP | MO_UW] = LHBRX,
     [MO_BSWAP | MO_UL] = LWBRX,
-    [MO_BSWAP | MO_Q]  = LDBRX,
+    [MO_BSWAP | MO_UQ] = LDBRX,
 };

 static const uint32_t qemu_stx_opc[16] = {
     [MO_UB] = STBX,
     [MO_UW] = STHX,
     [MO_UL] = STWX,
-    [MO_Q]  = STDX,
+    [MO_UQ] = STDX,
     [MO_BSWAP | MO_UB] = STBX,
     [MO_BSWAP | MO_UW] = STHBRX,
     [MO_BSWAP | MO_UL] = STWBRX,
-    [MO_BSWAP | MO_Q]  = STDBRX,
+    [MO_BSWAP | MO_UQ] = STDBRX,
 };

 static const uint32_t qemu_exts_opc[4] = {
@@ -1663,7 +1663,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)

     lo = lb->datalo_reg;
     hi = lb->datahi_reg;
-    if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_UQ) {
         tcg_out_mov(s, TCG_TYPE_I32, lo, TCG_REG_R4);
         tcg_out_mov(s, TCG_TYPE_I32, hi, TCG_REG_R3);
     } else if (opc & MO_SIGN) {
@@ -1708,7 +1708,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     hi = lb->datahi_reg;
     if (TCG_TARGET_REG_BITS == 32) {
         switch (s_bits) {
-        case MO_64:
+        case MO_UQ:
 #ifdef TCG_TARGET_CALL_ALIGN_ARGS
             arg |= 1;
 #endif
@@ -1722,7 +1722,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
             break;
         }
     } else {
-        if (s_bits == MO_64) {
+        if (s_bits == MO_UQ) {
             tcg_out_mov(s, TCG_TYPE_I64, arg++, lo);
         } else {
             tcg_out_rld(s, RLDICL, arg++, lo, 0, 64 - (8 << s_bits));
@@ -1775,7 +1775,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     }
 #endif

-    if (TCG_TARGET_REG_BITS == 32 && s_bits == MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && s_bits == MO_UQ) {
         if (opc & MO_BSWAP) {
             tcg_out32(s, ADDI | TAI(TCG_REG_R0, addrlo, 4));
             tcg_out32(s, LWBRX | TAB(datalo, rbase, addrlo));
@@ -1850,7 +1850,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     }
 #endif

-    if (TCG_TARGET_REG_BITS == 32 && s_bits == MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && s_bits == MO_UQ) {
         if (opc & MO_BSWAP) {
             tcg_out32(s, ADDI | TAI(TCG_REG_R0, addrlo, 4));
             tcg_out32(s, STWBRX | SAB(datalo, rbase, addrlo));
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 1905986..90363df 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1068,7 +1068,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     tcg_out_movi(s, TCG_TYPE_PTR, a3, (tcg_target_long)l->raddr);

     tcg_out_call(s, qemu_ld_helpers[opc & (MO_BSWAP | MO_SSIZE)]);
-    tcg_out_mov(s, (opc & MO_SIZE) == MO_64, l->datalo_reg, a0);
+    tcg_out_mov(s, (opc & MO_SIZE) == MO_UQ, l->datalo_reg, a0);

     tcg_out_goto(s, l->raddr);
     return true;
@@ -1150,7 +1150,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_SL:
         tcg_out_opc_imm(s, OPC_LW, lo, base, 0);
         break;
-    case MO_Q:
+    case MO_UQ:
         /* Prefer to load from offset 0 first, but allow for overlap.  */
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
@@ -1225,7 +1225,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_UL:
         tcg_out_opc_store(s, OPC_SW, base, lo, 0);
         break;
-    case MO_64:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_opc_store(s, OPC_SD, base, lo, 0);
         } else {
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index fe42939..db1102e 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1477,10 +1477,10 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
         tcg_out_insn(s, RXY, LGF, data, base, index, disp);
         break;

-    case MO_Q | MO_BSWAP:
+    case MO_UQ | MO_BSWAP:
         tcg_out_insn(s, RXY, LRVG, data, base, index, disp);
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_out_insn(s, RXY, LG, data, base, index, disp);
         break;

@@ -1523,10 +1523,10 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
         }
         break;

-    case MO_Q | MO_BSWAP:
+    case MO_UQ | MO_BSWAP:
         tcg_out_insn(s, RXY, STRVG, data, base, index, disp);
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_out_insn(s, RXY, STG, data, base, index, disp);
         break;

@@ -1660,7 +1660,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     case MO_UL:
         tgen_ext32u(s, TCG_REG_R4, data_reg);
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_out_mov(s, TCG_TYPE_I64, TCG_REG_R4, data_reg);
         break;
     default:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index ac0d3a3..7c50118 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -894,7 +894,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
             tcg_out_arith(s, r, r, 0, SHIFT_SRL);
         }
         break;
-    case MO_64:
+    case MO_UQ:
         break;
     }
 }
@@ -977,7 +977,7 @@ static void build_trampolines(TCGContext *s)
             } else {
                 ra += 1;
             }
-            if ((i & MO_SIZE) == MO_64) {
+            if ((i & MO_SIZE) == MO_UQ) {
                 /* Install the high part of the data.  */
                 tcg_out_arithi(s, ra, ra + 1, 32, SHIFT_SRLX);
                 ra += 2;
@@ -1217,7 +1217,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
             tcg_out_mov(s, TCG_TYPE_REG, data, TCG_REG_O0);
         }
     } else {
-        if ((memop & MO_SIZE) == MO_64) {
+        if ((memop & MO_SIZE) == MO_UQ) {
             tcg_out_arithi(s, TCG_REG_O0, TCG_REG_O0, 32, SHIFT_SLLX);
             tcg_out_arithi(s, TCG_REG_O1, TCG_REG_O1, 0, SHIFT_SRL);
             tcg_out_arith(s, data, TCG_REG_O0, TCG_REG_O1, ARITH_OR);
@@ -1274,7 +1274,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
         param++;
     }
     tcg_out_mov(s, TCG_TYPE_REG, param++, addrz);
-    if (!SPARC64 && (memop & MO_SIZE) == MO_64) {
+    if (!SPARC64 && (memop & MO_SIZE) == MO_UQ) {
         /* Skip the high-part; we'll perform the extract in the trampoline.  */
         param++;
     }
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index e63622c..0c0eea5 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -312,7 +312,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
         return 0x0001000100010001ull * (uint16_t)c;
     case MO_UL:
         return 0x0000000100000001ull * (uint32_t)c;
-    case MO_64:
+    case MO_UQ:
         return c;
     default:
         g_assert_not_reached();
@@ -352,7 +352,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
     case MO_UL:
         tcg_gen_deposit_i64(out, in, in, 32, 32);
         break;
-    case MO_64:
+    case MO_UQ:
         tcg_gen_mov_i64(out, in);
         break;
     default:
@@ -443,7 +443,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     TCGv_ptr t_ptr;
     uint32_t i;

-    assert(vece <= (in_32 ? MO_UL : MO_64));
+    assert(vece <= (in_32 ? MO_UL : MO_UQ));
     assert(in_32 == NULL || in_64 == NULL);

     /* If we're storing 0, expand oprsz to maxsz.  */
@@ -459,7 +459,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
      */
     type = choose_vector_type(NULL, vece, oprsz,
                               (TCG_TARGET_REG_BITS == 64 && in_32 == NULL
-                               && (in_64 == NULL || vece == MO_64)));
+                               && (in_64 == NULL || vece == MO_UQ)));
     if (type != 0) {
         TCGv_vec t_vec = tcg_temp_new_vec(type);

@@ -502,7 +502,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
             /* For 64-bit hosts, use 64-bit constants for "simple" constants
                or when we'd need too many 32-bit stores, or when a 64-bit
                constant is really required.  */
-            if (vece == MO_64
+            if (vece == MO_UQ
                 || (TCG_TARGET_REG_BITS == 64
                     && (in_c == 0 || in_c == -1
                         || !check_size_impl(oprsz, 4)))) {
@@ -534,7 +534,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     tcg_gen_addi_ptr(t_ptr, cpu_env, dofs);
     t_desc = tcg_const_i32(simd_desc(oprsz, maxsz, 0));

-    if (vece == MO_64) {
+    if (vece == MO_UQ) {
         if (in_64) {
             gen_helper_gvec_dup64(t_ptr, t_desc, in_64);
         } else {
@@ -1438,7 +1438,7 @@ void tcg_gen_gvec_dup_i64(unsigned vece, uint32_t dofs, uint32_t oprsz,
                           uint32_t maxsz, TCGv_i64 in)
 {
     check_size_align(oprsz, maxsz, dofs);
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_dup(vece, dofs, oprsz, maxsz, NULL, in, 0);
 }

@@ -1446,7 +1446,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
                           uint32_t oprsz, uint32_t maxsz)
 {
     check_size_align(oprsz, maxsz, dofs);
-    if (vece <= MO_64) {
+    if (vece <= MO_UQ) {
         TCGType type = choose_vector_type(NULL, vece, oprsz, 0);
         if (type != 0) {
             TCGv_vec t_vec = tcg_temp_new_vec(type);
@@ -1512,7 +1512,7 @@ void tcg_gen_gvec_dup64i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint64_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_64, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UQ, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_dup32i(uint32_t dofs, uint32_t oprsz,
@@ -1624,10 +1624,10 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_add64,
           .opt_opc = vecop_list_add,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1655,10 +1655,10 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_adds64,
           .opt_opc = vecop_list_add,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, c, &g[vece]);
 }

@@ -1696,10 +1696,10 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_subs64,
           .opt_opc = vecop_list_sub,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, c, &g[vece]);
 }

@@ -1775,10 +1775,10 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_sub64,
           .opt_opc = vecop_list_sub,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1806,10 +1806,10 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_mul64,
           .opt_opc = vecop_list_mul,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1835,10 +1835,10 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_muls64,
           .opt_opc = vecop_list_mul,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, c, &g[vece]);
 }

@@ -1870,9 +1870,9 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd64,
           .opt_opc = vecop_list,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1896,9 +1896,9 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub64,
           .opt_opc = vecop_list,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1940,9 +1940,9 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1984,9 +1984,9 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2012,9 +2012,9 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2040,9 +2040,9 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2068,9 +2068,9 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2096,9 +2096,9 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2171,10 +2171,10 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_neg64,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2(dofs, aofs, oprsz, maxsz, &g[vece]);
 }

@@ -2234,10 +2234,10 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_abs64,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2(dofs, aofs, oprsz, maxsz, &g[vece]);
 }

@@ -2382,7 +2382,7 @@ static const GVecGen2s gop_ands = {
     .fniv = tcg_gen_and_vec,
     .fno = gen_helper_gvec_ands,
     .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    .vece = MO_64
+    .vece = MO_UQ
 };

 void tcg_gen_gvec_ands(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2407,7 +2407,7 @@ static const GVecGen2s gop_xors = {
     .fniv = tcg_gen_xor_vec,
     .fno = gen_helper_gvec_xors,
     .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    .vece = MO_64
+    .vece = MO_UQ
 };

 void tcg_gen_gvec_xors(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2432,7 +2432,7 @@ static const GVecGen2s gop_ors = {
     .fniv = tcg_gen_or_vec,
     .fno = gen_helper_gvec_ors,
     .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    .vece = MO_64
+    .vece = MO_UQ
 };

 void tcg_gen_gvec_ors(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2491,10 +2491,10 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shl64i,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_debug_assert(shift >= 0 && shift < (8 << vece));
     if (shift == 0) {
         tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
@@ -2542,10 +2542,10 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shr64i,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_debug_assert(shift >= 0 && shift < (8 << vece));
     if (shift == 0) {
         tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
@@ -2607,10 +2607,10 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_sar64i,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_debug_assert(shift >= 0 && shift < (8 << vece));
     if (shift == 0) {
         tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
@@ -2660,7 +2660,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     check_overlap_2(dofs, aofs, maxsz);

     /* If the backend has a scalar expansion, great.  */
-    type = choose_vector_type(g->s_list, vece, oprsz, vece == MO_64);
+    type = choose_vector_type(g->s_list, vece, oprsz, vece == MO_UQ);
     if (type) {
         const TCGOpcode *hold_list = tcg_swap_vecop_list(NULL);
         switch (type) {
@@ -2692,15 +2692,15 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     }

     /* If the backend supports variable vector shifts, also cool.  */
-    type = choose_vector_type(g->v_list, vece, oprsz, vece == MO_64);
+    type = choose_vector_type(g->v_list, vece, oprsz, vece == MO_UQ);
     if (type) {
         const TCGOpcode *hold_list = tcg_swap_vecop_list(NULL);
         TCGv_vec v_shift = tcg_temp_new_vec(type);

-        if (vece == MO_64) {
+        if (vece == MO_UQ) {
             TCGv_i64 sh64 = tcg_temp_new_i64();
             tcg_gen_extu_i32_i64(sh64, shift);
-            tcg_gen_dup_i64_vec(MO_64, v_shift, sh64);
+            tcg_gen_dup_i64_vec(MO_UQ, v_shift, sh64);
             tcg_temp_free_i64(sh64);
         } else {
             tcg_gen_dup_i32_vec(vece, v_shift, shift);
@@ -2738,7 +2738,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     /* Otherwise fall back to integral... */
     if (vece == MO_UL && check_size_impl(oprsz, 4)) {
         expand_2s_i32(dofs, aofs, oprsz, shift, false, g->fni4);
-    } else if (vece == MO_64 && check_size_impl(oprsz, 8)) {
+    } else if (vece == MO_UQ && check_size_impl(oprsz, 8)) {
         TCGv_i64 sh64 = tcg_temp_new_i64();
         tcg_gen_extu_i32_i64(sh64, shift);
         expand_2s_i64(dofs, aofs, oprsz, sh64, false, g->fni8);
@@ -2785,7 +2785,7 @@ void tcg_gen_gvec_shls(unsigned vece, uint32_t dofs, uint32_t aofs,
         .v_list = { INDEX_op_shlv_vec, 0 },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
 }

@@ -2807,7 +2807,7 @@ void tcg_gen_gvec_shrs(unsigned vece, uint32_t dofs, uint32_t aofs,
         .v_list = { INDEX_op_shrv_vec, 0 },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
 }

@@ -2829,7 +2829,7 @@ void tcg_gen_gvec_sars(unsigned vece, uint32_t dofs, uint32_t aofs,
         .v_list = { INDEX_op_sarv_vec, 0 },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
 }

@@ -2895,10 +2895,10 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shl64v,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2958,10 +2958,10 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shr64v,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -3021,10 +3021,10 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_sar64v,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -3140,7 +3140,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
      */
     hold_list = tcg_swap_vecop_list(cmp_list);
     type = choose_vector_type(cmp_list, vece, oprsz,
-                              TCG_TARGET_REG_BITS == 64 && vece == MO_64);
+                              TCG_TARGET_REG_BITS == 64 && vece == MO_UQ);
     switch (type) {
     case TCG_TYPE_V256:
         /* Recall that ARM SVE allows vector sizes that are not a
@@ -3166,7 +3166,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
         break;

     case 0:
-        if (vece == MO_64 && check_size_impl(oprsz, 8)) {
+        if (vece == MO_UQ && check_size_impl(oprsz, 8)) {
             expand_cmp_i64(dofs, aofs, bofs, oprsz, cond);
         } else if (vece == MO_UL && check_size_impl(oprsz, 4)) {
             expand_cmp_i32(dofs, aofs, bofs, oprsz, cond);
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index ff723ab..e8aea38 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -216,7 +216,7 @@ void tcg_gen_mov_vec(TCGv_vec r, TCGv_vec a)
     }
 }

-#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_64 : MO_UL)
+#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_UQ : MO_UL)

 static void do_dupi_vec(TCGv_vec r, unsigned vece, TCGArg a)
 {
@@ -255,10 +255,10 @@ void tcg_gen_dup64i_vec(TCGv_vec r, uint64_t a)
     if (TCG_TARGET_REG_BITS == 32 && a == deposit64(a, 32, 32, a)) {
         do_dupi_vec(r, MO_UL, a);
     } else if (TCG_TARGET_REG_BITS == 64 || a == (uint64_t)(int32_t)a) {
-        do_dupi_vec(r, MO_64, a);
+        do_dupi_vec(r, MO_UQ, a);
     } else {
         TCGv_i64 c = tcg_const_i64(a);
-        tcg_gen_dup_i64_vec(MO_64, r, c);
+        tcg_gen_dup_i64_vec(MO_UQ, r, c);
         tcg_temp_free_i64(c);
     }
 }
@@ -292,10 +292,10 @@ void tcg_gen_dup_i64_vec(unsigned vece, TCGv_vec r, TCGv_i64 a)
     if (TCG_TARGET_REG_BITS == 64) {
         TCGArg ai = tcgv_i64_arg(a);
         vec_gen_2(INDEX_op_dup_vec, type, vece, ri, ai);
-    } else if (vece == MO_64) {
+    } else if (vece == MO_UQ) {
         TCGArg al = tcgv_i32_arg(TCGV_LOW(a));
         TCGArg ah = tcgv_i32_arg(TCGV_HIGH(a));
-        vec_gen_3(INDEX_op_dup2_vec, type, MO_64, ri, al, ah);
+        vec_gen_3(INDEX_op_dup2_vec, type, MO_UQ, ri, al, ah);
     } else {
         TCGArg ai = tcgv_i32_arg(TCGV_LOW(a));
         vec_gen_2(INDEX_op_dup_vec, type, vece, ri, ai);
@@ -709,10 +709,10 @@ static void do_shifts(unsigned vece, TCGv_vec r, TCGv_vec a,
     } else {
         TCGv_vec vec_s = tcg_temp_new_vec(type);

-        if (vece == MO_64) {
+        if (vece == MO_UQ) {
             TCGv_i64 s64 = tcg_temp_new_i64();
             tcg_gen_extu_i32_i64(s64, s);
-            tcg_gen_dup_i64_vec(MO_64, vec_s, s64);
+            tcg_gen_dup_i64_vec(MO_UQ, vec_s, s64);
             tcg_temp_free_i64(s64);
         } else {
             tcg_gen_dup_i32_vec(vece, vec_s, s);
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 447683d..a9f3e13 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2730,7 +2730,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
             op &= ~MO_SIGN;
         }
         break;
-    case MO_64:
+    case MO_UQ:
         if (!is64) {
             tcg_abort();
         }
@@ -2862,7 +2862,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
 {
     TCGMemOp orig_memop;

-    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_UQ) {
         tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
         if (memop & MO_SIGN) {
             tcg_gen_sari_i32(TCGV_HIGH(val), TCGV_LOW(val), 31);
@@ -2881,7 +2881,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         memop &= ~MO_BSWAP;
         /* The bswap primitive requires zero-extended input.  */
-        if ((memop & MO_SIGN) && (memop & MO_SIZE) < MO_64) {
+        if ((memop & MO_SIGN) && (memop & MO_SIZE) < MO_UQ) {
             memop &= ~MO_SIGN;
         }
     }
@@ -2902,7 +2902,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
                 tcg_gen_ext32s_i64(val, val);
             }
             break;
-        case MO_64:
+        case MO_UQ:
             tcg_gen_bswap64_i64(val, val);
             break;
         default:
@@ -2915,7 +2915,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
 {
     TCGv_i64 swap = NULL;

-    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_UQ) {
         tcg_gen_qemu_st_i32(TCGV_LOW(val), addr, idx, memop);
         return;
     }
@@ -2936,7 +2936,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
             tcg_gen_ext32u_i64(swap, val);
             tcg_gen_bswap32_i64(swap, swap);
             break;
-        case MO_64:
+        case MO_UQ:
             tcg_gen_bswap64_i64(swap, val);
             break;
         default:
@@ -3029,8 +3029,8 @@ static void * const table_cmpxchg[16] = {
     [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be,
     [MO_UL | MO_LE] = gen_helper_atomic_cmpxchgl_le,
     [MO_UL | MO_BE] = gen_helper_atomic_cmpxchgl_be,
-    WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
-    WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_cmpxchgq_be)
+    WITH_ATOMIC64([MO_UQ | MO_LE] = gen_helper_atomic_cmpxchgq_le)
+    WITH_ATOMIC64([MO_UQ | MO_BE] = gen_helper_atomic_cmpxchgq_be)
 };

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
@@ -3099,7 +3099,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
             tcg_gen_mov_i64(retv, t1);
         }
         tcg_temp_free_i64(t1);
-    } else if ((memop & MO_SIZE) == MO_64) {
+    } else if ((memop & MO_SIZE) == MO_UQ) {
 #ifdef CONFIG_ATOMIC64
         gen_atomic_cx_i64 gen;

@@ -3207,7 +3207,7 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

-    if ((memop & MO_SIZE) == MO_64) {
+    if ((memop & MO_SIZE) == MO_UQ) {
 #ifdef CONFIG_ATOMIC64
         gen_atomic_op_i64 gen;

@@ -3253,8 +3253,8 @@ static void * const table_##NAME[16] = {                                \
     [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
     [MO_UL | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
     [MO_UL | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
-    WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
-    WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
+    WITH_ATOMIC64([MO_UQ | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
+    WITH_ATOMIC64([MO_UQ | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
 void tcg_gen_atomic_##NAME##_i32                                        \
     (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 4b6ee89..63e9897 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -371,28 +371,29 @@ typedef enum TCGMemOp {
     MO_UB    = MO_8,
     MO_UW    = MO_16,
     MO_UL    = MO_32,
+    MO_UQ    = MO_64,
     MO_SB    = MO_SIGN | MO_8,
     MO_SW    = MO_SIGN | MO_16,
     MO_SL    = MO_SIGN | MO_32,
-    MO_Q     = MO_64,
+    MO_SQ    = MO_SIGN | MO_64,

     MO_LEUW  = MO_LE | MO_UW,
     MO_LEUL  = MO_LE | MO_UL,
     MO_LESW  = MO_LE | MO_SW,
     MO_LESL  = MO_LE | MO_SL,
-    MO_LEQ   = MO_LE | MO_Q,
+    MO_LEQ   = MO_LE | MO_UQ,

     MO_BEUW  = MO_BE | MO_UW,
     MO_BEUL  = MO_BE | MO_UL,
     MO_BESW  = MO_BE | MO_SW,
     MO_BESL  = MO_BE | MO_SL,
-    MO_BEQ   = MO_BE | MO_Q,
+    MO_BEQ   = MO_BE | MO_UQ,

     MO_TEUW  = MO_TE | MO_UW,
     MO_TEUL  = MO_TE | MO_UL,
     MO_TESW  = MO_TE | MO_SW,
     MO_TESL  = MO_TE | MO_SL,
-    MO_TEQ   = MO_TE | MO_Q,
+    MO_TEQ   = MO_TE | MO_UQ,

     MO_SSIZE = MO_SIZE | MO_SIGN,
 } TCGMemOp;
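
For reference, the new names decompose over the TCGMemOp bit layout
defined earlier in this enum (MO_SIZE = 3 masking the size, MO_SIGN = 4
as the sign flag). A tiny self-checking sketch of that assumption, not
part of the patch:

#include <assert.h>

enum { MO_8 = 0, MO_16 = 1, MO_32 = 2, MO_64 = 3,
       MO_SIZE = 3, MO_SIGN = 4 };          /* restated from tcg.h */
enum { MO_UQ = MO_64,                       /* pure rename */
       MO_SQ = MO_SIGN | MO_64 };           /* new signed variant */

int main(void)
{
    assert(MO_UQ == MO_64);                 /* same value, new name */
    assert((MO_SQ & MO_SIZE) == MO_UQ);     /* size extraction intact */
    return 0;
}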
--
1.8.3.1

^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-riscv] [Qemu-devel] [PATCH v2 04/20] tcg: Replace MO_64 with MO_UQ alias
@ 2019-07-22 15:42   ` tony.nguyen
  0 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:42 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, david, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, mst, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, claudio.fontana, qemu-s390x, qemu-ppc,
	amarkovic, pbonzini, aurelien

[-- Attachment #1: Type: text/plain, Size: 134298 bytes --]

Preparation for splitting MO_64 out of TCGMemOp into a new
accelerator-independent MemOp.

As MO_64 will become a value of MemOp, existing TCGMemOp comparisons
and coercions would trigger -Wenum-compare and -Wenum-conversion, so
rename the TCGMemOp constant to MO_UQ (with MO_SQ as its signed
counterpart) ahead of the split.
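
As an illustration, a minimal standalone sketch (toy enums only, not
QEMU code) of the cross-enum comparison these warnings catch once the
64-bit size constant lives in a separate enum:

/* compile with: gcc -Wenum-compare -Wenum-conversion sketch.c */
#include <stdio.h>

typedef enum ToyTCGMemOp {          /* stands in for TCGMemOp */
    TOY_MO_8, TOY_MO_16, TOY_MO_32,
    TOY_MO_UQ = 3,                  /* 64-bit alias, same enum */
} ToyTCGMemOp;

typedef enum ToyMemOp {             /* stands in for the new MemOp */
    TOY_MEMOP_64 = 3,
} ToyMemOp;

int main(void)
{
    ToyTCGMemOp op = TOY_MO_UQ;

    if (op == TOY_MO_UQ) {          /* same-enum compare: no warning */
        printf("64-bit access\n");
    }
    /* "if (op == TOY_MEMOP_64)" would compile but warn under
     * -Wenum-compare: comparison between 'enum ToyTCGMemOp' and
     * 'enum ToyMemOp'.  Keeping MO_UQ inside TCGMemOp avoids this. */
    return 0;
}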

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/arm/sve_helper.c             |   2 +-
 target/arm/translate-a64.c          | 270 ++++++++++++++++++------------------
 target/arm/translate-sve.c          |  18 +--
 target/arm/translate-vfp.inc.c      |   4 +-
 target/arm/translate.c              |  30 ++--
 target/i386/translate.c             | 122 ++++++++--------
 target/mips/translate.c             |   2 +-
 target/ppc/translate.c              |  28 ++--
 target/ppc/translate/fp-impl.inc.c  |   4 +-
 target/ppc/translate/vmx-impl.inc.c |  34 ++---
 target/ppc/translate/vsx-impl.inc.c |  18 +--
 target/s390x/translate.c            |   4 +-
 target/s390x/translate_vx.inc.c     |   6 +-
 target/s390x/vec.h                  |   4 +-
 target/sparc/translate.c            |   4 +-
 tcg/aarch64/tcg-target.inc.c        |  20 +--
 tcg/arm/tcg-target.inc.c            |  12 +-
 tcg/i386/tcg-target.inc.c           |  42 +++---
 tcg/mips/tcg-target.inc.c           |  12 +-
 tcg/ppc/tcg-target.inc.c            |  18 +--
 tcg/riscv/tcg-target.inc.c          |   6 +-
 tcg/s390/tcg-target.inc.c           |  10 +-
 tcg/sparc/tcg-target.inc.c          |   8 +-
 tcg/tcg-op-gvec.c                   | 132 +++++++++---------
 tcg/tcg-op-vec.c                    |  14 +-
 tcg/tcg-op.c                        |  24 ++--
 tcg/tcg.h                           |   9 +-
 27 files changed, 430 insertions(+), 427 deletions(-)

diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
index fa705c4..1cfd746 100644
--- a/target/arm/sve_helper.c
+++ b/target/arm/sve_helper.c
@@ -5165,7 +5165,7 @@ static inline void sve_ldff1_zd(CPUARMState *env, void *vd, void *vg, void *vm,
     target_ulong addr;

     /* Skip to the first true predicate.  */
-    reg_off = find_next_active(vg, 0, reg_max, MO_64);
+    reg_off = find_next_active(vg, 0, reg_max, MO_UQ);
     if (likely(reg_off < reg_max)) {
         /* Perform one normal read, which will fault or not.  */
         set_helper_retaddr(ra);
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index 0b92e6d..3f9d103 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -463,7 +463,7 @@ static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
 /* Offset of the high half of the 128 bit vector Qn */
 static inline int fp_reg_hi_offset(DisasContext *s, int regno)
 {
-    return vec_reg_offset(s, regno, 1, MO_64);
+    return vec_reg_offset(s, regno, 1, MO_UQ);
 }

 /* Convenience accessors for reading and writing single and double
@@ -476,7 +476,7 @@ static TCGv_i64 read_fp_dreg(DisasContext *s, int reg)
 {
     TCGv_i64 v = tcg_temp_new_i64();

-    tcg_gen_ld_i64(v, cpu_env, fp_reg_offset(s, reg, MO_64));
+    tcg_gen_ld_i64(v, cpu_env, fp_reg_offset(s, reg, MO_UQ));
     return v;
 }

@@ -501,7 +501,7 @@ static TCGv_i32 read_fp_hreg(DisasContext *s, int reg)
  */
 static void clear_vec_high(DisasContext *s, bool is_q, int rd)
 {
-    unsigned ofs = fp_reg_offset(s, rd, MO_64);
+    unsigned ofs = fp_reg_offset(s, rd, MO_UQ);
     unsigned vsz = vec_full_reg_size(s);

     if (!is_q) {
@@ -516,7 +516,7 @@ static void clear_vec_high(DisasContext *s, bool is_q, int rd)

 void write_fp_dreg(DisasContext *s, int reg, TCGv_i64 v)
 {
-    unsigned ofs = fp_reg_offset(s, reg, MO_64);
+    unsigned ofs = fp_reg_offset(s, reg, MO_UQ);

     tcg_gen_st_i64(v, cpu_env, ofs);
     clear_vec_high(s, false, reg);
@@ -918,7 +918,7 @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)
 {
     /* This writes the bottom N bits of a 128 bit wide vector to memory */
     TCGv_i64 tmp = tcg_temp_new_i64();
-    tcg_gen_ld_i64(tmp, cpu_env, fp_reg_offset(s, srcidx, MO_64));
+    tcg_gen_ld_i64(tmp, cpu_env, fp_reg_offset(s, srcidx, MO_UQ));
     if (size < 4) {
         tcg_gen_qemu_st_i64(tmp, tcg_addr, get_mem_index(s),
                             s->be_data + size);
@@ -928,10 +928,10 @@ static void do_fp_st(DisasContext *s, int srcidx, TCGv_i64 tcg_addr, int size)

         tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
         tcg_gen_qemu_st_i64(tmp, be ? tcg_hiaddr : tcg_addr, get_mem_index(s),
-                            s->be_data | MO_Q);
+                            s->be_data | MO_UQ);
         tcg_gen_ld_i64(tmp, cpu_env, fp_reg_hi_offset(s, srcidx));
         tcg_gen_qemu_st_i64(tmp, be ? tcg_addr : tcg_hiaddr, get_mem_index(s),
-                            s->be_data | MO_Q);
+                            s->be_data | MO_UQ);
         tcg_temp_free_i64(tcg_hiaddr);
     }

@@ -960,13 +960,13 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)

         tcg_gen_addi_i64(tcg_hiaddr, tcg_addr, 8);
         tcg_gen_qemu_ld_i64(tmplo, be ? tcg_hiaddr : tcg_addr, get_mem_index(s),
-                            s->be_data | MO_Q);
+                            s->be_data | MO_UQ);
         tcg_gen_qemu_ld_i64(tmphi, be ? tcg_addr : tcg_hiaddr, get_mem_index(s),
-                            s->be_data | MO_Q);
+                            s->be_data | MO_UQ);
         tcg_temp_free_i64(tcg_hiaddr);
     }

-    tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_64));
+    tcg_gen_st_i64(tmplo, cpu_env, fp_reg_offset(s, destidx, MO_UQ));
     tcg_gen_st_i64(tmphi, cpu_env, fp_reg_hi_offset(s, destidx));

     tcg_temp_free_i64(tmplo);
@@ -1011,8 +1011,8 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
     case MO_SL:
         tcg_gen_ld32s_i64(tcg_dest, cpu_env, vect_off);
         break;
-    case MO_64:
-    case MO_64|MO_SIGN:
+    case MO_UQ:
+    case MO_SQ:
         tcg_gen_ld_i64(tcg_dest, cpu_env, vect_off);
         break;
     default:
@@ -1061,7 +1061,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
     case MO_UL:
         tcg_gen_st32_i64(tcg_src, cpu_env, vect_off);
         break;
-    case MO_64:
+    case MO_UQ:
         tcg_gen_st_i64(tcg_src, cpu_env, vect_off);
         break;
     default:
@@ -2207,7 +2207,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
         g_assert(size >= 2);
         if (size == 2) {
             /* The pair must be single-copy atomic for the doubleword.  */
-            memop |= MO_64 | MO_ALIGN;
+            memop |= MO_UQ | MO_ALIGN;
             tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx, memop);
             if (s->be_data == MO_LE) {
                 tcg_gen_extract_i64(cpu_reg(s, rt), cpu_exclusive_val, 0, 32);
@@ -2219,7 +2219,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
         } else {
             /* The pair must be single-copy atomic for *each* doubleword, not
                the entire quadword, however it must be quadword aligned.  */
-            memop |= MO_64;
+            memop |= MO_UQ;
             tcg_gen_qemu_ld_i64(cpu_exclusive_val, addr, idx,
                                 memop | MO_ALIGN_16);

@@ -2271,7 +2271,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
             tcg_gen_atomic_cmpxchg_i64(tmp, cpu_exclusive_addr,
                                        cpu_exclusive_val, tmp,
                                        get_mem_index(s),
-                                       MO_64 | MO_ALIGN | s->be_data);
+                                       MO_UQ | MO_ALIGN | s->be_data);
             tcg_gen_setcond_i64(TCG_COND_NE, tmp, tmp, cpu_exclusive_val);
         } else if (tb_cflags(s->base.tb) & CF_PARALLEL) {
             if (!HAVE_CMPXCHG128) {
@@ -2355,7 +2355,7 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
         }

         tcg_gen_atomic_cmpxchg_i64(cmp, clean_addr, cmp, val, memidx,
-                                   MO_64 | MO_ALIGN | s->be_data);
+                                   MO_UQ | MO_ALIGN | s->be_data);
         tcg_temp_free_i64(val);

         if (s->be_data == MO_LE) {
@@ -2389,9 +2389,9 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,

         /* Load the two words, in memory order.  */
         tcg_gen_qemu_ld_i64(d1, clean_addr, memidx,
-                            MO_64 | MO_ALIGN_16 | s->be_data);
+                            MO_UQ | MO_ALIGN_16 | s->be_data);
         tcg_gen_addi_i64(a2, clean_addr, 8);
-        tcg_gen_qemu_ld_i64(d2, a2, memidx, MO_64 | s->be_data);
+        tcg_gen_qemu_ld_i64(d2, a2, memidx, MO_UQ | s->be_data);

         /* Compare the two words, also in memory order.  */
         tcg_gen_setcond_i64(TCG_COND_EQ, c1, d1, s1);
@@ -2401,8 +2401,8 @@ static void gen_compare_and_swap_pair(DisasContext *s, int rs, int rt,
         /* If compare equal, write back new data, else write back old data.  */
         tcg_gen_movcond_i64(TCG_COND_NE, c1, c2, zero, t1, d1);
         tcg_gen_movcond_i64(TCG_COND_NE, c2, c2, zero, t2, d2);
-        tcg_gen_qemu_st_i64(c1, clean_addr, memidx, MO_64 | s->be_data);
-        tcg_gen_qemu_st_i64(c2, a2, memidx, MO_64 | s->be_data);
+        tcg_gen_qemu_st_i64(c1, clean_addr, memidx, MO_UQ | s->be_data);
+        tcg_gen_qemu_st_i64(c2, a2, memidx, MO_UQ | s->be_data);
         tcg_temp_free_i64(a2);
         tcg_temp_free_i64(c1);
         tcg_temp_free_i64(c2);
@@ -5271,7 +5271,7 @@ static void handle_fp_compare(DisasContext *s, int size,
     TCGv_i64 tcg_flags = tcg_temp_new_i64();
     TCGv_ptr fpst = get_fpstatus_ptr(size == MO_UW);

-    if (size == MO_64) {
+    if (size == MO_UQ) {
         TCGv_i64 tcg_vn, tcg_vm;

         tcg_vn = read_fp_dreg(s, rn);
@@ -5357,7 +5357,7 @@ static void disas_fp_compare(DisasContext *s, uint32_t insn)
         size = MO_UL;
         break;
     case 1:
-        size = MO_64;
+        size = MO_UQ;
         break;
     case 3:
         size = MO_UW;
@@ -5408,7 +5408,7 @@ static void disas_fp_ccomp(DisasContext *s, uint32_t insn)
         size = MO_UL;
         break;
     case 1:
-        size = MO_64;
+        size = MO_UQ;
         break;
     case 3:
         size = MO_UW;
@@ -5474,7 +5474,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
         sz = MO_UL;
         break;
     case 1:
-        sz = MO_64;
+        sz = MO_UQ;
         break;
     case 3:
         sz = MO_UW;
@@ -6279,7 +6279,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
         sz = MO_UL;
         break;
     case 1:
-        sz = MO_64;
+        sz = MO_UQ;
         break;
     case 3:
         sz = MO_UW;
@@ -6585,7 +6585,7 @@ static void handle_fmov(DisasContext *s, int rd, int rn, int type, bool itof)
             break;
         case 1:
             /* 64 bit */
-            tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_64));
+            tcg_gen_ld_i64(tcg_rd, cpu_env, fp_reg_offset(s, rn, MO_UQ));
             break;
         case 2:
             /* 64 bits from top half */
@@ -6819,9 +6819,9 @@ static void disas_simd_ext(DisasContext *s, uint32_t insn)
      * extracting 64 bits from a 64:64 concatenation.
      */
     if (!is_q) {
-        read_vec_element(s, tcg_resl, rn, 0, MO_64);
+        read_vec_element(s, tcg_resl, rn, 0, MO_UQ);
         if (pos != 0) {
-            read_vec_element(s, tcg_resh, rm, 0, MO_64);
+            read_vec_element(s, tcg_resh, rm, 0, MO_UQ);
             do_ext64(s, tcg_resh, tcg_resl, pos);
         }
         tcg_gen_movi_i64(tcg_resh, 0);
@@ -6839,22 +6839,22 @@ static void disas_simd_ext(DisasContext *s, uint32_t insn)
             pos -= 64;
         }

-        read_vec_element(s, tcg_resl, elt->reg, elt->elt, MO_64);
+        read_vec_element(s, tcg_resl, elt->reg, elt->elt, MO_UQ);
         elt++;
-        read_vec_element(s, tcg_resh, elt->reg, elt->elt, MO_64);
+        read_vec_element(s, tcg_resh, elt->reg, elt->elt, MO_UQ);
         elt++;
         if (pos != 0) {
             do_ext64(s, tcg_resh, tcg_resl, pos);
             tcg_hh = tcg_temp_new_i64();
-            read_vec_element(s, tcg_hh, elt->reg, elt->elt, MO_64);
+            read_vec_element(s, tcg_hh, elt->reg, elt->elt, MO_UQ);
             do_ext64(s, tcg_hh, tcg_resh, pos);
             tcg_temp_free_i64(tcg_hh);
         }
     }

-    write_vec_element(s, tcg_resl, rd, 0, MO_64);
+    write_vec_element(s, tcg_resl, rd, 0, MO_UQ);
     tcg_temp_free_i64(tcg_resl);
-    write_vec_element(s, tcg_resh, rd, 1, MO_64);
+    write_vec_element(s, tcg_resh, rd, 1, MO_UQ);
     tcg_temp_free_i64(tcg_resh);
 }

@@ -6895,12 +6895,12 @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
     tcg_resh = tcg_temp_new_i64();

     if (is_tblx) {
-        read_vec_element(s, tcg_resl, rd, 0, MO_64);
+        read_vec_element(s, tcg_resl, rd, 0, MO_UQ);
     } else {
         tcg_gen_movi_i64(tcg_resl, 0);
     }
     if (is_tblx && is_q) {
-        read_vec_element(s, tcg_resh, rd, 1, MO_64);
+        read_vec_element(s, tcg_resh, rd, 1, MO_UQ);
     } else {
         tcg_gen_movi_i64(tcg_resh, 0);
     }
@@ -6908,11 +6908,11 @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
     tcg_idx = tcg_temp_new_i64();
     tcg_regno = tcg_const_i32(rn);
     tcg_numregs = tcg_const_i32(len + 1);
-    read_vec_element(s, tcg_idx, rm, 0, MO_64);
+    read_vec_element(s, tcg_idx, rm, 0, MO_UQ);
     gen_helper_simd_tbl(tcg_resl, cpu_env, tcg_resl, tcg_idx,
                         tcg_regno, tcg_numregs);
     if (is_q) {
-        read_vec_element(s, tcg_idx, rm, 1, MO_64);
+        read_vec_element(s, tcg_idx, rm, 1, MO_UQ);
         gen_helper_simd_tbl(tcg_resh, cpu_env, tcg_resh, tcg_idx,
                             tcg_regno, tcg_numregs);
     }
@@ -6920,9 +6920,9 @@ static void disas_simd_tb(DisasContext *s, uint32_t insn)
     tcg_temp_free_i32(tcg_regno);
     tcg_temp_free_i32(tcg_numregs);

-    write_vec_element(s, tcg_resl, rd, 0, MO_64);
+    write_vec_element(s, tcg_resl, rd, 0, MO_UQ);
     tcg_temp_free_i64(tcg_resl);
-    write_vec_element(s, tcg_resh, rd, 1, MO_64);
+    write_vec_element(s, tcg_resh, rd, 1, MO_UQ);
     tcg_temp_free_i64(tcg_resh);
 }

@@ -7009,9 +7009,9 @@ static void disas_simd_zip_trn(DisasContext *s, uint32_t insn)

     tcg_temp_free_i64(tcg_res);

-    write_vec_element(s, tcg_resl, rd, 0, MO_64);
+    write_vec_element(s, tcg_resl, rd, 0, MO_UQ);
     tcg_temp_free_i64(tcg_resl);
-    write_vec_element(s, tcg_resh, rd, 1, MO_64);
+    write_vec_element(s, tcg_resh, rd, 1, MO_UQ);
     tcg_temp_free_i64(tcg_resh);
 }

@@ -7625,9 +7625,9 @@ static void disas_simd_mod_imm(DisasContext *s, uint32_t insn)
     } else {
         /* ORR or BIC, with BIC negation to AND handled above.  */
         if (is_neg) {
-            gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_andi, MO_64);
+            gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_andi, MO_UQ);
         } else {
-            gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_ori, MO_64);
+            gen_gvec_fn2i(s, is_q, rd, rd, imm, tcg_gen_gvec_ori, MO_UQ);
         }
     }
 }
@@ -7702,7 +7702,7 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
                 size = MO_UW;
             }
         } else {
-            size = extract32(size, 0, 1) ? MO_64 : MO_UL;
+            size = extract32(size, 0, 1) ? MO_UQ : MO_UL;
         }

         if (!fp_access_check(s)) {
@@ -7716,13 +7716,13 @@ static void disas_simd_scalar_pairwise(DisasContext *s, uint32_t insn)
         return;
     }

-    if (size == MO_64) {
+    if (size == MO_UQ) {
         TCGv_i64 tcg_op1 = tcg_temp_new_i64();
         TCGv_i64 tcg_op2 = tcg_temp_new_i64();
         TCGv_i64 tcg_res = tcg_temp_new_i64();

-        read_vec_element(s, tcg_op1, rn, 0, MO_64);
-        read_vec_element(s, tcg_op2, rn, 1, MO_64);
+        read_vec_element(s, tcg_op1, rn, 0, MO_UQ);
+        read_vec_element(s, tcg_op2, rn, 1, MO_UQ);

         switch (opcode) {
         case 0x3b: /* ADDP */
@@ -8085,9 +8085,9 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
     }

     if (!is_q) {
-        write_vec_element(s, tcg_final, rd, 0, MO_64);
+        write_vec_element(s, tcg_final, rd, 0, MO_UQ);
     } else {
-        write_vec_element(s, tcg_final, rd, 1, MO_64);
+        write_vec_element(s, tcg_final, rd, 1, MO_UQ);
     }

     if (round) {
@@ -8155,9 +8155,9 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             genfn(tcg_op, cpu_env, tcg_op, tcg_shift);
-            write_vec_element(s, tcg_op, rd, pass, MO_64);
+            write_vec_element(s, tcg_op, rd, pass, MO_UQ);

             tcg_temp_free_i64(tcg_op);
         }
@@ -8228,11 +8228,11 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
     TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
     int pass;

-    if (fracbits || size == MO_64) {
+    if (fracbits || size == MO_UQ) {
         tcg_shift = tcg_const_i32(fracbits);
     }

-    if (size == MO_64) {
+    if (size == MO_UQ) {
         TCGv_i64 tcg_int64 = tcg_temp_new_i64();
         TCGv_i64 tcg_double = tcg_temp_new_i64();

@@ -8249,7 +8249,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
             if (elements == 1) {
                 write_fp_dreg(s, rd, tcg_double);
             } else {
-                write_vec_element(s, tcg_double, rd, pass, MO_64);
+                write_vec_element(s, tcg_double, rd, pass, MO_UQ);
             }
         }

@@ -8331,7 +8331,7 @@ static void handle_simd_shift_intfp_conv(DisasContext *s, bool is_scalar,
     int immhb = immh << 3 | immb;

     if (immh & 8) {
-        size = MO_64;
+        size = MO_UQ;
         if (!is_scalar && !is_q) {
             unallocated_encoding(s);
             return;
@@ -8376,7 +8376,7 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     TCGv_i32 tcg_rmode, tcg_shift;

     if (immh & 0x8) {
-        size = MO_64;
+        size = MO_UQ;
         if (!is_scalar && !is_q) {
             unallocated_encoding(s);
             return;
@@ -8408,19 +8408,19 @@ static void handle_simd_shift_fpint_conv(DisasContext *s, bool is_scalar,
     fracbits = (16 << size) - immhb;
     tcg_shift = tcg_const_i32(fracbits);

-    if (size == MO_64) {
+    if (size == MO_UQ) {
         int maxpass = is_scalar ? 1 : 2;

         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             if (is_u) {
                 gen_helper_vfp_touqd(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
             } else {
                 gen_helper_vfp_tosqd(tcg_op, tcg_op, tcg_shift, tcg_fpstatus);
             }
-            write_vec_element(s, tcg_op, rd, pass, MO_64);
+            write_vec_element(s, tcg_op, rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_op);
         }
         clear_vec_high(s, is_q, rd);
@@ -8601,7 +8601,7 @@ static void disas_simd_scalar_three_reg_diff(DisasContext *s, uint32_t insn)
             tcg_gen_neg_i64(tcg_res, tcg_res);
             /* fall through */
         case 0x9: /* SQDMLAL, SQDMLAL2 */
-            read_vec_element(s, tcg_op1, rd, 0, MO_64);
+            read_vec_element(s, tcg_op1, rd, 0, MO_UQ);
             gen_helper_neon_addl_saturate_s64(tcg_res, cpu_env,
                                               tcg_res, tcg_op1);
             break;
@@ -8751,8 +8751,8 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_res = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
+            read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

             switch (fpopcode) {
             case 0x39: /* FMLS */
@@ -8760,7 +8760,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 gen_helper_vfp_negd(tcg_op1, tcg_op1);
                 /* fall through */
             case 0x19: /* FMLA */
-                read_vec_element(s, tcg_res, rd, pass, MO_64);
+                read_vec_element(s, tcg_res, rd, pass, MO_UQ);
                 gen_helper_vfp_muladdd(tcg_res, tcg_op1, tcg_op2,
                                        tcg_res, fpst);
                 break;
@@ -8820,7 +8820,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 g_assert_not_reached();
             }

-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);

             tcg_temp_free_i64(tcg_res);
             tcg_temp_free_i64(tcg_op1);
@@ -8905,7 +8905,7 @@ static void handle_3same_float(DisasContext *s, int size, int elements,
                 TCGv_i64 tcg_tmp = tcg_temp_new_i64();

                 tcg_gen_extu_i32_i64(tcg_tmp, tcg_res);
-                write_vec_element(s, tcg_tmp, rd, pass, MO_64);
+                write_vec_element(s, tcg_tmp, rd, pass, MO_UQ);
                 tcg_temp_free_i64(tcg_tmp);
             } else {
                 write_vec_element_i32(s, tcg_res, rd, pass, MO_UL);
@@ -9381,7 +9381,7 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
                                    bool is_scalar, bool is_u, bool is_q,
                                    int size, int rn, int rd)
 {
-    bool is_double = (size == MO_64);
+    bool is_double = (size == MO_UQ);
     TCGv_ptr fpst;

     if (!fp_access_check(s)) {
@@ -9419,13 +9419,13 @@ static void handle_2misc_fcmp_zero(DisasContext *s, int opcode,
         }

         for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             if (swap) {
                 genfn(tcg_res, tcg_zero, tcg_op, fpst);
             } else {
                 genfn(tcg_res, tcg_op, tcg_zero, fpst);
             }
-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);
         }
         tcg_temp_free_i64(tcg_res);
         tcg_temp_free_i64(tcg_zero);
@@ -9526,7 +9526,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
         int pass;

         for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             switch (opcode) {
             case 0x3d: /* FRECPE */
                 gen_helper_recpe_f64(tcg_res, tcg_op, fpst);
@@ -9540,7 +9540,7 @@ static void handle_2misc_reciprocal(DisasContext *s, int opcode,
             default:
                 g_assert_not_reached();
             }
-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);
         }
         tcg_temp_free_i64(tcg_res);
         tcg_temp_free_i64(tcg_op);
@@ -9615,7 +9615,7 @@ static void handle_2misc_narrow(DisasContext *s, bool scalar,
         if (scalar) {
             read_vec_element(s, tcg_op, rn, pass, size + 1);
         } else {
-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
         }
         tcg_res[pass] = tcg_temp_new_i32();

@@ -9711,15 +9711,15 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,
         int pass;

         for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
-            read_vec_element(s, tcg_rn, rn, pass, MO_64);
-            read_vec_element(s, tcg_rd, rd, pass, MO_64);
+            read_vec_element(s, tcg_rn, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_rd, rd, pass, MO_UQ);

             if (is_u) { /* USQADD */
                 gen_helper_neon_uqadd_s64(tcg_rd, cpu_env, tcg_rn, tcg_rd);
             } else { /* SUQADD */
                 gen_helper_neon_sqadd_u64(tcg_rd, cpu_env, tcg_rn, tcg_rd);
             }
-            write_vec_element(s, tcg_rd, rd, pass, MO_64);
+            write_vec_element(s, tcg_rd, rd, pass, MO_UQ);
         }
         tcg_temp_free_i64(tcg_rd);
         tcg_temp_free_i64(tcg_rn);
@@ -9776,7 +9776,7 @@ static void handle_2misc_satacc(DisasContext *s, bool is_scalar, bool is_u,

             if (is_scalar) {
                 TCGv_i64 tcg_zero = tcg_const_i64(0);
-                write_vec_element(s, tcg_zero, rd, 0, MO_64);
+                write_vec_element(s, tcg_zero, rd, 0, MO_UQ);
                 tcg_temp_free_i64(tcg_zero);
             }
             write_vec_element_i32(s, tcg_rd, rd, pass, MO_UL);
@@ -10146,7 +10146,7 @@ static void handle_vec_simd_wshli(DisasContext *s, bool is_q, bool is_u,
      * so if rd == rn we would overwrite parts of our input.
      * So load everything right now and use shifts in the main loop.
      */
-    read_vec_element(s, tcg_rn, rn, is_q ? 1 : 0, MO_64);
+    read_vec_element(s, tcg_rn, rn, is_q ? 1 : 0, MO_UQ);

     for (i = 0; i < elements; i++) {
         tcg_gen_shri_i64(tcg_rd, tcg_rn, i * esize);
@@ -10183,7 +10183,7 @@ static void handle_vec_simd_shrn(DisasContext *s, bool is_q,
     tcg_rn = tcg_temp_new_i64();
     tcg_rd = tcg_temp_new_i64();
     tcg_final = tcg_temp_new_i64();
-    read_vec_element(s, tcg_final, rd, is_q ? 1 : 0, MO_64);
+    read_vec_element(s, tcg_final, rd, is_q ? 1 : 0, MO_UQ);

     if (round) {
         uint64_t round_const = 1ULL << (shift - 1);
@@ -10201,9 +10201,9 @@ static void handle_vec_simd_shrn(DisasContext *s, bool is_q,
     }

     if (!is_q) {
-        write_vec_element(s, tcg_final, rd, 0, MO_64);
+        write_vec_element(s, tcg_final, rd, 0, MO_UQ);
     } else {
-        write_vec_element(s, tcg_final, rd, 1, MO_64);
+        write_vec_element(s, tcg_final, rd, 1, MO_UQ);
     }
     if (round) {
         tcg_temp_free_i64(tcg_round);
@@ -10335,8 +10335,8 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
     }

     if (accop != 0) {
-        read_vec_element(s, tcg_res[0], rd, 0, MO_64);
-        read_vec_element(s, tcg_res[1], rd, 1, MO_64);
+        read_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+        read_vec_element(s, tcg_res[1], rd, 1, MO_UQ);
     }

     /* size == 2 means two 32x32->64 operations; this is worth special
@@ -10522,8 +10522,8 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
         }
     }

-    write_vec_element(s, tcg_res[0], rd, 0, MO_64);
-    write_vec_element(s, tcg_res[1], rd, 1, MO_64);
+    write_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+    write_vec_element(s, tcg_res[1], rd, 1, MO_UQ);
     tcg_temp_free_i64(tcg_res[0]);
     tcg_temp_free_i64(tcg_res[1]);
 }
@@ -10546,7 +10546,7 @@ static void handle_3rd_wide(DisasContext *s, int is_q, int is_u, int size,
         };
         NeonGenWidenFn *widenfn = widenfns[size][is_u];

-        read_vec_element(s, tcg_op1, rn, pass, MO_64);
+        read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
         read_vec_element_i32(s, tcg_op2, rm, part + pass, MO_UL);
         widenfn(tcg_op2_wide, tcg_op2);
         tcg_temp_free_i32(tcg_op2);
@@ -10558,7 +10558,7 @@ static void handle_3rd_wide(DisasContext *s, int is_q, int is_u, int size,
     }

     for (pass = 0; pass < 2; pass++) {
-        write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+        write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
         tcg_temp_free_i64(tcg_res[pass]);
     }
 }
@@ -10589,8 +10589,8 @@ static void handle_3rd_narrowing(DisasContext *s, int is_q, int is_u, int size,
         };
         NeonGenNarrowFn *gennarrow = narrowfns[size][is_u];

-        read_vec_element(s, tcg_op1, rn, pass, MO_64);
-        read_vec_element(s, tcg_op2, rm, pass, MO_64);
+        read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+        read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

         gen_neon_addl(size, (opcode == 6), tcg_wideres, tcg_op1, tcg_op2);

@@ -10621,12 +10621,12 @@ static void handle_pmull_64(DisasContext *s, int is_q, int rd, int rn, int rm)
     TCGv_i64 tcg_op2 = tcg_temp_new_i64();
     TCGv_i64 tcg_res = tcg_temp_new_i64();

-    read_vec_element(s, tcg_op1, rn, is_q, MO_64);
-    read_vec_element(s, tcg_op2, rm, is_q, MO_64);
+    read_vec_element(s, tcg_op1, rn, is_q, MO_UQ);
+    read_vec_element(s, tcg_op2, rm, is_q, MO_UQ);
     gen_helper_neon_pmull_64_lo(tcg_res, tcg_op1, tcg_op2);
-    write_vec_element(s, tcg_res, rd, 0, MO_64);
+    write_vec_element(s, tcg_res, rd, 0, MO_UQ);
     gen_helper_neon_pmull_64_hi(tcg_res, tcg_op1, tcg_op2);
-    write_vec_element(s, tcg_res, rd, 1, MO_64);
+    write_vec_element(s, tcg_res, rd, 1, MO_UQ);

     tcg_temp_free_i64(tcg_op1);
     tcg_temp_free_i64(tcg_op2);
@@ -10814,8 +10814,8 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             int passreg = (pass == 0) ? rn : rm;

-            read_vec_element(s, tcg_op1, passreg, 0, MO_64);
-            read_vec_element(s, tcg_op2, passreg, 1, MO_64);
+            read_vec_element(s, tcg_op1, passreg, 0, MO_UQ);
+            read_vec_element(s, tcg_op2, passreg, 1, MO_UQ);
             tcg_res[pass] = tcg_temp_new_i64();

             switch (opcode) {
@@ -10846,7 +10846,7 @@ static void handle_simd_3same_pair(DisasContext *s, int is_q, int u, int opcode,
         }

         for (pass = 0; pass < 2; pass++) {
-            write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+            write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_res[pass]);
         }
     } else {
@@ -10971,7 +10971,7 @@ static void disas_simd_3same_float(DisasContext *s, uint32_t insn)
             unallocated_encoding(s);
             return;
         }
-        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_64 : MO_UL,
+        handle_simd_3same_pair(s, is_q, 0, fpopcode, size ? MO_UQ : MO_UL,
                                rn, rm, rd);
         return;
     case 0x1b: /* FMULX */
@@ -11155,12 +11155,12 @@ static void disas_simd_3same_int(DisasContext *s, uint32_t insn)
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_res = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
+            read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

             handle_3same_64(s, opcode, u, tcg_res, tcg_op1, tcg_op2);

-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);

             tcg_temp_free_i64(tcg_res);
             tcg_temp_free_i64(tcg_op1);
@@ -11714,7 +11714,7 @@ static void handle_2misc_widening(DisasContext *s, int opcode, bool is_q,
             tcg_temp_free_i32(tcg_op);
         }
         for (pass = 0; pass < 2; pass++) {
-            write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+            write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_res[pass]);
         }
     } else {
@@ -11774,7 +11774,7 @@ static void handle_rev(DisasContext *s, int opcode, bool u,
             case MO_UL:
                 tcg_gen_bswap32_i64(tcg_tmp, tcg_tmp);
                 break;
-            case MO_64:
+            case MO_UQ:
                 tcg_gen_bswap64_i64(tcg_tmp, tcg_tmp);
                 break;
             default:
@@ -11803,8 +11803,8 @@ static void handle_rev(DisasContext *s, int opcode, bool u,
                 tcg_gen_deposit_i64(tcg_rd, tcg_rd, tcg_rn, off, esize);
             }
         }
-        write_vec_element(s, tcg_rd, rd, 0, MO_64);
-        write_vec_element(s, tcg_rd_hi, rd, 1, MO_64);
+        write_vec_element(s, tcg_rd, rd, 0, MO_UQ);
+        write_vec_element(s, tcg_rd_hi, rd, 1, MO_UQ);

         tcg_temp_free_i64(tcg_rd_hi);
         tcg_temp_free_i64(tcg_rd);
@@ -11839,7 +11839,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
             read_vec_element(s, tcg_op2, rn, pass * 2 + 1, memop);
             tcg_gen_add_i64(tcg_res[pass], tcg_op1, tcg_op2);
             if (accum) {
-                read_vec_element(s, tcg_op1, rd, pass, MO_64);
+                read_vec_element(s, tcg_op1, rd, pass, MO_UQ);
                 tcg_gen_add_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
             }

@@ -11859,11 +11859,11 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,

             tcg_res[pass] = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);
             genfn(tcg_res[pass], tcg_op);

             if (accum) {
-                read_vec_element(s, tcg_op, rd, pass, MO_64);
+                read_vec_element(s, tcg_op, rd, pass, MO_UQ);
                 if (size == 0) {
                     gen_helper_neon_addl_u16(tcg_res[pass],
                                              tcg_res[pass], tcg_op);
@@ -11879,7 +11879,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
         tcg_res[1] = tcg_const_i64(0);
     }
     for (pass = 0; pass < 2; pass++) {
-        write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+        write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
         tcg_temp_free_i64(tcg_res[pass]);
     }
 }
@@ -11909,7 +11909,7 @@ static void handle_shll(DisasContext *s, bool is_q, int size, int rn, int rd)
     }

     for (pass = 0; pass < 2; pass++) {
-        write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+        write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
         tcg_temp_free_i64(tcg_res[pass]);
     }
 }
@@ -12233,12 +12233,12 @@ static void disas_simd_two_reg_misc(DisasContext *s, uint32_t insn)
             TCGv_i64 tcg_op = tcg_temp_new_i64();
             TCGv_i64 tcg_res = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);

             handle_2misc_64(s, opcode, u, tcg_res, tcg_op,
                             tcg_rmode, tcg_fpstatus);

-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);

             tcg_temp_free_i64(tcg_res);
             tcg_temp_free_i64(tcg_op);
@@ -12856,7 +12856,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             is_fp16 = true;
             break;
         case MO_UL: /* single precision */
-        case MO_64: /* double precision */
+        case MO_UQ: /* double precision */
             break;
         default:
             unallocated_encoding(s);
@@ -12875,7 +12875,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
             }
             is_fp16 = true;
             break;
-        case MO_64:
+        case MO_UQ:
             break;
         default:
             unallocated_encoding(s);
@@ -12886,7 +12886,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
     default: /* integer */
         switch (size) {
         case MO_UB:
-        case MO_64:
+        case MO_UQ:
             unallocated_encoding(s);
             return;
         }
@@ -12906,7 +12906,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         index = h << 1 | l;
         rm |= m << 4;
         break;
-    case MO_64:
+    case MO_UQ:
         if (l || !is_q) {
             unallocated_encoding(s);
             return;
@@ -12946,7 +12946,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                                vec_full_reg_offset(s, rn),
                                vec_full_reg_offset(s, rm), fpst,
                                is_q ? 16 : 8, vec_full_reg_size(s), data,
-                               size == MO_64
+                               size == MO_UQ
                                ? gen_helper_gvec_fcmlas_idx
                                : gen_helper_gvec_fcmlah_idx);
             tcg_temp_free_ptr(fpst);
@@ -12976,13 +12976,13 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

         assert(is_fp && is_q && !is_long);

-        read_vec_element(s, tcg_idx, rm, index, MO_64);
+        read_vec_element(s, tcg_idx, rm, index, MO_UQ);

         for (pass = 0; pass < (is_scalar ? 1 : 2); pass++) {
             TCGv_i64 tcg_op = tcg_temp_new_i64();
             TCGv_i64 tcg_res = tcg_temp_new_i64();

-            read_vec_element(s, tcg_op, rn, pass, MO_64);
+            read_vec_element(s, tcg_op, rn, pass, MO_UQ);

             switch (16 * u + opcode) {
             case 0x05: /* FMLS */
@@ -12990,7 +12990,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 gen_helper_vfp_negd(tcg_op, tcg_op);
                 /* fall through */
             case 0x01: /* FMLA */
-                read_vec_element(s, tcg_res, rd, pass, MO_64);
+                read_vec_element(s, tcg_res, rd, pass, MO_UQ);
                 gen_helper_vfp_muladdd(tcg_res, tcg_op, tcg_idx, tcg_res, fpst);
                 break;
             case 0x09: /* FMUL */
@@ -13003,7 +13003,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 g_assert_not_reached();
             }

-            write_vec_element(s, tcg_res, rd, pass, MO_64);
+            write_vec_element(s, tcg_res, rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_op);
             tcg_temp_free_i64(tcg_res);
         }
@@ -13241,7 +13241,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 }

                 /* Accumulating op: handle accumulate step */
-                read_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+                read_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);

                 switch (opcode) {
                 case 0x2: /* SMLAL, SMLAL2, UMLAL, UMLAL2 */
@@ -13316,7 +13316,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
                 }

                 /* Accumulating op: handle accumulate step */
-                read_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+                read_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);

                 switch (opcode) {
                 case 0x2: /* SMLAL, SMLAL2, UMLAL, UMLAL2 */
@@ -13352,7 +13352,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         }

         for (pass = 0; pass < 2; pass++) {
-            write_vec_element(s, tcg_res[pass], rd, pass, MO_64);
+            write_vec_element(s, tcg_res[pass], rd, pass, MO_UQ);
             tcg_temp_free_i64(tcg_res[pass]);
         }
     }
@@ -13639,14 +13639,14 @@ static void disas_crypto_three_reg_sha512(DisasContext *s, uint32_t insn)
         tcg_res[1] = tcg_temp_new_i64();

         for (pass = 0; pass < 2; pass++) {
-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
+            read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

             tcg_gen_rotli_i64(tcg_res[pass], tcg_op2, 1);
             tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
         }
-        write_vec_element(s, tcg_res[0], rd, 0, MO_64);
-        write_vec_element(s, tcg_res[1], rd, 1, MO_64);
+        write_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+        write_vec_element(s, tcg_res[1], rd, 1, MO_UQ);

         tcg_temp_free_i64(tcg_op1);
         tcg_temp_free_i64(tcg_op2);
@@ -13750,9 +13750,9 @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
         tcg_res[1] = tcg_temp_new_i64();

         for (pass = 0; pass < 2; pass++) {
-            read_vec_element(s, tcg_op1, rn, pass, MO_64);
-            read_vec_element(s, tcg_op2, rm, pass, MO_64);
-            read_vec_element(s, tcg_op3, ra, pass, MO_64);
+            read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+            read_vec_element(s, tcg_op2, rm, pass, MO_UQ);
+            read_vec_element(s, tcg_op3, ra, pass, MO_UQ);

             if (op0 == 0) {
                 /* EOR3 */
@@ -13763,8 +13763,8 @@ static void disas_crypto_four_reg(DisasContext *s, uint32_t insn)
             }
             tcg_gen_xor_i64(tcg_res[pass], tcg_res[pass], tcg_op1);
         }
-        write_vec_element(s, tcg_res[0], rd, 0, MO_64);
-        write_vec_element(s, tcg_res[1], rd, 1, MO_64);
+        write_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+        write_vec_element(s, tcg_res[1], rd, 1, MO_UQ);

         tcg_temp_free_i64(tcg_op1);
         tcg_temp_free_i64(tcg_op2);
@@ -13832,14 +13832,14 @@ static void disas_crypto_xar(DisasContext *s, uint32_t insn)
     tcg_res[1] = tcg_temp_new_i64();

     for (pass = 0; pass < 2; pass++) {
-        read_vec_element(s, tcg_op1, rn, pass, MO_64);
-        read_vec_element(s, tcg_op2, rm, pass, MO_64);
+        read_vec_element(s, tcg_op1, rn, pass, MO_UQ);
+        read_vec_element(s, tcg_op2, rm, pass, MO_UQ);

         tcg_gen_xor_i64(tcg_res[pass], tcg_op1, tcg_op2);
         tcg_gen_rotri_i64(tcg_res[pass], tcg_res[pass], imm6);
     }
-    write_vec_element(s, tcg_res[0], rd, 0, MO_64);
-    write_vec_element(s, tcg_res[1], rd, 1, MO_64);
+    write_vec_element(s, tcg_res[0], rd, 0, MO_UQ);
+    write_vec_element(s, tcg_res[1], rd, 1, MO_UQ);

     tcg_temp_free_i64(tcg_op1);
     tcg_temp_free_i64(tcg_op2);
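
A note on the pattern applied throughout this file: MO_UQ is the unsigned
64-bit operation introduced by the earlier patches in this series, replacing
both MO_64 and its alias MO_Q. Only the spelling changes; the size encoding
(log2 of the byte count) stays put. Sketch of the assumed scheme:

    MO_UB = 0,               /*  8-bit */
    MO_UW = 1,               /* 16-bit */
    MO_UL = 2,               /* 32-bit */
    MO_UQ = 3,               /* 64-bit, formerly MO_64/MO_Q */
    MO_SB = MO_SIGN | MO_UB, /* ...plus the sign-extending variants */
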
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index f7c891d..423c461 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -1708,7 +1708,7 @@ static void do_sat_addsub_vec(DisasContext *s, int esz, int rd, int rn,
         tcg_temp_free_i64(t64);
         break;

-    case MO_64:
+    case MO_UQ:
         if (u) {
             if (d) {
                 gen_helper_sve_uqsubi_d(dptr, nptr, val, desc);
@@ -1862,7 +1862,7 @@ static bool do_zz_dbm(DisasContext *s, arg_rr_dbm *a, GVecGen2iFn *gvec_fn)
     }
     if (sve_access_check(s)) {
         unsigned vsz = vec_full_reg_size(s);
-        gvec_fn(MO_64, vec_full_reg_offset(s, a->rd),
+        gvec_fn(MO_UQ, vec_full_reg_offset(s, a->rd),
                 vec_full_reg_offset(s, a->rn), imm, vsz, vsz);
     }
     return true;
@@ -2076,7 +2076,7 @@ static bool trans_INSR_f(DisasContext *s, arg_rrr_esz *a)
 {
     if (sve_access_check(s)) {
         TCGv_i64 t = tcg_temp_new_i64();
-        tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_64));
+        tcg_gen_ld_i64(t, cpu_env, vec_reg_offset(s, a->rm, 0, MO_UQ));
         do_insr_i64(s, a, t);
         tcg_temp_free_i64(t);
     }
@@ -3327,7 +3327,7 @@ static bool trans_SUBR_zzi(DisasContext *s, arg_rri_esz *a)
           .fno = gen_helper_sve_subri_d,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64,
+          .vece = MO_UQ,
           .scalar_first = true }
     };

@@ -4571,7 +4571,7 @@ static const TCGMemOp dtype_mop[16] = {
     MO_UB, MO_UB, MO_UB, MO_UB,
     MO_SL, MO_UW, MO_UW, MO_UW,
     MO_SW, MO_SW, MO_UL, MO_UL,
-    MO_SB, MO_SB, MO_SB, MO_Q
+    MO_SB, MO_SB, MO_SB, MO_UQ
 };

 #define dtype_msz(x)  (dtype_mop[x] & MO_SIZE)
@@ -5261,7 +5261,7 @@ static bool trans_LD1_zprz(DisasContext *s, arg_LD1_zprz *a)
     case MO_UL:
         fn = gather_load_fn32[be][a->ff][a->xs][a->u][a->msz];
         break;
-    case MO_64:
+    case MO_UQ:
         fn = gather_load_fn64[be][a->ff][a->xs][a->u][a->msz];
         break;
     }
@@ -5289,7 +5289,7 @@ static bool trans_LD1_zpiz(DisasContext *s, arg_LD1_zpiz *a)
     case MO_UL:
         fn = gather_load_fn32[be][a->ff][0][a->u][a->msz];
         break;
-    case MO_64:
+    case MO_UQ:
         fn = gather_load_fn64[be][a->ff][2][a->u][a->msz];
         break;
     }
@@ -5367,7 +5367,7 @@ static bool trans_ST1_zprz(DisasContext *s, arg_ST1_zprz *a)
     case MO_UL:
         fn = scatter_store_fn32[be][a->xs][a->msz];
         break;
-    case MO_64:
+    case MO_UQ:
         fn = scatter_store_fn64[be][a->xs][a->msz];
         break;
     default:
@@ -5395,7 +5395,7 @@ static bool trans_ST1_zpiz(DisasContext *s, arg_ST1_zpiz *a)
     case MO_UL:
         fn = scatter_store_fn32[be][0][a->msz];
         break;
-    case MO_64:
+    case MO_UQ:
         fn = scatter_store_fn64[be][2][a->msz];
         break;
     }
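
The dtype_mop[] hunk above is worth a second look: the table mixes MO_S* and
MO_U* entries, and its final element was the old MO_Q alias. It becomes MO_UQ
because a 64-bit load fills the destination outright, so no extension variant
exists for it. Hedged usage sketch, dtype being the 4-bit field decoded from
the insn:

    TCGMemOp mop = dtype_mop[dtype];   /* size + signedness of the load */
    int msz = dtype_msz(dtype);        /* mop & MO_SIZE, i.e. 0..3 */
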
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 5e0cd63..d71944d 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -40,7 +40,7 @@ uint64_t vfp_expand_imm(int size, uint8_t imm8)
     uint64_t imm;

     switch (size) {
-    case MO_64:
+    case MO_UQ:
         imm = (extract32(imm8, 7, 1) ? 0x8000 : 0) |
             (extract32(imm8, 6, 1) ? 0x3fc0 : 0x4000) |
             extract32(imm8, 0, 6);
@@ -1960,7 +1960,7 @@ static bool trans_VMOV_imm_dp(DisasContext *s, arg_VMOV_imm_dp *a)
         }
     }

-    fd = tcg_const_i64(vfp_expand_imm(MO_64, a->imm));
+    fd = tcg_const_i64(vfp_expand_imm(MO_UQ, a->imm));

     for (;;) {
         neon_store_reg64(fd, vd);
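
For the vfp_expand_imm() hunk: the MO_UQ case assembles the top 16 bits of a
double, and the code following this hunk shifts them into place (imm <<= 48,
if memory serves). A worked example for imm8 = 0x70, the encoding of 1.0:

    /* bit7 = 0 (sign), bit6 = 1, bits[5:0] = 0x30 */
    imm = 0 | 0x3fc0 | 0x30;   /* = 0x3ff0 */
    imm <<= 48;                /* = 0x3ff0000000000000, i.e. 1.0 */
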
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 5510ecd..306ef24 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -1171,7 +1171,7 @@ static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
 static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
                                  TCGv_i32 a32, int index)
 {
-    gen_aa32_ld_i64(s, val, a32, index, MO_Q | s->be_data);
+    gen_aa32_ld_i64(s, val, a32, index, MO_UQ | s->be_data);
 }

 static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
@@ -1194,7 +1194,7 @@ static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
 static inline void gen_aa32_st64(DisasContext *s, TCGv_i64 val,
                                  TCGv_i32 a32, int index)
 {
-    gen_aa32_st_i64(s, val, a32, index, MO_Q | s->be_data);
+    gen_aa32_st_i64(s, val, a32, index, MO_UQ | s->be_data);
 }

 DO_GEN_LD(8s, MO_SB)
@@ -1455,7 +1455,7 @@ static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
     case MO_UL:
         tcg_gen_ld32u_i64(var, cpu_env, offset);
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_gen_ld_i64(var, cpu_env, offset);
         break;
     default:
@@ -1502,7 +1502,7 @@ static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
     case MO_UL:
         tcg_gen_st32_i64(var, cpu_env, offset);
         break;
-    case MO_64:
+    case MO_UQ:
         tcg_gen_st_i64(var, cpu_env, offset);
         break;
     default:
@@ -4278,7 +4278,7 @@ const GVecGen2i ssra_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .opt_opc = vecop_list_ssra,
       .load_dest = true,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_usra8_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
@@ -4336,7 +4336,7 @@ const GVecGen2i usra_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_usra,
-      .vece = MO_64, },
+      .vece = MO_UQ, },
 };

 static void gen_shr8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
@@ -4416,7 +4416,7 @@ const GVecGen2i sri_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_sri,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_shl8_ins_i64(TCGv_i64 d, TCGv_i64 a, int64_t shift)
@@ -4494,7 +4494,7 @@ const GVecGen2i sli_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_sli,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_mla8_i32(TCGv_i32 d, TCGv_i32 a, TCGv_i32 b)
@@ -4590,7 +4590,7 @@ const GVecGen3 mla_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_mla,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 const GVecGen3 mls_op[4] = {
@@ -4614,7 +4614,7 @@ const GVecGen3 mls_op[4] = {
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .load_dest = true,
       .opt_opc = vecop_list_mls,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 /* CMTST : test is "if (X & Y != 0)". */
@@ -4658,7 +4658,7 @@ const GVecGen3 cmtst_op[4] = {
       .fniv = gen_cmtst_vec,
       .prefer_i64 = TCG_TARGET_REG_BITS == 64,
       .opt_opc = vecop_list_cmtst,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_uqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
@@ -4696,7 +4696,7 @@ const GVecGen4 uqadd_op[4] = {
       .fno = gen_helper_gvec_uqadd_d,
       .write_aofs = true,
       .opt_opc = vecop_list_uqadd,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_sqadd_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
@@ -4734,7 +4734,7 @@ const GVecGen4 sqadd_op[4] = {
       .fno = gen_helper_gvec_sqadd_d,
       .opt_opc = vecop_list_sqadd,
       .write_aofs = true,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_uqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
@@ -4772,7 +4772,7 @@ const GVecGen4 uqsub_op[4] = {
       .fno = gen_helper_gvec_uqsub_d,
       .opt_opc = vecop_list_uqsub,
       .write_aofs = true,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 static void gen_sqsub_vec(unsigned vece, TCGv_vec t, TCGv_vec sat,
@@ -4810,7 +4810,7 @@ const GVecGen4 sqsub_op[4] = {
       .fno = gen_helper_gvec_sqsub_d,
       .opt_opc = vecop_list_sqsub,
       .write_aofs = true,
-      .vece = MO_64 },
+      .vece = MO_UQ },
 };

 /* Translate a NEON data processing instruction.  Return nonzero if the
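
In the GVecGen tables above, .vece is not a memory operation at all but the
gvec element size, which happens to share the same log2-bytes encoding; the
rename keeps both users of the constants consistent. That is:

    .vece = MO_UQ,   /* vece == 3: 1 << 3 = 8-byte (64-bit) lanes */
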
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 0e863d4..8d62b37 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -323,7 +323,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 {
     if (CODE64(s)) {
-        return ot == MO_UW ? MO_UW : MO_64;
+        return ot == MO_UW ? MO_UW : MO_UQ;
     } else {
         return ot;
     }
@@ -332,14 +332,14 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 /* Select the size of the stack pointer.  */
 static inline TCGMemOp mo_stacksize(DisasContext *s)
 {
-    return CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
+    return CODE64(s) ? MO_UQ : s->ss32 ? MO_UL : MO_UW;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
 static inline TCGMemOp mo_64_32(TCGMemOp ot)
 {
 #ifdef TARGET_X86_64
-    return ot == MO_64 ? MO_64 : MO_UL;
+    return ot == MO_UQ ? MO_UQ : MO_UL;
 #else
     return MO_UL;
 #endif
@@ -378,7 +378,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
         tcg_gen_ext32u_tl(cpu_regs[reg], t0);
         break;
 #ifdef TARGET_X86_64
-    case MO_64:
+    case MO_UQ:
         tcg_gen_mov_tl(cpu_regs[reg], t0);
         break;
 #endif
@@ -456,7 +456,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
 {
     switch (aflag) {
 #ifdef TARGET_X86_64
-    case MO_64:
+    case MO_UQ:
         if (ovr_seg < 0) {
             tcg_gen_mov_tl(s->A0, a0);
             return;
@@ -492,7 +492,7 @@ static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
     if (ovr_seg >= 0) {
         TCGv seg = cpu_seg_base[ovr_seg];

-        if (aflag == MO_64) {
+        if (aflag == MO_UQ) {
             tcg_gen_add_tl(s->A0, a0, seg);
         } else if (CODE64(s)) {
             tcg_gen_ext32u_tl(s->A0, a0);
@@ -1469,7 +1469,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
 static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
                             int is_right, int is_arith)
 {
-    target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
+    target_ulong mask = (ot == MO_UQ ? 0x3f : 0x1f);

     /* load */
     if (op1 == OR_TMP0) {
@@ -1505,7 +1505,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
                             int is_right, int is_arith)
 {
-    int mask = (ot == MO_64 ? 0x3f : 0x1f);
+    int mask = (ot == MO_UQ ? 0x3f : 0x1f);

     /* load */
     if (op1 == OR_TMP0)
@@ -1544,7 +1544,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,

 static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
 {
-    target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
+    target_ulong mask = (ot == MO_UQ ? 0x3f : 0x1f);
     TCGv_i32 t0, t1;

     /* load */
@@ -1630,7 +1630,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
 static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
                           int is_right)
 {
-    int mask = (ot == MO_64 ? 0x3f : 0x1f);
+    int mask = (ot == MO_UQ ? 0x3f : 0x1f);
     int shift;

     /* load */
@@ -1729,7 +1729,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
             gen_helper_rcrl(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
-        case MO_64:
+        case MO_UQ:
             gen_helper_rcrq(s->T0, cpu_env, s->T0, s->T1);
             break;
 #endif
@@ -1748,7 +1748,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
             gen_helper_rcll(s->T0, cpu_env, s->T0, s->T1);
             break;
 #ifdef TARGET_X86_64
-        case MO_64:
+        case MO_UQ:
             gen_helper_rclq(s->T0, cpu_env, s->T0, s->T1);
             break;
 #endif
@@ -1764,7 +1764,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
                              bool is_right, TCGv count_in)
 {
-    target_ulong mask = (ot == MO_64 ? 63 : 31);
+    target_ulong mask = (ot == MO_UQ ? 63 : 31);
     TCGv count;

     /* load */
@@ -1983,7 +1983,7 @@ static AddressParts gen_lea_modrm_0(CPUX86State *env, DisasContext *s,
     }

     switch (s->aflag) {
-    case MO_64:
+    case MO_UQ:
     case MO_UL:
         havesib = 0;
         if (rm == 4) {
@@ -2192,7 +2192,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
         break;
     case MO_UL:
 #ifdef TARGET_X86_64
-    case MO_64:
+    case MO_UQ:
 #endif
         ret = x86_ldl_code(env, s);
         break;
@@ -2443,7 +2443,7 @@ static void gen_popa(DisasContext *s)
 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
     TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_UL : MO_UW;
+    TCGMemOp a_ot = CODE64(s) ? MO_UQ : s->ss32 ? MO_UL : MO_UW;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -3150,8 +3150,8 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             break;
         case 0x6e: /* movd mm, ea */
 #ifdef TARGET_X86_64
-            if (s->dflag == MO_64) {
-                gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0);
+            if (s->dflag == MO_UQ) {
+                gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 0);
                 tcg_gen_st_tl(s->T0, cpu_env,
                               offsetof(CPUX86State, fpregs[reg].mmx));
             } else
@@ -3166,8 +3166,8 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             break;
         case 0x16e: /* movd xmm, ea */
 #ifdef TARGET_X86_64
-            if (s->dflag == MO_64) {
-                gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 0);
+            if (s->dflag == MO_UQ) {
+                gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 0);
                 tcg_gen_addi_ptr(s->ptr0, cpu_env,
                                  offsetof(CPUX86State,xmm_regs[reg]));
                 gen_helper_movq_mm_T0_xmm(s->ptr0, s->T0);
@@ -3337,10 +3337,10 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             break;
         case 0x7e: /* movd ea, mm */
 #ifdef TARGET_X86_64
-            if (s->dflag == MO_64) {
+            if (s->dflag == MO_UQ) {
                 tcg_gen_ld_i64(s->T0, cpu_env,
                                offsetof(CPUX86State,fpregs[reg].mmx));
-                gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 1);
             } else
 #endif
             {
@@ -3351,10 +3351,10 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             break;
         case 0x17e: /* movd ea, xmm */
 #ifdef TARGET_X86_64
-            if (s->dflag == MO_64) {
+            if (s->dflag == MO_UQ) {
                 tcg_gen_ld_i64(s->T0, cpu_env,
                                offsetof(CPUX86State,xmm_regs[reg].ZMM_Q(0)));
-                gen_ldst_modrm(env, s, modrm, MO_64, OR_TMP0, 1);
+                gen_ldst_modrm(env, s, modrm, MO_UQ, OR_TMP0, 1);
             } else
 #endif
             {
@@ -3785,10 +3785,10 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 }
                 if ((b & 0xff) == 0xf0) {
                     ot = MO_UB;
-                } else if (s->dflag != MO_64) {
+                } else if (s->dflag != MO_UQ) {
                     ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
                 } else {
-                    ot = MO_64;
+                    ot = MO_UQ;
                 }

                 tcg_gen_trunc_tl_i32(s->tmp2_i32, cpu_regs[reg]);
@@ -3814,10 +3814,10 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if (!(s->cpuid_ext_features & CPUID_EXT_MOVBE)) {
                     goto illegal_op;
                 }
-                if (s->dflag != MO_64) {
+                if (s->dflag != MO_UQ) {
                     ot = (s->prefix & PREFIX_DATA ? MO_UW : MO_UL);
                 } else {
-                    ot = MO_64;
+                    ot = MO_UQ;
                 }

                 gen_lea_modrm(env, s, modrm);
@@ -3861,7 +3861,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     tcg_gen_ext8u_tl(s->A0, cpu_regs[s->vex_v]);
                     tcg_gen_shr_tl(s->T0, s->T0, s->A0);

-                    bound = tcg_const_tl(ot == MO_64 ? 63 : 31);
+                    bound = tcg_const_tl(ot == MO_UQ ? 63 : 31);
                     zero = tcg_const_tl(0);
                     tcg_gen_movcond_tl(TCG_COND_LEU, s->T0, s->A0, bound,
                                        s->T0, zero);
@@ -3894,7 +3894,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
                 tcg_gen_ext8u_tl(s->T1, cpu_regs[s->vex_v]);
                 {
-                    TCGv bound = tcg_const_tl(ot == MO_64 ? 63 : 31);
+                    TCGv bound = tcg_const_tl(ot == MO_UQ ? 63 : 31);
                     /* Note that since we're using BMILG (in order to get O
                        cleared) we need to store the inverse into C.  */
                     tcg_gen_setcond_tl(TCG_COND_LT, cpu_cc_src,
@@ -3929,7 +3929,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                     tcg_gen_extu_i32_tl(cpu_regs[reg], s->tmp3_i32);
                     break;
 #ifdef TARGET_X86_64
-                case MO_64:
+                case MO_UQ:
                     tcg_gen_mulu2_i64(s->T0, s->T1,
                                       s->T0, cpu_regs[R_EDX]);
                     tcg_gen_mov_i64(cpu_regs[s->vex_v], s->T0);
@@ -3949,7 +3949,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
                 /* Note that by zero-extending the mask operand, we
                    automatically handle zero-extending the result.  */
-                if (ot == MO_64) {
+                if (ot == MO_UQ) {
                     tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]);
                 } else {
                     tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]);
@@ -3967,7 +3967,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
                 /* Note that by zero-extending the mask operand, we
                    automatically handle zero-extending the result.  */
-                if (ot == MO_64) {
+                if (ot == MO_UQ) {
                     tcg_gen_mov_tl(s->T1, cpu_regs[s->vex_v]);
                 } else {
                     tcg_gen_ext32u_tl(s->T1, cpu_regs[s->vex_v]);
@@ -4063,7 +4063,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 }
                 ot = mo_64_32(s->dflag);
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
-                if (ot == MO_64) {
+                if (ot == MO_UQ) {
                     tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 63);
                 } else {
                     tcg_gen_andi_tl(s->T1, cpu_regs[s->vex_v], 31);
@@ -4071,12 +4071,12 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 if (b == 0x1f7) {
                     tcg_gen_shl_tl(s->T0, s->T0, s->T1);
                 } else if (b == 0x2f7) {
-                    if (ot != MO_64) {
+                    if (ot != MO_UQ) {
                         tcg_gen_ext32s_tl(s->T0, s->T0);
                     }
                     tcg_gen_sar_tl(s->T0, s->T0, s->T1);
                 } else {
-                    if (ot != MO_64) {
+                    if (ot != MO_UQ) {
                         tcg_gen_ext32u_tl(s->T0, s->T0);
                     }
                     tcg_gen_shr_tl(s->T0, s->T0, s->T1);
@@ -4302,7 +4302,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
             if ((b & 0xfc) == 0x60) { /* pcmpXstrX */
                 set_cc_op(s, CC_OP_EFLAGS);

-                if (s->dflag == MO_64) {
+                if (s->dflag == MO_UQ) {
                     /* The helper must use entire 64-bit gp registers */
                     val |= 1 << 8;
                 }
@@ -4329,7 +4329,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
                 ot = mo_64_32(s->dflag);
                 gen_ldst_modrm(env, s, modrm, ot, OR_TMP0, 0);
                 b = x86_ldub_code(env, s);
-                if (ot == MO_64) {
+                if (ot == MO_UQ) {
                     tcg_gen_rotri_tl(s->T0, s->T0, b & 63);
                 } else {
                     tcg_gen_trunc_tl_i32(s->tmp2_i32, s->T0);
@@ -4630,9 +4630,9 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         /* In 64-bit mode, the default data size is 32-bit.  Select 64-bit
            data with rex_w, and 16-bit data with 0x66; rex_w takes precedence
            over 0x66 if both are present.  */
-        dflag = (rex_w > 0 ? MO_64 : prefixes & PREFIX_DATA ? MO_UW : MO_UL);
+        dflag = (rex_w > 0 ? MO_UQ : prefixes & PREFIX_DATA ? MO_UW : MO_UL);
         /* In 64-bit mode, 0x67 selects 32-bit addressing.  */
-        aflag = (prefixes & PREFIX_ADR ? MO_UL : MO_64);
+        aflag = (prefixes & PREFIX_ADR ? MO_UL : MO_UQ);
     } else {
         /* In 16/32-bit mode, 0x66 selects the opposite data size.  */
         if (s->code32 ^ ((prefixes & PREFIX_DATA) != 0)) {
@@ -4903,7 +4903,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 set_cc_op(s, CC_OP_MULL);
                 break;
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 tcg_gen_mulu2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX],
                                   s->T0, cpu_regs[R_EAX]);
                 tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]);
@@ -4956,7 +4956,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 set_cc_op(s, CC_OP_MULL);
                 break;
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 tcg_gen_muls2_i64(cpu_regs[R_EAX], cpu_regs[R_EDX],
                                   s->T0, cpu_regs[R_EAX]);
                 tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[R_EAX]);
@@ -4980,7 +4980,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_helper_divl_EAX(cpu_env, s->T0);
                 break;
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 gen_helper_divq_EAX(cpu_env, s->T0);
                 break;
 #endif
@@ -4999,7 +4999,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 gen_helper_idivl_EAX(cpu_env, s->T0);
                 break;
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 gen_helper_idivq_EAX(cpu_env, s->T0);
                 break;
 #endif
@@ -5024,7 +5024,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         if (CODE64(s)) {
             if (op == 2 || op == 4) {
                 /* operand size for jumps is 64 bit */
-                ot = MO_64;
+                ot = MO_UQ;
             } else if (op == 3 || op == 5) {
                 ot = dflag != MO_UW ? MO_UL + (rex_w == 1) : MO_UW;
             } else if (op == 6) {
@@ -5145,10 +5145,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x98: /* CWDE/CBW */
         switch (dflag) {
 #ifdef TARGET_X86_64
-        case MO_64:
+        case MO_UQ:
             gen_op_mov_v_reg(s, MO_UL, s->T0, R_EAX);
             tcg_gen_ext32s_tl(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_64, R_EAX, s->T0);
+            gen_op_mov_reg_v(s, MO_UQ, R_EAX, s->T0);
             break;
 #endif
         case MO_UL:
@@ -5168,10 +5168,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x99: /* CDQ/CWD */
         switch (dflag) {
 #ifdef TARGET_X86_64
-        case MO_64:
-            gen_op_mov_v_reg(s, MO_64, s->T0, R_EAX);
+        case MO_UQ:
+            gen_op_mov_v_reg(s, MO_UQ, s->T0, R_EAX);
             tcg_gen_sari_tl(s->T0, s->T0, 63);
-            gen_op_mov_reg_v(s, MO_64, R_EDX, s->T0);
+            gen_op_mov_reg_v(s, MO_UQ, R_EDX, s->T0);
             break;
 #endif
         case MO_UL:
@@ -5212,7 +5212,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         }
         switch (ot) {
 #ifdef TARGET_X86_64
-        case MO_64:
+        case MO_UQ:
             tcg_gen_muls2_i64(cpu_regs[reg], s->T1, s->T0, s->T1);
             tcg_gen_mov_tl(cpu_cc_dst, cpu_regs[reg]);
             tcg_gen_sari_tl(cpu_cc_src, cpu_cc_dst, 63);
@@ -5338,7 +5338,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 goto illegal_op;
             }
 #ifdef TARGET_X86_64
-            if (dflag == MO_64) {
+            if (dflag == MO_UQ) {
                 if (!(s->cpuid_ext_features & CPUID_EXT_CX16)) {
                     goto illegal_op;
                 }
@@ -5636,7 +5636,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             ot = mo_b_d(b, dflag);
             switch (s->aflag) {
 #ifdef TARGET_X86_64
-            case MO_64:
+            case MO_UQ:
                 offset_addr = x86_ldq_code(env, s);
                 break;
 #endif
@@ -5671,13 +5671,13 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
         break;
     case 0xb8 ... 0xbf: /* mov R, Iv */
 #ifdef TARGET_X86_64
-        if (dflag == MO_64) {
+        if (dflag == MO_UQ) {
             uint64_t tmp;
             /* 64 bit case */
             tmp = x86_ldq_code(env, s);
             reg = (b & 7) | REX_B(s);
             tcg_gen_movi_tl(s->T0, tmp);
-            gen_op_mov_reg_v(s, MO_64, reg, s->T0);
+            gen_op_mov_reg_v(s, MO_UQ, reg, s->T0);
         } else
 #endif
         {
@@ -7119,10 +7119,10 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1c8 ... 0x1cf: /* bswap reg */
         reg = (b & 7) | REX_B(s);
 #ifdef TARGET_X86_64
-        if (dflag == MO_64) {
-            gen_op_mov_v_reg(s, MO_64, s->T0, reg);
+        if (dflag == MO_UQ) {
+            gen_op_mov_v_reg(s, MO_UQ, s->T0, reg);
             tcg_gen_bswap64_i64(s->T0, s->T0);
-            gen_op_mov_reg_v(s, MO_64, reg, s->T0);
+            gen_op_mov_reg_v(s, MO_UQ, reg, s->T0);
         } else
 #endif
         {
@@ -7700,7 +7700,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             if (mod == 3) {
                 gen_op_mov_v_reg(s, MO_UL, s->T0, rm);
                 /* sign extend */
-                if (d_ot == MO_64) {
+                if (d_ot == MO_UQ) {
                     tcg_gen_ext32s_tl(s->T0, s->T0);
                 }
                 gen_op_mov_reg_v(s, d_ot, reg, s->T0);
@@ -8014,7 +8014,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             rm = (modrm & 7) | REX_B(s);
             reg = ((modrm >> 3) & 7) | rex_r;
             if (CODE64(s))
-                ot = MO_64;
+                ot = MO_UQ;
             else
                 ot = MO_UL;
             if ((prefixes & PREFIX_LOCK) && (reg == 0) &&
@@ -8071,7 +8071,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
             rm = (modrm & 7) | REX_B(s);
             reg = ((modrm >> 3) & 7) | rex_r;
             if (CODE64(s))
-                ot = MO_64;
+                ot = MO_UQ;
             else
                 ot = MO_UL;
             if (reg >= 8) {
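
The recurring "ot == MO_UQ ? 0x3f : 0x1f" tests in this file mirror the
architectural rule that x86 masks shift and rotate counts to 5 bits, or 6
bits when REX.W selects a 64-bit operand. Hedged sketch of the idiom:

    target_ulong mask = (ot == MO_UQ ? 0x3f : 0x1f);
    tcg_gen_andi_tl(s->T1, s->T1, mask);   /* clamp the count */
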
diff --git a/target/mips/translate.c b/target/mips/translate.c
index 525c7fe..1023f68 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -3766,7 +3766,7 @@ static void gen_scwp(DisasContext *ctx, uint32_t base, int16_t offset,

     tcg_gen_ld_i64(llval, cpu_env, offsetof(CPUMIPSState, llval_wp));
     tcg_gen_atomic_cmpxchg_i64(val, taddr, llval, tval,
-                               eva ? MIPS_HFLAG_UM : ctx->mem_idx, MO_64);
+                               eva ? MIPS_HFLAG_UM : ctx->mem_idx, MO_UQ);
     if (reg1 != 0) {
         tcg_gen_movi_tl(cpu_gpr[reg1], 1);
     }
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 4a5de28..f39dd94 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -2470,10 +2470,10 @@ GEN_QEMU_LOAD_64(ld8u,  DEF_MEMOP(MO_UB))
 GEN_QEMU_LOAD_64(ld16u, DEF_MEMOP(MO_UW))
 GEN_QEMU_LOAD_64(ld32u, DEF_MEMOP(MO_UL))
 GEN_QEMU_LOAD_64(ld32s, DEF_MEMOP(MO_SL))
-GEN_QEMU_LOAD_64(ld64,  DEF_MEMOP(MO_Q))
+GEN_QEMU_LOAD_64(ld64,  DEF_MEMOP(MO_UQ))

 #if defined(TARGET_PPC64)
-GEN_QEMU_LOAD_64(ld64ur, BSWAP_MEMOP(MO_Q))
+GEN_QEMU_LOAD_64(ld64ur, BSWAP_MEMOP(MO_UQ))
 #endif

 #define GEN_QEMU_STORE_TL(stop, op)                                     \
@@ -2502,10 +2502,10 @@ static void glue(gen_qemu_, glue(stop, _i64))(DisasContext *ctx,  \
 GEN_QEMU_STORE_64(st8,  DEF_MEMOP(MO_UB))
 GEN_QEMU_STORE_64(st16, DEF_MEMOP(MO_UW))
 GEN_QEMU_STORE_64(st32, DEF_MEMOP(MO_UL))
-GEN_QEMU_STORE_64(st64, DEF_MEMOP(MO_Q))
+GEN_QEMU_STORE_64(st64, DEF_MEMOP(MO_UQ))

 #if defined(TARGET_PPC64)
-GEN_QEMU_STORE_64(st64r, BSWAP_MEMOP(MO_Q))
+GEN_QEMU_STORE_64(st64r, BSWAP_MEMOP(MO_UQ))
 #endif

 #define GEN_LD(name, ldop, opc, type)                                         \
@@ -2605,7 +2605,7 @@ GEN_LDEPX(lb, DEF_MEMOP(MO_UB), 0x1F, 0x02)
 GEN_LDEPX(lh, DEF_MEMOP(MO_UW), 0x1F, 0x08)
 GEN_LDEPX(lw, DEF_MEMOP(MO_UL), 0x1F, 0x00)
 #if defined(TARGET_PPC64)
-GEN_LDEPX(ld, DEF_MEMOP(MO_Q), 0x1D, 0x00)
+GEN_LDEPX(ld, DEF_MEMOP(MO_UQ), 0x1D, 0x00)
 #endif

 #if defined(TARGET_PPC64)
@@ -2808,7 +2808,7 @@ GEN_STEPX(stb, DEF_MEMOP(MO_UB), 0x1F, 0x06)
 GEN_STEPX(sth, DEF_MEMOP(MO_UW), 0x1F, 0x0C)
 GEN_STEPX(stw, DEF_MEMOP(MO_UL), 0x1F, 0x04)
 #if defined(TARGET_PPC64)
-GEN_STEPX(std, DEF_MEMOP(MO_Q), 0x1d, 0x04)
+GEN_STEPX(std, DEF_MEMOP(MO_UQ), 0x1d, 0x04)
 #endif

 #if defined(TARGET_PPC64)
@@ -3244,7 +3244,7 @@ static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
             TCGv t1 = tcg_temp_new();

             tcg_gen_qemu_ld_tl(t0, EA, ctx->mem_idx, memop);
-            if ((memop & MO_SIZE) == MO_64 || TARGET_LONG_BITS == 32) {
+            if ((memop & MO_SIZE) == MO_UQ || TARGET_LONG_BITS == 32) {
                 tcg_gen_mov_tl(t1, src);
             } else {
                 tcg_gen_ext32u_tl(t1, src);
@@ -3302,7 +3302,7 @@ static void gen_lwat(DisasContext *ctx)
 #ifdef TARGET_PPC64
 static void gen_ldat(DisasContext *ctx)
 {
-    gen_ld_atomic(ctx, DEF_MEMOP(MO_Q));
+    gen_ld_atomic(ctx, DEF_MEMOP(MO_UQ));
 }
 #endif

@@ -3385,7 +3385,7 @@ static void gen_stwat(DisasContext *ctx)
 #ifdef TARGET_PPC64
 static void gen_stdat(DisasContext *ctx)
 {
-    gen_st_atomic(ctx, DEF_MEMOP(MO_Q));
+    gen_st_atomic(ctx, DEF_MEMOP(MO_UQ));
 }
 #endif

@@ -3437,9 +3437,9 @@ STCX(stwcx_, DEF_MEMOP(MO_UL))

 #if defined(TARGET_PPC64)
 /* ldarx */
-LARX(ldarx, DEF_MEMOP(MO_Q))
+LARX(ldarx, DEF_MEMOP(MO_UQ))
 /* stdcx. */
-STCX(stdcx_, DEF_MEMOP(MO_Q))
+STCX(stdcx_, DEF_MEMOP(MO_UQ))

 /* lqarx */
 static void gen_lqarx(DisasContext *ctx)
@@ -3520,7 +3520,7 @@ static void gen_stqcx_(DisasContext *ctx)

     if (tb_cflags(ctx->base.tb) & CF_PARALLEL) {
         if (HAVE_CMPXCHG128) {
-            TCGv_i32 oi = tcg_const_i32(DEF_MEMOP(MO_Q) | MO_ALIGN_16);
+            TCGv_i32 oi = tcg_const_i32(DEF_MEMOP(MO_UQ) | MO_ALIGN_16);
             if (ctx->le_mode) {
                 gen_helper_stqcx_le_parallel(cpu_crf[0], cpu_env,
                                              EA, lo, hi, oi);
@@ -7366,7 +7366,7 @@ GEN_LDEPX(lb, DEF_MEMOP(MO_UB), 0x1F, 0x02)
 GEN_LDEPX(lh, DEF_MEMOP(MO_UW), 0x1F, 0x08)
 GEN_LDEPX(lw, DEF_MEMOP(MO_UL), 0x1F, 0x00)
 #if defined(TARGET_PPC64)
-GEN_LDEPX(ld, DEF_MEMOP(MO_Q), 0x1D, 0x00)
+GEN_LDEPX(ld, DEF_MEMOP(MO_UQ), 0x1D, 0x00)
 #endif

 #undef GEN_ST
@@ -7412,7 +7412,7 @@ GEN_STEPX(stb, DEF_MEMOP(MO_UB), 0x1F, 0x06)
 GEN_STEPX(sth, DEF_MEMOP(MO_UW), 0x1F, 0x0C)
 GEN_STEPX(stw, DEF_MEMOP(MO_UL), 0x1F, 0x04)
 #if defined(TARGET_PPC64)
-GEN_STEPX(std, DEF_MEMOP(MO_Q), 0x1D, 0x04)
+GEN_STEPX(std, DEF_MEMOP(MO_UQ), 0x1D, 0x04)
 #endif

 #undef GEN_CRLOGIC
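
For reference, the macros being handed MO_UQ throughout this file fold in the
context's default endianness, so these hunks are again purely textual. Quoted
from memory, so treat as a sketch:

    #define DEF_MEMOP(op)   ((op) | ctx->default_tcg_memop_mask)
    #define BSWAP_MEMOP(op) ((op) | (ctx->default_tcg_memop_mask ^ MO_BSWAP))
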
diff --git a/target/ppc/translate/fp-impl.inc.c b/target/ppc/translate/fp-impl.inc.c
index 9dcff94..3fd54ac 100644
--- a/target/ppc/translate/fp-impl.inc.c
+++ b/target/ppc/translate/fp-impl.inc.c
@@ -855,7 +855,7 @@ static void gen_lfdepx(DisasContext *ctx)
     EA = tcg_temp_new();
     t0 = tcg_temp_new_i64();
     gen_addr_reg_index(ctx, EA);
-    tcg_gen_qemu_ld_i64(t0, EA, PPC_TLB_EPID_LOAD, DEF_MEMOP(MO_Q));
+    tcg_gen_qemu_ld_i64(t0, EA, PPC_TLB_EPID_LOAD, DEF_MEMOP(MO_UQ));
     set_fpr(rD(ctx->opcode), t0);
     tcg_temp_free(EA);
     tcg_temp_free_i64(t0);
@@ -1091,7 +1091,7 @@ static void gen_stfdepx(DisasContext *ctx)
     t0 = tcg_temp_new_i64();
     gen_addr_reg_index(ctx, EA);
     get_fpr(t0, rD(ctx->opcode));
-    tcg_gen_qemu_st_i64(t0, EA, PPC_TLB_EPID_STORE, DEF_MEMOP(MO_Q));
+    tcg_gen_qemu_st_i64(t0, EA, PPC_TLB_EPID_STORE, DEF_MEMOP(MO_UQ));
     tcg_temp_free(EA);
     tcg_temp_free_i64(t0);
 }
diff --git a/target/ppc/translate/vmx-impl.inc.c b/target/ppc/translate/vmx-impl.inc.c
index 8aa767e..867dc52 100644
--- a/target/ppc/translate/vmx-impl.inc.c
+++ b/target/ppc/translate/vmx-impl.inc.c
@@ -290,14 +290,14 @@ static void glue(gen_, name)(DisasContext *ctx)                         \
 }

 /* Logical operations */
-GEN_VXFORM_V(vand, MO_64, tcg_gen_gvec_and, 2, 16);
-GEN_VXFORM_V(vandc, MO_64, tcg_gen_gvec_andc, 2, 17);
-GEN_VXFORM_V(vor, MO_64, tcg_gen_gvec_or, 2, 18);
-GEN_VXFORM_V(vxor, MO_64, tcg_gen_gvec_xor, 2, 19);
-GEN_VXFORM_V(vnor, MO_64, tcg_gen_gvec_nor, 2, 20);
-GEN_VXFORM_V(veqv, MO_64, tcg_gen_gvec_eqv, 2, 26);
-GEN_VXFORM_V(vnand, MO_64, tcg_gen_gvec_nand, 2, 22);
-GEN_VXFORM_V(vorc, MO_64, tcg_gen_gvec_orc, 2, 21);
+GEN_VXFORM_V(vand, MO_UQ, tcg_gen_gvec_and, 2, 16);
+GEN_VXFORM_V(vandc, MO_UQ, tcg_gen_gvec_andc, 2, 17);
+GEN_VXFORM_V(vor, MO_UQ, tcg_gen_gvec_or, 2, 18);
+GEN_VXFORM_V(vxor, MO_UQ, tcg_gen_gvec_xor, 2, 19);
+GEN_VXFORM_V(vnor, MO_UQ, tcg_gen_gvec_nor, 2, 20);
+GEN_VXFORM_V(veqv, MO_UQ, tcg_gen_gvec_eqv, 2, 26);
+GEN_VXFORM_V(vnand, MO_UQ, tcg_gen_gvec_nand, 2, 22);
+GEN_VXFORM_V(vorc, MO_UQ, tcg_gen_gvec_orc, 2, 21);

 #define GEN_VXFORM(name, opc2, opc3)                                    \
 static void glue(gen_, name)(DisasContext *ctx)                         \
@@ -410,27 +410,27 @@ GEN_VXFORM_V(vadduhm, MO_UW, tcg_gen_gvec_add, 0, 1);
 GEN_VXFORM_DUAL(vadduhm, PPC_ALTIVEC, PPC_NONE,  \
                 vmul10ecuq, PPC_NONE, PPC2_ISA300)
 GEN_VXFORM_V(vadduwm, MO_UL, tcg_gen_gvec_add, 0, 2);
-GEN_VXFORM_V(vaddudm, MO_64, tcg_gen_gvec_add, 0, 3);
+GEN_VXFORM_V(vaddudm, MO_UQ, tcg_gen_gvec_add, 0, 3);
 GEN_VXFORM_V(vsububm, MO_UB, tcg_gen_gvec_sub, 0, 16);
 GEN_VXFORM_V(vsubuhm, MO_UW, tcg_gen_gvec_sub, 0, 17);
 GEN_VXFORM_V(vsubuwm, MO_UL, tcg_gen_gvec_sub, 0, 18);
-GEN_VXFORM_V(vsubudm, MO_64, tcg_gen_gvec_sub, 0, 19);
+GEN_VXFORM_V(vsubudm, MO_UQ, tcg_gen_gvec_sub, 0, 19);
 GEN_VXFORM_V(vmaxub, MO_UB, tcg_gen_gvec_umax, 1, 0);
 GEN_VXFORM_V(vmaxuh, MO_UW, tcg_gen_gvec_umax, 1, 1);
 GEN_VXFORM_V(vmaxuw, MO_UL, tcg_gen_gvec_umax, 1, 2);
-GEN_VXFORM_V(vmaxud, MO_64, tcg_gen_gvec_umax, 1, 3);
+GEN_VXFORM_V(vmaxud, MO_UQ, tcg_gen_gvec_umax, 1, 3);
 GEN_VXFORM_V(vmaxsb, MO_UB, tcg_gen_gvec_smax, 1, 4);
 GEN_VXFORM_V(vmaxsh, MO_UW, tcg_gen_gvec_smax, 1, 5);
 GEN_VXFORM_V(vmaxsw, MO_UL, tcg_gen_gvec_smax, 1, 6);
-GEN_VXFORM_V(vmaxsd, MO_64, tcg_gen_gvec_smax, 1, 7);
+GEN_VXFORM_V(vmaxsd, MO_UQ, tcg_gen_gvec_smax, 1, 7);
 GEN_VXFORM_V(vminub, MO_UB, tcg_gen_gvec_umin, 1, 8);
 GEN_VXFORM_V(vminuh, MO_UW, tcg_gen_gvec_umin, 1, 9);
 GEN_VXFORM_V(vminuw, MO_UL, tcg_gen_gvec_umin, 1, 10);
-GEN_VXFORM_V(vminud, MO_64, tcg_gen_gvec_umin, 1, 11);
+GEN_VXFORM_V(vminud, MO_UQ, tcg_gen_gvec_umin, 1, 11);
 GEN_VXFORM_V(vminsb, MO_UB, tcg_gen_gvec_smin, 1, 12);
 GEN_VXFORM_V(vminsh, MO_UW, tcg_gen_gvec_smin, 1, 13);
 GEN_VXFORM_V(vminsw, MO_UL, tcg_gen_gvec_smin, 1, 14);
-GEN_VXFORM_V(vminsd, MO_64, tcg_gen_gvec_smin, 1, 15);
+GEN_VXFORM_V(vminsd, MO_UQ, tcg_gen_gvec_smin, 1, 15);
 GEN_VXFORM(vavgub, 1, 16);
 GEN_VXFORM(vabsdub, 1, 16);
 GEN_VXFORM_DUAL(vavgub, PPC_ALTIVEC, PPC_NONE, \
@@ -536,15 +536,15 @@ GEN_VXFORM_V(vslw, MO_UL, tcg_gen_gvec_shlv, 2, 6);
 GEN_VXFORM(vrlwnm, 2, 6);
 GEN_VXFORM_DUAL(vslw, PPC_ALTIVEC, PPC_NONE, \
                 vrlwnm, PPC_NONE, PPC2_ISA300)
-GEN_VXFORM_V(vsld, MO_64, tcg_gen_gvec_shlv, 2, 23);
+GEN_VXFORM_V(vsld, MO_UQ, tcg_gen_gvec_shlv, 2, 23);
 GEN_VXFORM_V(vsrb, MO_UB, tcg_gen_gvec_shrv, 2, 8);
 GEN_VXFORM_V(vsrh, MO_UW, tcg_gen_gvec_shrv, 2, 9);
 GEN_VXFORM_V(vsrw, MO_UL, tcg_gen_gvec_shrv, 2, 10);
-GEN_VXFORM_V(vsrd, MO_64, tcg_gen_gvec_shrv, 2, 27);
+GEN_VXFORM_V(vsrd, MO_UQ, tcg_gen_gvec_shrv, 2, 27);
 GEN_VXFORM_V(vsrab, MO_UB, tcg_gen_gvec_sarv, 2, 12);
 GEN_VXFORM_V(vsrah, MO_UW, tcg_gen_gvec_sarv, 2, 13);
 GEN_VXFORM_V(vsraw, MO_UL, tcg_gen_gvec_sarv, 2, 14);
-GEN_VXFORM_V(vsrad, MO_64, tcg_gen_gvec_sarv, 2, 15);
+GEN_VXFORM_V(vsrad, MO_UQ, tcg_gen_gvec_sarv, 2, 15);
 GEN_VXFORM(vsrv, 2, 28);
 GEN_VXFORM(vslv, 2, 29);
 GEN_VXFORM(vslo, 6, 16);
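
Two distinct uses of MO_UQ sit side by side here: for vaddudm, vsld and
friends it genuinely means 64-bit lanes, while for the logical ops
(vand ... vorc) any element size would do, since bitwise operations are
lane-agnostic and MO_UQ is merely the widest single step. The expansion has
the same shape either way (hedged sketch; dofs/aofs/bofs stand in for the
offsets the macro computes):

    tcg_gen_gvec_xor(MO_UQ, dofs, aofs, bofs, 16, 16);
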
diff --git a/target/ppc/translate/vsx-impl.inc.c b/target/ppc/translate/vsx-impl.inc.c
index 212817e..d607974 100644
--- a/target/ppc/translate/vsx-impl.inc.c
+++ b/target/ppc/translate/vsx-impl.inc.c
@@ -1475,14 +1475,14 @@ static void glue(gen_, name)(DisasContext *ctx)                      \
                vsr_full_offset(xB(ctx->opcode)), 16, 16);            \
     }

-VSX_LOGICAL(xxland, MO_64, tcg_gen_gvec_and)
-VSX_LOGICAL(xxlandc, MO_64, tcg_gen_gvec_andc)
-VSX_LOGICAL(xxlor, MO_64, tcg_gen_gvec_or)
-VSX_LOGICAL(xxlxor, MO_64, tcg_gen_gvec_xor)
-VSX_LOGICAL(xxlnor, MO_64, tcg_gen_gvec_nor)
-VSX_LOGICAL(xxleqv, MO_64, tcg_gen_gvec_eqv)
-VSX_LOGICAL(xxlnand, MO_64, tcg_gen_gvec_nand)
-VSX_LOGICAL(xxlorc, MO_64, tcg_gen_gvec_orc)
+VSX_LOGICAL(xxland, MO_UQ, tcg_gen_gvec_and)
+VSX_LOGICAL(xxlandc, MO_UQ, tcg_gen_gvec_andc)
+VSX_LOGICAL(xxlor, MO_UQ, tcg_gen_gvec_or)
+VSX_LOGICAL(xxlxor, MO_UQ, tcg_gen_gvec_xor)
+VSX_LOGICAL(xxlnor, MO_UQ, tcg_gen_gvec_nor)
+VSX_LOGICAL(xxleqv, MO_UQ, tcg_gen_gvec_eqv)
+VSX_LOGICAL(xxlnand, MO_UQ, tcg_gen_gvec_nand)
+VSX_LOGICAL(xxlorc, MO_UQ, tcg_gen_gvec_orc)

 #define VSX_XXMRG(name, high)                               \
 static void glue(gen_, name)(DisasContext *ctx)             \
@@ -1535,7 +1535,7 @@ static void gen_xxsel(DisasContext *ctx)
         gen_exception(ctx, POWERPC_EXCP_VSXU);
         return;
     }
-    tcg_gen_gvec_bitsel(MO_64, vsr_full_offset(rt), vsr_full_offset(rc),
+    tcg_gen_gvec_bitsel(MO_UQ, vsr_full_offset(rt), vsr_full_offset(rc),
                         vsr_full_offset(rb), vsr_full_offset(ra), 16, 16);
 }

diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index 9e646f1..5c72db1 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -180,7 +180,7 @@ static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
      * the two 8 byte elements have to be loaded separately. Let's force all
      * 16 byte operations to handle it in a special way.
      */
-    g_assert(es <= MO_64);
+    g_assert(es <= MO_UQ);
 #ifndef HOST_WORDS_BIGENDIAN
     offs ^= (8 - bytes);
 #endif
@@ -190,7 +190,7 @@ static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
 static inline int freg64_offset(uint8_t reg)
 {
     g_assert(reg < 16);
-    return vec_reg_offset(reg, 0, MO_64);
+    return vec_reg_offset(reg, 0, MO_UQ);
 }

 static inline int freg32_offset(uint8_t reg)
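
On the vec_reg_offset() hunk: the assert now reads es <= MO_UQ, and the XOR
below it only matters for sub-8-byte elements. Worked example on a
little-endian host, assuming offs starts as enr * (1 << es):

    /* es = MO_UQ:          offs ^= (8 - 8) = 0 -- 64-bit halves sit at
     *                      their natural offsets (hence the 128-bit
     *                      special case in the comment above);
     * es = MO_UB, enr = 0: offs = 0 ^ 7 = 7 -- bytes are mirrored
     *                      within each 8-byte half.                   */
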
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 75d788c..6252262 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -30,8 +30,8 @@
  * Sizes:
  *  On s390x, the operand size (oprsz) and the maximum size (maxsz) are
  *  always 16 (128 bit). What gvec code calls "vece", s390x calls "es",
- *  a.k.a. "element size". These values nicely map to MO_UB ... MO_64. Only
- *  128 bit element size has to be treated in a special way (MO_64 + 1).
+ *  a.k.a. "element size". These values nicely map to MO_UB ... MO_UQ. Only
+ *  128 bit element size has to be treated in a special way (MO_UQ + 1).
  *  We will use ES_* instead of MO_* for this reason in this file.
  *
  * CC handling:
@@ -49,7 +49,7 @@
 #define ES_8    MO_UB
 #define ES_16   MO_UW
 #define ES_32   MO_UL
-#define ES_64   MO_64
+#define ES_64   MO_UQ
 #define ES_128  4

 /* Floating-Point Format */
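
With ES_64 spelled MO_UQ and ES_128 sitting one past it, the size dispatches
in this file stay plain comparisons, along the lines of (hedged sketch):

    if (es > ES_64) {
        /* ES_128: operate on the two 64-bit halves separately */
    }
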
diff --git a/target/s390x/vec.h b/target/s390x/vec.h
index f67392c..b59da65 100644
--- a/target/s390x/vec.h
+++ b/target/s390x/vec.h
@@ -82,7 +82,7 @@ static inline uint64_t s390_vec_read_element(const S390Vector *v, uint8_t enr,
         return s390_vec_read_element16(v, enr);
     case MO_UL:
         return s390_vec_read_element32(v, enr);
-    case MO_64:
+    case MO_UQ:
         return s390_vec_read_element64(v, enr);
     default:
         g_assert_not_reached();
@@ -130,7 +130,7 @@ static inline void s390_vec_write_element(S390Vector *v, uint8_t enr,
     case MO_UL:
         s390_vec_write_element32(v, enr, data);
         break;
-    case MO_64:
+    case MO_UQ:
         s390_vec_write_element64(v, enr, data);
         break;
     default:
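
Callers pass the element size straight through, e.g. (hedged):

    uint64_t q0 = s390_vec_read_element(v, 0, MO_UQ);  /* first 64-bit half */
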
diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index 091bab5..499622b 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -2840,7 +2840,7 @@ static void gen_ldda_asi(DisasContext *dc, TCGv addr, int insn, int rd)
     default:
         {
             TCGv_i32 r_asi = tcg_const_i32(da.asi);
-            TCGv_i32 r_mop = tcg_const_i32(MO_Q);
+            TCGv_i32 r_mop = tcg_const_i32(MO_UQ);

             save_state(dc);
             gen_helper_ld_asi(t64, cpu_env, addr, r_asi, r_mop);
@@ -2896,7 +2896,7 @@ static void gen_stda_asi(DisasContext *dc, TCGv hi, TCGv addr,
     default:
         {
             TCGv_i32 r_asi = tcg_const_i32(da.asi);
-            TCGv_i32 r_mop = tcg_const_i32(MO_Q);
+            TCGv_i32 r_mop = tcg_const_i32(MO_UQ);

             save_state(dc);
             gen_helper_st_asi(cpu_env, addr, t64, r_asi, r_mop);
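
As elsewhere, MO_Q was only ever an alias for the 64-bit size, so the
ld_asi/st_asi helpers see the same value before and after the rename; a
compile-time check along these lines would pin that down (hedged sketch):

    QEMU_BUILD_BUG_ON((MO_UQ & MO_SIZE) != 3);   /* 64-bit: 1 << 3 bytes */
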
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index dc4fd21..d14afa9 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -432,12 +432,12 @@ typedef enum {
     I3312_STRB      = 0x38000000 | LDST_ST << 22 | MO_UB << 30,
     I3312_STRH      = 0x38000000 | LDST_ST << 22 | MO_UW << 30,
     I3312_STRW      = 0x38000000 | LDST_ST << 22 | MO_UL << 30,
-    I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_64 << 30,
+    I3312_STRX      = 0x38000000 | LDST_ST << 22 | MO_UQ << 30,

     I3312_LDRB      = 0x38000000 | LDST_LD << 22 | MO_UB << 30,
     I3312_LDRH      = 0x38000000 | LDST_LD << 22 | MO_UW << 30,
     I3312_LDRW      = 0x38000000 | LDST_LD << 22 | MO_UL << 30,
-    I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_64 << 30,
+    I3312_LDRX      = 0x38000000 | LDST_LD << 22 | MO_UQ << 30,

     I3312_LDRSBW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UB << 30,
     I3312_LDRSHW    = 0x38000000 | LDST_LD_S_W << 22 | MO_UW << 30,
@@ -449,8 +449,8 @@ typedef enum {
     I3312_LDRVS     = 0x3c000000 | LDST_LD << 22 | MO_UL << 30,
     I3312_STRVS     = 0x3c000000 | LDST_ST << 22 | MO_UL << 30,

-    I3312_LDRVD     = 0x3c000000 | LDST_LD << 22 | MO_64 << 30,
-    I3312_STRVD     = 0x3c000000 | LDST_ST << 22 | MO_64 << 30,
+    I3312_LDRVD     = 0x3c000000 | LDST_LD << 22 | MO_UQ << 30,
+    I3312_STRVD     = 0x3c000000 | LDST_ST << 22 | MO_UQ << 30,

     I3312_LDRVQ     = 0x3c000000 | 3 << 22 | 0 << 30,
     I3312_STRVQ     = 0x3c000000 | 2 << 22 | 0 << 30,
@@ -1595,7 +1595,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     if (opc & MO_SIGN) {
         tcg_out_sxt(s, lb->type, size, lb->datalo_reg, TCG_REG_X0);
     } else {
-        tcg_out_mov(s, size == MO_64, lb->datalo_reg, TCG_REG_X0);
+        tcg_out_mov(s, size == MO_UQ, lb->datalo_reg, TCG_REG_X0);
     }

     tcg_out_goto(s, lb->raddr);
@@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)

     tcg_out_mov(s, TCG_TYPE_PTR, TCG_REG_X0, TCG_AREG0);
     tcg_out_mov(s, TARGET_LONG_BITS == 64, TCG_REG_X1, lb->addrlo_reg);
-    tcg_out_mov(s, size == MO_64, TCG_REG_X2, lb->datalo_reg);
+    tcg_out_mov(s, size == MO_UQ, TCG_REG_X2, lb->datalo_reg);
     tcg_out_movi(s, TCG_TYPE_I32, TCG_REG_X3, oi);
     tcg_out_adr(s, TCG_REG_X4, lb->raddr);
     tcg_out_call(s, qemu_st_helpers[opc & (MO_BSWAP | MO_SIZE)]);
@@ -1754,7 +1754,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
             tcg_out_ldst_r(s, I3312_LDRSWX, data_r, addr_r, otype, off_r);
         }
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_out_ldst_r(s, I3312_LDRX, data_r, addr_r, otype, off_r);
         if (bswap) {
             tcg_out_rev64(s, data_r, data_r);
@@ -1789,7 +1789,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
         }
         tcg_out_ldst_r(s, I3312_STRW, data_r, addr_r, otype, off_r);
         break;
-    case MO_64:
+    case MO_UQ:
         if (bswap && data_r != TCG_REG_XZR) {
             tcg_out_rev64(s, TCG_REG_TMP, data_r);
             data_r = TCG_REG_TMP;
@@ -1838,7 +1838,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
     tcg_out_tlb_read(s, addr_reg, memop, &label_ptr, mem_index, 0);
     tcg_out_qemu_st_direct(s, memop, data_reg,
                            TCG_REG_X1, otype, addr_reg);
-    add_qemu_ldst_label(s, false, oi, (memop & MO_SIZE)== MO_64,
+    add_qemu_ldst_label(s, false, oi, (memop & MO_SIZE) == MO_UQ,
                         data_reg, addr_reg, s->code_ptr, label_ptr);
 #else /* !CONFIG_SOFTMMU */
     if (USE_GUEST_BASE) {
@@ -2506,7 +2506,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
     case INDEX_op_smin_vec:
     case INDEX_op_umax_vec:
     case INDEX_op_umin_vec:
-        return vece < MO_64;
+        return vece < MO_UQ;

     default:
         return 0;
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 05560a2..70eeb8a 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1389,7 +1389,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     default:
         tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0);
         break;
-    case MO_Q:
+    case MO_UQ:
         if (datalo != TCG_REG_R1) {
             tcg_out_mov_reg(s, COND_AL, datalo, TCG_REG_R0);
             tcg_out_mov_reg(s, COND_AL, datahi, TCG_REG_R1);
@@ -1439,7 +1439,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     default:
         argreg = tcg_out_arg_reg32(s, argreg, datalo);
         break;
-    case MO_64:
+    case MO_UQ:
         argreg = tcg_out_arg_reg64(s, argreg, datalo, datahi);
         break;
     }
@@ -1487,7 +1487,7 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
             tcg_out_bswap32(s, COND_AL, datalo, datalo);
         }
         break;
-    case MO_Q:
+    case MO_UQ:
         {
             TCGReg dl = (bswap ? datahi : datalo);
             TCGReg dh = (bswap ? datalo : datahi);
@@ -1548,7 +1548,7 @@ static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
             tcg_out_bswap32(s, COND_AL, datalo, datalo);
         }
         break;
-    case MO_Q:
+    case MO_UQ:
         {
             TCGReg dl = (bswap ? datahi : datalo);
             TCGReg dh = (bswap ? datalo : datahi);
@@ -1641,7 +1641,7 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
             tcg_out_st32_r(s, cond, datalo, addrlo, addend);
         }
         break;
-    case MO_64:
+    case MO_UQ:
         /* Avoid strd for user-only emulation, to handle unaligned.  */
         if (bswap) {
             tcg_out_bswap32(s, cond, TCG_REG_R0, datahi);
@@ -1686,7 +1686,7 @@ static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
             tcg_out_st32_12(s, COND_AL, datalo, addrlo, 0);
         }
         break;
-    case MO_64:
+    case MO_UQ:
         /* Avoid strd for user-only emulation, to handle unaligned.  */
         if (bswap) {
             tcg_out_bswap32(s, COND_AL, TCG_REG_R0, datahi);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 93e4c63..3a73334 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -902,7 +902,7 @@ static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
             /* imm8 operand: all output lanes selected from input lane 0.  */
             tcg_out8(s, 0);
             break;
-        case MO_64:
+        case MO_UQ:
             tcg_out_vex_modrm(s, OPC_PUNPCKLQDQ, r, a, a);
             break;
         default:
@@ -921,7 +921,7 @@ static bool tcg_out_dupm_vec(TCGContext *s, TCGType type, unsigned vece,
                                  r, 0, base, offset);
     } else {
         switch (vece) {
-        case MO_64:
+        case MO_UQ:
             tcg_out_vex_modrm_offset(s, OPC_MOVDDUP, r, 0, base, offset);
             break;
         case MO_UL:
@@ -1868,7 +1868,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UL:
         tcg_out_mov(s, TCG_TYPE_I32, data_reg, TCG_REG_EAX);
         break;
-    case MO_Q:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_mov(s, TCG_TYPE_I64, data_reg, TCG_REG_RAX);
         } else if (data_reg == TCG_REG_EDX) {
@@ -1923,7 +1923,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
         tcg_out_st(s, TCG_TYPE_I32, l->datalo_reg, TCG_REG_ESP, ofs);
         ofs += 4;

-        if (s_bits == MO_64) {
+        if (s_bits == MO_UQ) {
             tcg_out_st(s, TCG_TYPE_I32, l->datahi_reg, TCG_REG_ESP, ofs);
             ofs += 4;
         }
@@ -1937,7 +1937,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     } else {
         tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0], TCG_AREG0);
         /* The second argument is already loaded with addrlo.  */
-        tcg_out_mov(s, (s_bits == MO_64 ? TCG_TYPE_I64 : TCG_TYPE_I32),
+        tcg_out_mov(s, (s_bits == MO_UQ ? TCG_TYPE_I64 : TCG_TYPE_I32),
                     tcg_target_call_iarg_regs[2], l->datalo_reg);
         tcg_out_movi(s, TCG_TYPE_I32, tcg_target_call_iarg_regs[3], oi);

@@ -2060,7 +2060,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         }
         break;
 #endif
-    case MO_Q:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_modrm_sib_offset(s, movop + P_REXW + seg, datalo,
                                      base, index, 0, ofs);
@@ -2181,7 +2181,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
         }
         tcg_out_modrm_sib_offset(s, movop + seg, datalo, base, index, 0, ofs);
         break;
-    case MO_64:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             if (bswap) {
                 tcg_out_mov(s, TCG_TYPE_I64, scratch, datalo);
@@ -2755,7 +2755,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         OPC_UD2, OPC_UD2, OPC_VPSRLVD, OPC_VPSRLVQ
     };
     static int const sarv_insn[4] = {
-        /* TODO: AVX512 adds support for MO_UW, MO_64.  */
+        /* TODO: AVX512 adds support for MO_UW, MO_UQ.  */
         OPC_UD2, OPC_UD2, OPC_VPSRAVD, OPC_UD2
     };
     static int const shls_insn[4] = {
@@ -2768,7 +2768,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         OPC_UD2, OPC_PSRAW, OPC_PSRAD, OPC_UD2
     };
     static int const abs_insn[4] = {
-        /* TODO: AVX512 adds support for MO_64.  */
+        /* TODO: AVX512 adds support for MO_UQ.  */
         OPC_PABSB, OPC_PABSW, OPC_PABSD, OPC_UD2
     };

@@ -2898,7 +2898,7 @@ static void tcg_out_vec_op(TCGContext *s, TCGOpcode opc,
         sub = 2;
         goto gen_shift;
     case INDEX_op_sari_vec:
-        tcg_debug_assert(vece != MO_64);
+        tcg_debug_assert(vece != MO_UQ);
         sub = 4;
     gen_shift:
         tcg_debug_assert(vece != MO_UB);
@@ -3281,9 +3281,11 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
         if (vece == MO_UB) {
             return -1;
         }
-        /* We can emulate this for MO_64, but it does not pay off
-           unless we're producing at least 4 values.  */
-        if (vece == MO_64) {
+        /*
+         * We can emulate this for MO_UQ, but it does not pay off
+         * unless we're producing at least 4 values.
+         */
+        if (vece == MO_UQ) {
             return type >= TCG_TYPE_V256 ? -1 : 0;
         }
         return 1;
@@ -3305,7 +3307,7 @@ int tcg_can_emit_vec_op(TCGOpcode opc, TCGType type, unsigned vece)
             /* We can expand the operation for MO_UB.  */
             return -1;
         }
-        if (vece == MO_64) {
+        if (vece == MO_UQ) {
             return 0;
         }
         return 1;
@@ -3389,7 +3391,7 @@ static void expand_vec_sari(TCGType type, unsigned vece,
         tcg_temp_free_vec(t2);
         break;

-    case MO_64:
+    case MO_UQ:
         if (imm <= 32) {
             /* We can emulate a small sign extend by performing an arithmetic
              * 32-bit shift and overwriting the high half of a 64-bit logical
@@ -3397,7 +3399,7 @@ static void expand_vec_sari(TCGType type, unsigned vece,
              */
             t1 = tcg_temp_new_vec(type);
             tcg_gen_sari_vec(MO_UL, t1, v1, imm);
-            tcg_gen_shri_vec(MO_64, v0, v1, imm);
+            tcg_gen_shri_vec(MO_UQ, v0, v1, imm);
             vec_gen_4(INDEX_op_x86_blend_vec, type, MO_UL,
                       tcgv_vec_arg(v0), tcgv_vec_arg(v0),
                       tcgv_vec_arg(t1), 0xaa);
@@ -3407,10 +3409,10 @@ static void expand_vec_sari(TCGType type, unsigned vece,
              * the sign-extend, shift and merge.
              */
             t1 = tcg_const_zeros_vec(type);
-            tcg_gen_cmp_vec(TCG_COND_GT, MO_64, t1, t1, v1);
-            tcg_gen_shri_vec(MO_64, v0, v1, imm);
-            tcg_gen_shli_vec(MO_64, t1, t1, 64 - imm);
-            tcg_gen_or_vec(MO_64, v0, v0, t1);
+            tcg_gen_cmp_vec(TCG_COND_GT, MO_UQ, t1, t1, v1);
+            tcg_gen_shri_vec(MO_UQ, v0, v1, imm);
+            tcg_gen_shli_vec(MO_UQ, t1, t1, 64 - imm);
+            tcg_gen_or_vec(MO_UQ, v0, v0, t1);
             tcg_temp_free_vec(t1);
         }
         break;
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index a78fe87..ef31fc8 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1336,7 +1336,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     tcg_out_mov(s, TCG_TYPE_PTR, tcg_target_call_iarg_regs[0], TCG_AREG0);

     v0 = l->datalo_reg;
-    if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_UQ) {
         /* We eliminated V0 from the possible output registers, so it
            cannot be clobbered here.  So we must move V1 first.  */
         if (MIPS_BE) {
@@ -1389,7 +1389,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     case MO_UL:
         i = tcg_out_call_iarg_reg(s, i, l->datalo_reg);
         break;
-    case MO_64:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 32) {
             i = tcg_out_call_iarg_reg2(s, i, l->datalo_reg, l->datahi_reg);
         } else {
@@ -1470,7 +1470,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_SL:
         tcg_out_opc_imm(s, OPC_LW, lo, base, 0);
         break;
-    case MO_Q | MO_BSWAP:
+    case MO_UQ | MO_BSWAP:
         if (TCG_TARGET_REG_BITS == 64) {
             if (use_mips32r2_instructions) {
                 tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
@@ -1499,7 +1499,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
             tcg_out_mov(s, TCG_TYPE_I32, MIPS_BE ? hi : lo, TCG_TMP3);
         }
         break;
-    case MO_Q:
+    case MO_UQ:
         /* Prefer to load from offset 0 first, but allow for overlap.  */
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
@@ -1587,7 +1587,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
         tcg_out_opc_imm(s, OPC_SW, lo, base, 0);
         break;

-    case MO_64 | MO_BSWAP:
+    case MO_UQ | MO_BSWAP:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_bswap64(s, TCG_TMP3, lo);
             tcg_out_opc_imm(s, OPC_SD, TCG_TMP3, base, 0);
@@ -1605,7 +1605,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
             tcg_out_opc_imm(s, OPC_SW, TCG_TMP3, base, 4);
         }
         break;
-    case MO_64:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_opc_imm(s, OPC_SD, lo, base, 0);
         } else {
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 835336a..13a2437 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1445,24 +1445,24 @@ static const uint32_t qemu_ldx_opc[16] = {
     [MO_UB] = LBZX,
     [MO_UW] = LHZX,
     [MO_UL] = LWZX,
-    [MO_Q]  = LDX,
+    [MO_UQ]  = LDX,
     [MO_SW] = LHAX,
     [MO_SL] = LWAX,
     [MO_BSWAP | MO_UB] = LBZX,
     [MO_BSWAP | MO_UW] = LHBRX,
     [MO_BSWAP | MO_UL] = LWBRX,
-    [MO_BSWAP | MO_Q]  = LDBRX,
+    [MO_BSWAP | MO_UQ]  = LDBRX,
 };

 static const uint32_t qemu_stx_opc[16] = {
     [MO_UB] = STBX,
     [MO_UW] = STHX,
     [MO_UL] = STWX,
-    [MO_Q]  = STDX,
+    [MO_UQ]  = STDX,
     [MO_BSWAP | MO_UB] = STBX,
     [MO_BSWAP | MO_UW] = STHBRX,
     [MO_BSWAP | MO_UL] = STWBRX,
-    [MO_BSWAP | MO_Q]  = STDBRX,
+    [MO_BSWAP | MO_UQ]  = STDBRX,
 };

 static const uint32_t qemu_exts_opc[4] = {
@@ -1663,7 +1663,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)

     lo = lb->datalo_reg;
     hi = lb->datahi_reg;
-    if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (opc & MO_SIZE) == MO_UQ) {
         tcg_out_mov(s, TCG_TYPE_I32, lo, TCG_REG_R4);
         tcg_out_mov(s, TCG_TYPE_I32, hi, TCG_REG_R3);
     } else if (opc & MO_SIGN) {
@@ -1708,7 +1708,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     hi = lb->datahi_reg;
     if (TCG_TARGET_REG_BITS == 32) {
         switch (s_bits) {
-        case MO_64:
+        case MO_UQ:
 #ifdef TCG_TARGET_CALL_ALIGN_ARGS
             arg |= 1;
 #endif
@@ -1722,7 +1722,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
             break;
         }
     } else {
-        if (s_bits == MO_64) {
+        if (s_bits == MO_UQ) {
             tcg_out_mov(s, TCG_TYPE_I64, arg++, lo);
         } else {
             tcg_out_rld(s, RLDICL, arg++, lo, 0, 64 - (8 << s_bits));
@@ -1775,7 +1775,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     }
 #endif

-    if (TCG_TARGET_REG_BITS == 32 && s_bits == MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && s_bits == MO_UQ) {
         if (opc & MO_BSWAP) {
             tcg_out32(s, ADDI | TAI(TCG_REG_R0, addrlo, 4));
             tcg_out32(s, LWBRX | TAB(datalo, rbase, addrlo));
@@ -1850,7 +1850,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     }
 #endif

-    if (TCG_TARGET_REG_BITS == 32 && s_bits == MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && s_bits == MO_UQ) {
         if (opc & MO_BSWAP) {
             tcg_out32(s, ADDI | TAI(TCG_REG_R0, addrlo, 4));
             tcg_out32(s, STWBRX | SAB(datalo, rbase, addrlo));
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 1905986..90363df 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -1068,7 +1068,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
     tcg_out_movi(s, TCG_TYPE_PTR, a3, (tcg_target_long)l->raddr);

     tcg_out_call(s, qemu_ld_helpers[opc & (MO_BSWAP | MO_SSIZE)]);
-    tcg_out_mov(s, (opc & MO_SIZE) == MO_64, l->datalo_reg, a0);
+    tcg_out_mov(s, (opc & MO_SIZE) == MO_UQ, l->datalo_reg, a0);

     tcg_out_goto(s, l->raddr);
     return true;
@@ -1150,7 +1150,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_SL:
         tcg_out_opc_imm(s, OPC_LW, lo, base, 0);
         break;
-    case MO_Q:
+    case MO_UQ:
         /* Prefer to load from offset 0 first, but allow for overlap.  */
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
@@ -1225,7 +1225,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
     case MO_UL:
         tcg_out_opc_store(s, OPC_SW, base, lo, 0);
         break;
-    case MO_64:
+    case MO_UQ:
         if (TCG_TARGET_REG_BITS == 64) {
             tcg_out_opc_store(s, OPC_SD, base, lo, 0);
         } else {
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index fe42939..db1102e 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1477,10 +1477,10 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
         tcg_out_insn(s, RXY, LGF, data, base, index, disp);
         break;

-    case MO_Q | MO_BSWAP:
+    case MO_UQ | MO_BSWAP:
         tcg_out_insn(s, RXY, LRVG, data, base, index, disp);
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_out_insn(s, RXY, LG, data, base, index, disp);
         break;

@@ -1523,10 +1523,10 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
         }
         break;

-    case MO_Q | MO_BSWAP:
+    case MO_UQ | MO_BSWAP:
         tcg_out_insn(s, RXY, STRVG, data, base, index, disp);
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_out_insn(s, RXY, STG, data, base, index, disp);
         break;

@@ -1660,7 +1660,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     case MO_UL:
         tgen_ext32u(s, TCG_REG_R4, data_reg);
         break;
-    case MO_Q:
+    case MO_UQ:
         tcg_out_mov(s, TCG_TYPE_I64, TCG_REG_R4, data_reg);
         break;
     default:
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index ac0d3a3..7c50118 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -894,7 +894,7 @@ static void emit_extend(TCGContext *s, TCGReg r, int op)
             tcg_out_arith(s, r, r, 0, SHIFT_SRL);
         }
         break;
-    case MO_64:
+    case MO_UQ:
         break;
     }
 }
@@ -977,7 +977,7 @@ static void build_trampolines(TCGContext *s)
             } else {
                 ra += 1;
             }
-            if ((i & MO_SIZE) == MO_64) {
+            if ((i & MO_SIZE) == MO_UQ) {
                 /* Install the high part of the data.  */
                 tcg_out_arithi(s, ra, ra + 1, 32, SHIFT_SRLX);
                 ra += 2;
@@ -1217,7 +1217,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
             tcg_out_mov(s, TCG_TYPE_REG, data, TCG_REG_O0);
         }
     } else {
-        if ((memop & MO_SIZE) == MO_64) {
+        if ((memop & MO_SIZE) == MO_UQ) {
             tcg_out_arithi(s, TCG_REG_O0, TCG_REG_O0, 32, SHIFT_SLLX);
             tcg_out_arithi(s, TCG_REG_O1, TCG_REG_O1, 0, SHIFT_SRL);
             tcg_out_arith(s, data, TCG_REG_O0, TCG_REG_O1, ARITH_OR);
@@ -1274,7 +1274,7 @@ static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
         param++;
     }
     tcg_out_mov(s, TCG_TYPE_REG, param++, addrz);
-    if (!SPARC64 && (memop & MO_SIZE) == MO_64) {
+    if (!SPARC64 && (memop & MO_SIZE) == MO_UQ) {
         /* Skip the high-part; we'll perform the extract in the trampoline.  */
         param++;
     }
diff --git a/tcg/tcg-op-gvec.c b/tcg/tcg-op-gvec.c
index e63622c..0c0eea5 100644
--- a/tcg/tcg-op-gvec.c
+++ b/tcg/tcg-op-gvec.c
@@ -312,7 +312,7 @@ uint64_t (dup_const)(unsigned vece, uint64_t c)
         return 0x0001000100010001ull * (uint16_t)c;
     case MO_UL:
         return 0x0000000100000001ull * (uint32_t)c;
-    case MO_64:
+    case MO_UQ:
         return c;
     default:
         g_assert_not_reached();
@@ -352,7 +352,7 @@ static void gen_dup_i64(unsigned vece, TCGv_i64 out, TCGv_i64 in)
     case MO_UL:
         tcg_gen_deposit_i64(out, in, in, 32, 32);
         break;
-    case MO_64:
+    case MO_UQ:
         tcg_gen_mov_i64(out, in);
         break;
     default:
@@ -443,7 +443,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     TCGv_ptr t_ptr;
     uint32_t i;

-    assert(vece <= (in_32 ? MO_UL : MO_64));
+    assert(vece <= (in_32 ? MO_UL : MO_UQ));
     assert(in_32 == NULL || in_64 == NULL);

     /* If we're storing 0, expand oprsz to maxsz.  */
@@ -459,7 +459,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
      */
     type = choose_vector_type(NULL, vece, oprsz,
                               (TCG_TARGET_REG_BITS == 64 && in_32 == NULL
-                               && (in_64 == NULL || vece == MO_64)));
+                               && (in_64 == NULL || vece == MO_UQ)));
     if (type != 0) {
         TCGv_vec t_vec = tcg_temp_new_vec(type);

@@ -502,7 +502,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
             /* For 64-bit hosts, use 64-bit constants for "simple" constants
                or when we'd need too many 32-bit stores, or when a 64-bit
                constant is really required.  */
-            if (vece == MO_64
+            if (vece == MO_UQ
                 || (TCG_TARGET_REG_BITS == 64
                     && (in_c == 0 || in_c == -1
                         || !check_size_impl(oprsz, 4)))) {
@@ -534,7 +534,7 @@ static void do_dup(unsigned vece, uint32_t dofs, uint32_t oprsz,
     tcg_gen_addi_ptr(t_ptr, cpu_env, dofs);
     t_desc = tcg_const_i32(simd_desc(oprsz, maxsz, 0));

-    if (vece == MO_64) {
+    if (vece == MO_UQ) {
         if (in_64) {
             gen_helper_gvec_dup64(t_ptr, t_desc, in_64);
         } else {
@@ -1438,7 +1438,7 @@ void tcg_gen_gvec_dup_i64(unsigned vece, uint32_t dofs, uint32_t oprsz,
                           uint32_t maxsz, TCGv_i64 in)
 {
     check_size_align(oprsz, maxsz, dofs);
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_dup(vece, dofs, oprsz, maxsz, NULL, in, 0);
 }

@@ -1446,7 +1446,7 @@ void tcg_gen_gvec_dup_mem(unsigned vece, uint32_t dofs, uint32_t aofs,
                           uint32_t oprsz, uint32_t maxsz)
 {
     check_size_align(oprsz, maxsz, dofs);
-    if (vece <= MO_64) {
+    if (vece <= MO_UQ) {
         TCGType type = choose_vector_type(NULL, vece, oprsz, 0);
         if (type != 0) {
             TCGv_vec t_vec = tcg_temp_new_vec(type);
@@ -1512,7 +1512,7 @@ void tcg_gen_gvec_dup64i(uint32_t dofs, uint32_t oprsz,
                          uint32_t maxsz, uint64_t x)
 {
     check_size_align(oprsz, maxsz, dofs);
-    do_dup(MO_64, dofs, oprsz, maxsz, NULL, NULL, x);
+    do_dup(MO_UQ, dofs, oprsz, maxsz, NULL, NULL, x);
 }

 void tcg_gen_gvec_dup32i(uint32_t dofs, uint32_t oprsz,
@@ -1624,10 +1624,10 @@ void tcg_gen_gvec_add(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_add64,
           .opt_opc = vecop_list_add,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1655,10 +1655,10 @@ void tcg_gen_gvec_adds(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_adds64,
           .opt_opc = vecop_list_add,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, c, &g[vece]);
 }

@@ -1696,10 +1696,10 @@ void tcg_gen_gvec_subs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_subs64,
           .opt_opc = vecop_list_sub,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, c, &g[vece]);
 }

@@ -1775,10 +1775,10 @@ void tcg_gen_gvec_sub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_sub64,
           .opt_opc = vecop_list_sub,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1806,10 +1806,10 @@ void tcg_gen_gvec_mul(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_mul64,
           .opt_opc = vecop_list_mul,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1835,10 +1835,10 @@ void tcg_gen_gvec_muls(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_muls64,
           .opt_opc = vecop_list_mul,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2s(dofs, aofs, oprsz, maxsz, c, &g[vece]);
 }

@@ -1870,9 +1870,9 @@ void tcg_gen_gvec_ssadd(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_ssadd_vec,
           .fno = gen_helper_gvec_ssadd64,
           .opt_opc = vecop_list,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1896,9 +1896,9 @@ void tcg_gen_gvec_sssub(unsigned vece, uint32_t dofs, uint32_t aofs,
         { .fniv = tcg_gen_sssub_vec,
           .fno = gen_helper_gvec_sssub64,
           .opt_opc = vecop_list,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1940,9 +1940,9 @@ void tcg_gen_gvec_usadd(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_usadd_vec,
           .fno = gen_helper_gvec_usadd64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -1984,9 +1984,9 @@ void tcg_gen_gvec_ussub(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_ussub_vec,
           .fno = gen_helper_gvec_ussub64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2012,9 +2012,9 @@ void tcg_gen_gvec_smin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smin_vec,
           .fno = gen_helper_gvec_smin64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2040,9 +2040,9 @@ void tcg_gen_gvec_umin(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umin_vec,
           .fno = gen_helper_gvec_umin64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2068,9 +2068,9 @@ void tcg_gen_gvec_smax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_smax_vec,
           .fno = gen_helper_gvec_smax64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2096,9 +2096,9 @@ void tcg_gen_gvec_umax(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fniv = tcg_gen_umax_vec,
           .fno = gen_helper_gvec_umax64,
           .opt_opc = vecop_list,
-          .vece = MO_64 }
+          .vece = MO_UQ }
     };
-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2171,10 +2171,10 @@ void tcg_gen_gvec_neg(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_neg64,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2(dofs, aofs, oprsz, maxsz, &g[vece]);
 }

@@ -2234,10 +2234,10 @@ void tcg_gen_gvec_abs(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_abs64,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_2(dofs, aofs, oprsz, maxsz, &g[vece]);
 }

@@ -2382,7 +2382,7 @@ static const GVecGen2s gop_ands = {
     .fniv = tcg_gen_and_vec,
     .fno = gen_helper_gvec_ands,
     .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    .vece = MO_64
+    .vece = MO_UQ
 };

 void tcg_gen_gvec_ands(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2407,7 +2407,7 @@ static const GVecGen2s gop_xors = {
     .fniv = tcg_gen_xor_vec,
     .fno = gen_helper_gvec_xors,
     .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    .vece = MO_64
+    .vece = MO_UQ
 };

 void tcg_gen_gvec_xors(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2432,7 +2432,7 @@ static const GVecGen2s gop_ors = {
     .fniv = tcg_gen_or_vec,
     .fno = gen_helper_gvec_ors,
     .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-    .vece = MO_64
+    .vece = MO_UQ
 };

 void tcg_gen_gvec_ors(unsigned vece, uint32_t dofs, uint32_t aofs,
@@ -2491,10 +2491,10 @@ void tcg_gen_gvec_shli(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shl64i,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_debug_assert(shift >= 0 && shift < (8 << vece));
     if (shift == 0) {
         tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
@@ -2542,10 +2542,10 @@ void tcg_gen_gvec_shri(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shr64i,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_debug_assert(shift >= 0 && shift < (8 << vece));
     if (shift == 0) {
         tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
@@ -2607,10 +2607,10 @@ void tcg_gen_gvec_sari(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_sar64i,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_debug_assert(shift >= 0 && shift < (8 << vece));
     if (shift == 0) {
         tcg_gen_gvec_mov(vece, dofs, aofs, oprsz, maxsz);
@@ -2660,7 +2660,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     check_overlap_2(dofs, aofs, maxsz);

     /* If the backend has a scalar expansion, great.  */
-    type = choose_vector_type(g->s_list, vece, oprsz, vece == MO_64);
+    type = choose_vector_type(g->s_list, vece, oprsz, vece == MO_UQ);
     if (type) {
         const TCGOpcode *hold_list = tcg_swap_vecop_list(NULL);
         switch (type) {
@@ -2692,15 +2692,15 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     }

     /* If the backend supports variable vector shifts, also cool.  */
-    type = choose_vector_type(g->v_list, vece, oprsz, vece == MO_64);
+    type = choose_vector_type(g->v_list, vece, oprsz, vece == MO_UQ);
     if (type) {
         const TCGOpcode *hold_list = tcg_swap_vecop_list(NULL);
         TCGv_vec v_shift = tcg_temp_new_vec(type);

-        if (vece == MO_64) {
+        if (vece == MO_UQ) {
             TCGv_i64 sh64 = tcg_temp_new_i64();
             tcg_gen_extu_i32_i64(sh64, shift);
-            tcg_gen_dup_i64_vec(MO_64, v_shift, sh64);
+            tcg_gen_dup_i64_vec(MO_UQ, v_shift, sh64);
             tcg_temp_free_i64(sh64);
         } else {
             tcg_gen_dup_i32_vec(vece, v_shift, shift);
@@ -2738,7 +2738,7 @@ do_gvec_shifts(unsigned vece, uint32_t dofs, uint32_t aofs, TCGv_i32 shift,
     /* Otherwise fall back to integral... */
     if (vece == MO_UL && check_size_impl(oprsz, 4)) {
         expand_2s_i32(dofs, aofs, oprsz, shift, false, g->fni4);
-    } else if (vece == MO_64 && check_size_impl(oprsz, 8)) {
+    } else if (vece == MO_UQ && check_size_impl(oprsz, 8)) {
         TCGv_i64 sh64 = tcg_temp_new_i64();
         tcg_gen_extu_i32_i64(sh64, shift);
         expand_2s_i64(dofs, aofs, oprsz, sh64, false, g->fni8);
@@ -2785,7 +2785,7 @@ void tcg_gen_gvec_shls(unsigned vece, uint32_t dofs, uint32_t aofs,
         .v_list = { INDEX_op_shlv_vec, 0 },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
 }

@@ -2807,7 +2807,7 @@ void tcg_gen_gvec_shrs(unsigned vece, uint32_t dofs, uint32_t aofs,
         .v_list = { INDEX_op_shrv_vec, 0 },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
 }

@@ -2829,7 +2829,7 @@ void tcg_gen_gvec_sars(unsigned vece, uint32_t dofs, uint32_t aofs,
         .v_list = { INDEX_op_sarv_vec, 0 },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     do_gvec_shifts(vece, dofs, aofs, shift, oprsz, maxsz, &g);
 }

@@ -2895,10 +2895,10 @@ void tcg_gen_gvec_shlv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shl64v,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -2958,10 +2958,10 @@ void tcg_gen_gvec_shrv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_shr64v,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -3021,10 +3021,10 @@ void tcg_gen_gvec_sarv(unsigned vece, uint32_t dofs, uint32_t aofs,
           .fno = gen_helper_gvec_sar64v,
           .opt_opc = vecop_list,
           .prefer_i64 = TCG_TARGET_REG_BITS == 64,
-          .vece = MO_64 },
+          .vece = MO_UQ },
     };

-    tcg_debug_assert(vece <= MO_64);
+    tcg_debug_assert(vece <= MO_UQ);
     tcg_gen_gvec_3(dofs, aofs, bofs, oprsz, maxsz, &g[vece]);
 }

@@ -3140,7 +3140,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
      */
     hold_list = tcg_swap_vecop_list(cmp_list);
     type = choose_vector_type(cmp_list, vece, oprsz,
-                              TCG_TARGET_REG_BITS == 64 && vece == MO_64);
+                              TCG_TARGET_REG_BITS == 64 && vece == MO_UQ);
     switch (type) {
     case TCG_TYPE_V256:
         /* Recall that ARM SVE allows vector sizes that are not a
@@ -3166,7 +3166,7 @@ void tcg_gen_gvec_cmp(TCGCond cond, unsigned vece, uint32_t dofs,
         break;

     case 0:
-        if (vece == MO_64 && check_size_impl(oprsz, 8)) {
+        if (vece == MO_UQ && check_size_impl(oprsz, 8)) {
             expand_cmp_i64(dofs, aofs, bofs, oprsz, cond);
         } else if (vece == MO_UL && check_size_impl(oprsz, 4)) {
             expand_cmp_i32(dofs, aofs, bofs, oprsz, cond);
diff --git a/tcg/tcg-op-vec.c b/tcg/tcg-op-vec.c
index ff723ab..e8aea38 100644
--- a/tcg/tcg-op-vec.c
+++ b/tcg/tcg-op-vec.c
@@ -216,7 +216,7 @@ void tcg_gen_mov_vec(TCGv_vec r, TCGv_vec a)
     }
 }

-#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_64 : MO_UL)
+#define MO_REG  (TCG_TARGET_REG_BITS == 64 ? MO_UQ : MO_UL)

 static void do_dupi_vec(TCGv_vec r, unsigned vece, TCGArg a)
 {
@@ -255,10 +255,10 @@ void tcg_gen_dup64i_vec(TCGv_vec r, uint64_t a)
     if (TCG_TARGET_REG_BITS == 32 && a == deposit64(a, 32, 32, a)) {
         do_dupi_vec(r, MO_UL, a);
     } else if (TCG_TARGET_REG_BITS == 64 || a == (uint64_t)(int32_t)a) {
-        do_dupi_vec(r, MO_64, a);
+        do_dupi_vec(r, MO_UQ, a);
     } else {
         TCGv_i64 c = tcg_const_i64(a);
-        tcg_gen_dup_i64_vec(MO_64, r, c);
+        tcg_gen_dup_i64_vec(MO_UQ, r, c);
         tcg_temp_free_i64(c);
     }
 }
@@ -292,10 +292,10 @@ void tcg_gen_dup_i64_vec(unsigned vece, TCGv_vec r, TCGv_i64 a)
     if (TCG_TARGET_REG_BITS == 64) {
         TCGArg ai = tcgv_i64_arg(a);
         vec_gen_2(INDEX_op_dup_vec, type, vece, ri, ai);
-    } else if (vece == MO_64) {
+    } else if (vece == MO_UQ) {
         TCGArg al = tcgv_i32_arg(TCGV_LOW(a));
         TCGArg ah = tcgv_i32_arg(TCGV_HIGH(a));
-        vec_gen_3(INDEX_op_dup2_vec, type, MO_64, ri, al, ah);
+        vec_gen_3(INDEX_op_dup2_vec, type, MO_UQ, ri, al, ah);
     } else {
         TCGArg ai = tcgv_i32_arg(TCGV_LOW(a));
         vec_gen_2(INDEX_op_dup_vec, type, vece, ri, ai);
@@ -709,10 +709,10 @@ static void do_shifts(unsigned vece, TCGv_vec r, TCGv_vec a,
     } else {
         TCGv_vec vec_s = tcg_temp_new_vec(type);

-        if (vece == MO_64) {
+        if (vece == MO_UQ) {
             TCGv_i64 s64 = tcg_temp_new_i64();
             tcg_gen_extu_i32_i64(s64, s);
-            tcg_gen_dup_i64_vec(MO_64, vec_s, s64);
+            tcg_gen_dup_i64_vec(MO_UQ, vec_s, s64);
             tcg_temp_free_i64(s64);
         } else {
             tcg_gen_dup_i32_vec(vece, vec_s, s);
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 447683d..a9f3e13 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2730,7 +2730,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
             op &= ~MO_SIGN;
         }
         break;
-    case MO_64:
+    case MO_UQ:
         if (!is64) {
             tcg_abort();
         }
@@ -2862,7 +2862,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
 {
     TCGMemOp orig_memop;

-    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_UQ) {
         tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
         if (memop & MO_SIGN) {
             tcg_gen_sari_i32(TCGV_HIGH(val), TCGV_LOW(val), 31);
@@ -2881,7 +2881,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     if (!TCG_TARGET_HAS_MEMORY_BSWAP && (memop & MO_BSWAP)) {
         memop &= ~MO_BSWAP;
         /* The bswap primitive requires zero-extended input.  */
-        if ((memop & MO_SIGN) && (memop & MO_SIZE) < MO_64) {
+        if ((memop & MO_SIGN) && (memop & MO_SIZE) < MO_UQ) {
             memop &= ~MO_SIGN;
         }
     }
@@ -2902,7 +2902,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
                 tcg_gen_ext32s_i64(val, val);
             }
             break;
-        case MO_64:
+        case MO_UQ:
             tcg_gen_bswap64_i64(val, val);
             break;
         default:
@@ -2915,7 +2915,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
 {
     TCGv_i64 swap = NULL;

-    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
+    if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_UQ) {
         tcg_gen_qemu_st_i32(TCGV_LOW(val), addr, idx, memop);
         return;
     }
@@ -2936,7 +2936,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
             tcg_gen_ext32u_i64(swap, val);
             tcg_gen_bswap32_i64(swap, swap);
             break;
-        case MO_64:
+        case MO_UQ:
             tcg_gen_bswap64_i64(swap, val);
             break;
         default:
@@ -3029,8 +3029,8 @@ static void * const table_cmpxchg[16] = {
     [MO_UW | MO_BE] = gen_helper_atomic_cmpxchgw_be,
     [MO_UL | MO_LE] = gen_helper_atomic_cmpxchgl_le,
     [MO_UL | MO_BE] = gen_helper_atomic_cmpxchgl_be,
-    WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_cmpxchgq_le)
-    WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_cmpxchgq_be)
+    WITH_ATOMIC64([MO_UQ | MO_LE] = gen_helper_atomic_cmpxchgq_le)
+    WITH_ATOMIC64([MO_UQ | MO_BE] = gen_helper_atomic_cmpxchgq_be)
 };

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
@@ -3099,7 +3099,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
             tcg_gen_mov_i64(retv, t1);
         }
         tcg_temp_free_i64(t1);
-    } else if ((memop & MO_SIZE) == MO_64) {
+    } else if ((memop & MO_SIZE) == MO_UQ) {
 #ifdef CONFIG_ATOMIC64
         gen_atomic_cx_i64 gen;

@@ -3207,7 +3207,7 @@ static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

-    if ((memop & MO_SIZE) == MO_64) {
+    if ((memop & MO_SIZE) == MO_UQ) {
 #ifdef CONFIG_ATOMIC64
         gen_atomic_op_i64 gen;

@@ -3253,8 +3253,8 @@ static void * const table_##NAME[16] = {                                \
     [MO_UW | MO_BE] = gen_helper_atomic_##NAME##w_be,                   \
     [MO_UL | MO_LE] = gen_helper_atomic_##NAME##l_le,                   \
     [MO_UL | MO_BE] = gen_helper_atomic_##NAME##l_be,                   \
-    WITH_ATOMIC64([MO_64 | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
-    WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
+    WITH_ATOMIC64([MO_UQ | MO_LE] = gen_helper_atomic_##NAME##q_le)     \
+    WITH_ATOMIC64([MO_UQ | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
 void tcg_gen_atomic_##NAME##_i32                                        \
     (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 4b6ee89..63e9897 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -371,28 +371,29 @@ typedef enum TCGMemOp {
     MO_UB    = MO_8,
     MO_UW    = MO_16,
     MO_UL    = MO_32,
+    MO_UQ    = MO_64,
     MO_SB    = MO_SIGN | MO_8,
     MO_SW    = MO_SIGN | MO_16,
     MO_SL    = MO_SIGN | MO_32,
-    MO_Q     = MO_64,
+    MO_SQ    = MO_SIGN | MO_64,

     MO_LEUW  = MO_LE | MO_UW,
     MO_LEUL  = MO_LE | MO_UL,
     MO_LESW  = MO_LE | MO_SW,
     MO_LESL  = MO_LE | MO_SL,
-    MO_LEQ   = MO_LE | MO_Q,
+    MO_LEQ   = MO_LE | MO_UQ,

     MO_BEUW  = MO_BE | MO_UW,
     MO_BEUL  = MO_BE | MO_UL,
     MO_BESW  = MO_BE | MO_SW,
     MO_BESL  = MO_BE | MO_SL,
-    MO_BEQ   = MO_BE | MO_Q,
+    MO_BEQ   = MO_BE | MO_UQ,

     MO_TEUW  = MO_TE | MO_UW,
     MO_TEUL  = MO_TE | MO_UL,
     MO_TESW  = MO_TE | MO_SW,
     MO_TESL  = MO_TE | MO_SL,
-    MO_TEQ   = MO_TE | MO_Q,
+    MO_TEQ   = MO_TE | MO_UQ,

     MO_SSIZE = MO_SIZE | MO_SIGN,
 } TCGMemOp;
--
1.8.3.1
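
The rename is value-preserving: MO_UQ encodes the same value as the old
MO_Q/MO_64, and the new MO_SQ composes MO_SIGN with MO_64. A minimal
standalone sketch checking this, with the constants copied from the tcg.h
hunk above rather than pulled in from the QEMU headers:

    /* Illustrative only: constants mirror the tcg.h hunk above. */
    enum {
        MO_8    = 0,
        MO_16   = 1,
        MO_32   = 2,
        MO_64   = 3,
        MO_SIZE = 3,            /* Mask for the size field. */
        MO_SIGN = 4,            /* Sign-extended, otherwise zero-extended. */

        MO_UQ   = MO_64,            /* unsigned 64-bit ("quad") access */
        MO_SQ   = MO_SIGN | MO_64,  /* signed 64-bit access */
    };

    _Static_assert(MO_UQ == MO_64, "MO_UQ is a pure rename of MO_Q");
    _Static_assert((MO_SQ & MO_SIZE) == MO_64, "MO_SQ is still 64-bit sized");
    _Static_assert((MO_SQ & MO_SIGN) != 0, "MO_SQ requests sign extension");

    int main(void) { return 0; }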






* [Qemu-devel] [PATCH v2 05/20] tcg: Move size+sign+endian from TCGMemOp to MemOp
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:43   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Preparation for modifying the memory API to take size+sign+endianness
instead of just size.

The accelerator-independent MemOp enum is extended by the TCG-specific
TCGMemOp enum.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 MAINTAINERS          |  1 +
 include/exec/memop.h | 27 +++++++++++++++++++++++++++
 tcg/tcg.h            | 15 +++++----------
 3 files changed, 33 insertions(+), 10 deletions(-)
 create mode 100644 include/exec/memop.h

diff --git a/MAINTAINERS b/MAINTAINERS
index cc9636b..3f148cd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1890,6 +1890,7 @@ M: Paolo Bonzini <pbonzini@redhat.com>
 S: Supported
 F: include/exec/ioport.h
 F: ioport.c
+F: include/exec/memop.h
 F: include/exec/memory.h
 F: include/exec/ram_addr.h
 F: memory.c
diff --git a/include/exec/memop.h b/include/exec/memop.h
new file mode 100644
index 0000000..43e99d7
--- /dev/null
+++ b/include/exec/memop.h
@@ -0,0 +1,27 @@
+/*
+ * Constants for memory operations
+ *
+ * Authors:
+ *  Richard Henderson <rth@twiddle.net>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MEMOP_H
+#define MEMOP_H
+
+typedef enum MemOp {
+    MO_8     = 0,
+    MO_16    = 1,
+    MO_32    = 2,
+    MO_64    = 3,
+    MO_SIZE  = 3,   /* Mask for the above.  */
+
+    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
+
+    MO_BSWAP = 8,   /* Host reverse endian.  */
+} MemOp;
+
+#endif
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 63e9897..18b91fe 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -26,6 +26,7 @@
 #define TCG_H

 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/tb-context.h"
 #include "qemu/bitops.h"
 #include "qemu/queue.h"
@@ -309,17 +310,11 @@ typedef enum TCGType {
 #endif
 } TCGType;

-/* Constants for qemu_ld and qemu_st for the Memory Operation field.  */
+/*
+ * Extend MemOp with constants for qemu_ld and qemu_st for the Memory
+ * Operation field.
+ */
 typedef enum TCGMemOp {
-    MO_8     = 0,
-    MO_16    = 1,
-    MO_32    = 2,
-    MO_64    = 3,
-    MO_SIZE  = 3,   /* Mask for the above.  */
-
-    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
-
-    MO_BSWAP = 8,   /* Host reverse endian.  */
 #ifdef HOST_WORDS_BIGENDIAN
     MO_LE    = MO_BSWAP,
     MO_BE    = 0,
--
1.8.3.1
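
To make the layering concrete, here is a minimal sketch of composing and
decomposing a memory operation using only the constants from the new
include/exec/memop.h above; the bit-width computation mirrors the
8 << (mop & MO_SIZE) idiom TCG already uses (e.g. in tcg/optimize.c):

    #include <stdio.h>

    /* Sketch only: constants copied from the new include/exec/memop.h. */
    typedef enum MemOp {
        MO_8     = 0,
        MO_16    = 1,
        MO_32    = 2,
        MO_64    = 3,
        MO_SIZE  = 3,   /* Mask for the above.  */
        MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
        MO_BSWAP = 8,   /* Host reverse endian.  */
    } MemOp;

    int main(void)
    {
        /* A sign-extended, byte-swapped 16-bit access. */
        MemOp op = MO_16 | MO_SIGN | MO_BSWAP;

        printf("width:  %d bits\n", 8 << (op & MO_SIZE));
        printf("signed: %s\n", (op & MO_SIGN) ? "yes" : "no");
        printf("bswap:  %s\n", (op & MO_BSWAP) ? "yes" : "no");
        return 0;
    }

This accelerator-independent core is what the memory API can now share,
while TCG keeps layering its LE/BE/TE aliases and alignment bits on top.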





* [Qemu-devel] [PATCH v2 06/20] tcg: Rename get_memop to get_tcgmemop
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:44   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Correct the naming, as there are now both MemOp and TCGMemOp.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
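
The accessor only changes name; it still unpacks the operation half of the
packed (op, mmu_idx) pair. A sketch of that packing, assuming the
TCGMemOpIdx layout in tcg/tcg.h (operation in the high bits, 4-bit mmu
index in the low bits):

    /* Sketch, assuming the tcg/tcg.h layout; not the actual QEMU code. */
    typedef unsigned TCGMemOpIdx;

    static inline TCGMemOpIdx make_memop_idx(unsigned op, unsigned idx)
    {
        return (op << 4) | idx;    /* idx must fit in 4 bits */
    }

    static inline unsigned get_tcgmemop(TCGMemOpIdx oi)    /* was get_memop */
    {
        return oi >> 4;
    }

    static inline unsigned get_mmuidx(TCGMemOpIdx oi)
    {
        return oi & 15;
    }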
---
 accel/tcg/cputlb.c           |  6 +++---
 tcg/aarch64/tcg-target.inc.c |  8 ++++----
 tcg/arm/tcg-target.inc.c     |  8 ++++----
 tcg/i386/tcg-target.inc.c    |  8 ++++----
 tcg/mips/tcg-target.inc.c    | 10 +++++-----
 tcg/optimize.c               |  2 +-
 tcg/ppc/tcg-target.inc.c     |  8 ++++----
 tcg/riscv/tcg-target.inc.c   | 10 +++++-----
 tcg/s390/tcg-target.inc.c    |  8 ++++----
 tcg/sparc/tcg-target.inc.c   |  4 ++--
 tcg/tcg.c                    |  2 +-
 tcg/tcg.h                    |  4 ++--
 tcg/tci.c                    |  8 ++++----
 13 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bb9897b..184fc54 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(tlbe);
-    TCGMemOp mop = get_memop(oi);
+    TCGMemOp mop = get_tcgmemop(oi);
     int a_bits = get_alignment_bits(mop);
     int s_bits = mop & MO_SIZE;
     void *hostaddr;
@@ -1257,7 +1257,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
         offsetof(CPUTLBEntry, addr_code) : offsetof(CPUTLBEntry, addr_read);
     const MMUAccessType access_type =
         code_read ? MMU_INST_FETCH : MMU_DATA_LOAD;
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    unsigned a_bits = get_alignment_bits(get_tcgmemop(oi));
     void *haddr;
     uint64_t res;

@@ -1506,7 +1506,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(entry);
     const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
-    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    unsigned a_bits = get_alignment_bits(get_tcgmemop(oi));
     void *haddr;

     /* Handle CPU specific unaligned behaviour */
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index d14afa9..886da51 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -1580,7 +1580,7 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGMemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
@@ -1605,7 +1605,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGMemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
@@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi, TCGType ext)
 {
-    TCGMemOp memop = get_memop(oi);
+    TCGMemOp memop = get_tcgmemop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
@@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    TCGMemOp memop = get_tcgmemop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index 70eeb8a..98c5b47 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     void *func;

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
@@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1589,7 +1589,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     addrlo = *args++;
     addrhi = (TARGET_LONG_BITS == 64 ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);

 #ifdef CONFIG_SOFTMMU
     mem_index = get_mmuidx(oi);
@@ -1720,7 +1720,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     addrlo = *args++;
     addrhi = (TARGET_LONG_BITS == 64 ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);

 #ifdef CONFIG_SOFTMMU
     mem_index = get_mmuidx(oi);
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 3a73334..e4525ca 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGReg data_reg;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
@@ -1895,7 +1895,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGMemOp s_bits = opc & MO_SIZE;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     TCGReg retaddr;
@@ -2114,7 +2114,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     addrlo = *args++;
     addrhi = (TARGET_LONG_BITS > TCG_TARGET_REG_BITS ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);

 #if defined(CONFIG_SOFTMMU)
     mem_index = get_mmuidx(oi);
@@ -2232,7 +2232,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     addrlo = *args++;
     addrhi = (TARGET_LONG_BITS > TCG_TARGET_REG_BITS ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);

 #if defined(CONFIG_SOFTMMU)
     mem_index = get_mmuidx(oi);
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index ef31fc8..010afd0 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit *label_ptr[2], bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     int mem_index = get_mmuidx(oi);
@@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGReg v0;
     int i;

@@ -1363,7 +1363,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGMemOp s_bits = opc & MO_SIZE;
     int i;

@@ -1532,7 +1532,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     addr_regl = *args++;
     addr_regh = (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);

 #if defined(CONFIG_SOFTMMU)
     tcg_out_tlb_load(s, base, addr_regl, addr_regh, oi, label_ptr, 1);
@@ -1635,7 +1635,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     addr_regl = *args++;
     addr_regh = (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);

 #if defined(CONFIG_SOFTMMU)
     tcg_out_tlb_load(s, base, addr_regl, addr_regh, oi, label_ptr, 0);
diff --git a/tcg/optimize.c b/tcg/optimize.c
index d2424de..422bcbb 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s)
         CASE_OP_32_64(qemu_ld):
             {
                 TCGMemOpIdx oi = op->args[nb_oargs + nb_iargs];
-                TCGMemOp mop = get_memop(oi);
+                TCGMemOp mop = get_tcgmemop(oi);
                 if (!(mop & MO_SIGN)) {
                     mask = (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1;
                 }
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 13a2437..0ab4faa 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1680,7 +1680,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGMemOp s_bits = opc & MO_SIZE;
     TCGReg hi, lo, arg = TCG_REG_R3;

@@ -1755,7 +1755,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     addrlo = *args++;
     addrhi = (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);
     s_bits = opc & MO_SIZE;

 #ifdef CONFIG_SOFTMMU
@@ -1830,7 +1830,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     addrlo = *args++;
     addrhi = (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);
     s_bits = opc & MO_SIZE;

 #ifdef CONFIG_SOFTMMU
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 90363df..ab4e035 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit **label_ptr, bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     tcg_target_long compare_mask;
@@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1077,7 +1077,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
     TCGMemOp s_bits = opc & MO_SIZE;
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
@@ -1183,7 +1183,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     addr_regl = *args++;
     addr_regh = (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);

 #if defined(CONFIG_SOFTMMU)
     tcg_out_tlb_load(s, addr_regl, addr_regh, oi, label_ptr, 1);
@@ -1254,7 +1254,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     addr_regl = *args++;
     addr_regh = (TCG_TARGET_REG_BITS < TARGET_LONG_BITS ? *args++ : 0);
     oi = *args++;
-    opc = get_memop(oi);
+    opc = get_tcgmemop(oi);

 #if defined(CONFIG_SOFTMMU)
     tcg_out_tlb_load(s, addr_regl, addr_regh, oi, label_ptr, 0);
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index db1102e..4d8078b 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGReg *addr_reg,
 static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
@@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    TCGMemOp opc = get_tcgmemop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 7c50118..e6cf2c4 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] = {
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi, bool is_64)
 {
-    TCGMemOp memop = get_memop(oi);
+    TCGMemOp memop = get_tcgmemop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
@@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    TCGMemOp memop = get_tcgmemop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
diff --git a/tcg/tcg.c b/tcg/tcg.c
index be2c33c..492d7c6 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
             case INDEX_op_qemu_st_i64:
                 {
                     TCGMemOpIdx oi = op->args[k++];
-                    TCGMemOp op = get_memop(oi);
+                    TCGMemOp op = get_tcgmemop(oi);
                     unsigned ix = get_mmuidx(oi);

                     if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) {
diff --git a/tcg/tcg.h b/tcg/tcg.h
index 18b91fe..8a3f912 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -1197,12 +1197,12 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
 }

 /**
- * get_memop
+ * get_tcgmemop
  * @oi: combined op/idx parameter
  *
  * Extract the memory operation from the combined value.
  */
-static inline TCGMemOp get_memop(TCGMemOpIdx oi)
+static inline TCGMemOp get_tcgmemop(TCGMemOpIdx oi)
 {
     return oi >> 4;
 }
diff --git a/tcg/tci.c b/tcg/tci.c
index 33edca1..b3c5795 100644
--- a/tcg/tci.c
+++ b/tcg/tci.c
@@ -1109,7 +1109,7 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t *tb_ptr)
             t0 = *tb_ptr++;
             taddr = tci_read_ulong(regs, &tb_ptr);
             oi = tci_read_i(&tb_ptr);
-            switch (get_memop(oi) & (MO_BSWAP | MO_SSIZE)) {
+            switch (get_tcgmemop(oi) & (MO_BSWAP | MO_SSIZE)) {
             case MO_UB:
                 tmp32 = qemu_ld_ub;
                 break;
@@ -1146,7 +1146,7 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t *tb_ptr)
             }
             taddr = tci_read_ulong(regs, &tb_ptr);
             oi = tci_read_i(&tb_ptr);
-            switch (get_memop(oi) & (MO_BSWAP | MO_SSIZE)) {
+            switch (get_tcgmemop(oi) & (MO_BSWAP | MO_SSIZE)) {
             case MO_UB:
                 tmp64 = qemu_ld_ub;
                 break;
@@ -1195,7 +1195,7 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t *tb_ptr)
             t0 = tci_read_r(regs, &tb_ptr);
             taddr = tci_read_ulong(regs, &tb_ptr);
             oi = tci_read_i(&tb_ptr);
-            switch (get_memop(oi) & (MO_BSWAP | MO_SIZE)) {
+            switch (get_tcgmemop(oi) & (MO_BSWAP | MO_SIZE)) {
             case MO_UB:
                 qemu_st_b(t0);
                 break;
@@ -1219,7 +1219,7 @@ uintptr_t tcg_qemu_tb_exec(CPUArchState *env, uint8_t *tb_ptr)
             tmp64 = tci_read_r64(regs, &tb_ptr);
             taddr = tci_read_ulong(regs, &tb_ptr);
             oi = tci_read_i(&tb_ptr);
-            switch (get_memop(oi) & (MO_BSWAP | MO_SIZE)) {
+            switch (get_tcgmemop(oi) & (MO_BSWAP | MO_SIZE)) {
             case MO_UB:
                 qemu_st_b(tmp64);
                 break;
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 07/20] memory: Access MemoryRegion with MemOp
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:45   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Replacing size with size+sign+endianness (MemOp) will enable us to
collapse the two byte swaps, adjust_endianness and handle_bswap, along
the I/O path.

While the interfaces are being converted, callers will have their existing
unsigned size coerced into a MemOp, and the callee will continue to use
this MemOp as a plain unsigned size.
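
For illustration only (this snippet is not part of the patch), a caller
converted in this way would, during the transition, look roughly like the
following; mr and addr are assumed to be in scope:

    uint64_t val;

    /* SIZE_MEMOP() is a no-op coercion for now; the callee recovers
     * the plain 4-byte size again via MEMOP_SIZE().
     */
    MemTxResult r = memory_region_dispatch_read(mr, addr, &val,
                                                SIZE_MEMOP(4),
                                                MEMTXATTRS_UNSPECIFIED);
    if (r != MEMTX_OK) {
        /* handle the failed access */
    }

Once every caller passes a genuine MemOp, the no-op macros can go away.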

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/memop.h  | 4 ++++
 include/exec/memory.h | 9 +++++----
 memory.c              | 7 +++++--
 3 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 43e99d7..73f1bf7 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -24,4 +24,8 @@ typedef enum MemOp {
     MO_BSWAP = 8,   /* Host reverse endian.  */
 } MemOp;

+/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
+#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
+#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
+
 #endif
diff --git a/include/exec/memory.h b/include/exec/memory.h
index bb0961d..30b1c58 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -19,6 +19,7 @@
 #include "exec/cpu-common.h"
 #include "exec/hwaddr.h"
 #include "exec/memattrs.h"
+#include "exec/memop.h"
 #include "exec/ramlist.h"
 #include "qemu/queue.h"
 #include "qemu/int128.h"
@@ -1731,13 +1732,13 @@ void mtree_info(bool flatview, bool dispatch_tree, bool owner);
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @pval: pointer to uint64_t which the data is written to
- * @size: size of the access in bytes
+ * @op: encodes size of the access in bytes
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs);
 /**
  * memory_region_dispatch_write: perform a write directly to the specified
@@ -1746,13 +1747,13 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @data: data to write
- * @size: size of the access in bytes
+ * @op: encodes size of the access in bytes
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs);

 /**
diff --git a/memory.c b/memory.c
index d4579bb..73cb345 100644
--- a/memory.c
+++ b/memory.c
@@ -1437,10 +1437,11 @@ static MemTxResult memory_region_dispatch_read1(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs)
 {
     MemTxResult r;
+    unsigned size = MEMOP_SIZE(op);

     if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
         *pval = unassigned_mem_read(mr, addr, size);
@@ -1481,9 +1482,11 @@ static bool memory_region_dispatch_write_eventfds(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs)
 {
+    unsigned size = MEMOP_SIZE(op);
+
     if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
         unassigned_mem_write(mr, addr, data, size);
         return MEMTX_DECODE_ERROR;
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 08/20] target/mips: Access MemoryRegion with MemOp
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:45   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/mips/op_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/mips/op_helper.c b/target/mips/op_helper.c
index 9e2e02f..dccb8df 100644
--- a/target/mips/op_helper.c
+++ b/target/mips/op_helper.c
@@ -24,6 +24,7 @@
 #include "exec/helper-proto.h"
 #include "exec/exec-all.h"
 #include "exec/cpu_ldst.h"
+#include "exec/memop.h"
 #include "sysemu/kvm.h"

 /*****************************************************************************/
@@ -4740,11 +4741,11 @@ void helper_cache(CPUMIPSState *env, target_ulong addr, uint32_t op)
     if (op == 9) {
         /* Index Store Tag */
         memory_region_dispatch_write(env->itc_tag, index, env->CP0_TagLo,
-                                     8, MEMTXATTRS_UNSPECIFIED);
+                                     SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     } else if (op == 5) {
         /* Index Load Tag */
         memory_region_dispatch_read(env->itc_tag, index, &env->CP0_TagLo,
-                                    8, MEMTXATTRS_UNSPECIFIED);
+                                    SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     }
 #endif
 }
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 09/20] hw/s390x: Access MemoryRegion with MemOp
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:46   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/s390x/s390-pci-inst.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index 0023514..c126bcc 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -15,6 +15,7 @@
 #include "cpu.h"
 #include "s390-pci-inst.h"
 #include "s390-pci-bus.h"
+#include "exec/memop.h"
 #include "exec/memory-internal.h"
 #include "qemu/error-report.h"
 #include "sysemu/hw_accel.h"
@@ -372,7 +373,7 @@ static MemTxResult zpci_read_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_read(mr, offset, data, len,
+    return memory_region_dispatch_read(mr, offset, data, SIZE_MEMOP(len),
                                        MEMTXATTRS_UNSPECIFIED);
 }

@@ -471,7 +472,7 @@ static MemTxResult zpci_write_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_write(mr, offset, data, len,
+    return memory_region_dispatch_write(mr, offset, data, SIZE_MEMOP(len),
                                         MEMTXATTRS_UNSPECIFIED);
 }

@@ -780,7 +781,8 @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,

     for (i = 0; i < len / 8; i++) {
         result = memory_region_dispatch_write(mr, offset + i * 8,
-                                              ldq_p(buffer + i * 8), 8,
+                                              ldq_p(buffer + i * 8),
+                                              SIZE_MEMOP(8),
                                               MEMTXATTRS_UNSPECIFIED);
         if (result != MEMTX_OK) {
             s390_program_interrupt(env, PGM_OPERAND, 6, ra);
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 10/20] hw/intc/armv7m_nvic: Access MemoryRegion with MemOp
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:47   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/intc/armv7m_nvic.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index 9f8f0d3..25bb88a 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -18,6 +18,7 @@
 #include "hw/intc/armv7m_nvic.h"
 #include "target/arm/cpu.h"
 #include "exec/exec-all.h"
+#include "exec/memop.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
 #include "trace.h"
@@ -2345,7 +2346,8 @@ static MemTxResult nvic_sysreg_ns_write(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_write(mr, addr, value, size, attrs);
+        return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                            attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2364,7 +2366,8 @@ static MemTxResult nvic_sysreg_ns_read(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_read(mr, addr, data, size, attrs);
+        return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size),
+                                           attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2390,7 +2393,8 @@ static MemTxResult nvic_systick_write(void *opaque, hwaddr addr,

     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_write(mr, addr, value, size, attrs);
+    return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                        attrs);
 }

 static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,
@@ -2402,7 +2406,7 @@ static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,

     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_read(mr, addr, data, size, attrs);
+    return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size), attrs);
 }

 static const MemoryRegionOps nvic_systick_ops = {
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 11/20] hw/virtio: Access MemoryRegion with MemOp
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:48   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

On 17/07/19 08:06, Paolo Bonzini wrote:

> My main concern is that MO_BE/MO_LE/MO_TE do not really apply to the
> memory.c paths.  MO_BSWAP is never passed into the MemOp, even if target
> endianness != host endianness.
>
> Therefore, you could return MO_TE | MO_{8,16,32,64} from this function,
> and change memory_region_endianness_inverted to test
> HOST_WORDS_BIGENDIAN instead of TARGET_WORDS_BIGENDIAN.  Then the two
> MO_BSWAPs (one from MO_TE, one from adjust_endianness because
> memory_region_endianness_inverted returns true) cancel out if the
> memory region's endianness is the same as the host's but different
> from the target's.
>
> Some care is needed in virtio_address_space_write and zpci_write_bar.  I
> think the latter is okay, while the former could do something like this:
>
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index ce928f2429..61885f020c 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -541,16 +541,16 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy,
> hwaddr addr,
>          val = pci_get_byte(buf);
>          break;
>      case 2:
> -        val = cpu_to_le16(pci_get_word(buf));
> +        val = pci_get_word(buf);
>          break;
>      case 4:
> -        val = cpu_to_le32(pci_get_long(buf));
> +        val = pci_get_long(buf);
>          break;
>      default:
>          /* As length is under guest control, handle illegal values. */
>          return;
>      }
> -    memory_region_dispatch_write(mr, addr, val, len, MEMTXATTRS_UNSPECIFIED);
> +    memory_region_dispatch_write(mr, addr, val, size_memop(len) & ~MO_BSWAP,
> MEMTXATTRS_UNSPECIFIED);
>  }
>
>  static void

Sorry Paolo, I noted the need to take care in virtio_address_space_write and
zpci_write_bar but did not understand what care was needed.

> Some care is needed in virtio_address_space_write and zpci_write_bar.
Is this advice for my v1 implementation, or for the
MO_TE | MO_{8,16,32,64} idea suggested in the paragraph before?
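
For my own reference, here is a minimal standalone sketch of the XOR
cancellation described above. It is illustrative only, not the actual
QEMU helpers:

/* Minimal sketch of MO_BSWAP cancellation, not the actual QEMU code.
 * A swap requested by the access (MO_TE resolving to MO_BSWAP on a
 * cross-endian host) and a swap applied for a cross-endian device are
 * both expressed as XORs, so when both apply they cancel and no byte
 * swap is performed. */
#include <stdbool.h>
#include <stdint.h>

enum { MO_BSWAP = 8 };

static uint64_t dispatch(uint64_t val, int memop, bool mr_inverted)
{
    if (mr_inverted) {
        memop ^= MO_BSWAP;      /* adjust_endianness's contribution */
    }
    if (memop & MO_BSWAP) {     /* two XORed requests cancel out */
        val = __builtin_bswap64(val);
    }
    return val;
}

With memop carrying MO_BSWAP and mr_inverted true, the two requests
cancel and val is returned unswapped.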

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/virtio/virtio-pci.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index ce928f2..265f066 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -17,6 +17,7 @@

 #include "qemu/osdep.h"

+#include "exec/memop.h"
 #include "standard-headers/linux/virtio_pci.h"
 #include "hw/virtio/virtio.h"
 #include "hw/pci/pci.h"
@@ -550,7 +551,8 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy, hwaddr addr,
         /* As length is under guest control, handle illegal values. */
         return;
     }
-    memory_region_dispatch_write(mr, addr, val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_write(mr, addr, val, SIZE_MEMOP(len),
+                                 MEMTXATTRS_UNSPECIFIED);
 }

 static void
@@ -573,7 +575,8 @@ virtio_address_space_read(VirtIOPCIProxy *proxy, hwaddr addr,
     /* Make sure caller aligned buf properly */
     assert(!(((uintptr_t)buf) & (len - 1)));

-    memory_region_dispatch_read(mr, addr, &val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_read(mr, addr, &val, SIZE_MEMOP(len),
+                                MEMTXATTRS_UNSPECIFIED);
     switch (len) {
     case 1:
         pci_set_byte(buf, val);
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 12/20] hw/vfio: Access MemoryRegion with MemOp
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:48   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/vfio/pci-quirks.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index b35a640..3240afa 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1071,7 +1071,7 @@ static void vfio_rtl8168_quirk_address_write(void *opaque, hwaddr addr,

                 /* Write to the proper guest MSI-X table instead */
                 memory_region_dispatch_write(&vdev->pdev.msix_table_mmio,
-                                             offset, val, size,
+                                             offset, val, SIZE_MEMOP(size),
                                              MEMTXATTRS_UNSPECIFIED);
             }
             return; /* Do not write guest MSI-X data to hardware */
@@ -1102,7 +1102,8 @@ static uint64_t vfio_rtl8168_quirk_data_read(void *opaque,
     if (rtl->enabled && (vdev->pdev.cap_present & QEMU_PCI_CAP_MSIX)) {
         hwaddr offset = rtl->addr & 0xfff;
         memory_region_dispatch_read(&vdev->pdev.msix_table_mmio, offset,
-                                    &data, size, MEMTXATTRS_UNSPECIFIED);
+                                    &data, SIZE_MEMOP(size),
+                                    MEMTXATTRS_UNSPECIFIED);
         trace_vfio_quirk_rtl8168_msix_read(vdev->vbasedev.name, offset, data);
     }

--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 13/20] exec: Access MemoryRegion with MemOp
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:49   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 exec.c            |  6 ++++--
 memory_ldst.inc.c | 18 +++++++++---------
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/exec.c b/exec.c
index 3e78de3..5013864 100644
--- a/exec.c
+++ b/exec.c
@@ -3334,7 +3334,8 @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             /* XXX: could force current_cpu to NULL to avoid
                potential bugs */
             val = ldn_p(buf, l);
-            result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
+            result |= memory_region_dispatch_write(mr, addr1, val,
+                                                   SIZE_MEMOP(l), attrs);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
@@ -3395,7 +3396,8 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
             /* I/O case */
             release_lock |= prepare_mmio_access(mr);
             l = memory_access_size(mr, l, addr1);
-            result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
+            result |= memory_region_dispatch_read(mr, addr1, &val,
+                                                  SIZE_MEMOP(l), attrs);
             stn_p(buf, l, val);
         } else {
             /* RAM case */
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
index acf865b..e073cf9 100644
--- a/memory_ldst.inc.c
+++ b/memory_ldst.inc.c
@@ -38,7 +38,7 @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 4, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(4), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap32(val);
@@ -114,7 +114,7 @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 8, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(8), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap64(val);
@@ -188,7 +188,7 @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 1, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -224,7 +224,7 @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 2, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(2), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap16(val);
@@ -300,7 +300,7 @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     if (l < 4 || !memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
         stl_p(ptr, val);
@@ -346,7 +346,7 @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
             val = bswap32(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -408,7 +408,7 @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
     mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (!memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);
-        r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -451,7 +451,7 @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
             val = bswap16(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 2, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(2), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -524,7 +524,7 @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
             val = bswap64(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 8, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(8), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 14/20] cputlb: Access MemoryRegion with MemOp
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:50   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:50 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 184fc54..97d7a64 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -906,8 +906,8 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset,
-                                    &val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, SIZE_MEMOP(size),
+                                    iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
@@ -947,8 +947,8 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset,
-                                     val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, SIZE_MEMOP(size),
+                                     iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 15/20] memory: Access MemoryRegion with MemOp semantics
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:50   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:50 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

While converting the MemoryRegion access interfaces, MEMOP_SIZE and
SIZE_MEMOP were introduced as no-op stubs so the syntax could change
while the semantics stayed the same.

Now with interfaces converted, we fill the stubs and use MemOp
semantics.
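
As a quick sanity check, the filled-in macros are just a power-of-two
and log2 pair. A standalone sketch, mirroring the macros rather than
including the QEMU headers, with __builtin_ctzl standing in for QEMU's
ctzl:

#include <assert.h>

#define MO_SIZE 3
#define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE))    /* MemOp to size */
#define SIZE_MEMOP(ul)  (__builtin_ctzl(ul))       /* size to MemOp */

int main(void)
{
    /* Access sizes 1, 2, 4 and 8 round-trip through MemOp codes 0..3. */
    for (unsigned long s = 1; s <= 8; s <<= 1) {
        assert(MEMOP_SIZE(SIZE_MEMOP(s)) == s);
    }
    return 0;
}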

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/memop.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 73f1bf7..dff6da2 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -24,8 +24,7 @@ typedef enum MemOp {
     MO_BSWAP = 8,   /* Host reverse endian.  */
 } MemOp;

-/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
-#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
-#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
+#define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE)) /* MemOp to size.  */
+#define SIZE_MEMOP(ul)  (ctzl(ul))              /* Size to MemOp.  */

 #endif
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 16/20] memory: Single byte swap along the I/O path
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:51   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:51 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Now that MemOp has been pushed down into the memory API, we can
collapse the two byte swaps adjust_endianness and handle_bswap into
the former.

Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g. the SPARC64 Invert Endian TTE bit, with
redundant byte swaps cancelling out.
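
Since NEED_BE_BSWAP and NEED_LE_BSWAP are compile-time constants,
need_bswap() folds to a constant for any given access and the compiler
can drop the swap branch entirely. A standalone sketch of that shape,
with the constants hard-coded as they would be on a little-endian
build host:

#include <stdbool.h>

#define NEED_BE_BSWAP true      /* assumed little-endian build host */
#define NEED_LE_BSWAP false

enum { MO_BSWAP = 8 };
typedef int MemOp;

static inline bool need_bswap(bool big_endian)
{
    return (big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP);
}

/* The helper now builds the MemOp once; the only swap decision left
 * is a single conditional XOR. */
static MemOp build_op(MemOp size_op, bool big_endian)
{
    if (need_bswap(big_endian)) {
        size_op ^= MO_BSWAP;
    }
    return size_op;
}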

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c | 58 +++++++++++++++++++++++++-----------------------------
 memory.c           | 30 ++++++++++++++++------------
 2 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 97d7a64..6f5262c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,

 static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          int mmu_idx, target_ulong addr, uintptr_t retaddr,
-                         MMUAccessType access_type, int size)
+                         MMUAccessType access_type, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -906,14 +906,13 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset, &val, SIZE_MEMOP(size),
-                                    iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
     if (locked) {
@@ -925,7 +924,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,

 static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                       int mmu_idx, uint64_t val, target_ulong addr,
-                      uintptr_t retaddr, int size)
+                      uintptr_t retaddr, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -947,15 +946,15 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset, val, SIZE_MEMOP(size),
-                                     iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
-                               mmu_idx, iotlbentry->attrs, r, retaddr);
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op),
+                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
+                               retaddr);
     }
     if (locked) {
         qemu_mutex_unlock_iothread();
@@ -1210,26 +1209,13 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 #endif

 /*
- * Byte Swap Helper
+ * Byte Swap Checker
  *
- * This should all dead code away depending on the build host and
- * access type.
+ * This should compile away as dead code depending on host and access type.
  */
-
-static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
+static inline bool need_bswap(bool big_endian)
 {
-    if ((big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP)) {
-        switch (size) {
-        case 1: return val;
-        case 2: return bswap16(val);
-        case 4: return bswap32(val);
-        case 8: return bswap64(val);
-        default:
-            g_assert_not_reached();
-        }
-    } else {
-        return val;
-    }
+    return (big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP);
 }

 /*
@@ -1260,6 +1246,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
     unsigned a_bits = get_alignment_bits(get_tcgmemop(oi));
     void *haddr;
     uint64_t res;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1305,9 +1292,13 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             }
         }

-        res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
-                       mmu_idx, addr, retaddr, access_type, size);
-        return handle_bswap(res, size, big_endian);
+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
+        return io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
+                       mmu_idx, addr, retaddr, access_type, op);
     }

     /* Handle slow unaligned access (it spans two pages or IO).  */
@@ -1508,6 +1499,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
     unsigned a_bits = get_alignment_bits(get_tcgmemop(oi));
     void *haddr;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1553,9 +1545,13 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             }
         }

+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
         io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
-                  handle_bswap(val, size, big_endian),
-                  addr, retaddr, size);
+                  val, addr, retaddr, op);
         return;
     }

diff --git a/memory.c b/memory.c
index 73cb345..0aaa0a7 100644
--- a/memory.c
+++ b/memory.c
@@ -350,7 +350,7 @@ static bool memory_region_big_endian(MemoryRegion *mr)
 #endif
 }

-static bool memory_region_wrong_endianness(MemoryRegion *mr)
+static bool memory_region_endianness_inverted(MemoryRegion *mr)
 {
 #ifdef TARGET_WORDS_BIGENDIAN
     return mr->ops->endianness == DEVICE_LITTLE_ENDIAN;
@@ -359,23 +359,27 @@ static bool memory_region_wrong_endianness(MemoryRegion *mr)
 #endif
 }

-static void adjust_endianness(MemoryRegion *mr, uint64_t *data, unsigned size)
+static void adjust_endianness(MemoryRegion *mr, uint64_t *data, MemOp op)
 {
-    if (memory_region_wrong_endianness(mr)) {
-        switch (size) {
-        case 1:
+    if (memory_region_endianness_inverted(mr)) {
+        op ^= MO_BSWAP;
+    }
+
+    if (op & MO_BSWAP) {
+        switch (op & MO_SIZE) {
+        case MO_8:
             break;
-        case 2:
+        case MO_16:
             *data = bswap16(*data);
             break;
-        case 4:
+        case MO_32:
             *data = bswap32(*data);
             break;
-        case 8:
+        case MO_64:
             *data = bswap64(*data);
             break;
         default:
-            abort();
+            g_assert_not_reached();
         }
     }
 }
@@ -1449,7 +1453,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
     }

     r = memory_region_dispatch_read1(mr, addr, pval, size, attrs);
-    adjust_endianness(mr, pval, size);
+    adjust_endianness(mr, pval, op);
     return r;
 }

@@ -1492,7 +1496,7 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
         return MEMTX_DECODE_ERROR;
     }

-    adjust_endianness(mr, &data, size);
+    adjust_endianness(mr, &data, op);

     if ((!kvm_eventfds_enabled()) &&
         memory_region_dispatch_write_eventfds(mr, addr, data, size, attrs)) {
@@ -2338,7 +2342,7 @@ void memory_region_add_eventfd(MemoryRegion *mr,
     }

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
@@ -2373,7 +2377,7 @@ void memory_region_del_eventfd(MemoryRegion *mr,
     unsigned i;

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 17/20] cpu: TLB_FLAGS_MASK bit to force memory slow path
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:51   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:51 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

The fast path is taken when no TLB_FLAGS_MASK bit is set in the TLB
entry address.

TLB_FORCE_SLOW is simply a TLB_FLAGS_MASK bit that forces the slow
path; it has no other side effects.
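
All of these flags live below TARGET_PAGE_BITS of the page-aligned TLB
address, so a single AND decides between the fast and slow paths. A
standalone sketch of the check, assuming TARGET_PAGE_BITS of 12 for
illustration:

#include <stdbool.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12     /* assumed value, for illustration */

#define TLB_INVALID_MASK    (1 << (TARGET_PAGE_BITS - 1))
#define TLB_NOTDIRTY        (1 << (TARGET_PAGE_BITS - 2))
#define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
#define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))

#define TLB_FLAGS_MASK \
    (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO | TLB_RECHECK | TLB_FORCE_SLOW)

/* Fast path only when no flag bit is set in the TLB entry address. */
static bool tlb_fast_path(uint64_t tlb_addr)
{
    return (tlb_addr & TLB_FLAGS_MASK) == 0;
}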

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/cpu-all.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 536ea58..e496f99 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -331,12 +331,18 @@ CPUArchState *cpu_copy(CPUArchState *env);
 #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
 /* Set if TLB entry must have MMU lookup repeated for every access */
 #define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
+/* Set if TLB entry must take the slow path.  */
+#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))

 /* Use this mask to check interception with an alignment mask
  * in a TCG backend.
  */
-#define TLB_FLAGS_MASK  (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
-                         | TLB_RECHECK)
+#define TLB_FLAGS_MASK \
+    (TLB_INVALID_MASK  \
+     | TLB_NOTDIRTY    \
+     | TLB_MMIO        \
+     | TLB_RECHECK     \
+     | TLB_FORCE_SLOW)

 /**
  * tlb_hit_page: return true if page aligned @addr is a hit against the
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 18/20] cputlb: Byte swap memory transaction attribute
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:52   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:52 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Notice the new byte swap attribute and force the transaction through
the memory slow path.

Required by architectures that can invert the endianness of a memory
transaction, e.g. SPARC64 with its Invert Endian TTE bit.
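
For context, the attribute is meant to be set by a target's tlb_fill.
An illustrative fragment only; TTE_IE is a placeholder name and the
real SPARC64 wiring lands later in this series:

/* In the target's tlb_fill, once the matching TTE is found: */
MemTxAttrs attrs = {};
if (tte & TTE_IE) {             /* placeholder for the IE bit test */
    attrs.byte_swap = 1;        /* io_readx/io_writex XOR in MO_BSWAP */
}
tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx, page_size);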

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c      | 11 +++++++++++
 include/exec/memattrs.h |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 6f5262c..619787b 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -738,6 +738,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
          */
         address |= TLB_RECHECK;
     }
+    if (attrs.byte_swap) {
+        address |= TLB_FORCE_SLOW;
+    }
     if (!memory_region_is_ram(section->mr) &&
         !memory_region_is_romd(section->mr)) {
         /* IO memory case */
@@ -891,6 +894,10 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;

+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -933,6 +940,10 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;

+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
index d4a3477..a0644eb 100644
--- a/include/exec/memattrs.h
+++ b/include/exec/memattrs.h
@@ -37,6 +37,8 @@ typedef struct MemTxAttrs {
     unsigned int user:1;
     /* Requester ID (for MSI for example) */
     unsigned int requester_id:16;
+    /* SPARC64: TTE invert endianness */
+    unsigned int byte_swap:1;
     /*
      * The following are target-specific page-table bits.  These are not
      * related to actual memory transactions at all.  However, this structure
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v2 19/20] target/sparc: Add TLB entry with attributes
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:53   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Append MemTxAttrs to these interfaces so we can pass along the upcoming
Invert Endian TTE bit on SPARC64.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/sparc/mmu_helper.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index cbd1e91..826e14b 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -88,7 +88,7 @@ static const int perm_table[2][8] = {
 };

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -219,6 +219,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     target_ulong page_size;
     int error_code = 0, prot, access_index;
+    MemTxAttrs attrs = {};

     /*
      * TODO: If we ever need tlb_vaddr_to_host for this target,
@@ -229,7 +230,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     assert(!probe);

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);
     vaddr = address;
@@ -490,8 +491,8 @@ static inline int ultrasparc_tag_match(SparcTLBEntry *tlb,
     return 0;
 }

-static int get_physical_address_data(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int rw, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -608,8 +609,8 @@ static int get_physical_address_data(CPUSPARCState *env,
     return 1;
 }

-static int get_physical_address_code(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_code(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -686,7 +687,7 @@ static int get_physical_address_code(CPUSPARCState *env,
 }

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -716,11 +717,11 @@ static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
     }

     if (rw == 2) {
-        return get_physical_address_code(env, physical, prot, address,
+        return get_physical_address_code(env, physical, prot, attrs, address,
                                          mmu_idx);
     } else {
-        return get_physical_address_data(env, physical, prot, address, rw,
-                                         mmu_idx);
+        return get_physical_address_data(env, physical, prot, attrs, address,
+                                         rw, mmu_idx);
     }
 }

@@ -734,10 +735,11 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     hwaddr paddr;
     target_ulong page_size;
+    MemTxAttrs attrs = {};
     int error_code = 0, prot, access_index;

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);
     if (likely(error_code == 0)) {
@@ -747,7 +749,8 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                                    env->dmmu.mmu_primary_context,
                                    env->dmmu.mmu_secondary_context);

-        tlb_set_page(cs, vaddr, paddr, prot, mmu_idx, page_size);
+        tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx,
+                                page_size);
         return true;
     }
     if (probe) {
@@ -849,9 +852,10 @@ static int cpu_sparc_get_phys_page(CPUSPARCState *env, hwaddr *phys,
 {
     target_ulong page_size;
     int prot, access_index;
+    MemTxAttrs attrs = {};

-    return get_physical_address(env, phys, &prot, &access_index, addr, rw,
-                                mmu_idx, &page_size);
+    return get_physical_address(env, phys, &prot, &access_index, &attrs, addr,
+                                rw, mmu_idx, &page_size);
 }

 #if defined(TARGET_SPARC64)
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread
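
The shape of the change above, as a hedged sketch with toy names
(ToyTxAttrs and toy_translate are stand-ins, not the real QEMU types):
the MMU walk gains an attributes out-parameter, filled in next to the
physical address, so the caller can hand both to tlb_set_page_with_attrs():

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        bool byte_swap;                     /* mirrors MemTxAttrs.byte_swap */
    } ToyTxAttrs;

    /* Walk stand-in: fill *pa and *attrs from the matched "TTE". */
    static int toy_translate(uint64_t va, bool tte_ie, uint64_t *pa,
                             ToyTxAttrs *attrs)
    {
        *pa = va & ~0x1fffULL;              /* pretend an 8K page */
        if (tte_ie) {
            attrs->byte_swap = true;
        }
        return 0;                           /* no fault */
    }

    int main(void)
    {
        uint64_t pa;
        ToyTxAttrs attrs = { 0 };           /* start zeroed, like attrs = {} */

        toy_translate(0x2000abcdULL, true, &pa, &attrs);
        printf("pa=%llx byte_swap=%d\n", (unsigned long long)pa,
               attrs.byte_swap);
        return 0;
    }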

* [Qemu-devel] [PATCH v2 20/20] target/sparc: sun4u Invert Endian TTE bit
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:54   ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 15:54 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

This bit configures the endianness of PCI MMIO devices. It is used by
the Solaris and OpenBSD sunhme drivers.

Tested working on OpenBSD.

Unfortunately Solaris 10 had an unrelated keyboard issue blocking
testing... another inch towards Solaris 10 on SPARC64 =)

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/sparc/cpu.h        | 2 ++
 target/sparc/mmu_helper.c | 8 +++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.h b/target/sparc/cpu.h
index 8ed2250..77e8e07 100644
--- a/target/sparc/cpu.h
+++ b/target/sparc/cpu.h
@@ -277,6 +277,7 @@ enum {

 #define TTE_VALID_BIT       (1ULL << 63)
 #define TTE_NFO_BIT         (1ULL << 60)
+#define TTE_IE_BIT          (1ULL << 59)
 #define TTE_USED_BIT        (1ULL << 41)
 #define TTE_LOCKED_BIT      (1ULL <<  6)
 #define TTE_SIDEEFFECT_BIT  (1ULL <<  3)
@@ -293,6 +294,7 @@ enum {

 #define TTE_IS_VALID(tte)   ((tte) & TTE_VALID_BIT)
 #define TTE_IS_NFO(tte)     ((tte) & TTE_NFO_BIT)
+#define TTE_IS_IE(tte)      ((tte) & TTE_IE_BIT)
 #define TTE_IS_USED(tte)    ((tte) & TTE_USED_BIT)
 #define TTE_IS_LOCKED(tte)  ((tte) & TTE_LOCKED_BIT)
 #define TTE_IS_SIDEEFFECT(tte) ((tte) & TTE_SIDEEFFECT_BIT)
diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index 826e14b..77dc86a 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -537,6 +537,10 @@ static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
         if (ultrasparc_tag_match(&env->dtlb[i], address, context, physical)) {
             int do_fault = 0;

+            if (TTE_IS_IE(env->dtlb[i].tte)) {
+                attrs->byte_swap = true;
+            }
+
             /* access ok? */
             /* multiple bits in SFSR.FT may be set on TT_DFAULT */
             if (TTE_IS_PRIV(env->dtlb[i].tte) && is_user) {
@@ -792,7 +796,7 @@ void dump_mmu(CPUSPARCState *env)
             }
             if (TTE_IS_VALID(env->dtlb[i].tte)) {
                 qemu_printf("[%02u] VA: %" PRIx64 ", PA: %llx"
-                            ", %s, %s, %s, %s, ctx %" PRId64 " %s\n",
+                            ", %s, %s, %s, %s, ie %s, ctx %" PRId64 " %s\n",
                             i,
                             env->dtlb[i].tag & (uint64_t)~0x1fffULL,
                             TTE_PA(env->dtlb[i].tte),
@@ -801,6 +805,8 @@ void dump_mmu(CPUSPARCState *env)
                             TTE_IS_W_OK(env->dtlb[i].tte) ? "RW" : "RO",
                             TTE_IS_LOCKED(env->dtlb[i].tte) ?
                             "locked" : "unlocked",
+                            TTE_IS_IE(env->dtlb[i].tte) ?
+                            "yes" : "no",
                             env->dtlb[i].tag & (uint64_t)0x1fffULL,
                             TTE_IS_GLOBAL(env->dtlb[i].tte) ?
                             "global" : "local");
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread
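
The macros above are plain bit tests on the 64-bit TTE; a tiny
standalone check (TTE_VALID_BIT and TTE_IE_BIT copied from the patch,
everything else illustrative):

    #include <stdint.h>
    #include <stdio.h>

    #define TTE_VALID_BIT  (1ULL << 63)
    #define TTE_IE_BIT     (1ULL << 59)
    #define TTE_IS_IE(tte) ((tte) & TTE_IE_BIT)

    int main(void)
    {
        uint64_t tte = TTE_VALID_BIT | TTE_IE_BIT;  /* valid, invert-endian */

        printf("ie %s\n", TTE_IS_IE(tte) ? "yes" : "no");  /* prints "ie yes" */
        return 0;
    }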

* Re: [Qemu-devel] [PATCH v2 00/20] Invert Endian bit in SPARCv9 MMU TTE
  2019-07-22 15:34 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 15:59   ` Richard Henderson
  -1 siblings, 0 replies; 120+ messages in thread
From: Richard Henderson @ 2019-07-22 15:59 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

On 7/22/19 8:34 AM, tony.nguyen@bt.com wrote:
> Tony Nguyen (20):
>   tcg: Replace MO_8 with MO_UB alias
>   tcg: Replace MO_16 with MO_UW alias
>   tcg: Replace MO_32 with MO_UL alias
>   tcg: Replace MO_64 with MO_UQ alias
>   tcg: Move size+sign+endian from TCGMemOp to MemOp

I don't like any of these first 5 patches.
I don't understand your motivation here.  Why?


r~


^ permalink raw reply	[flat|nested] 120+ messages in thread

* Re: [Qemu-devel] [PATCH v2 00/20] Invert Endian bit in SPARCv9 MMU TTE
  2019-07-22 15:59   ` [Qemu-riscv] " Richard Henderson
@ 2019-07-22 16:22     ` Paolo Bonzini
  -1 siblings, 0 replies; 120+ messages in thread
From: Paolo Bonzini @ 2019-07-22 16:22 UTC (permalink / raw)
  To: Richard Henderson, tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, alex.williamson, qemu-ppc, amarkovic,
	aurelien

On 22/07/19 17:59, Richard Henderson wrote:
> On 7/22/19 8:34 AM, tony.nguyen@bt.com wrote:
>> Tony Nguyen (20):
>>   tcg: Replace MO_8 with MO_UB alias
>>   tcg: Replace MO_16 with MO_UW alias
>>   tcg: Replace MO_32 with MO_UL alias
>>   tcg: Replace MO_64 with MO_UQ alias
>>   tcg: Move size+sign+endian from TCGMemOp to MemOp
> I don't like any of these first 5 patches.
> I don't understand your motivation here.  Why?

He wants to avoid namespace collisions between MemOp and TCGMemOp, I think.

Paolo


^ permalink raw reply	[flat|nested] 120+ messages in thread

* Re: [Qemu-devel] [PATCH v2 00/20] Invert Endian bit in SPARCv9 MMU TTE
  2019-07-22 15:59   ` [Qemu-riscv] " Richard Henderson
@ 2019-07-22 16:28     ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-22 16:28 UTC (permalink / raw)
  To: richard.henderson, qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, claudio.fontana, qemu-s390x, qemu-ppc,
	amarkovic, pbonzini, aurelien

On 7/22/19 8:59 AM, Richard Henderson wrote:

>On 7/22/19 8:34 AM, tony.nguyen@bt.com wrote:
>> Tony Nguyen (20):
>>   tcg: Replace MO_8 with MO_UB alias
>>   tcg: Replace MO_16 with MO_UW alias
>>   tcg: Replace MO_32 with MO_UL alias
>>   tcg: Replace MO_64 with MO_UQ alias
>>   tcg: Move size+sign+endian from TCGMemOp to MemOp
>
>I don't like any of these first 5 patches.
>I don't understand your motivation here.  Why?

The motivation is to move only the attributes required by the memory API
from TCGMemOp into an accelerator-independent MemOp.

Once I moved MO_{8|16|32|64} into MemOp, many -Wenum-compare and
-Wenum-conversion warnings arose wherever a TCGMemOp and a MemOp were
compared or implicitly coerced.

Hence the idea to replace MO_{8|16|32|64} with MO_{UB|UW|UL|UQ}, so that
we keep comparing and coercing values of the same enum type, both TCGMemOps.

Do you prefer the v1 implementation of making TCGMemOp -> MemOp?

Tony.

^ permalink raw reply	[flat|nested] 120+ messages in thread
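
For reference, a minimal reproduction of the warning class being avoided
(toy enum names, not the real ones): GCC and Clang flag comparisons
between values of two distinct enumeration types under -Wenum-compare.

    /* cc -Wenum-compare -c enumcmp.c */
    enum MemOpToy    { TOY_MO_8 };
    enum TCGMemOpToy { TOY_TCG_MO_8 };

    int collide(enum MemOpToy a, enum TCGMemOpToy b)
    {
        return a == b;  /* warning: comparison between distinct enum types */
    }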

* Re: [Qemu-devel] [PATCH v2 00/20] Invert Endian bit in SPARCv9 MMU TTE
  2019-07-22 16:28     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-22 18:58       ` Richard Henderson
  -1 siblings, 0 replies; 120+ messages in thread
From: Richard Henderson @ 2019-07-22 18:58 UTC (permalink / raw)
  To: tony.nguyen, richard.henderson, qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, atar4qemu,
	ehabkost, sw, alex.williamson, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, qemu-s390x, qemu-ppc, amarkovic,
	pbonzini, aurelien

On 7/22/19 9:28 AM, tony.nguyen@bt.com wrote:
> Do you prefer the v1 implementation of making TCGMemOp -> MemOp?

Yes, I did prefer moving the entire enum.

The use of MO_8 etc. instead of MO_UB often emphasized that we were dealing
only with a size, without a sign.


r~


^ permalink raw reply	[flat|nested] 120+ messages in thread

* Re: [Qemu-devel] [qemu-s390x] [PATCH v2 01/20] tcg: Replace MO_8 with MO_UB alias
  2019-07-22 15:38   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-23  8:04     ` David Hildenbrand
  -1 siblings, 0 replies; 120+ messages in thread
From: David Hildenbrand @ 2019-07-23  8:04 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, pasic, borntraeger, rth, atar4qemu,
	ehabkost, sw, alex.williamson, qemu-arm, david, qemu-riscv,
	cohuck, claudio.fontana, qemu-s390x, qemu-ppc, amarkovic,
	pbonzini, aurelien

On 22.07.19 17:38, tony.nguyen@bt.com wrote:
> Preparation for splitting MO_8 out from TCGMemOp into new accelerator
> independent MemOp.
> 
> As MO_8 will be a value of MemOp, existing TCGMemOp comparisons and
> coercions will trigger -Wenum-compare and -Wenum-conversion.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  target/arm/sve_helper.c             |  4 +-
>  target/arm/translate-a64.c          | 14 +++----
>  target/arm/translate-sve.c          |  4 +-
>  target/arm/translate.c              | 38 +++++++++----------
>  target/i386/translate.c             | 72 +++++++++++++++++------------------
>  target/mips/translate.c             |  4 +-
>  target/ppc/translate/vmx-impl.inc.c | 28 +++++++-------
>  target/s390x/translate.c            |  2 +-
>  target/s390x/translate_vx.inc.c     |  4 +-
>  target/s390x/vec.h                  |  4 +-
>  tcg/aarch64/tcg-target.inc.c        | 16 ++++----
>  tcg/arm/tcg-target.inc.c            |  6 +--
>  tcg/i386/tcg-target.inc.c           | 54 +++++++++++++-------------
>  tcg/mips/tcg-target.inc.c           |  4 +-
>  tcg/riscv/tcg-target.inc.c          |  4 +-
>  tcg/sparc/tcg-target.inc.c          |  2 +-
>  tcg/tcg-op-gvec.c                   | 76 ++++++++++++++++++-------------------
>  tcg/tcg-op-vec.c                    | 10 ++---
>  tcg/tcg-op.c                        |  6 +--
>  tcg/tcg.h                           |  2 +-
>  20 files changed, 177 insertions(+), 177 deletions(-)
> 
> diff --git a/target/arm/sve_helper.c b/target/arm/sve_helper.c
> index fc0c175..4c7e11f 100644
> --- a/target/arm/sve_helper.c
> +++ b/target/arm/sve_helper.c
> @@ -1531,7 +1531,7 @@ void HELPER(sve_cpy_m_b)(void *vd, void *vn, void *vg,
>      uint64_t *d = vd, *n = vn;
>      uint8_t *pg = vg;
>  
> -    mm = dup_const(MO_8, mm);
> +    mm = dup_const(MO_UB, mm);

Sorry, but I don't like this. I never liked the use of
byte/word/long/quad for 8/16/32/64. It is a constant source of
confusion. Yes, we have it in the code, but I'd rather see less of it
than more.

-- 

Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 120+ messages in thread
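
For context on the dup_const() hunk quoted above, a hedged standalone
sketch of what such a helper computes (the vece element-size encoding is
assumed): it replicates an element across a 64-bit constant, so MO_8
versus MO_UB only changes which enumerator names the byte-sized case.

    #include <stdint.h>
    #include <stdio.h>

    /* Replicate an element of 2^vece bytes across a 64-bit constant. */
    static uint64_t toy_dup_const(unsigned vece, uint64_t c)
    {
        switch (vece) {
        case 0: return 0x0101010101010101ULL * (uint8_t)c;   /* MO_8 / MO_UB */
        case 1: return 0x0001000100010001ULL * (uint16_t)c;  /* MO_16 */
        case 2: return 0x0000000100000001ULL * (uint32_t)c;  /* MO_32 */
        default: return c;                                   /* MO_64 */
        }
    }

    int main(void)
    {
        printf("%016llx\n",
               (unsigned long long)toy_dup_const(0, 0xab)); /* abab...ab */
        return 0;
    }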

* [Qemu-devel] [PATCH v3 00/15] Invert Endian bit in SPARCv9 MMU TTE
  2019-07-22 15:51   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:01     ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

This patchset implements the IE (Invert Endian) bit in SPARCv9 MMU TTE.

It is an attempt at the approach outlined by Richard Henderson to Mark
Cave-Ayland.

Tested with OpenBSD on sun4u. Solaris 10 is my actual goal, but unfortunately a
separate keyboard issue still stands in the way.

On 01/11/17 19:15, Mark Cave-Ayland wrote:

>On 15/08/17 19:10, Richard Henderson wrote:
>
>> [CC Peter re MemTxAttrs below]
>>
>> On 08/15/2017 09:38 AM, Mark Cave-Ayland wrote:
>>> Working through an incorrect endian issue on qemu-system-sparc64, it has
>>> become apparent that at least one OS makes use of the IE (Invert Endian)
>>> bit in the SPARCv9 MMU TTE to map PCI memory space without the
>>> programmer having to manually endian-swap accesses.
>>>
>>> In other words, to quote the UltraSPARC specification: "if this bit is
>>> set, accesses to the associated page are processed with inverse
>>> endianness from what is specified by the instruction (big-for-little and
>>> little-for-big)".

A good explanation by Mark of why the IE bit is required.

>>>
>>> Looking through various bits of code, I'm trying to get a feel for the
>>> best way to implement this in an efficient manner. From what I can see
>>> this could be solved using an additional MMU index, however I'm not
>>> overly familiar with the memory and softmmu subsystems.
>>
>> No, it can't be solved with an MMU index.
>>
>>> Can anyone point me in the right direction as to what would be the best
>>> way to implement this feature within QEMU?
>>
>> It's definitely tricky.
>>
>> We definitely need some TLB_FLAGS_MASK bit set so that we're forced through
>> the
>> memory slow path.  There is no other way to bypass the endianness that we've
>> already encoded from the target instruction.
>>
>> Given the tlb_set_page_with_attrs interface, I would think that we need a new
>> bit in MemTxAttrs, so that the target/sparc tlb_fill (and subroutines) can
>> pass
>> along the TTE bit for the given page.
>>
>> We have an existing problem in softmmu_template.h,
>>
>>     /* ??? Note that the io helpers always read data in the target
>>        byte ordering.  We should push the LE/BE request down into io.  */
>>     res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
>>     res = TGT_BE(res);
>>
>> We do not want to add a third(!) byte swap along the i/o path.  We need to
>> collapse the two that we have already before considering this one.
>>
>> This probably takes the form of:
>>
>> (1) Replacing the "int size" argument with "TCGMemOp memop" for
>>       a) io_{read,write}x in accel/tcg/cputlb.c,
>>       b) memory_region_dispatch_{read,write} in memory.c,
>>       c) adjust_endianness in memory.c.
>>     This carries size+sign+endianness down to the next level.
>>
>> (2) In memory.c, adjust_endianness,
>>
>>      if (memory_region_wrong_endianness(mr)) {
>> -        switch (size) {
>> +        memop ^= MO_BSWAP;
>> +    }
>> +    if (memop & MO_BSWAP) {
>>
>>     For extra credit, re-arrange memory_region_wrong_endianness
>>     to something more explicit -- "wrong" isn't helpful.
>
>Finally I've had a bit of spare time to experiment with this approach,
>and from what I can see there are currently 2 issues:
>
>
>1) Using TCGMemOp in memory.c means it is no longer accelerator agnostic
>
>For the moment I've defined a separate MemOp in memory.h and provided a
>mapping function in io_{read,write}x to map from TCGMemOp to MemOp and
>then pass that into memory_region_dispatch_{read,write}.
>
>Other than not referencing TCGMemOp in the memory API, another reason
>for doing this was that I wasn't convinced that all the MO_ attributes
>were valid outside of TCG. I do, of course, strongly defer to other
>people's knowledge in this area though.
>
>
>2) The above changes to adjust_endianness() fail when
>memory_region_dispatch_{read,write} are called recursively
>
>Whilst booting qemu-system-sparc64 I see that
>memory_region_dispatch_{read,write} get called recursively - once via
>io_{read,write}x and then again via flatview_read_continue() in exec.c.
>
>The net effect of this is that we perform the bswap correctly at the
>tail of the recursion, but then as we travel back up the stack we hit
>memory_region_dispatch_{read,write} once again causing a second bswap
>which means the value is returned with the incorrect endian again.
>
>
>My understanding from your softmmu_template.h comment above is that the
>memory API should do the endian swapping internally, allowing the removal
>of the final TGT_BE/TGT_LE applied to the result - or did I get this wrong?
>
>> (3) In tlb_set_page_with_attrs, notice attrs.byte_swap and set
>>     a new TLB_FORCE_SLOW bit within TLB_FLAGS_MASK.
>>
>> (4) In io_{read,write}x, if iotlbentry->attrs.byte_swap is set,
>>     then memop ^= MO_BSWAP.

Thanks all for the v1 and v2 feedback.

v2:
- Moved size+sign+endianness attributes from TCGMemOp into MemOp.
  In v1 TCGMemOp was re-purposed entirely into MemOp.
- Replaced MemOp MO_{8|16|32|64} with TCGMemOp MO_{UB|UW|UL|UQ} alias.
  This is to avoid warnings on comparing and coercing different enums.
- Renamed get_memop to get_tcgmemop for clarity.
- MEMOP is now SIZE_MEMOP, which is just ctzl(size).
- Split patch 3/4 so one memory_region_dispatch_{read|write} interface
  is converted per patch.
- Do not reuse TLB_RECHECK, use new TLB_FORCE_SLOW instead.
- Split patch 4/4 so adding the MemTxAddrs parameters and converting
  tlb_set_page() to tlb_set_page_with_attrs() is separate from usage.
- CC'd maintainers.

v3:
- Like v1, the entire TCGMemOp enum is now MemOp.
- MemOp target-dependent attributes are conditional upon NEED_CPU_H

Tony Nguyen (15):
  tcg: TCGMemOp is now accelerator independent MemOp
  memory: Access MemoryRegion with MemOp
  target/mips: Access MemoryRegion with MemOp
  hw/s390x: Access MemoryRegion with MemOp
  hw/intc/armv7m_nic: Access MemoryRegion with MemOp
  hw/virtio: Access MemoryRegion with MemOp
  hw/vfio: Access MemoryRegion with MemOp
  exec: Access MemoryRegion with MemOp
  cputlb: Access MemoryRegion with MemOp
  memory: Access MemoryRegion with MemOp semantics
  memory: Single byte swap along the I/O path
  cpu: TLB_FLAGS_MASK bit to force memory slow path
  cputlb: Byte swap memory transaction attribute
  target/sparc: Add TLB entry with attributes
  target/sparc: sun4u Invert Endian TTE bit

 accel/tcg/cputlb.c                      |  71 +++++++++--------
 exec.c                                  |   6 +-
 hw/intc/armv7m_nvic.c                   |  12 ++-
 hw/s390x/s390-pci-inst.c                |   8 +-
 hw/vfio/pci-quirks.c                    |   5 +-
 hw/virtio/virtio-pci.c                  |   7 +-
 include/exec/cpu-all.h                  |  10 ++-
 include/exec/memattrs.h                 |   2 +
 include/exec/memop.h                    | 112 +++++++++++++++++++++++++++
 include/exec/memory.h                   |   9 ++-
 memory.c                                |  37 +++++----
 memory_ldst.inc.c                       |  18 ++---
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/op_helper.c                 |   5 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/cpu.h                      |   2 +
 target/sparc/mmu_helper.c               |  40 ++++++----
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 51 files changed, 561 insertions(+), 488 deletions(-)
 create mode 100644 include/exec/memop.h

--
1.8.3.1




^ permalink raw reply	[flat|nested] 120+ messages in thread
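
A standalone sketch of the collapsed swap from step (2) in the quoted
plan (enum values assumed to mirror the series' MemOp; __builtin_bswap*
are GCC/Clang builtins): the region's endianness mismatch toggles the
request, and a single combined test then decides whether any swap
happens at all.

    #include <stdint.h>
    #include <stdio.h>

    enum { MO_16 = 1, MO_32 = 2, MO_64 = 3, MO_SIZE = 3, MO_BSWAP = 8 };

    static uint64_t bswap_sized(uint64_t v, int size)
    {
        switch (size) {
        case MO_16: return __builtin_bswap16(v);
        case MO_32: return __builtin_bswap32(v);
        case MO_64: return __builtin_bswap64(v);
        default:    return v;               /* a single byte has no order */
        }
    }

    /* One decision point: mismatch toggles the request, then the
     * combined flag alone decides whether a swap happens. */
    static uint64_t toy_adjust(uint64_t val, int memop, int region_mismatch)
    {
        if (region_mismatch) {
            memop ^= MO_BSWAP;
        }
        return (memop & MO_BSWAP) ? bswap_sized(val, memop & MO_SIZE) : val;
    }

    int main(void)
    {
        /* Requested swap and region mismatch cancel: prints 11223344. */
        printf("%08llx\n", (unsigned long long)
               toy_adjust(0x11223344u, MO_32 | MO_BSWAP, 1));
        return 0;
    }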

* [Qemu-devel] [PATCH v3 01/15] tcg: TCGMemOp is now accelerator independent MemOp
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:03       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c                      |   2 +-
 include/exec/memop.h                    | 109 ++++++++++++++++++++++++++
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 38 files changed, 419 insertions(+), 397 deletions(-)
 create mode 100644 include/exec/memop.h

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bb9897b..523be4c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(tlbe);
-    TCGMemOp mop = get_memop(oi);
+    MemOp mop = get_memop(oi);
     int a_bits = get_alignment_bits(mop);
     int s_bits = mop & MO_SIZE;
     void *hostaddr;
diff --git a/include/exec/memop.h b/include/exec/memop.h
new file mode 100644
index 0000000..ac58066
--- /dev/null
+++ b/include/exec/memop.h
@@ -0,0 +1,109 @@
+/*
+ * Constants for memory operations
+ *
+ * Authors:
+ *  Richard Henderson <rth@twiddle.net>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MEMOP_H
+#define MEMOP_H
+
+typedef enum MemOp {
+    MO_8     = 0,
+    MO_16    = 1,
+    MO_32    = 2,
+    MO_64    = 3,
+    MO_SIZE  = 3,   /* Mask for the above.  */
+
+    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
+
+    MO_BSWAP = 8,   /* Host reverse endian.  */
+#ifdef HOST_WORDS_BIGENDIAN
+    MO_LE    = MO_BSWAP,
+    MO_BE    = 0,
+#else
+    MO_LE    = 0,
+    MO_BE    = MO_BSWAP,
+#endif
+#ifdef NEED_CPU_H
+#ifdef TARGET_WORDS_BIGENDIAN
+    MO_TE    = MO_BE,
+#else
+    MO_TE    = MO_LE,
+#endif
+#endif
+
+    /*
+     * MO_UNALN accesses are never checked for alignment.
+     * MO_ALIGN accesses will result in a call to the CPU's
+     * do_unaligned_access hook if the guest address is not aligned.
+     * The default depends on whether the target CPU defines ALIGNED_ONLY.
+     *
+     * Some architectures (e.g. ARMv8) need an address that is aligned
+     * to a size larger than the size of the memory access.
+     * Some architectures (e.g. SPARCv9) need an address that is aligned,
+     * but less strictly than the natural alignment.
+     *
+     * MO_ALIGN assumes the alignment size is the size of the memory access.
+     *
+     * There are three options:
+     * - unaligned access permitted (MO_UNALN);
+     * - an alignment to the size of an access (MO_ALIGN);
+     * - an alignment to a specified size, which may be more or less than
+     *   the access size (MO_ALIGN_x, where 'x' is a size in bytes).
+     */
+    MO_ASHIFT = 4,
+    MO_AMASK = 7 << MO_ASHIFT,
+#ifdef NEED_CPU_H
+#ifdef ALIGNED_ONLY
+    MO_ALIGN = 0,
+    MO_UNALN = MO_AMASK,
+#else
+    MO_ALIGN = MO_AMASK,
+    MO_UNALN = 0,
+#endif
+#endif
+    MO_ALIGN_2  = 1 << MO_ASHIFT,
+    MO_ALIGN_4  = 2 << MO_ASHIFT,
+    MO_ALIGN_8  = 3 << MO_ASHIFT,
+    MO_ALIGN_16 = 4 << MO_ASHIFT,
+    MO_ALIGN_32 = 5 << MO_ASHIFT,
+    MO_ALIGN_64 = 6 << MO_ASHIFT,
+
+    /* Combinations of the above, for ease of use.  */
+    MO_UB    = MO_8,
+    MO_UW    = MO_16,
+    MO_UL    = MO_32,
+    MO_SB    = MO_SIGN | MO_8,
+    MO_SW    = MO_SIGN | MO_16,
+    MO_SL    = MO_SIGN | MO_32,
+    MO_Q     = MO_64,
+
+    MO_LEUW  = MO_LE | MO_UW,
+    MO_LEUL  = MO_LE | MO_UL,
+    MO_LESW  = MO_LE | MO_SW,
+    MO_LESL  = MO_LE | MO_SL,
+    MO_LEQ   = MO_LE | MO_Q,
+
+    MO_BEUW  = MO_BE | MO_UW,
+    MO_BEUL  = MO_BE | MO_UL,
+    MO_BESW  = MO_BE | MO_SW,
+    MO_BESL  = MO_BE | MO_SL,
+    MO_BEQ   = MO_BE | MO_Q,
+
+#ifdef NEED_CPU_H
+    MO_TEUW  = MO_TE | MO_UW,
+    MO_TEUL  = MO_TE | MO_UL,
+    MO_TESW  = MO_TE | MO_SW,
+    MO_TESL  = MO_TE | MO_SL,
+    MO_TEQ   = MO_TE | MO_Q,
+#endif
+
+    MO_SSIZE = MO_SIZE | MO_SIGN,
+} MemOp;
+
+#endif
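
(Usage sketch, not part of the diff: the MO_ALIGN_x constants above let a
target request an alignment check independent of the access size, e.g.

    MemOp mop = MO_TEUL | MO_ALIGN_16;  /* 4-byte target-endian load with a
                                           16-byte alignment check */

MO_TEUL assumes NEED_CPU_H; the hppa trans_ldc hunk below uses this same
combination, plus the access size.)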
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index 2c9cccf..d5d4888 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -403,7 +403,7 @@ static inline void gen_store_mem(DisasContext *ctx,

 static DisasJumpType gen_store_conditional(DisasContext *ctx, int ra, int rb,
                                            int32_t disp16, int mem_idx,
-                                           TCGMemOp op)
+                                           MemOp op)
 {
     TCGLabel *lab_fail, *lab_done;
     TCGv addr, val;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d323147..b6c07d6 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -85,7 +85,7 @@ typedef void NeonGenOneOpFn(TCGv_i64, TCGv_i64);
 typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
 typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
-typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
+typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);

 /* initialize TCG globals.  */
 void a64_translate_init(void)
@@ -455,7 +455,7 @@ TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
  * Dn, Sn, Hn or Bn).
  * (Note that this is not the same mapping as for A32; see cpu.h)
  */
-static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
+static inline int fp_reg_offset(DisasContext *s, int regno, MemOp size)
 {
     return vec_reg_offset(s, regno, 0, size);
 }
@@ -871,7 +871,7 @@ static void do_gpr_ld_memidx(DisasContext *s,
                              bool iss_valid, unsigned int iss_srt,
                              bool iss_sf, bool iss_ar)
 {
-    TCGMemOp memop = s->be_data + size;
+    MemOp memop = s->be_data + size;

     g_assert(size <= 3);

@@ -948,7 +948,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
     TCGv_i64 tmphi;

     if (size < 4) {
-        TCGMemOp memop = s->be_data + size;
+        MemOp memop = s->be_data + size;
         tmphi = tcg_const_i64(0);
         tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
     } else {
@@ -989,7 +989,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)

 /* Get value of an element within a vector register */
 static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
-                             int element, TCGMemOp memop)
+                             int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1021,7 +1021,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 }

 static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
-                                 int element, TCGMemOp memop)
+                                 int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1048,7 +1048,7 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,

 /* Set value of an element within a vector register */
 static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
-                              int element, TCGMemOp memop)
+                              int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1070,7 +1070,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 }

 static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
-                                  int destidx, int element, TCGMemOp memop)
+                                  int destidx, int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1090,7 +1090,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,

 /* Store from vector register to memory */
 static void do_vec_st(DisasContext *s, int srcidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -1102,7 +1102,7 @@ static void do_vec_st(DisasContext *s, int srcidx, int element,

 /* Load from memory to vector register */
 static void do_vec_ld(DisasContext *s, int destidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -2200,7 +2200,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i64 addr, int size, bool is_pair)
 {
     int idx = get_mem_index(s);
-    TCGMemOp memop = s->be_data;
+    MemOp memop = s->be_data;

     g_assert(size <= 3);
     if (is_pair) {
@@ -3286,7 +3286,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
     TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
-    TCGMemOp endian = s->be_data;
+    MemOp endian = s->be_data;

     int ebytes;   /* bytes per element */
     int elements; /* elements per vector */
@@ -5455,7 +5455,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
     unsigned int mos, type, rm, cond, rn, rd;
     TCGv_i64 t_true, t_false, t_zero;
     DisasCompare64 c;
-    TCGMemOp sz;
+    MemOp sz;

     mos = extract32(insn, 29, 3);
     type = extract32(insn, 22, 2);
@@ -6267,7 +6267,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
     int mos = extract32(insn, 29, 3);
     uint64_t imm;
     TCGv_i64 tcg_res;
-    TCGMemOp sz;
+    MemOp sz;

     if (mos || imm5) {
         unallocated_encoding(s);
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        MemOp msize = esize == 16 ? MO_16 : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -8022,7 +8022,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
     int shift = (2 * esize) - immhb;
     int elements = is_scalar ? 1 : (64 / esize);
     bool round = extract32(opcode, 0, 1);
-    TCGMemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
+    MemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn, tcg_rd, tcg_round;
     TCGv_i32 tcg_rd_narrowed;
     TCGv_i64 tcg_final;
@@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
             }
         };
         NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
-        TCGMemOp memop = scalar ? size : MO_32;
+        MemOp memop = scalar ? size : MO_32;
         int maxpass = scalar ? 1 : is_q ? 4 : 2;

         for (pass = 0; pass < maxpass; pass++) {
@@ -8225,7 +8225,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
     TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
     TCGv_i32 tcg_shift = NULL;

-    TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
+    MemOp mop = size | (is_signed ? MO_SIGN : 0);
     int pass;

     if (fracbits || size == MO_64) {
@@ -10004,7 +10004,7 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
     int dsize = is_q ? 128 : 64;
     int esize = 8 << size;
     int elements = dsize/esize;
-    TCGMemOp memop = size | (is_u ? 0 : MO_SIGN);
+    MemOp memop = size | (is_u ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn = new_tmp_a64(s);
     TCGv_i64 tcg_rd = new_tmp_a64(s);
     TCGv_i64 tcg_round;
@@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_passres;
-            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
+            MemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);

             int elt = pass + is_q * 2;

@@ -11827,7 +11827,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,

     if (size == 2) {
         /* 32 + 32 -> 64 op */
-        TCGMemOp memop = size + (u ? 0 : MO_SIGN);
+        MemOp memop = size + (u ? 0 : MO_SIGN);

         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
@@ -12849,7 +12849,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     switch (is_fp) {
     case 1: /* normal fp */
-        /* convert insn encoded size to TCGMemOp size */
+        /* convert insn encoded size to MemOp size */
         switch (size) {
         case 0: /* half-precision */
             size = MO_16;
@@ -12897,7 +12897,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         return;
     }

-    /* Given TCGMemOp size, adjust register and indexing.  */
+    /* Given MemOp size, adjust register and indexing.  */
     switch (size) {
     case MO_16:
         index = h << 2 | l << 1 | m;
@@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_res[2];
         int pass;
         bool satop = extract32(opcode, 0, 1);
-        TCGMemOp memop = MO_32;
+        MemOp memop = MO_32;

         if (satop || !u) {
             memop |= MO_SIGN;
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 9ab4087..f1246b7 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -64,7 +64,7 @@ static inline void assert_fp_access_checked(DisasContext *s)
  * the FP/vector register Qn.
  */
 static inline int vec_reg_offset(DisasContext *s, int regno,
-                                 int element, TCGMemOp size)
+                                 int element, MemOp size)
 {
     int element_size = 1 << size;
     int offs = element * element_size;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fa068b0..5d7edd0 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4567,7 +4567,7 @@ static bool trans_STR_pri(DisasContext *s, arg_rri *a)
  */

 /* The memory mode of the dtype.  */
-static const TCGMemOp dtype_mop[16] = {
+static const MemOp dtype_mop[16] = {
     MO_UB, MO_UB, MO_UB, MO_UB,
     MO_SL, MO_UW, MO_UW, MO_UW,
     MO_SW, MO_SW, MO_UL, MO_UL,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462..d116c8c 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -114,7 +114,7 @@ typedef enum ISSInfo {
 } ISSInfo;

 /* Save the syndrome information for a Data Abort */
-static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo)
+static void disas_set_da_iss(DisasContext *s, MemOp memop, ISSInfo issinfo)
 {
     uint32_t syn;
     int sas = memop & MO_SIZE;
@@ -1079,7 +1079,7 @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
  * that the address argument is TCGv_i32 rather than TCGv.
  */

-static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
+static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
 {
     TCGv addr = tcg_temp_new();
     tcg_gen_extu_i32_tl(addr, a32);
@@ -1092,7 +1092,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
 }

 static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1107,7 +1107,7 @@ static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
 }

 static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1160,7 +1160,7 @@ static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
 }

 static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);
     tcg_gen_qemu_ld_i64(val, addr, index, opc);
@@ -1175,7 +1175,7 @@ static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
 }

 static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);

@@ -1400,7 +1400,7 @@ neon_reg_offset (int reg, int n)
  * where 0 is the least significant end of the register.
  */
 static inline long
-neon_element_offset(int reg, int element, TCGMemOp size)
+neon_element_offset(int reg, int element, MemOp size)
 {
     int element_size = 1 << size;
     int ofs = element * element_size;
@@ -1422,7 +1422,7 @@ static TCGv_i32 neon_load_reg(int reg, int pass)
     return tmp;
 }

-static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1441,7 +1441,7 @@ static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
     }
 }

-static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1469,7 +1469,7 @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
     tcg_temp_free_i32(var);
 }

-static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
+static void neon_store_element(int reg, int ele, MemOp size, TCGv_i32 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -1488,7 +1488,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     }
 }

-static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
+static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -3558,7 +3558,7 @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int n;
     int vec_size;
     int mmu_idx;
-    TCGMemOp endian;
+    MemOp endian;
     TCGv_i32 addr;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -6867,7 +6867,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             } else if ((insn & 0x380) == 0) {
                 /* VDUP */
                 int element;
-                TCGMemOp size;
+                MemOp size;

                 if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
                     return 1;
@@ -7435,7 +7435,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i32 addr, int size)
 {
     TCGv_i32 tmp = tcg_temp_new_i32();
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     s->is_ldex = true;

@@ -7489,7 +7489,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     TCGv taddr;
     TCGLabel *done_label;
     TCGLabel *fail_label;
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     /* if (env->exclusive_addr == addr && env->exclusive_val == [addr]) {
          [addr] = {Rt};
@@ -8603,7 +8603,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                         */

                         TCGv taddr;
-                        TCGMemOp opc = s->be_data;
+                        MemOp opc = s->be_data;

                         rm = (insn) & 0xf;

diff --git a/target/arm/translate.h b/target/arm/translate.h
index a20f6e2..284c510 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -21,7 +21,7 @@ typedef struct DisasContext {
     int condexec_cond;
     int thumb;
     int sctlr_b;
-    TCGMemOp be_data;
+    MemOp be_data;
 #if !defined(CONFIG_USER_ONLY)
     int user;
 #endif
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 188fe68..ff4802a 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1500,7 +1500,7 @@ static void form_gva(DisasContext *ctx, TCGv_tl *pgva, TCGv_reg *pofs,
  */
 static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1518,7 +1518,7 @@ static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,

 static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1536,7 +1536,7 @@ static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,

 static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1554,7 +1554,7 @@ static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,

 static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1580,7 +1580,7 @@ static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,

 static bool do_load(DisasContext *ctx, unsigned rt, unsigned rb,
                     unsigned rx, int scale, target_sreg disp,
-                    unsigned sp, int modify, TCGMemOp mop)
+                    unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg dest;

@@ -1653,7 +1653,7 @@ static bool trans_fldd(DisasContext *ctx, arg_ldst *a)

 static bool do_store(DisasContext *ctx, unsigned rt, unsigned rb,
                      target_sreg disp, unsigned sp,
-                     int modify, TCGMemOp mop)
+                     int modify, MemOp mop)
 {
     nullify_over(ctx);
     do_store_reg(ctx, load_gpr(ctx, rt), rb, 0, 0, disp, sp, modify, mop);
@@ -2940,7 +2940,7 @@ static bool trans_st(DisasContext *ctx, arg_ldst *a)

 static bool trans_ldc(DisasContext *ctx, arg_ldst *a)
 {
-    TCGMemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
+    MemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
     TCGv_reg zero, dest, ofs;
     TCGv_tl addr;

diff --git a/target/i386/translate.c b/target/i386/translate.c
index 03150a8..def9867 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -87,8 +87,8 @@ typedef struct DisasContext {
     /* current insn context */
     int override; /* -1 if no override */
     int prefix;
-    TCGMemOp aflag;
-    TCGMemOp dflag;
+    MemOp aflag;
+    MemOp dflag;
     target_ulong pc_start;
     target_ulong pc; /* pc = eip + cs_base */
     /* current block context */
@@ -149,7 +149,7 @@ static void gen_eob(DisasContext *s);
 static void gen_jr(DisasContext *s, TCGv dest);
 static void gen_jmp(DisasContext *s, target_ulong eip);
 static void gen_jmp_tb(DisasContext *s, target_ulong eip, int tb_num);
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d);
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d);

 /* i386 arith/logic operations */
 enum {
@@ -320,7 +320,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 }

 /* Select the size of a push/pop operation.  */
-static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
+static inline MemOp mo_pushpop(DisasContext *s, MemOp ot)
 {
     if (CODE64(s)) {
         return ot == MO_16 ? MO_16 : MO_64;
@@ -330,13 +330,13 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 }

 /* Select the size of the stack pointer.  */
-static inline TCGMemOp mo_stacksize(DisasContext *s)
+static inline MemOp mo_stacksize(DisasContext *s)
 {
     return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
-static inline TCGMemOp mo_64_32(TCGMemOp ot)
+static inline MemOp mo_64_32(MemOp ot)
 {
 #ifdef TARGET_X86_64
     return ot == MO_64 ? MO_64 : MO_32;
@@ -347,19 +347,19 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)

 /* Select size 8 if lsb of B is clear, else OT.  Used for decoding
    byte vs word opcodes.  */
-static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
+static inline MemOp mo_b_d(int b, MemOp ot)
 {
     return b & 1 ? ot : MO_8;
 }

 /* Select size 8 if lsb of B is clear, else OT capped at 32.
    Used for decoding operand size of port opcodes.  */
-static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
+static inline MemOp mo_b_d32(int b, MemOp ot)
 {
     return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
 }

-static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
+static void gen_op_mov_reg_v(DisasContext *s, MemOp ot, int reg, TCGv t0)
 {
     switch(ot) {
     case MO_8:
@@ -388,7 +388,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 }

 static inline
-void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
+void gen_op_mov_v_reg(DisasContext *s, MemOp ot, TCGv t0, int reg)
 {
     if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
         tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
@@ -411,13 +411,13 @@ static inline void gen_op_jmp_v(TCGv dest)
 }

 static inline
-void gen_op_add_reg_im(DisasContext *s, TCGMemOp size, int reg, int32_t val)
+void gen_op_add_reg_im(DisasContext *s, MemOp size, int reg, int32_t val)
 {
     tcg_gen_addi_tl(s->tmp0, cpu_regs[reg], val);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
 }

-static inline void gen_op_add_reg_T0(DisasContext *s, TCGMemOp size, int reg)
+static inline void gen_op_add_reg_T0(DisasContext *s, MemOp size, int reg)
 {
     tcg_gen_add_tl(s->tmp0, cpu_regs[reg], s->T0);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
@@ -451,7 +451,7 @@ static inline void gen_jmp_im(DisasContext *s, target_ulong pc)
 /* Compute SEG:REG into A0.  SEG is selected from the override segment
    (OVR_SEG) and the default segment (DEF_SEG).  OVR_SEG may be -1 to
    indicate no override.  */
-static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
+static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
                           int def_seg, int ovr_seg)
 {
     switch (aflag) {
@@ -514,13 +514,13 @@ static inline void gen_string_movl_A0_EDI(DisasContext *s)
     gen_lea_v_seg(s, s->aflag, cpu_regs[R_EDI], R_ES, -1);
 }

-static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
+static inline void gen_op_movl_T0_Dshift(DisasContext *s, MemOp ot)
 {
     tcg_gen_ld32s_tl(s->T0, cpu_env, offsetof(CPUX86State, df));
     tcg_gen_shli_tl(s->T0, s->T0, ot);
 };

-static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
+static TCGv gen_ext_tl(TCGv dst, TCGv src, MemOp size, bool sign)
 {
     switch (size) {
     case MO_8:
@@ -551,18 +551,18 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
     }
 }

-static void gen_extu(TCGMemOp ot, TCGv reg)
+static void gen_extu(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, false);
 }

-static void gen_exts(TCGMemOp ot, TCGv reg)
+static void gen_exts(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, true);
 }

 static inline
-void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jnz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
@@ -570,14 +570,14 @@ void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
 }

 static inline
-void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
     tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1);
 }

-static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
+static void gen_helper_in_func(MemOp ot, TCGv v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -594,7 +594,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     }
 }

-static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
+static void gen_helper_out_func(MemOp ot, TCGv_i32 v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -611,7 +611,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     }
 }

-static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
+static void gen_check_io(DisasContext *s, MemOp ot, target_ulong cur_eip,
                          uint32_t svm_flags)
 {
     target_ulong next_eip;
@@ -644,7 +644,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     }
 }

-static inline void gen_movs(DisasContext *s, TCGMemOp ot)
+static inline void gen_movs(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -840,7 +840,7 @@ static CCPrepare gen_prepare_eflags_s(DisasContext *s, TCGv reg)
         return (CCPrepare) { .cond = TCG_COND_NEVER, .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, true);
             return (CCPrepare) { .cond = TCG_COND_LT, .reg = t0, .mask = -1 };
         }
@@ -885,7 +885,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
                              .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, false);
             return (CCPrepare) { .cond = TCG_COND_EQ, .reg = t0, .mask = -1 };
         }
@@ -897,7 +897,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
 static CCPrepare gen_prepare_cc(DisasContext *s, int b, TCGv reg)
 {
     int inv, jcc_op, cond;
-    TCGMemOp size;
+    MemOp size;
     CCPrepare cc;
     TCGv t0;

@@ -1075,7 +1075,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)
     return l2;
 }

-static inline void gen_stos(DisasContext *s, TCGMemOp ot)
+static inline void gen_stos(DisasContext *s, MemOp ot)
 {
     gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
     gen_string_movl_A0_EDI(s);
@@ -1084,7 +1084,7 @@ static inline void gen_stos(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_lods(DisasContext *s, TCGMemOp ot)
+static inline void gen_lods(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -1093,7 +1093,7 @@ static inline void gen_lods(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_ESI);
 }

-static inline void gen_scas(DisasContext *s, TCGMemOp ot)
+static inline void gen_scas(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1102,7 +1102,7 @@ static inline void gen_scas(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_cmps(DisasContext *s, TCGMemOp ot)
+static inline void gen_cmps(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1126,7 +1126,7 @@ static void gen_bpt_io(DisasContext *s, TCGv_i32 t_port, int ot)
 }


-static inline void gen_ins(DisasContext *s, TCGMemOp ot)
+static inline void gen_ins(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1148,7 +1148,7 @@ static inline void gen_ins(DisasContext *s, TCGMemOp ot)
     }
 }

-static inline void gen_outs(DisasContext *s, TCGMemOp ot)
+static inline void gen_outs(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1171,7 +1171,7 @@ static inline void gen_outs(DisasContext *s, TCGMemOp ot)
 /* same method as Valgrind : we generate jumps to current or next
    instruction */
 #define GEN_REPZ(op)                                                          \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
                                  target_ulong cur_eip, target_ulong next_eip) \
 {                                                                             \
     TCGLabel *l2;                                                             \
@@ -1187,7 +1187,7 @@ static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
 }

 #define GEN_REPZ2(op)                                                         \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
                                    target_ulong cur_eip,                      \
                                    target_ulong next_eip,                     \
                                    int nz)                                    \
@@ -1284,7 +1284,7 @@ static void gen_illegal_opcode(DisasContext *s)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d)
 {
     if (d != OR_TMP0) {
         if (s1->prefix & PREFIX_LOCK) {
@@ -1395,7 +1395,7 @@ static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
+static void gen_inc(DisasContext *s1, MemOp ot, int d, int c)
 {
     if (s1->prefix & PREFIX_LOCK) {
         if (d != OR_TMP0) {
@@ -1421,7 +1421,7 @@ static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
     set_cc_op(s1, (c > 0 ? CC_OP_INCB : CC_OP_DECB) + ot);
 }

-static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
+static void gen_shift_flags(DisasContext *s, MemOp ot, TCGv result,
                             TCGv shm1, TCGv count, bool is_right)
 {
     TCGv_i32 z32, s32, oldop;
@@ -1466,7 +1466,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shift_rm_T1(DisasContext *s, MemOp ot, int op1,
                             int is_right, int is_arith)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1502,7 +1502,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     gen_shift_flags(s, ot, s->T0, s->tmp0, s->T1, is_right);
 }

-static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_shift_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                             int is_right, int is_arith)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1542,7 +1542,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
     }
 }

-static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
+static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
     TCGv_i32 t0, t1;
@@ -1627,7 +1627,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_rot_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                           int is_right)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1705,7 +1705,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
 }

 /* XXX: add faster immediate = 1 case */
-static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
                            int is_right)
 {
     gen_compute_eflags(s);
@@ -1761,7 +1761,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 }

 /* XXX: add faster immediate case */
-static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shiftd_rm_T1(DisasContext *s, MemOp ot, int op1,
                              bool is_right, TCGv count_in)
 {
     target_ulong mask = (ot == MO_64 ? 63 : 31);
@@ -1842,7 +1842,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_temp_free(count);
 }

-static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
+static void gen_shift(DisasContext *s1, int op, MemOp ot, int d, int s)
 {
     if (s != OR_TMP1)
         gen_op_mov_v_reg(s1, ot, s1->T1, s);
@@ -1872,7 +1872,7 @@ static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
     }
 }

-static void gen_shifti(DisasContext *s1, int op, TCGMemOp ot, int d, int c)
+static void gen_shifti(DisasContext *s1, int op, MemOp ot, int d, int c)
 {
     switch(op) {
     case OP_ROL:
@@ -2149,7 +2149,7 @@ static void gen_add_A0_ds_seg(DisasContext *s)
 /* generate modrm memory load or store of 'reg'. TMP0 is used if reg ==
    OR_TMP0 */
 static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
-                           TCGMemOp ot, int reg, int is_store)
+                           MemOp ot, int reg, int is_store)
 {
     int mod, rm;

@@ -2179,7 +2179,7 @@ static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
     }
 }

-static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
+static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot)
 {
     uint32_t ret;

@@ -2202,7 +2202,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     return ret;
 }

-static inline int insn_const_size(TCGMemOp ot)
+static inline int insn_const_size(MemOp ot)
 {
     if (ot <= MO_32) {
         return 1 << ot;
@@ -2266,7 +2266,7 @@ static inline void gen_jcc(DisasContext *s, int b,
     }
 }

-static void gen_cmovcc1(CPUX86State *env, DisasContext *s, TCGMemOp ot, int b,
+static void gen_cmovcc1(CPUX86State *env, DisasContext *s, MemOp ot, int b,
                         int modrm, int reg)
 {
     CCPrepare cc;
@@ -2363,8 +2363,8 @@ static inline void gen_stack_update(DisasContext *s, int addend)
 /* Generate a push. It depends on ss32, addseg and dflag.  */
 static void gen_push_v(DisasContext *s, TCGv val)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);
     int size = 1 << d_ot;
     TCGv new_esp = s->A0;

@@ -2383,9 +2383,9 @@ static void gen_push_v(DisasContext *s, TCGv val)
 }

 /* two step pop is necessary for precise exceptions */
-static TCGMemOp gen_pop_T0(DisasContext *s)
+static MemOp gen_pop_T0(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp d_ot = mo_pushpop(s, s->dflag);

     gen_lea_v_seg(s, mo_stacksize(s), cpu_regs[R_ESP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -2393,7 +2393,7 @@ static TCGMemOp gen_pop_T0(DisasContext *s)
     return d_ot;
 }

-static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)
+static inline void gen_pop_update(DisasContext *s, MemOp ot)
 {
     gen_stack_update(s, 1 << ot);
 }
@@ -2405,8 +2405,8 @@ static inline void gen_stack_A0(DisasContext *s)

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2421,8 +2421,8 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2442,8 +2442,8 @@ static void gen_popa(DisasContext *s)

 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -2482,8 +2482,8 @@ static void gen_enter(DisasContext *s, int esp_addend, int level)

 static void gen_leave(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);

     gen_lea_v_seg(s, a_ot, cpu_regs[R_EBP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -3045,7 +3045,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     SSEFunc_0_eppi sse_fn_eppi;
     SSEFunc_0_ppi sse_fn_ppi;
     SSEFunc_0_eppt sse_fn_eppt;
-    TCGMemOp ot;
+    MemOp ot;

     b &= 0xff;
     if (s->prefix & PREFIX_DATA)
@@ -4488,7 +4488,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     CPUX86State *env = cpu->env_ptr;
     int b, prefixes;
     int shift;
-    TCGMemOp ot, aflag, dflag;
+    MemOp ot, aflag, dflag;
     int modrm, reg, rm, mod, op, opreg, val;
     target_ulong next_eip, tval;
     int rex_w, rex_r;
@@ -5567,8 +5567,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1be: /* movsbS Gv, Eb */
     case 0x1bf: /* movswS Gv, Eb */
         {
-            TCGMemOp d_ot;
-            TCGMemOp s_ot;
+            MemOp d_ot;
+            MemOp s_ot;

             /* d_ot is the size of destination */
             d_ot = dflag;
diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index 60bcfb7..24c1dd3 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -2414,7 +2414,7 @@ DISAS_INSN(cas)
     uint16_t ext;
     TCGv load;
     TCGv cmp;
-    TCGMemOp opc;
+    MemOp opc;

     switch ((insn >> 9) & 3) {
     case 1:
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 9ce65f3..41d1b8b 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -919,7 +919,7 @@ static void dec_load(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
@@ -1035,7 +1035,7 @@ static void dec_store(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
diff --git a/target/mips/translate.c b/target/mips/translate.c
index ca62800..59b5d85 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -2526,7 +2526,7 @@ typedef struct DisasContext {
     int32_t CP0_Config5;
     /* Routine used to access memory */
     int mem_idx;
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
     uint32_t hflags, saved_hflags;
     target_ulong btarget;
     bool ulri;
@@ -3706,7 +3706,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,

 /* Store conditional */
 static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset,
-                        TCGMemOp tcg_mo, bool eva)
+                        MemOp tcg_mo, bool eva)
 {
     TCGv addr, t0, val;
     TCGLabel *l1 = gen_new_label();
@@ -4546,7 +4546,7 @@ static void gen_HILO(DisasContext *ctx, uint32_t opc, int acc, int reg)
 }

 static inline void gen_r6_ld(target_long addr, int reg, int memidx,
-                             TCGMemOp memop)
+                             MemOp memop)
 {
     TCGv t0 = tcg_const_tl(addr);
     tcg_gen_qemu_ld_tl(t0, t0, memidx, memop);
@@ -21828,7 +21828,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
                              extract32(ctx->opcode, 0, 8);
                     TCGv va = tcg_temp_new();
                     TCGv t1 = tcg_temp_new();
-                    TCGMemOp memop = (extract32(ctx->opcode, 8, 3)) ==
+                    MemOp memop = (extract32(ctx->opcode, 8, 3)) ==
                                       NM_P_LS_UAWM ? MO_UNALN : 0;

                     count = (count == 0) ? 8 : count;
diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index 4360ce4..b189c50 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -681,7 +681,7 @@ static bool trans_l_lwa(DisasContext *dc, arg_load *a)
     return true;
 }

-static void do_load(DisasContext *dc, arg_load *a, TCGMemOp mop)
+static void do_load(DisasContext *dc, arg_load *a, MemOp mop)
 {
     TCGv ea;

@@ -763,7 +763,7 @@ static bool trans_l_swa(DisasContext *dc, arg_store *a)
     return true;
 }

-static void do_store(DisasContext *dc, arg_store *a, TCGMemOp mop)
+static void do_store(DisasContext *dc, arg_store *a, MemOp mop)
 {
     TCGv t0 = tcg_temp_new();
     tcg_gen_addi_tl(t0, cpu_R[a->a], a->i);
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 4a5de28..31800ed 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -162,7 +162,7 @@ struct DisasContext {
     int mem_idx;
     int access_type;
     /* Translation flags */
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
 #if defined(TARGET_PPC64)
     bool sf_mode;
     bool has_cfar;
@@ -3142,7 +3142,7 @@ static void gen_isync(DisasContext *ctx)

 #define MEMOP_GET_SIZE(x)  (1 << ((x) & MO_SIZE))

-static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
+static void gen_load_locked(DisasContext *ctx, MemOp memop)
 {
     TCGv gpr = cpu_gpr[rD(ctx->opcode)];
     TCGv t0 = tcg_temp_new();
@@ -3167,7 +3167,7 @@ LARX(lbarx, DEF_MEMOP(MO_UB))
 LARX(lharx, DEF_MEMOP(MO_UW))
 LARX(lwarx, DEF_MEMOP(MO_UL))

-static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
+static void gen_fetch_inc_conditional(DisasContext *ctx, MemOp memop,
                                       TCGv EA, TCGCond cond, int addend)
 {
     TCGv t = tcg_temp_new();
@@ -3193,7 +3193,7 @@ static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
     tcg_temp_free(u);
 }

-static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_ld_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3306,7 +3306,7 @@ static void gen_ldat(DisasContext *ctx)
 }
 #endif

-static void gen_st_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_st_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3389,7 +3389,7 @@ static void gen_stdat(DisasContext *ctx)
 }
 #endif

-static void gen_conditional_store(DisasContext *ctx, TCGMemOp memop)
+static void gen_conditional_store(DisasContext *ctx, MemOp memop)
 {
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
diff --git a/target/riscv/insn_trans/trans_rva.inc.c b/target/riscv/insn_trans/trans_rva.inc.c
index fadd888..be8a9f0 100644
--- a/target/riscv/insn_trans/trans_rva.inc.c
+++ b/target/riscv/insn_trans/trans_rva.inc.c
@@ -18,7 +18,7 @@
  * this program.  If not, see <http://www.gnu.org/licenses/>.
  */

-static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     /* Put addr in load_res, data in load_val.  */
@@ -37,7 +37,7 @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
     return true;
 }

-static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
@@ -82,8 +82,8 @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
 }

 static bool gen_amo(DisasContext *ctx, arg_atomic *a,
-                    void(*func)(TCGv, TCGv, TCGv, TCGArg, TCGMemOp),
-                    TCGMemOp mop)
+                    void(*func)(TCGv, TCGv, TCGv, TCGArg, MemOp),
+                    MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
diff --git a/target/riscv/insn_trans/trans_rvi.inc.c b/target/riscv/insn_trans/trans_rvi.inc.c
index ea64731..cf440d1 100644
--- a/target/riscv/insn_trans/trans_rvi.inc.c
+++ b/target/riscv/insn_trans/trans_rvi.inc.c
@@ -135,7 +135,7 @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
     return gen_branch(ctx, a, TCG_COND_GEU);
 }

-static bool gen_load(DisasContext *ctx, arg_lb *a, TCGMemOp memop)
+static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv t1 = tcg_temp_new();
@@ -174,7 +174,7 @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
     return gen_load(ctx, a, MO_TEUW);
 }

-static bool gen_store(DisasContext *ctx, arg_sb *a, TCGMemOp memop)
+static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv dat = tcg_temp_new();
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index ac0d8b6..2927247 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -152,7 +152,7 @@ static inline int vec_full_reg_offset(uint8_t reg)
     return offsetof(CPUS390XState, vregs[reg][0]);
 }

-static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
+static inline int vec_reg_offset(uint8_t reg, uint8_t enr, MemOp es)
 {
     /* Convert element size (es) - e.g. MO_8 - to bytes */
     const uint8_t bytes = 1 << es;
@@ -2262,7 +2262,7 @@ static DisasJumpType op_csst(DisasContext *s, DisasOps *o)
 #ifndef CONFIG_USER_ONLY
 static DisasJumpType op_csp(DisasContext *s, DisasOps *o)
 {
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;
     TCGv_i64 addr, old, cc;
     TCGLabel *lab = gen_new_label();

@@ -3228,7 +3228,7 @@ static DisasJumpType op_lm64(DisasContext *s, DisasOps *o)
 static DisasJumpType op_lpd(DisasContext *s, DisasOps *o)
 {
     TCGv_i64 a1, a2;
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;

     /* In a parallel context, stop the world and single step.  */
     if (tb_cflags(s->base.tb) & CF_PARALLEL) {
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf8..4c56bbb 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -57,13 +57,13 @@
 #define FPF_LONG        3
 #define FPF_EXT         4

-static inline bool valid_vec_element(uint8_t enr, TCGMemOp es)
+static inline bool valid_vec_element(uint8_t enr, MemOp es)
 {
     return !(enr & ~(NUM_VEC_ELEMENTS(es) - 1));
 }

 static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -96,7 +96,7 @@ static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
 }

 static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -123,7 +123,7 @@ static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
 }

 static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -146,7 +146,7 @@ static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
 }

 static void write_vec_element_i32(TCGv_i32 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index 091bab5..bef9ce6 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -2019,7 +2019,7 @@ static inline void gen_ne_fop_QD(DisasContext *dc, int rd, int rs,
 }

 static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
-                     TCGv addr, int mmu_idx, TCGMemOp memop)
+                     TCGv addr, int mmu_idx, MemOp memop)
 {
     gen_address_mask(dc, addr);
     tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
@@ -2050,10 +2050,10 @@ typedef struct {
     ASIType type;
     int asi;
     int mem_idx;
-    TCGMemOp memop;
+    MemOp memop;
 } DisasASI;

-static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
+static DisasASI get_asi(DisasContext *dc, int insn, MemOp memop)
 {
     int asi = GET_FIELD(insn, 19, 26);
     ASIType type = GET_ASI_HELPER;
@@ -2267,7 +2267,7 @@ static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
 }

 static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2305,7 +2305,7 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
 }

 static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2511,7 +2511,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for lddfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

@@ -2625,7 +2625,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for stdfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

diff --git a/target/tilegx/translate.c b/target/tilegx/translate.c
index c46a4ab..68dd4aa 100644
--- a/target/tilegx/translate.c
+++ b/target/tilegx/translate.c
@@ -290,7 +290,7 @@ static void gen_cmul2(TCGv tdest, TCGv tsrca, TCGv tsrcb, int sh, int rd)
 }

 static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
-                              unsigned srcb, TCGMemOp memop, const char *name)
+                              unsigned srcb, MemOp memop, const char *name)
 {
     if (dest) {
         return TILEGX_EXCP_OPCODE_UNKNOWN;
@@ -305,7 +305,7 @@ static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
 }

 static TileExcp gen_st_add_opcode(DisasContext *dc, unsigned srca, unsigned srcb,
-                                  int imm, TCGMemOp memop, const char *name)
+                                  int imm, MemOp memop, const char *name)
 {
     TCGv tsrca = load_gr(dc, srca);
     TCGv tsrcb = load_gr(dc, srcb);
@@ -496,7 +496,7 @@ static TileExcp gen_rr_opcode(DisasContext *dc, unsigned opext,
 {
     TCGv tdest, tsrca;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     TileExcp ret = TILEGX_EXCP_NONE;
     bool prefetch_nofault = false;

@@ -1478,7 +1478,7 @@ static TileExcp gen_rri_opcode(DisasContext *dc, unsigned opext,
     TCGv tsrca = load_gr(dc, srca);
     bool prefetch_nofault = false;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     int i2, i3;
     TCGv t0;

@@ -2106,7 +2106,7 @@ static TileExcp decode_y2(DisasContext *dc, tilegx_bundle_bits bundle)
     unsigned srca = get_SrcA_Y2(bundle);
     unsigned srcbdest = get_SrcBDest_Y2(bundle);
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     bool prefetch_nofault = false;

     switch (OEY2(opc, mode)) {
diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index dc2a65f..87a5f50 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -227,7 +227,7 @@ static inline void generate_trap(DisasContext *ctx, int class, int tin);
 /* Functions for load/save to/from memory */

 static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -236,7 +236,7 @@ static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
 }

 static inline void gen_offset_st(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -284,7 +284,7 @@ static void gen_offset_ld_2regs(TCGv rh, TCGv rl, TCGv base, int16_t con,
 }

 static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
@@ -294,7 +294,7 @@ static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
 }

 static void gen_ld_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
diff --git a/tcg/README b/tcg/README
index 21fcdf7..b4382fa 100644
--- a/tcg/README
+++ b/tcg/README
@@ -512,7 +512,7 @@ Both t0 and t1 may be split into little-endian ordered pairs of registers
 if dealing with 64-bit quantities on a 32-bit host.

 The memidx selects the qemu tlb index to use (e.g. user or kernel access).
-The flags are the TCGMemOp bits, selecting the sign, width, and endianness
+The flags are the MemOp bits, selecting the sign, width, and endianness
 of the memory access.

 For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0713448..3f92101 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -1423,7 +1423,7 @@ static inline void tcg_out_rev16(TCGContext *s, TCGReg rd, TCGReg rn)
     tcg_out_insn(s, 3507, REV16, TCG_TYPE_I32, rd, rn);
 }

-static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
+static inline void tcg_out_sxt(TCGContext *s, TCGType ext, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes SXTB, SXTH, SXTW, of SBFM Xd, Xn, #0, #7|15|31 */
@@ -1431,7 +1431,7 @@ static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
     tcg_out_sbfm(s, ext, rd, rn, 0, bits);
 }

-static inline void tcg_out_uxt(TCGContext *s, TCGMemOp s_bits,
+static inline void tcg_out_uxt(TCGContext *s, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes UXTB, UXTH of UBFM Wd, Wn, #0, #7|15 */
@@ -1580,8 +1580,8 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1605,8 +1605,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1649,7 +1649,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
    slow path for the failure case, which will be patched later when finalizing
    the slow path. Generated code returns the host addend in X1,
    clobbers X0,X2,X3,TMP. */
-static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
+static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                              tcg_insn_unit **label_ptr, int mem_index,
                              bool is_read)
 {
@@ -1709,11 +1709,11 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,

 #endif /* CONFIG_SOFTMMU */

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SSIZE) {
     case MO_UB:
@@ -1765,11 +1765,11 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SIZE) {
     case MO_8:
@@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi, TCGType ext)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
@@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index ece88dc..94d80d7 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1233,7 +1233,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
    containing the addend of the tlb entry.  Clobbers R0, R1, R2, TMP.  */

 static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                               TCGMemOp opc, int mem_index, bool is_load)
+                               MemOp opc, int mem_index, bool is_load)
 {
     int cmp_off = (is_load ? offsetof(CPUTLBEntry, addr_read)
                    : offsetof(CPUTLBEntry, addr_write));
@@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     void *func;

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
@@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1453,11 +1453,11 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 }
 #endif /* SOFTMMU */

-static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_index(TCGContext *s, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1514,11 +1514,11 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1577,7 +1577,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
@@ -1614,11 +1614,11 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 #endif
 }

-static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
+static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1659,11 +1659,11 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1708,7 +1708,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 6ddeebf..9d8ed97 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1697,7 +1697,7 @@ static void * const qemu_st_helpers[16] = {
    First argument register is clobbered.  */

 static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                                    int mem_index, TCGMemOp opc,
+                                    int mem_index, MemOp opc,
                                     tcg_insn_unit **label_ptr, int which)
 {
     const TCGReg r0 = TCG_REG_L0;
@@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg data_reg;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
@@ -1895,8 +1895,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     TCGReg retaddr;

@@ -1995,10 +1995,10 @@ static inline int setup_guest_base_seg(void)

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, bool is64, TCGMemOp memop)
+                                   int seg, bool is64, MemOp memop)
 {
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int rexw = is64 * P_REXW;
     int movop = OPC_MOVL_GvEv;

@@ -2103,7 +2103,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
@@ -2137,15 +2137,15 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, TCGMemOp memop)
+                                   int seg, MemOp memop)
 {
     /* ??? Ideally we wouldn't need a scratch register.  For user-only,
        we could perform the bswap twice to restore the original value
        instead of moving to the scratch.  But as it is, the L constraint
        means that TCG_REG_L0 is definitely free here.  */
     const TCGReg scratch = TCG_REG_L0;
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int movop = OPC_MOVL_EvGv;

     if (have_movbe && real_bswap) {
@@ -2221,7 +2221,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 41bff32..5442167 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit *label_ptr[2], bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     int mem_index = get_mmuidx(oi);
@@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg v0;
     int i;

@@ -1363,8 +1363,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     int i;

     /* resolve label address */
@@ -1413,7 +1413,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
     case MO_UB:
@@ -1521,7 +1521,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
@@ -1558,7 +1558,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
     /* Don't clutter the code below with checks to avoid bswapping ZERO.  */
     if ((lo | hi) == 0) {
@@ -1624,7 +1624,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
diff --git a/tcg/optimize.c b/tcg/optimize.c
index d2424de..a89ffda 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s)
         CASE_OP_32_64(qemu_ld):
             {
                 TCGMemOpIdx oi = op->args[nb_oargs + nb_iargs];
-                TCGMemOp mop = get_memop(oi);
+                MemOp mop = get_memop(oi);
                 if (!(mop & MO_SIGN)) {
                     mask = (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1;
                 }
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 852b894..815edac 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1506,7 +1506,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -32768);
    in CR7, loads the addend of the TLB into R3, and returns the register
    containing the guest address (zero-extended into R4).  Clobbers R0 and R2. */

-static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, MemOp opc,
                                TCGReg addrlo, TCGReg addrhi,
                                int mem_index, bool is_read)
 {
@@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1680,8 +1680,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1744,7 +1744,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
@@ -1819,7 +1819,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 3e76bf5..7018509 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit **label_ptr, bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     tcg_target_long compare_mask;
@@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1077,8 +1077,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1121,9 +1121,9 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif /* CONFIG_SOFTMMU */

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1172,7 +1172,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
@@ -1208,9 +1208,9 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1243,7 +1243,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index fe42939..bc5650b 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1430,7 +1430,7 @@ static void tcg_out_call(TCGContext *s, tcg_insn_unit *dest)
     }
 }

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
@@ -1489,7 +1489,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SIZE | MO_BSWAP)) {
@@ -1544,7 +1544,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 19));

 /* Load and compare a TLB entry, leaving the flags set.  Loads the TLB
   addend into R2.  Returns a register with the sanitized guest address.  */
-static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, MemOp opc,
                                int mem_index, bool is_ld)
 {
     unsigned s_bits = opc & MO_SIZE;
@@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGReg *addr_reg,
 static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
@@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 10b1cea..d7986cd 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -1081,7 +1081,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12));
    is in the returned register, maybe %o0.  The TLB addend is in %o1.  */

 static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index,
-                               TCGMemOp opc, int which)
+                               MemOp opc, int which)
 {
     int fast_off = TLB_MASK_TABLE_OFS(mem_index);
     int mask_off = fast_off + offsetof(CPUTLBDescFast, mask);
@@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] = {
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi, bool is_64)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
@@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 587d092..e87c327 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2714,7 +2714,7 @@ void tcg_gen_lookup_and_goto_ptr(void)
     }
 }

-static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
+static inline MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
 {
     /* Trigger the asserts within as early as possible.  */
     (void)get_alignment_bits(op);
@@ -2743,7 +2743,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
 }

 static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2758,7 +2758,7 @@ static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
 }

 static void gen_ldst_i64(TCGOpcode opc, TCGv_i64 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2788,9 +2788,9 @@ static void tcg_gen_req_mo(TCGBar type)
     }
 }

-void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     memop = tcg_canonicalize_memop(memop, 0, 0);
@@ -2825,7 +2825,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i32 swap = NULL;

@@ -2858,9 +2858,9 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
         tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
@@ -2911,7 +2911,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i64 swap = NULL;

@@ -2953,7 +2953,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
+static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -2974,7 +2974,7 @@ static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
     }
 }

-static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, TCGMemOp opc)
+static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -3034,7 +3034,7 @@ static void * const table_cmpxchg[16] = {
 };

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
-                                TCGv_i32 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i32 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 0, 0);

@@ -3078,7 +3078,7 @@ void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
 }

 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
-                                TCGv_i64 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i64 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3142,7 +3142,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
 }

 static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
 {
     TCGv_i32 t1 = tcg_temp_new_i32();
@@ -3160,7 +3160,7 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     gen_atomic_op_i32 gen;

@@ -3185,7 +3185,7 @@ static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64))
 {
     TCGv_i64 t1 = tcg_temp_new_i64();
@@ -3203,7 +3203,7 @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 }

 static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3257,7 +3257,7 @@ static void * const table_##NAME[16] = {                                \
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
 void tcg_gen_atomic_##NAME##_i32                                        \
-    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i32(ret, addr, val, idx, memop, table_##NAME);     \
@@ -3267,7 +3267,7 @@ void tcg_gen_atomic_##NAME##_i32                                        \
     }                                                                   \
 }                                                                       \
 void tcg_gen_atomic_##NAME##_i64                                        \
-    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i64(ret, addr, val, idx, memop, table_##NAME);     \
diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
index 2d4dd5c..e9cf172 100644
--- a/tcg/tcg-op.h
+++ b/tcg/tcg-op.h
@@ -851,10 +851,10 @@ void tcg_gen_lookup_and_goto_ptr(void);
 #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i64
 #endif

-void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
+void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, MemOp);

 static inline void tcg_gen_qemu_ld8u(TCGv ret, TCGv addr, int mem_index)
 {
@@ -912,46 +912,46 @@ static inline void tcg_gen_qemu_st64(TCGv_i64 arg, TCGv addr, int mem_index)
 }

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGv_i32,
-                                TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGv_i64,
-                                TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
+
+void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);

 void tcg_gen_mov_vec(TCGv_vec, TCGv_vec);
 void tcg_gen_dup_i32_vec(unsigned vece, TCGv_vec, TCGv_i32);
diff --git a/tcg/tcg.c b/tcg/tcg.c
index be2c33c..aa9931f 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
             case INDEX_op_qemu_st_i64:
                 {
                     TCGMemOpIdx oi = op->args[k++];
-                    TCGMemOp op = get_memop(oi);
+                    MemOp op = get_memop(oi);
                     unsigned ix = get_mmuidx(oi);

                     if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) {
diff --git a/tcg/tcg.h b/tcg/tcg.h
index b411e17..a37181c 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -26,6 +26,7 @@
 #define TCG_H

 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/tb-context.h"
 #include "qemu/bitops.h"
 #include "qemu/queue.h"
@@ -309,101 +310,13 @@ typedef enum TCGType {
 #endif
 } TCGType;

-/* Constants for qemu_ld and qemu_st for the Memory Operation field.  */
-typedef enum TCGMemOp {
-    MO_8     = 0,
-    MO_16    = 1,
-    MO_32    = 2,
-    MO_64    = 3,
-    MO_SIZE  = 3,   /* Mask for the above.  */
-
-    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
-
-    MO_BSWAP = 8,   /* Host reverse endian.  */
-#ifdef HOST_WORDS_BIGENDIAN
-    MO_LE    = MO_BSWAP,
-    MO_BE    = 0,
-#else
-    MO_LE    = 0,
-    MO_BE    = MO_BSWAP,
-#endif
-#ifdef TARGET_WORDS_BIGENDIAN
-    MO_TE    = MO_BE,
-#else
-    MO_TE    = MO_LE,
-#endif
-
-    /* MO_UNALN accesses are never checked for alignment.
-     * MO_ALIGN accesses will result in a call to the CPU's
-     * do_unaligned_access hook if the guest address is not aligned.
-     * The default depends on whether the target CPU defines ALIGNED_ONLY.
-     *
-     * Some architectures (e.g. ARMv8) need the address which is aligned
-     * to a size more than the size of the memory access.
-     * Some architectures (e.g. SPARCv9) need an address which is aligned,
-     * but less strictly than the natural alignment.
-     *
-     * MO_ALIGN supposes the alignment size is the size of a memory access.
-     *
-     * There are three options:
-     * - unaligned access permitted (MO_UNALN).
-     * - an alignment to the size of an access (MO_ALIGN);
-     * - an alignment to a specified size, which may be more or less than
-     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
-     */
-    MO_ASHIFT = 4,
-    MO_AMASK = 7 << MO_ASHIFT,
-#ifdef ALIGNED_ONLY
-    MO_ALIGN = 0,
-    MO_UNALN = MO_AMASK,
-#else
-    MO_ALIGN = MO_AMASK,
-    MO_UNALN = 0,
-#endif
-    MO_ALIGN_2  = 1 << MO_ASHIFT,
-    MO_ALIGN_4  = 2 << MO_ASHIFT,
-    MO_ALIGN_8  = 3 << MO_ASHIFT,
-    MO_ALIGN_16 = 4 << MO_ASHIFT,
-    MO_ALIGN_32 = 5 << MO_ASHIFT,
-    MO_ALIGN_64 = 6 << MO_ASHIFT,
-
-    /* Combinations of the above, for ease of use.  */
-    MO_UB    = MO_8,
-    MO_UW    = MO_16,
-    MO_UL    = MO_32,
-    MO_SB    = MO_SIGN | MO_8,
-    MO_SW    = MO_SIGN | MO_16,
-    MO_SL    = MO_SIGN | MO_32,
-    MO_Q     = MO_64,
-
-    MO_LEUW  = MO_LE | MO_UW,
-    MO_LEUL  = MO_LE | MO_UL,
-    MO_LESW  = MO_LE | MO_SW,
-    MO_LESL  = MO_LE | MO_SL,
-    MO_LEQ   = MO_LE | MO_Q,
-
-    MO_BEUW  = MO_BE | MO_UW,
-    MO_BEUL  = MO_BE | MO_UL,
-    MO_BESW  = MO_BE | MO_SW,
-    MO_BESL  = MO_BE | MO_SL,
-    MO_BEQ   = MO_BE | MO_Q,
-
-    MO_TEUW  = MO_TE | MO_UW,
-    MO_TEUL  = MO_TE | MO_UL,
-    MO_TESW  = MO_TE | MO_SW,
-    MO_TESL  = MO_TE | MO_SL,
-    MO_TEQ   = MO_TE | MO_Q,
-
-    MO_SSIZE = MO_SIZE | MO_SIGN,
-} TCGMemOp;
-
 /**
  * get_alignment_bits
- * @memop: TCGMemOp value
+ * @memop: MemOp value
  *
  * Extract the alignment size from the memop.
  */
-static inline unsigned get_alignment_bits(TCGMemOp memop)
+static inline unsigned get_alignment_bits(MemOp memop)
 {
     unsigned a = memop & MO_AMASK;

@@ -1184,7 +1097,7 @@ static inline size_t tcg_current_code_size(TCGContext *s)
     return tcg_ptr_byte_diff(s->code_ptr, s->code_buf);
 }

-/* Combine the TCGMemOp and mmu_idx parameters into a single value.  */
+/* Combine the MemOp and mmu_idx parameters into a single value.  */
 typedef uint32_t TCGMemOpIdx;

 /**
@@ -1194,7 +1107,7 @@ typedef uint32_t TCGMemOpIdx;
  *
  * Encode these values into a single parameter.
  */
-static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
+static inline TCGMemOpIdx make_memop_idx(MemOp op, unsigned idx)
 {
     tcg_debug_assert(idx <= 15);
     return (op << 4) | idx;
@@ -1206,7 +1119,7 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
  *
  * Extract the memory operation from the combined value.
  */
-static inline TCGMemOp get_memop(TCGMemOpIdx oi)
+static inline MemOp get_memop(TCGMemOpIdx oi)
 {
     return oi >> 4;
 }
diff --git a/trace/mem-internal.h b/trace/mem-internal.h
index f6efaf6..3444fbc 100644
--- a/trace/mem-internal.h
+++ b/trace/mem-internal.h
@@ -16,7 +16,7 @@
 #define TRACE_MEM_ST (1ULL << 5)    /* store (y/n) */

 static inline uint8_t trace_mem_build_info(
-    int size_shift, bool sign_extend, TCGMemOp endianness, bool store)
+    int size_shift, bool sign_extend, MemOp endianness, bool store)
 {
     uint8_t res;

@@ -33,7 +33,7 @@ static inline uint8_t trace_mem_build_info(
     return res;
 }

-static inline uint8_t trace_mem_get_info(TCGMemOp op, bool store)
+static inline uint8_t trace_mem_get_info(MemOp op, bool store)
 {
     return trace_mem_build_info(op & MO_SIZE, !!(op & MO_SIGN),
                                 op & MO_BSWAP, store);
diff --git a/trace/mem.h b/trace/mem.h
index 2b58196..8cf213d 100644
--- a/trace/mem.h
+++ b/trace/mem.h
@@ -18,7 +18,7 @@
  *
  * Return a value for the 'info' argument in guest memory access traces.
  */
-static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
+static uint8_t trace_mem_get_info(MemOp op, bool store);

 /**
  * trace_mem_build_info:
@@ -26,7 +26,7 @@ static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
  * Return a value for the 'info' argument in guest memory access traces.
  */
 static uint8_t trace_mem_build_info(int size_shift, bool sign_extend,
-                                    TCGMemOp endianness, bool store);
+                                    MemOp endianness, bool store);


 #include "trace/mem-internal.h"
--
1.8.3.1

* [Qemu-riscv] [Qemu-devel] [PATCH v3 01/15] tcg: TCGMemOp is now accelerator independent MemOp
@ 2019-07-25  7:03       ` tony.nguyen
  0 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Preparation for collapsing the two byte swaps (adjust_endianness and
handle_bswap) along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
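As a quick illustration (not part of the change itself): a minimal sketch of
how the renamed MemOp values compose, using only names defined by the new
include/exec/memop.h below; the op_le/op_te variables are invented for the
example.

    #include "exec/memop.h"

    /* An explicit little-endian, zero-extended 32-bit access.  These
       target-independent names are always available. */
    MemOp op_le = MO_LE | MO_UL;    /* same value as MO_LEUL */

    #ifdef NEED_CPU_H
    /* Target-endian forms exist only when compiling per-target code,
       since MO_TE depends on TARGET_WORDS_BIGENDIAN. */
    MemOp op_te = MO_TE | MO_UW;    /* same value as MO_TEUW */
    #endif

Accelerator-independent code can therefore use the explicit MO_LE/MO_BE
forms without pulling in cpu.h.
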
 accel/tcg/cputlb.c                      |   2 +-
 include/exec/memop.h                    | 109 ++++++++++++++++++++++++++
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 38 files changed, 419 insertions(+), 397 deletions(-)
 create mode 100644 include/exec/memop.h

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bb9897b..523be4c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(tlbe);
-    TCGMemOp mop = get_memop(oi);
+    MemOp mop = get_memop(oi);
     int a_bits = get_alignment_bits(mop);
     int s_bits = mop & MO_SIZE;
     void *hostaddr;
diff --git a/include/exec/memop.h b/include/exec/memop.h
new file mode 100644
index 0000000..ac58066
--- /dev/null
+++ b/include/exec/memop.h
@@ -0,0 +1,109 @@
+/*
+ * Constants for memory operations
+ *
+ * Authors:
+ *  Richard Henderson <rth@twiddle.net>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MEMOP_H
+#define MEMOP_H
+
+typedef enum MemOp {
+    MO_8     = 0,
+    MO_16    = 1,
+    MO_32    = 2,
+    MO_64    = 3,
+    MO_SIZE  = 3,   /* Mask for the above.  */
+
+    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
+
+    MO_BSWAP = 8,   /* Host reverse endian.  */
+#ifdef HOST_WORDS_BIGENDIAN
+    MO_LE    = MO_BSWAP,
+    MO_BE    = 0,
+#else
+    MO_LE    = 0,
+    MO_BE    = MO_BSWAP,
+#endif
+#ifdef NEED_CPU_H
+#ifdef TARGET_WORDS_BIGENDIAN
+    MO_TE    = MO_BE,
+#else
+    MO_TE    = MO_LE,
+#endif
+#endif
+
+    /*
+     * MO_UNALN accesses are never checked for alignment.
+     * MO_ALIGN accesses will result in a call to the CPU's
+     * do_unaligned_access hook if the guest address is not aligned.
+     * The default depends on whether the target CPU defines ALIGNED_ONLY.
+     *
+     * Some architectures (e.g. ARMv8) need the address which is aligned
+     * to a size more than the size of the memory access.
+     * Some architectures (e.g. SPARCv9) need an address which is aligned,
+     * but less strictly than the natural alignment.
+     *
+     * MO_ALIGN supposes the alignment size is the size of a memory access.
+     *
+     * There are three options:
+     * - unaligned access permitted (MO_UNALN).
+     * - an alignment to the size of an access (MO_ALIGN);
+     * - an alignment to a specified size, which may be more or less than
+     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
+     */
+    MO_ASHIFT = 4,
+    MO_AMASK = 7 << MO_ASHIFT,
+#ifdef NEED_CPU_H
+#ifdef ALIGNED_ONLY
+    MO_ALIGN = 0,
+    MO_UNALN = MO_AMASK,
+#else
+    MO_ALIGN = MO_AMASK,
+    MO_UNALN = 0,
+#endif
+#endif
+    MO_ALIGN_2  = 1 << MO_ASHIFT,
+    MO_ALIGN_4  = 2 << MO_ASHIFT,
+    MO_ALIGN_8  = 3 << MO_ASHIFT,
+    MO_ALIGN_16 = 4 << MO_ASHIFT,
+    MO_ALIGN_32 = 5 << MO_ASHIFT,
+    MO_ALIGN_64 = 6 << MO_ASHIFT,
+
+    /* Combinations of the above, for ease of use.  */
+    MO_UB    = MO_8,
+    MO_UW    = MO_16,
+    MO_UL    = MO_32,
+    MO_SB    = MO_SIGN | MO_8,
+    MO_SW    = MO_SIGN | MO_16,
+    MO_SL    = MO_SIGN | MO_32,
+    MO_Q     = MO_64,
+
+    MO_LEUW  = MO_LE | MO_UW,
+    MO_LEUL  = MO_LE | MO_UL,
+    MO_LESW  = MO_LE | MO_SW,
+    MO_LESL  = MO_LE | MO_SL,
+    MO_LEQ   = MO_LE | MO_Q,
+
+    MO_BEUW  = MO_BE | MO_UW,
+    MO_BEUL  = MO_BE | MO_UL,
+    MO_BESW  = MO_BE | MO_SW,
+    MO_BESL  = MO_BE | MO_SL,
+    MO_BEQ   = MO_BE | MO_Q,
+
+#ifdef NEED_CPU_H
+    MO_TEUW  = MO_TE | MO_UW,
+    MO_TEUL  = MO_TE | MO_UL,
+    MO_TESW  = MO_TE | MO_SW,
+    MO_TESL  = MO_TE | MO_SL,
+    MO_TEQ   = MO_TE | MO_Q,
+#endif
+
+    MO_SSIZE = MO_SIZE | MO_SIGN,
+} MemOp;
+
+#endif
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index 2c9cccf..d5d4888 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -403,7 +403,7 @@ static inline void gen_store_mem(DisasContext *ctx,

 static DisasJumpType gen_store_conditional(DisasContext *ctx, int ra, int rb,
                                            int32_t disp16, int mem_idx,
-                                           TCGMemOp op)
+                                           MemOp op)
 {
     TCGLabel *lab_fail, *lab_done;
     TCGv addr, val;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d323147..b6c07d6 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -85,7 +85,7 @@ typedef void NeonGenOneOpFn(TCGv_i64, TCGv_i64);
 typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
 typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
-typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
+typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);

 /* initialize TCG globals.  */
 void a64_translate_init(void)
@@ -455,7 +455,7 @@ TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
  * Dn, Sn, Hn or Bn).
  * (Note that this is not the same mapping as for A32; see cpu.h)
  */
-static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
+static inline int fp_reg_offset(DisasContext *s, int regno, MemOp size)
 {
     return vec_reg_offset(s, regno, 0, size);
 }
@@ -871,7 +871,7 @@ static void do_gpr_ld_memidx(DisasContext *s,
                              bool iss_valid, unsigned int iss_srt,
                              bool iss_sf, bool iss_ar)
 {
-    TCGMemOp memop = s->be_data + size;
+    MemOp memop = s->be_data + size;

     g_assert(size <= 3);

@@ -948,7 +948,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
     TCGv_i64 tmphi;

     if (size < 4) {
-        TCGMemOp memop = s->be_data + size;
+        MemOp memop = s->be_data + size;
         tmphi = tcg_const_i64(0);
         tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
     } else {
@@ -989,7 +989,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)

 /* Get value of an element within a vector register */
 static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
-                             int element, TCGMemOp memop)
+                             int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1021,7 +1021,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 }

 static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
-                                 int element, TCGMemOp memop)
+                                 int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1048,7 +1048,7 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,

 /* Set value of an element within a vector register */
 static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
-                              int element, TCGMemOp memop)
+                              int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1070,7 +1070,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 }

 static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
-                                  int destidx, int element, TCGMemOp memop)
+                                  int destidx, int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1090,7 +1090,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,

 /* Store from vector register to memory */
 static void do_vec_st(DisasContext *s, int srcidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -1102,7 +1102,7 @@ static void do_vec_st(DisasContext *s, int srcidx, int element,

 /* Load from memory to vector register */
 static void do_vec_ld(DisasContext *s, int destidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -2200,7 +2200,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i64 addr, int size, bool is_pair)
 {
     int idx = get_mem_index(s);
-    TCGMemOp memop = s->be_data;
+    MemOp memop = s->be_data;

     g_assert(size <= 3);
     if (is_pair) {
@@ -3286,7 +3286,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
     TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
-    TCGMemOp endian = s->be_data;
+    MemOp endian = s->be_data;

     int ebytes;   /* bytes per element */
     int elements; /* elements per vector */
@@ -5455,7 +5455,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
     unsigned int mos, type, rm, cond, rn, rd;
     TCGv_i64 t_true, t_false, t_zero;
     DisasCompare64 c;
-    TCGMemOp sz;
+    MemOp sz;

     mos = extract32(insn, 29, 3);
     type = extract32(insn, 22, 2);
@@ -6267,7 +6267,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
     int mos = extract32(insn, 29, 3);
     uint64_t imm;
     TCGv_i64 tcg_res;
-    TCGMemOp sz;
+    MemOp sz;

     if (mos || imm5) {
         unallocated_encoding(s);
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        MemOp msize = esize == 16 ? MO_16 : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -8022,7 +8022,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
     int shift = (2 * esize) - immhb;
     int elements = is_scalar ? 1 : (64 / esize);
     bool round = extract32(opcode, 0, 1);
-    TCGMemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
+    MemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn, tcg_rd, tcg_round;
     TCGv_i32 tcg_rd_narrowed;
     TCGv_i64 tcg_final;
@@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
             }
         };
         NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
-        TCGMemOp memop = scalar ? size : MO_32;
+        MemOp memop = scalar ? size : MO_32;
         int maxpass = scalar ? 1 : is_q ? 4 : 2;

         for (pass = 0; pass < maxpass; pass++) {
@@ -8225,7 +8225,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
     TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
     TCGv_i32 tcg_shift = NULL;

-    TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
+    MemOp mop = size | (is_signed ? MO_SIGN : 0);
     int pass;

     if (fracbits || size == MO_64) {
@@ -10004,7 +10004,7 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
     int dsize = is_q ? 128 : 64;
     int esize = 8 << size;
     int elements = dsize/esize;
-    TCGMemOp memop = size | (is_u ? 0 : MO_SIGN);
+    MemOp memop = size | (is_u ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn = new_tmp_a64(s);
     TCGv_i64 tcg_rd = new_tmp_a64(s);
     TCGv_i64 tcg_round;
@@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_passres;
-            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
+            MemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);

             int elt = pass + is_q * 2;

@@ -11827,7 +11827,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,

     if (size == 2) {
         /* 32 + 32 -> 64 op */
-        TCGMemOp memop = size + (u ? 0 : MO_SIGN);
+        MemOp memop = size + (u ? 0 : MO_SIGN);

         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
@@ -12849,7 +12849,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     switch (is_fp) {
     case 1: /* normal fp */
-        /* convert insn encoded size to TCGMemOp size */
+        /* convert insn encoded size to MemOp size */
         switch (size) {
         case 0: /* half-precision */
             size = MO_16;
@@ -12897,7 +12897,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         return;
     }

-    /* Given TCGMemOp size, adjust register and indexing.  */
+    /* Given MemOp size, adjust register and indexing.  */
     switch (size) {
     case MO_16:
         index = h << 2 | l << 1 | m;
@@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_res[2];
         int pass;
         bool satop = extract32(opcode, 0, 1);
-        TCGMemOp memop = MO_32;
+        MemOp memop = MO_32;

         if (satop || !u) {
             memop |= MO_SIGN;
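
For reference on the compositions used above ("s->be_data + size" in
do_gpr_ld_memidx, "size + (u ? 0 : MO_SIGN)" in handle_2misc_pairwise):
the fields of a MemOp occupy disjoint bits, so '+' coincides with '|'.
A standalone check, assuming only the mask values tcg/tcg.h defines at
this point in the series (MO_SIZE = 3, MO_SIGN = 4, MO_BSWAP = 8):

    #include <assert.h>

    enum { MO_SIZE = 3, MO_SIGN = 4, MO_BSWAP = 8 };  /* tcg/tcg.h values */

    int main(void)
    {
        /* be_data is either 0 or MO_BSWAP; size is 0..3 and lives
           entirely in the low MO_SIZE bits, so addition never carries
           into another field. */
        for (unsigned be = 0; be <= MO_BSWAP; be += MO_BSWAP) {
            for (unsigned size = 0; size <= MO_SIZE; size++) {
                assert((be + size) == (be | size));
                assert(((size + MO_SIGN) & MO_SIZE) == size);
            }
        }
        return 0;
    }
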
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 9ab4087..f1246b7 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -64,7 +64,7 @@ static inline void assert_fp_access_checked(DisasContext *s)
  * the FP/vector register Qn.
  */
 static inline int vec_reg_offset(DisasContext *s, int regno,
-                                 int element, TCGMemOp size)
+                                 int element, MemOp size)
 {
     int element_size = 1 << size;
     int offs = element * element_size;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fa068b0..5d7edd0 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4567,7 +4567,7 @@ static bool trans_STR_pri(DisasContext *s, arg_rri *a)
  */

 /* The memory mode of the dtype.  */
-static const TCGMemOp dtype_mop[16] = {
+static const MemOp dtype_mop[16] = {
     MO_UB, MO_UB, MO_UB, MO_UB,
     MO_SL, MO_UW, MO_UW, MO_UW,
     MO_SW, MO_SW, MO_UL, MO_UL,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462..d116c8c 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -114,7 +114,7 @@ typedef enum ISSInfo {
 } ISSInfo;

 /* Save the syndrome information for a Data Abort */
-static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo)
+static void disas_set_da_iss(DisasContext *s, MemOp memop, ISSInfo issinfo)
 {
     uint32_t syn;
     int sas = memop & MO_SIZE;
@@ -1079,7 +1079,7 @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
  * that the address argument is TCGv_i32 rather than TCGv.
  */

-static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
+static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
 {
     TCGv addr = tcg_temp_new();
     tcg_gen_extu_i32_tl(addr, a32);
@@ -1092,7 +1092,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
 }

 static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1107,7 +1107,7 @@ static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
 }

 static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1160,7 +1160,7 @@ static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
 }

 static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);
     tcg_gen_qemu_ld_i64(val, addr, index, opc);
@@ -1175,7 +1175,7 @@ static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
 }

 static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);

@@ -1400,7 +1400,7 @@ neon_reg_offset (int reg, int n)
  * where 0 is the least significant end of the register.
  */
 static inline long
-neon_element_offset(int reg, int element, TCGMemOp size)
+neon_element_offset(int reg, int element, MemOp size)
 {
     int element_size = 1 << size;
     int ofs = element * element_size;
@@ -1422,7 +1422,7 @@ static TCGv_i32 neon_load_reg(int reg, int pass)
     return tmp;
 }

-static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1441,7 +1441,7 @@ static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
     }
 }

-static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1469,7 +1469,7 @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
     tcg_temp_free_i32(var);
 }

-static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
+static void neon_store_element(int reg, int ele, MemOp size, TCGv_i32 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -1488,7 +1488,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     }
 }

-static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
+static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -3558,7 +3558,7 @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int n;
     int vec_size;
     int mmu_idx;
-    TCGMemOp endian;
+    MemOp endian;
     TCGv_i32 addr;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -6867,7 +6867,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             } else if ((insn & 0x380) == 0) {
                 /* VDUP */
                 int element;
-                TCGMemOp size;
+                MemOp size;

                 if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
                     return 1;
@@ -7435,7 +7435,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i32 addr, int size)
 {
     TCGv_i32 tmp = tcg_temp_new_i32();
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     s->is_ldex = true;

@@ -7489,7 +7489,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     TCGv taddr;
     TCGLabel *done_label;
     TCGLabel *fail_label;
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     /* if (env->exclusive_addr == addr && env->exclusive_val == [addr]) {
          [addr] = {Rt};
@@ -8603,7 +8603,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                         */

                         TCGv taddr;
-                        TCGMemOp opc = s->be_data;
+                        MemOp opc = s->be_data;

                         rm = (insn) & 0xf;
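
The gen_aa32_* helpers above now carry a MemOp end to end into
tcg_gen_qemu_ld/st. As a rough caller sketch (hypothetical fragment;
the real code goes through the DO_GEN_LD-style wrappers), a word load
under the current endianness setting is:

    /* Sketch: width from MO_UL, endianness from s->be_data, both
       travelling in the one MemOp argument. */
    TCGv_i32 tmp = tcg_temp_new_i32();
    gen_aa32_ld_i32(s, tmp, addr, get_mem_index(s), MO_UL | s->be_data);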

diff --git a/target/arm/translate.h b/target/arm/translate.h
index a20f6e2..284c510 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -21,7 +21,7 @@ typedef struct DisasContext {
     int condexec_cond;
     int thumb;
     int sctlr_b;
-    TCGMemOp be_data;
+    MemOp be_data;
 #if !defined(CONFIG_USER_ONLY)
     int user;
 #endif
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 188fe68..ff4802a 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1500,7 +1500,7 @@ static void form_gva(DisasContext *ctx, TCGv_tl *pgva, TCGv_reg *pofs,
  */
 static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1518,7 +1518,7 @@ static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,

 static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1536,7 +1536,7 @@ static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,

 static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1554,7 +1554,7 @@ static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,

 static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1580,7 +1580,7 @@ static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,

 static bool do_load(DisasContext *ctx, unsigned rt, unsigned rb,
                     unsigned rx, int scale, target_sreg disp,
-                    unsigned sp, int modify, TCGMemOp mop)
+                    unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg dest;

@@ -1653,7 +1653,7 @@ static bool trans_fldd(DisasContext *ctx, arg_ldst *a)

 static bool do_store(DisasContext *ctx, unsigned rt, unsigned rb,
                      target_sreg disp, unsigned sp,
-                     int modify, TCGMemOp mop)
+                     int modify, MemOp mop)
 {
     nullify_over(ctx);
     do_store_reg(ctx, load_gpr(ctx, rt), rb, 0, 0, disp, sp, modify, mop);
@@ -2940,7 +2940,7 @@ static bool trans_st(DisasContext *ctx, arg_ldst *a)

 static bool trans_ldc(DisasContext *ctx, arg_ldst *a)
 {
-    TCGMemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
+    MemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
     TCGv_reg zero, dest, ofs;
     TCGv_tl addr;
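
trans_ldc above is a good example of MemOp carrying more than a width:
MO_ALIGN_16 demands 16-byte alignment independent of the 4-byte access,
and the generic load path enforces it. Roughly (fragment, using the
existing tcg/tcg.h helper):

    MemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;

    unsigned bytes  = 1 << (mop & MO_SIZE);      /* access width: 4    */
    unsigned a_bits = get_alignment_bits(mop);   /* alignment: 16 bytes */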

diff --git a/target/i386/translate.c b/target/i386/translate.c
index 03150a8..def9867 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -87,8 +87,8 @@ typedef struct DisasContext {
     /* current insn context */
     int override; /* -1 if no override */
     int prefix;
-    TCGMemOp aflag;
-    TCGMemOp dflag;
+    MemOp aflag;
+    MemOp dflag;
     target_ulong pc_start;
     target_ulong pc; /* pc = eip + cs_base */
     /* current block context */
@@ -149,7 +149,7 @@ static void gen_eob(DisasContext *s);
 static void gen_jr(DisasContext *s, TCGv dest);
 static void gen_jmp(DisasContext *s, target_ulong eip);
 static void gen_jmp_tb(DisasContext *s, target_ulong eip, int tb_num);
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d);
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d);

 /* i386 arith/logic operations */
 enum {
@@ -320,7 +320,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 }

 /* Select the size of a push/pop operation.  */
-static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
+static inline MemOp mo_pushpop(DisasContext *s, MemOp ot)
 {
     if (CODE64(s)) {
         return ot == MO_16 ? MO_16 : MO_64;
@@ -330,13 +330,13 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 }

 /* Select the size of the stack pointer.  */
-static inline TCGMemOp mo_stacksize(DisasContext *s)
+static inline MemOp mo_stacksize(DisasContext *s)
 {
     return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
-static inline TCGMemOp mo_64_32(TCGMemOp ot)
+static inline MemOp mo_64_32(MemOp ot)
 {
 #ifdef TARGET_X86_64
     return ot == MO_64 ? MO_64 : MO_32;
@@ -347,19 +347,19 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)

 /* Select size 8 if lsb of B is clear, else OT.  Used for decoding
    byte vs word opcodes.  */
-static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
+static inline MemOp mo_b_d(int b, MemOp ot)
 {
     return b & 1 ? ot : MO_8;
 }

 /* Select size 8 if lsb of B is clear, else OT capped at 32.
    Used for decoding operand size of port opcodes.  */
-static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
+static inline MemOp mo_b_d32(int b, MemOp ot)
 {
     return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
 }

-static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
+static void gen_op_mov_reg_v(DisasContext *s, MemOp ot, int reg, TCGv t0)
 {
     switch(ot) {
     case MO_8:
@@ -388,7 +388,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 }

 static inline
-void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
+void gen_op_mov_v_reg(DisasContext *s, MemOp ot, TCGv t0, int reg)
 {
     if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
         tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
@@ -411,13 +411,13 @@ static inline void gen_op_jmp_v(TCGv dest)
 }

 static inline
-void gen_op_add_reg_im(DisasContext *s, TCGMemOp size, int reg, int32_t val)
+void gen_op_add_reg_im(DisasContext *s, MemOp size, int reg, int32_t val)
 {
     tcg_gen_addi_tl(s->tmp0, cpu_regs[reg], val);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
 }

-static inline void gen_op_add_reg_T0(DisasContext *s, TCGMemOp size, int reg)
+static inline void gen_op_add_reg_T0(DisasContext *s, MemOp size, int reg)
 {
     tcg_gen_add_tl(s->tmp0, cpu_regs[reg], s->T0);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
@@ -451,7 +451,7 @@ static inline void gen_jmp_im(DisasContext *s, target_ulong pc)
 /* Compute SEG:REG into A0.  SEG is selected from the override segment
    (OVR_SEG) and the default segment (DEF_SEG).  OVR_SEG may be -1 to
    indicate no override.  */
-static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
+static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
                           int def_seg, int ovr_seg)
 {
     switch (aflag) {
@@ -514,13 +514,13 @@ static inline void gen_string_movl_A0_EDI(DisasContext *s)
     gen_lea_v_seg(s, s->aflag, cpu_regs[R_EDI], R_ES, -1);
 }

-static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
+static inline void gen_op_movl_T0_Dshift(DisasContext *s, MemOp ot)
 {
     tcg_gen_ld32s_tl(s->T0, cpu_env, offsetof(CPUX86State, df));
     tcg_gen_shli_tl(s->T0, s->T0, ot);
 };

-static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
+static TCGv gen_ext_tl(TCGv dst, TCGv src, MemOp size, bool sign)
 {
     switch (size) {
     case MO_8:
@@ -551,18 +551,18 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
     }
 }

-static void gen_extu(TCGMemOp ot, TCGv reg)
+static void gen_extu(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, false);
 }

-static void gen_exts(TCGMemOp ot, TCGv reg)
+static void gen_exts(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, true);
 }

 static inline
-void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jnz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
@@ -570,14 +570,14 @@ void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
 }

 static inline
-void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
     tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1);
 }

-static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
+static void gen_helper_in_func(MemOp ot, TCGv v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -594,7 +594,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     }
 }

-static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
+static void gen_helper_out_func(MemOp ot, TCGv_i32 v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -611,7 +611,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     }
 }

-static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
+static void gen_check_io(DisasContext *s, MemOp ot, target_ulong cur_eip,
                          uint32_t svm_flags)
 {
     target_ulong next_eip;
@@ -644,7 +644,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     }
 }

-static inline void gen_movs(DisasContext *s, TCGMemOp ot)
+static inline void gen_movs(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -840,7 +840,7 @@ static CCPrepare gen_prepare_eflags_s(DisasContext *s, TCGv reg)
         return (CCPrepare) { .cond = TCG_COND_NEVER, .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, true);
             return (CCPrepare) { .cond = TCG_COND_LT, .reg = t0, .mask = -1 };
         }
@@ -885,7 +885,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
                              .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, false);
             return (CCPrepare) { .cond = TCG_COND_EQ, .reg = t0, .mask = -1 };
         }
@@ -897,7 +897,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
 static CCPrepare gen_prepare_cc(DisasContext *s, int b, TCGv reg)
 {
     int inv, jcc_op, cond;
-    TCGMemOp size;
+    MemOp size;
     CCPrepare cc;
     TCGv t0;

@@ -1075,7 +1075,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)
     return l2;
 }

-static inline void gen_stos(DisasContext *s, TCGMemOp ot)
+static inline void gen_stos(DisasContext *s, MemOp ot)
 {
     gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
     gen_string_movl_A0_EDI(s);
@@ -1084,7 +1084,7 @@ static inline void gen_stos(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_lods(DisasContext *s, TCGMemOp ot)
+static inline void gen_lods(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -1093,7 +1093,7 @@ static inline void gen_lods(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_ESI);
 }

-static inline void gen_scas(DisasContext *s, TCGMemOp ot)
+static inline void gen_scas(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1102,7 +1102,7 @@ static inline void gen_scas(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_cmps(DisasContext *s, TCGMemOp ot)
+static inline void gen_cmps(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1126,7 +1126,7 @@ static void gen_bpt_io(DisasContext *s, TCGv_i32 t_port, int ot)
 }


-static inline void gen_ins(DisasContext *s, TCGMemOp ot)
+static inline void gen_ins(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1148,7 +1148,7 @@ static inline void gen_ins(DisasContext *s, TCGMemOp ot)
     }
 }

-static inline void gen_outs(DisasContext *s, TCGMemOp ot)
+static inline void gen_outs(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1171,7 +1171,7 @@ static inline void gen_outs(DisasContext *s, TCGMemOp ot)
 /* same method as Valgrind : we generate jumps to current or next
    instruction */
 #define GEN_REPZ(op)                                                          \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
                                  target_ulong cur_eip, target_ulong next_eip) \
 {                                                                             \
     TCGLabel *l2;                                                             \
@@ -1187,7 +1187,7 @@ static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
 }

 #define GEN_REPZ2(op)                                                         \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
                                    target_ulong cur_eip,                      \
                                    target_ulong next_eip,                     \
                                    int nz)                                    \
@@ -1284,7 +1284,7 @@ static void gen_illegal_opcode(DisasContext *s)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d)
 {
     if (d != OR_TMP0) {
         if (s1->prefix & PREFIX_LOCK) {
@@ -1395,7 +1395,7 @@ static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
+static void gen_inc(DisasContext *s1, MemOp ot, int d, int c)
 {
     if (s1->prefix & PREFIX_LOCK) {
         if (d != OR_TMP0) {
@@ -1421,7 +1421,7 @@ static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
     set_cc_op(s1, (c > 0 ? CC_OP_INCB : CC_OP_DECB) + ot);
 }

-static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
+static void gen_shift_flags(DisasContext *s, MemOp ot, TCGv result,
                             TCGv shm1, TCGv count, bool is_right)
 {
     TCGv_i32 z32, s32, oldop;
@@ -1466,7 +1466,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shift_rm_T1(DisasContext *s, MemOp ot, int op1,
                             int is_right, int is_arith)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1502,7 +1502,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     gen_shift_flags(s, ot, s->T0, s->tmp0, s->T1, is_right);
 }

-static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_shift_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                             int is_right, int is_arith)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1542,7 +1542,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
     }
 }

-static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
+static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
     TCGv_i32 t0, t1;
@@ -1627,7 +1627,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_rot_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                           int is_right)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1705,7 +1705,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
 }

 /* XXX: add faster immediate = 1 case */
-static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
                            int is_right)
 {
     gen_compute_eflags(s);
@@ -1761,7 +1761,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 }

 /* XXX: add faster immediate case */
-static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shiftd_rm_T1(DisasContext *s, MemOp ot, int op1,
                              bool is_right, TCGv count_in)
 {
     target_ulong mask = (ot == MO_64 ? 63 : 31);
@@ -1842,7 +1842,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_temp_free(count);
 }

-static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
+static void gen_shift(DisasContext *s1, int op, MemOp ot, int d, int s)
 {
     if (s != OR_TMP1)
         gen_op_mov_v_reg(s1, ot, s1->T1, s);
@@ -1872,7 +1872,7 @@ static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
     }
 }

-static void gen_shifti(DisasContext *s1, int op, TCGMemOp ot, int d, int c)
+static void gen_shifti(DisasContext *s1, int op, MemOp ot, int d, int c)
 {
     switch(op) {
     case OP_ROL:
@@ -2149,7 +2149,7 @@ static void gen_add_A0_ds_seg(DisasContext *s)
 /* generate modrm memory load or store of 'reg'. TMP0 is used if reg ==
    OR_TMP0 */
 static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
-                           TCGMemOp ot, int reg, int is_store)
+                           MemOp ot, int reg, int is_store)
 {
     int mod, rm;

@@ -2179,7 +2179,7 @@ static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
     }
 }

-static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
+static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot)
 {
     uint32_t ret;

@@ -2202,7 +2202,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     return ret;
 }

-static inline int insn_const_size(TCGMemOp ot)
+static inline int insn_const_size(MemOp ot)
 {
     if (ot <= MO_32) {
         return 1 << ot;
@@ -2266,7 +2266,7 @@ static inline void gen_jcc(DisasContext *s, int b,
     }
 }

-static void gen_cmovcc1(CPUX86State *env, DisasContext *s, TCGMemOp ot, int b,
+static void gen_cmovcc1(CPUX86State *env, DisasContext *s, MemOp ot, int b,
                         int modrm, int reg)
 {
     CCPrepare cc;
@@ -2363,8 +2363,8 @@ static inline void gen_stack_update(DisasContext *s, int addend)
 /* Generate a push. It depends on ss32, addseg and dflag.  */
 static void gen_push_v(DisasContext *s, TCGv val)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);
     int size = 1 << d_ot;
     TCGv new_esp = s->A0;

@@ -2383,9 +2383,9 @@ static void gen_push_v(DisasContext *s, TCGv val)
 }

 /* two step pop is necessary for precise exceptions */
-static TCGMemOp gen_pop_T0(DisasContext *s)
+static MemOp gen_pop_T0(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp d_ot = mo_pushpop(s, s->dflag);

     gen_lea_v_seg(s, mo_stacksize(s), cpu_regs[R_ESP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -2393,7 +2393,7 @@ static TCGMemOp gen_pop_T0(DisasContext *s)
     return d_ot;
 }

-static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)
+static inline void gen_pop_update(DisasContext *s, MemOp ot)
 {
     gen_stack_update(s, 1 << ot);
 }
@@ -2405,8 +2405,8 @@ static inline void gen_stack_A0(DisasContext *s)

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2421,8 +2421,8 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2442,8 +2442,8 @@ static void gen_popa(DisasContext *s)

 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -2482,8 +2482,8 @@ static void gen_enter(DisasContext *s, int esp_addend, int level)

 static void gen_leave(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);

     gen_lea_v_seg(s, a_ot, cpu_regs[R_EBP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -3045,7 +3045,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     SSEFunc_0_eppi sse_fn_eppi;
     SSEFunc_0_ppi sse_fn_ppi;
     SSEFunc_0_eppt sse_fn_eppt;
-    TCGMemOp ot;
+    MemOp ot;

     b &= 0xff;
     if (s->prefix & PREFIX_DATA)
@@ -4488,7 +4488,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     CPUX86State *env = cpu->env_ptr;
     int b, prefixes;
     int shift;
-    TCGMemOp ot, aflag, dflag;
+    MemOp ot, aflag, dflag;
     int modrm, reg, rm, mod, op, opreg, val;
     target_ulong next_eip, tval;
     int rex_w, rex_r;
@@ -5567,8 +5567,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1be: /* movsbS Gv, Eb */
     case 0x1bf: /* movswS Gv, Eb */
         {
-            TCGMemOp d_ot;
-            TCGMemOp s_ot;
+            MemOp d_ot;
+            MemOp s_ot;

             /* d_ot is the size of destination */
             d_ot = dflag;
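
Throughout the i386 changes a MemOp doubles as a log2 byte count, as in
gen_push_v above, which derives the stack adjustment from d_ot. The
flow, condensed (fragment of the function shown earlier, not a
standalone unit):

    MemOp d_ot = mo_pushpop(s, s->dflag);   /* data size of the push    */
    MemOp a_ot = mo_stacksize(s);           /* address size of the SP   */
    int size = 1 << d_ot;                   /* MemOp size is log2 bytes */

    tcg_gen_subi_tl(s->A0, cpu_regs[R_ESP], size);   /* new stack top */
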
diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index 60bcfb7..24c1dd3 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -2414,7 +2414,7 @@ DISAS_INSN(cas)
     uint16_t ext;
     TCGv load;
     TCGv cmp;
-    TCGMemOp opc;
+    MemOp opc;

     switch ((insn >> 9) & 3) {
     case 1:
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 9ce65f3..41d1b8b 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -919,7 +919,7 @@ static void dec_load(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
@@ -1035,7 +1035,7 @@ static void dec_store(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
diff --git a/target/mips/translate.c b/target/mips/translate.c
index ca62800..59b5d85 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -2526,7 +2526,7 @@ typedef struct DisasContext {
     int32_t CP0_Config5;
     /* Routine used to access memory */
     int mem_idx;
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
     uint32_t hflags, saved_hflags;
     target_ulong btarget;
     bool ulri;
@@ -3706,7 +3706,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,

 /* Store conditional */
 static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset,
-                        TCGMemOp tcg_mo, bool eva)
+                        MemOp tcg_mo, bool eva)
 {
     TCGv addr, t0, val;
     TCGLabel *l1 = gen_new_label();
@@ -4546,7 +4546,7 @@ static void gen_HILO(DisasContext *ctx, uint32_t opc, int acc, int reg)
 }

 static inline void gen_r6_ld(target_long addr, int reg, int memidx,
-                             TCGMemOp memop)
+                             MemOp memop)
 {
     TCGv t0 = tcg_const_tl(addr);
     tcg_gen_qemu_ld_tl(t0, t0, memidx, memop);
@@ -21828,7 +21828,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
                              extract32(ctx->opcode, 0, 8);
                     TCGv va = tcg_temp_new();
                     TCGv t1 = tcg_temp_new();
-                    TCGMemOp memop = (extract32(ctx->opcode, 8, 3)) ==
+                    MemOp memop = (extract32(ctx->opcode, 8, 3)) ==
                                       NM_P_LS_UAWM ? MO_UNALN : 0;

                     count = (count == 0) ? 8 : count;
diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index 4360ce4..b189c50 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -681,7 +681,7 @@ static bool trans_l_lwa(DisasContext *dc, arg_load *a)
     return true;
 }

-static void do_load(DisasContext *dc, arg_load *a, TCGMemOp mop)
+static void do_load(DisasContext *dc, arg_load *a, MemOp mop)
 {
     TCGv ea;

@@ -763,7 +763,7 @@ static bool trans_l_swa(DisasContext *dc, arg_store *a)
     return true;
 }

-static void do_store(DisasContext *dc, arg_store *a, TCGMemOp mop)
+static void do_store(DisasContext *dc, arg_store *a, MemOp mop)
 {
     TCGv t0 = tcg_temp_new();
     tcg_gen_addi_tl(t0, cpu_R[a->a], a->i);
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 4a5de28..31800ed 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -162,7 +162,7 @@ struct DisasContext {
     int mem_idx;
     int access_type;
     /* Translation flags */
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
 #if defined(TARGET_PPC64)
     bool sf_mode;
     bool has_cfar;
@@ -3142,7 +3142,7 @@ static void gen_isync(DisasContext *ctx)

 #define MEMOP_GET_SIZE(x)  (1 << ((x) & MO_SIZE))

-static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
+static void gen_load_locked(DisasContext *ctx, MemOp memop)
 {
     TCGv gpr = cpu_gpr[rD(ctx->opcode)];
     TCGv t0 = tcg_temp_new();
@@ -3167,7 +3167,7 @@ LARX(lbarx, DEF_MEMOP(MO_UB))
 LARX(lharx, DEF_MEMOP(MO_UW))
 LARX(lwarx, DEF_MEMOP(MO_UL))

-static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
+static void gen_fetch_inc_conditional(DisasContext *ctx, MemOp memop,
                                       TCGv EA, TCGCond cond, int addend)
 {
     TCGv t = tcg_temp_new();
@@ -3193,7 +3193,7 @@ static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
     tcg_temp_free(u);
 }

-static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_ld_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3306,7 +3306,7 @@ static void gen_ldat(DisasContext *ctx)
 }
 #endif

-static void gen_st_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_st_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3389,7 +3389,7 @@ static void gen_stdat(DisasContext *ctx)
 }
 #endif

-static void gen_conditional_store(DisasContext *ctx, TCGMemOp memop)
+static void gen_conditional_store(DisasContext *ctx, MemOp memop)
 {
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
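
The ppc code keeps MEMOP_GET_SIZE (visible in the context above) to
recover the byte count from the MO_SIZE field, so reservation lengths
and effective-address steps fall out of the same MemOp (illustrative
fragment):

    /* MO_UB -> 1, MO_TEUW -> 2, MO_TEUL -> 4, MO_TEQ -> 8 */
    int len = MEMOP_GET_SIZE(memop);
    tcg_gen_addi_tl(EA, EA, len);
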
diff --git a/target/riscv/insn_trans/trans_rva.inc.c b/target/riscv/insn_trans/trans_rva.inc.c
index fadd888..be8a9f0 100644
--- a/target/riscv/insn_trans/trans_rva.inc.c
+++ b/target/riscv/insn_trans/trans_rva.inc.c
@@ -18,7 +18,7 @@
  * this program.  If not, see <http://www.gnu.org/licenses/>.
  */

-static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     /* Put addr in load_res, data in load_val.  */
@@ -37,7 +37,7 @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
     return true;
 }

-static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
@@ -82,8 +82,8 @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
 }

 static bool gen_amo(DisasContext *ctx, arg_atomic *a,
-                    void(*func)(TCGv, TCGv, TCGv, TCGArg, TCGMemOp),
-                    TCGMemOp mop)
+                    void(*func)(TCGv, TCGv, TCGv, TCGArg, MemOp),
+                    MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
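
With the callback signature above spelled in terms of MemOp, an AMO
translator passes the tcg atomic helper straight through; a caller in
the style of the upstream trans_amoadd_w looks like (sketch):

    static bool trans_amoadd_w(DisasContext *ctx, arg_amoadd_w *a)
    {
        return gen_amo(ctx, a, &tcg_gen_atomic_fetch_add_tl,
                       (MO_ALIGN | MO_TESL));
    }
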
diff --git a/target/riscv/insn_trans/trans_rvi.inc.c b/target/riscv/insn_trans/trans_rvi.inc.c
index ea64731..cf440d1 100644
--- a/target/riscv/insn_trans/trans_rvi.inc.c
+++ b/target/riscv/insn_trans/trans_rvi.inc.c
@@ -135,7 +135,7 @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
     return gen_branch(ctx, a, TCG_COND_GEU);
 }

-static bool gen_load(DisasContext *ctx, arg_lb *a, TCGMemOp memop)
+static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv t1 = tcg_temp_new();
@@ -174,7 +174,7 @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
     return gen_load(ctx, a, MO_TEUW);
 }

-static bool gen_store(DisasContext *ctx, arg_sb *a, TCGMemOp memop)
+static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv dat = tcg_temp_new();
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index ac0d8b6..2927247 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -152,7 +152,7 @@ static inline int vec_full_reg_offset(uint8_t reg)
     return offsetof(CPUS390XState, vregs[reg][0]);
 }

-static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
+static inline int vec_reg_offset(uint8_t reg, uint8_t enr, MemOp es)
 {
     /* Convert element size (es) - e.g. MO_8 - to bytes */
     const uint8_t bytes = 1 << es;
@@ -2262,7 +2262,7 @@ static DisasJumpType op_csst(DisasContext *s, DisasOps *o)
 #ifndef CONFIG_USER_ONLY
 static DisasJumpType op_csp(DisasContext *s, DisasOps *o)
 {
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;
     TCGv_i64 addr, old, cc;
     TCGLabel *lab = gen_new_label();

@@ -3228,7 +3228,7 @@ static DisasJumpType op_lm64(DisasContext *s, DisasOps *o)
 static DisasJumpType op_lpd(DisasContext *s, DisasOps *o)
 {
     TCGv_i64 a1, a2;
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;

     /* In a parallel context, stop the world and single step.  */
     if (tb_cflags(s->base.tb) & CF_PARALLEL) {
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf8..4c56bbb 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -57,13 +57,13 @@
 #define FPF_LONG        3
 #define FPF_EXT         4

-static inline bool valid_vec_element(uint8_t enr, TCGMemOp es)
+static inline bool valid_vec_element(uint8_t enr, MemOp es)
 {
     return !(enr & ~(NUM_VEC_ELEMENTS(es) - 1));
 }

 static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -96,7 +96,7 @@ static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
 }

 static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -123,7 +123,7 @@ static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
 }

 static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -146,7 +146,7 @@ static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
 }

 static void write_vec_element_i32(TCGv_i32 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index 091bab5..bef9ce6 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -2019,7 +2019,7 @@ static inline void gen_ne_fop_QD(DisasContext *dc, int rd, int rs,
 }

 static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
-                     TCGv addr, int mmu_idx, TCGMemOp memop)
+                     TCGv addr, int mmu_idx, MemOp memop)
 {
     gen_address_mask(dc, addr);
     tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
@@ -2050,10 +2050,10 @@ typedef struct {
     ASIType type;
     int asi;
     int mem_idx;
-    TCGMemOp memop;
+    MemOp memop;
 } DisasASI;

-static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
+static DisasASI get_asi(DisasContext *dc, int insn, MemOp memop)
 {
     int asi = GET_FIELD(insn, 19, 26);
     ASIType type = GET_ASI_HELPER;
@@ -2267,7 +2267,7 @@ static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
 }

 static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2305,7 +2305,7 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
 }

 static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2511,7 +2511,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for lddfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

@@ -2625,7 +2625,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for stdfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;
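
For the goal of this series, the payoff of the sparc changes is that
get_asi() and the ld/st helpers now hand a complete MemOp down to the
memory core, so honouring the TTE IE bit reduces to flipping one bit.
Schematically (hypothetical fragment; the attribute spelling is
illustrative, not part of this patch):

    /* Reverse whatever endianness the instruction itself encoded. */
    if (attrs.byte_swap) {          /* hypothetical invert-endian bit */
        memop ^= MO_BSWAP;
    }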

diff --git a/target/tilegx/translate.c b/target/tilegx/translate.c
index c46a4ab..68dd4aa 100644
--- a/target/tilegx/translate.c
+++ b/target/tilegx/translate.c
@@ -290,7 +290,7 @@ static void gen_cmul2(TCGv tdest, TCGv tsrca, TCGv tsrcb, int sh, int rd)
 }

 static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
-                              unsigned srcb, TCGMemOp memop, const char *name)
+                              unsigned srcb, MemOp memop, const char *name)
 {
     if (dest) {
         return TILEGX_EXCP_OPCODE_UNKNOWN;
@@ -305,7 +305,7 @@ static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
 }

 static TileExcp gen_st_add_opcode(DisasContext *dc, unsigned srca, unsigned srcb,
-                                  int imm, TCGMemOp memop, const char *name)
+                                  int imm, MemOp memop, const char *name)
 {
     TCGv tsrca = load_gr(dc, srca);
     TCGv tsrcb = load_gr(dc, srcb);
@@ -496,7 +496,7 @@ static TileExcp gen_rr_opcode(DisasContext *dc, unsigned opext,
 {
     TCGv tdest, tsrca;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     TileExcp ret = TILEGX_EXCP_NONE;
     bool prefetch_nofault = false;

@@ -1478,7 +1478,7 @@ static TileExcp gen_rri_opcode(DisasContext *dc, unsigned opext,
     TCGv tsrca = load_gr(dc, srca);
     bool prefetch_nofault = false;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     int i2, i3;
     TCGv t0;

@@ -2106,7 +2106,7 @@ static TileExcp decode_y2(DisasContext *dc, tilegx_bundle_bits bundle)
     unsigned srca = get_SrcA_Y2(bundle);
     unsigned srcbdest = get_SrcBDest_Y2(bundle);
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     bool prefetch_nofault = false;

     switch (OEY2(opc, mode)) {
diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index dc2a65f..87a5f50 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -227,7 +227,7 @@ static inline void generate_trap(DisasContext *ctx, int class, int tin);
 /* Functions for load/save to/from memory */

 static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -236,7 +236,7 @@ static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
 }

 static inline void gen_offset_st(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -284,7 +284,7 @@ static void gen_offset_ld_2regs(TCGv rh, TCGv rl, TCGv base, int16_t con,
 }

 static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
@@ -294,7 +294,7 @@ static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
 }

 static void gen_ld_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
diff --git a/tcg/README b/tcg/README
index 21fcdf7..b4382fa 100644
--- a/tcg/README
+++ b/tcg/README
@@ -512,7 +512,7 @@ Both t0 and t1 may be split into little-endian ordered pairs of registers
 if dealing with 64-bit quantities on a 32-bit host.

 The memidx selects the qemu tlb index to use (e.g. user or kernel access).
-The flags are the TCGMemOp bits, selecting the sign, width, and endianness
+The flags are the MemOp bits, selecting the sign, width, and endianness
 of the memory access.

 For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a
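
Restating the README text in code: the flags argument packs sign, width
and endianness into a single value. A standalone sketch that unpacks
them, assuming the mask values tcg/tcg.h defines at this point
(MO_SIZE = 3, MO_SIGN = 4, MO_BSWAP = 8):

    #include <stdbool.h>
    #include <stdio.h>

    typedef unsigned MemOp;
    enum { MO_SIZE = 3, MO_SIGN = 4, MO_BSWAP = 8 };  /* tcg/tcg.h values */

    static void describe(MemOp op)
    {
        unsigned bytes = 1u << (op & MO_SIZE);  /* width, log2-encoded  */
        bool sign  = op & MO_SIGN;              /* sign-extend on load  */
        bool bswap = op & MO_BSWAP;             /* swap vs. host order  */

        printf("%u-byte %s access%s\n", bytes,
               sign ? "signed" : "unsigned",
               bswap ? ", byte-swapped" : "");
    }

    int main(void)
    {
        describe(2);            /* MO_32: 4-byte unsigned        */
        describe(1 | 4 | 8);    /* 2-byte signed, byte-swapped   */
        return 0;
    }
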
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0713448..3f92101 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -1423,7 +1423,7 @@ static inline void tcg_out_rev16(TCGContext *s, TCGReg rd, TCGReg rn)
     tcg_out_insn(s, 3507, REV16, TCG_TYPE_I32, rd, rn);
 }

-static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
+static inline void tcg_out_sxt(TCGContext *s, TCGType ext, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes SXTB, SXTH, SXTW, of SBFM Xd, Xn, #0, #7|15|31 */
@@ -1431,7 +1431,7 @@ static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
     tcg_out_sbfm(s, ext, rd, rn, 0, bits);
 }

-static inline void tcg_out_uxt(TCGContext *s, TCGMemOp s_bits,
+static inline void tcg_out_uxt(TCGContext *s, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes UXTB, UXTH of UBFM Wd, Wn, #0, #7|15 */
@@ -1580,8 +1580,8 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1605,8 +1605,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1649,7 +1649,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
    slow path for the failure case, which will be patched later when finalizing
    the slow path. Generated code returns the host addend in X1,
    clobbers X0,X2,X3,TMP. */
-static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
+static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                              tcg_insn_unit **label_ptr, int mem_index,
                              bool is_read)
 {
@@ -1709,11 +1709,11 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,

 #endif /* CONFIG_SOFTMMU */

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SSIZE) {
     case MO_UB:
@@ -1765,11 +1765,11 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SIZE) {
     case MO_8:
@@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi, TCGType ext)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
@@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
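
The backend slow paths above recover the MemOp from a TCGMemOpIdx,
which packs the operation and the softmmu index into one word (low
four bits hold the index). The round-trip with the existing tcg/tcg.h
helpers:

    TCGMemOpIdx oi = make_memop_idx(MO_TEUL, mem_index);
    MemOp opc      = get_memop(oi);     /* == MO_TEUL   */
    unsigned idx   = get_mmuidx(oi);    /* == mem_index */
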
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index ece88dc..94d80d7 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1233,7 +1233,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
    containing the addend of the tlb entry.  Clobbers R0, R1, R2, TMP.  */

 static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                               TCGMemOp opc, int mem_index, bool is_load)
+                               MemOp opc, int mem_index, bool is_load)
 {
     int cmp_off = (is_load ? offsetof(CPUTLBEntry, addr_read)
                    : offsetof(CPUTLBEntry, addr_write));
@@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     void *func;

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
@@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1453,11 +1453,11 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 }
 #endif /* SOFTMMU */

-static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_index(TCGContext *s, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1514,11 +1514,11 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1577,7 +1577,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
@@ -1614,11 +1614,11 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 #endif
 }

-static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
+static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1659,11 +1659,11 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1708,7 +1708,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 6ddeebf..9d8ed97 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1697,7 +1697,7 @@ static void * const qemu_st_helpers[16] = {
    First argument register is clobbered.  */

 static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                                    int mem_index, TCGMemOp opc,
+                                    int mem_index, MemOp opc,
                                     tcg_insn_unit **label_ptr, int which)
 {
     const TCGReg r0 = TCG_REG_L0;
@@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg data_reg;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
@@ -1895,8 +1895,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     TCGReg retaddr;

@@ -1995,10 +1995,10 @@ static inline int setup_guest_base_seg(void)

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, bool is64, TCGMemOp memop)
+                                   int seg, bool is64, MemOp memop)
 {
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int rexw = is64 * P_REXW;
     int movop = OPC_MOVL_GvEv;

@@ -2103,7 +2103,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
@@ -2137,15 +2137,15 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, TCGMemOp memop)
+                                   int seg, MemOp memop)
 {
     /* ??? Ideally we wouldn't need a scratch register.  For user-only,
        we could perform the bswap twice to restore the original value
        instead of moving to the scratch.  But as it is, the L constraint
        means that TCG_REG_L0 is definitely free here.  */
     const TCGReg scratch = TCG_REG_L0;
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int movop = OPC_MOVL_EvGv;

     if (have_movbe && real_bswap) {
@@ -2221,7 +2221,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 41bff32..5442167 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit *label_ptr[2], bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     int mem_index = get_mmuidx(oi);
@@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg v0;
     int i;

@@ -1363,8 +1363,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     int i;

     /* resolve label address */
@@ -1413,7 +1413,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
     case MO_UB:
@@ -1521,7 +1521,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
@@ -1558,7 +1558,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
     /* Don't clutter the code below with checks to avoid bswapping ZERO.  */
     if ((lo | hi) == 0) {
@@ -1624,7 +1624,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
diff --git a/tcg/optimize.c b/tcg/optimize.c
index d2424de..a89ffda 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s)
         CASE_OP_32_64(qemu_ld):
             {
                 TCGMemOpIdx oi = op->args[nb_oargs + nb_iargs];
-                TCGMemOp mop = get_memop(oi);
+                MemOp mop = get_memop(oi);
                 if (!(mop & MO_SIGN)) {
                     mask = (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1;
                 }
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 852b894..815edac 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1506,7 +1506,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -32768);
    in CR7, loads the addend of the TLB into R3, and returns the register
    containing the guest address (zero-extended into R4).  Clobbers R0 and R2. */

-static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, MemOp opc,
                                TCGReg addrlo, TCGReg addrhi,
                                int mem_index, bool is_read)
 {
@@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1680,8 +1680,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1744,7 +1744,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
@@ -1819,7 +1819,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 3e76bf5..7018509 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit **label_ptr, bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     tcg_target_long compare_mask;
@@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1077,8 +1077,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1121,9 +1121,9 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif /* CONFIG_SOFTMMU */

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1172,7 +1172,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
@@ -1208,9 +1208,9 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1243,7 +1243,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index fe42939..bc5650b 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1430,7 +1430,7 @@ static void tcg_out_call(TCGContext *s, tcg_insn_unit *dest)
     }
 }

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
@@ -1489,7 +1489,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SIZE | MO_BSWAP)) {
@@ -1544,7 +1544,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 19));

 /* Load and compare a TLB entry, leaving the flags set.  Loads the TLB
   addend into R2.  Returns a register with the sanitized guest address.  */
-static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, MemOp opc,
                                int mem_index, bool is_ld)
 {
     unsigned s_bits = opc & MO_SIZE;
@@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGReg *addr_reg,
 static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
@@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 10b1cea..d7986cd 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -1081,7 +1081,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12));
    is in the returned register, maybe %o0.  The TLB addend is in %o1.  */

 static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index,
-                               TCGMemOp opc, int which)
+                               MemOp opc, int which)
 {
     int fast_off = TLB_MASK_TABLE_OFS(mem_index);
     int mask_off = fast_off + offsetof(CPUTLBDescFast, mask);
@@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] = {
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi, bool is_64)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
@@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 587d092..e87c327 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2714,7 +2714,7 @@ void tcg_gen_lookup_and_goto_ptr(void)
     }
 }

-static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
+static inline MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
 {
     /* Trigger the asserts within as early as possible.  */
     (void)get_alignment_bits(op);
@@ -2743,7 +2743,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
 }

 static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2758,7 +2758,7 @@ static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
 }

 static void gen_ldst_i64(TCGOpcode opc, TCGv_i64 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2788,9 +2788,9 @@ static void tcg_gen_req_mo(TCGBar type)
     }
 }

-void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     memop = tcg_canonicalize_memop(memop, 0, 0);
@@ -2825,7 +2825,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i32 swap = NULL;

@@ -2858,9 +2858,9 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
         tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
@@ -2911,7 +2911,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i64 swap = NULL;

@@ -2953,7 +2953,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
+static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -2974,7 +2974,7 @@ static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
     }
 }

-static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, TCGMemOp opc)
+static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -3034,7 +3034,7 @@ static void * const table_cmpxchg[16] = {
 };

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
-                                TCGv_i32 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i32 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 0, 0);

@@ -3078,7 +3078,7 @@ void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
 }

 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
-                                TCGv_i64 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i64 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3142,7 +3142,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
 }

 static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
 {
     TCGv_i32 t1 = tcg_temp_new_i32();
@@ -3160,7 +3160,7 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     gen_atomic_op_i32 gen;

@@ -3185,7 +3185,7 @@ static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64))
 {
     TCGv_i64 t1 = tcg_temp_new_i64();
@@ -3203,7 +3203,7 @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 }

 static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3257,7 +3257,7 @@ static void * const table_##NAME[16] = {                                \
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
 void tcg_gen_atomic_##NAME##_i32                                        \
-    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i32(ret, addr, val, idx, memop, table_##NAME);     \
@@ -3267,7 +3267,7 @@ void tcg_gen_atomic_##NAME##_i32                                        \
     }                                                                   \
 }                                                                       \
 void tcg_gen_atomic_##NAME##_i64                                        \
-    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i64(ret, addr, val, idx, memop, table_##NAME);     \
diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
index 2d4dd5c..e9cf172 100644
--- a/tcg/tcg-op.h
+++ b/tcg/tcg-op.h
@@ -851,10 +851,10 @@ void tcg_gen_lookup_and_goto_ptr(void);
 #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i64
 #endif

-void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
+void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, MemOp);

 static inline void tcg_gen_qemu_ld8u(TCGv ret, TCGv addr, int mem_index)
 {
@@ -912,46 +912,46 @@ static inline void tcg_gen_qemu_st64(TCGv_i64 arg, TCGv addr, int mem_index)
 }

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGv_i32,
-                                TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGv_i64,
-                                TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
+
+void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);

 void tcg_gen_mov_vec(TCGv_vec, TCGv_vec);
 void tcg_gen_dup_i32_vec(unsigned vece, TCGv_vec, TCGv_i32);
diff --git a/tcg/tcg.c b/tcg/tcg.c
index be2c33c..aa9931f 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
             case INDEX_op_qemu_st_i64:
                 {
                     TCGMemOpIdx oi = op->args[k++];
-                    TCGMemOp op = get_memop(oi);
+                    MemOp op = get_memop(oi);
                     unsigned ix = get_mmuidx(oi);

                     if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) {
diff --git a/tcg/tcg.h b/tcg/tcg.h
index b411e17..a37181c 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -26,6 +26,7 @@
 #define TCG_H

 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/tb-context.h"
 #include "qemu/bitops.h"
 #include "qemu/queue.h"
@@ -309,101 +310,13 @@ typedef enum TCGType {
 #endif
 } TCGType;

-/* Constants for qemu_ld and qemu_st for the Memory Operation field.  */
-typedef enum TCGMemOp {
-    MO_8     = 0,
-    MO_16    = 1,
-    MO_32    = 2,
-    MO_64    = 3,
-    MO_SIZE  = 3,   /* Mask for the above.  */
-
-    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
-
-    MO_BSWAP = 8,   /* Host reverse endian.  */
-#ifdef HOST_WORDS_BIGENDIAN
-    MO_LE    = MO_BSWAP,
-    MO_BE    = 0,
-#else
-    MO_LE    = 0,
-    MO_BE    = MO_BSWAP,
-#endif
-#ifdef TARGET_WORDS_BIGENDIAN
-    MO_TE    = MO_BE,
-#else
-    MO_TE    = MO_LE,
-#endif
-
-    /* MO_UNALN accesses are never checked for alignment.
-     * MO_ALIGN accesses will result in a call to the CPU's
-     * do_unaligned_access hook if the guest address is not aligned.
-     * The default depends on whether the target CPU defines ALIGNED_ONLY.
-     *
-     * Some architectures (e.g. ARMv8) need the address which is aligned
-     * to a size more than the size of the memory access.
-     * Some architectures (e.g. SPARCv9) need an address which is aligned,
-     * but less strictly than the natural alignment.
-     *
-     * MO_ALIGN supposes the alignment size is the size of a memory access.
-     *
-     * There are three options:
-     * - unaligned access permitted (MO_UNALN).
-     * - an alignment to the size of an access (MO_ALIGN);
-     * - an alignment to a specified size, which may be more or less than
-     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
-     */
-    MO_ASHIFT = 4,
-    MO_AMASK = 7 << MO_ASHIFT,
-#ifdef ALIGNED_ONLY
-    MO_ALIGN = 0,
-    MO_UNALN = MO_AMASK,
-#else
-    MO_ALIGN = MO_AMASK,
-    MO_UNALN = 0,
-#endif
-    MO_ALIGN_2  = 1 << MO_ASHIFT,
-    MO_ALIGN_4  = 2 << MO_ASHIFT,
-    MO_ALIGN_8  = 3 << MO_ASHIFT,
-    MO_ALIGN_16 = 4 << MO_ASHIFT,
-    MO_ALIGN_32 = 5 << MO_ASHIFT,
-    MO_ALIGN_64 = 6 << MO_ASHIFT,
-
-    /* Combinations of the above, for ease of use.  */
-    MO_UB    = MO_8,
-    MO_UW    = MO_16,
-    MO_UL    = MO_32,
-    MO_SB    = MO_SIGN | MO_8,
-    MO_SW    = MO_SIGN | MO_16,
-    MO_SL    = MO_SIGN | MO_32,
-    MO_Q     = MO_64,
-
-    MO_LEUW  = MO_LE | MO_UW,
-    MO_LEUL  = MO_LE | MO_UL,
-    MO_LESW  = MO_LE | MO_SW,
-    MO_LESL  = MO_LE | MO_SL,
-    MO_LEQ   = MO_LE | MO_Q,
-
-    MO_BEUW  = MO_BE | MO_UW,
-    MO_BEUL  = MO_BE | MO_UL,
-    MO_BESW  = MO_BE | MO_SW,
-    MO_BESL  = MO_BE | MO_SL,
-    MO_BEQ   = MO_BE | MO_Q,
-
-    MO_TEUW  = MO_TE | MO_UW,
-    MO_TEUL  = MO_TE | MO_UL,
-    MO_TESW  = MO_TE | MO_SW,
-    MO_TESL  = MO_TE | MO_SL,
-    MO_TEQ   = MO_TE | MO_Q,
-
-    MO_SSIZE = MO_SIZE | MO_SIGN,
-} TCGMemOp;
-
 /**
  * get_alignment_bits
- * @memop: TCGMemOp value
+ * @memop: MemOp value
  *
  * Extract the alignment size from the memop.
  */
-static inline unsigned get_alignment_bits(TCGMemOp memop)
+static inline unsigned get_alignment_bits(MemOp memop)
 {
     unsigned a = memop & MO_AMASK;

@@ -1184,7 +1097,7 @@ static inline size_t tcg_current_code_size(TCGContext *s)
     return tcg_ptr_byte_diff(s->code_ptr, s->code_buf);
 }

-/* Combine the TCGMemOp and mmu_idx parameters into a single value.  */
+/* Combine the MemOp and mmu_idx parameters into a single value.  */
 typedef uint32_t TCGMemOpIdx;

 /**
@@ -1194,7 +1107,7 @@ typedef uint32_t TCGMemOpIdx;
  *
  * Encode these values into a single parameter.
  */
-static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
+static inline TCGMemOpIdx make_memop_idx(MemOp op, unsigned idx)
 {
     tcg_debug_assert(idx <= 15);
     return (op << 4) | idx;
@@ -1206,7 +1119,7 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
  *
  * Extract the memory operation from the combined value.
  */
-static inline TCGMemOp get_memop(TCGMemOpIdx oi)
+static inline MemOp get_memop(TCGMemOpIdx oi)
 {
     return oi >> 4;
 }
diff --git a/trace/mem-internal.h b/trace/mem-internal.h
index f6efaf6..3444fbc 100644
--- a/trace/mem-internal.h
+++ b/trace/mem-internal.h
@@ -16,7 +16,7 @@
 #define TRACE_MEM_ST (1ULL << 5)    /* store (y/n) */

 static inline uint8_t trace_mem_build_info(
-    int size_shift, bool sign_extend, TCGMemOp endianness, bool store)
+    int size_shift, bool sign_extend, MemOp endianness, bool store)
 {
     uint8_t res;

@@ -33,7 +33,7 @@ static inline uint8_t trace_mem_build_info(
     return res;
 }

-static inline uint8_t trace_mem_get_info(TCGMemOp op, bool store)
+static inline uint8_t trace_mem_get_info(MemOp op, bool store)
 {
     return trace_mem_build_info(op & MO_SIZE, !!(op & MO_SIGN),
                                 op & MO_BSWAP, store);
diff --git a/trace/mem.h b/trace/mem.h
index 2b58196..8cf213d 100644
--- a/trace/mem.h
+++ b/trace/mem.h
@@ -18,7 +18,7 @@
  *
  * Return a value for the 'info' argument in guest memory access traces.
  */
-static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
+static uint8_t trace_mem_get_info(MemOp op, bool store);

 /**
  * trace_mem_build_info:
@@ -26,7 +26,7 @@ static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
  * Return a value for the 'info' argument in guest memory access traces.
  */
 static uint8_t trace_mem_build_info(int size_shift, bool sign_extend,
-                                    TCGMemOp endianness, bool store);
+                                    MemOp endianness, bool store);


 #include "trace/mem-internal.h"
--
1.8.3.1





^ permalink raw reply related	[flat|nested] 120+ messages in thread
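
For readers skimming the mechanical rename above, a minimal sketch of the
(unchanged) TCGMemOpIdx encoding from the tcg.h hunk; the concrete values
below are illustrative only and not part of the patch:

    /* Sketch only: round-trip of the encoding kept by this patch.
       make_memop_idx() packs (op << 4) | mmu_idx, while get_memop()
       and get_mmuidx() unpack the two halves again.  */
    MemOp op = MO_TEUL | MO_ALIGN;           /* 32-bit, target-endian, aligned */
    TCGMemOpIdx oi = make_memop_idx(op, 1);  /* mmu_idx = 1, must be <= 15 */
    assert(get_memop(oi) == op);
    assert(get_mmuidx(oi) == 1);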

* [Qemu-devel] [PATCH v3 02/15] memory: Access MemoryRegion with MemOp
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:03       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Replacing size with size+sign+endianness (MemOp) will enable us to
collapse the two byte swaps, adjust_endianness and handle_bswap, along
the I/O path.

While the interfaces are being converted, callers will have their existing
unsigned size coerced into a MemOp, and the callee will use that MemOp as an
unsigned size.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/memop.h  | 4 ++++
 include/exec/memory.h | 9 +++++----
 memory.c              | 7 +++++--
 3 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index ac58066..09c8d20 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -106,4 +106,8 @@ typedef enum MemOp {
     MO_SSIZE = MO_SIZE | MO_SIGN,
 } MemOp;

+/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
+#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
+#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
+
 #endif
diff --git a/include/exec/memory.h b/include/exec/memory.h
index bb0961d..30b1c58 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -19,6 +19,7 @@
 #include "exec/cpu-common.h"
 #include "exec/hwaddr.h"
 #include "exec/memattrs.h"
+#include "exec/memop.h"
 #include "exec/ramlist.h"
 #include "qemu/queue.h"
 #include "qemu/int128.h"
@@ -1731,13 +1732,13 @@ void mtree_info(bool flatview, bool dispatch_tree, bool owner);
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @pval: pointer to uint64_t which the data is written to
- * @size: size of the access in bytes
+ * @op: encodes size of the access in bytes
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs);
 /**
  * memory_region_dispatch_write: perform a write directly to the specified
@@ -1746,13 +1747,13 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @data: data to write
- * @size: size of the access in bytes
+ * @op: encodes size of the access in bytes
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs);

 /**
diff --git a/memory.c b/memory.c
index 5d8c9a9..6982e19 100644
--- a/memory.c
+++ b/memory.c
@@ -1439,10 +1439,11 @@ static MemTxResult memory_region_dispatch_read1(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs)
 {
     MemTxResult r;
+    unsigned size = MEMOP_SIZE(op);

     if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
         *pval = unassigned_mem_read(mr, addr, size);
@@ -1483,9 +1484,11 @@ static bool memory_region_dispatch_write_eventfds(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs)
 {
+    unsigned size = MEMOP_SIZE(op);
+
     if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
         unassigned_mem_write(mr, addr, data, size);
         return MEMTX_DECODE_ERROR;
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread
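
A quick sketch of the transitional no-op described above, as it looks at a
converted call site. The memory region, address, and variables here are
hypothetical; only memory_region_dispatch_read() and SIZE_MEMOP() come from
the patch itself:

    /* Before this patch the size was passed as a plain unsigned:
     *   memory_region_dispatch_read(mr, addr, &val, 4, attrs);
     * After it, the caller wraps the size and the callee unwraps it.
     * Both macros are identity for now, so behaviour is unchanged.  */
    uint64_t val;
    MemTxResult r = memory_region_dispatch_read(mr, addr, &val,
                                                SIZE_MEMOP(4),
                                                MEMTXATTRS_UNSPECIFIED);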

* [Qemu-devel] [PATCH v3 03/15] target/mips: Access MemoryRegion with MemOp
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:05       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/mips/op_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/mips/op_helper.c b/target/mips/op_helper.c
index 9e2e02f..dccb8df 100644
--- a/target/mips/op_helper.c
+++ b/target/mips/op_helper.c
@@ -24,6 +24,7 @@
 #include "exec/helper-proto.h"
 #include "exec/exec-all.h"
 #include "exec/cpu_ldst.h"
+#include "exec/memop.h"
 #include "sysemu/kvm.h"

 /*****************************************************************************/
@@ -4740,11 +4741,11 @@ void helper_cache(CPUMIPSState *env, target_ulong addr, uint32_t op)
     if (op == 9) {
         /* Index Store Tag */
         memory_region_dispatch_write(env->itc_tag, index, env->CP0_TagLo,
-                                     8, MEMTXATTRS_UNSPECIFIED);
+                                     SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     } else if (op == 5) {
         /* Index Load Tag */
         memory_region_dispatch_read(env->itc_tag, index, &env->CP0_TagLo,
-                                    8, MEMTXATTRS_UNSPECIFIED);
+                                    SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     }
 #endif
 }
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 04/15] hw/s390x: Access MemoryRegion with MemOp
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:06       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/s390x/s390-pci-inst.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index 0023514..c126bcc 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -15,6 +15,7 @@
 #include "cpu.h"
 #include "s390-pci-inst.h"
 #include "s390-pci-bus.h"
+#include "exec/memop.h"
 #include "exec/memory-internal.h"
 #include "qemu/error-report.h"
 #include "sysemu/hw_accel.h"
@@ -372,7 +373,7 @@ static MemTxResult zpci_read_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_read(mr, offset, data, len,
+    return memory_region_dispatch_read(mr, offset, data, SIZE_MEMOP(len),
                                        MEMTXATTRS_UNSPECIFIED);
 }

@@ -471,7 +472,7 @@ static MemTxResult zpci_write_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_write(mr, offset, data, len,
+    return memory_region_dispatch_write(mr, offset, data, SIZE_MEMOP(len),
                                         MEMTXATTRS_UNSPECIFIED);
 }

@@ -780,7 +781,8 @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,

     for (i = 0; i < len / 8; i++) {
         result = memory_region_dispatch_write(mr, offset + i * 8,
-                                              ldq_p(buffer + i * 8), 8,
+                                              ldq_p(buffer + i * 8),
+                                              SIZE_MEMOP(8),
                                               MEMTXATTRS_UNSPECIFIED);
         if (result != MEMTX_OK) {
             s390_program_interrupt(env, PGM_OPERAND, 6, ra);
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 05/15] hw/intc/armv7m_nic: Access MemoryRegion with MemOp
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:06       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/intc/armv7m_nvic.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index 9f8f0d3..25bb88a 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -18,6 +18,7 @@
 #include "hw/intc/armv7m_nvic.h"
 #include "target/arm/cpu.h"
 #include "exec/exec-all.h"
+#include "exec/memop.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
 #include "trace.h"
@@ -2345,7 +2346,8 @@ static MemTxResult nvic_sysreg_ns_write(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_write(mr, addr, value, size, attrs);
+        return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                            attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2364,7 +2366,8 @@ static MemTxResult nvic_sysreg_ns_read(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_read(mr, addr, data, size, attrs);
+        return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size),
+                                           attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2390,7 +2393,8 @@ static MemTxResult nvic_systick_write(void *opaque, hwaddr addr,

     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_write(mr, addr, value, size, attrs);
+    return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                        attrs);
 }

 static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,
@@ -2402,7 +2406,7 @@ static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,

     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_read(mr, addr, data, size, attrs);
+    return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size), attrs);
 }

 static const MemoryRegionOps nvic_systick_ops = {
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 06/15] hw/virtio: Access MemoryRegion with MemOp
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:07       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:07 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/virtio/virtio-pci.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index ce928f2..265f066 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -17,6 +17,7 @@

 #include "qemu/osdep.h"

+#include "exec/memop.h"
 #include "standard-headers/linux/virtio_pci.h"
 #include "hw/virtio/virtio.h"
 #include "hw/pci/pci.h"
@@ -550,7 +551,8 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy, hwaddr addr,
         /* As length is under guest control, handle illegal values. */
         return;
     }
-    memory_region_dispatch_write(mr, addr, val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_write(mr, addr, val, SIZE_MEMOP(len),
+                                 MEMTXATTRS_UNSPECIFIED);
 }

 static void
@@ -573,7 +575,8 @@ virtio_address_space_read(VirtIOPCIProxy *proxy, hwaddr addr,
     /* Make sure caller aligned buf properly */
     assert(!(((uintptr_t)buf) & (len - 1)));

-    memory_region_dispatch_read(mr, addr, &val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_read(mr, addr, &val, SIZE_MEMOP(len),
+                                MEMTXATTRS_UNSPECIFIED);
     switch (len) {
     case 1:
         pci_set_byte(buf, val);
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 07/15] hw/vfio: Access MemoryRegion with MemOp
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:08       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/vfio/pci-quirks.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index b35a640..3240afa 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1071,7 +1071,7 @@ static void vfio_rtl8168_quirk_address_write(void *opaque, hwaddr addr,

                 /* Write to the proper guest MSI-X table instead */
                 memory_region_dispatch_write(&vdev->pdev.msix_table_mmio,
-                                             offset, val, size,
+                                             offset, val, SIZE_MEMOP(size),
                                              MEMTXATTRS_UNSPECIFIED);
             }
             return; /* Do not write guest MSI-X data to hardware */
@@ -1102,7 +1102,8 @@ static uint64_t vfio_rtl8168_quirk_data_read(void *opaque,
     if (rtl->enabled && (vdev->pdev.cap_present & QEMU_PCI_CAP_MSIX)) {
         hwaddr offset = rtl->addr & 0xfff;
         memory_region_dispatch_read(&vdev->pdev.msix_table_mmio, offset,
-                                    &data, size, MEMTXATTRS_UNSPECIFIED);
+                                    &data, SIZE_MEMOP(size),
+                                    MEMTXATTRS_UNSPECIFIED);
         trace_vfio_quirk_rtl8168_msix_read(vdev->vbasedev.name, offset, data);
     }

--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 08/15] exec: Access MemoryRegion with MemOp
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:08       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 exec.c            |  6 ++++--
 memory_ldst.inc.c | 18 +++++++++---------
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/exec.c b/exec.c
index 3e78de3..5013864 100644
--- a/exec.c
+++ b/exec.c
@@ -3334,7 +3334,8 @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             /* XXX: could force current_cpu to NULL to avoid
                potential bugs */
             val = ldn_p(buf, l);
-            result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
+            result |= memory_region_dispatch_write(mr, addr1, val,
+                                                   SIZE_MEMOP(l), attrs);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
@@ -3395,7 +3396,8 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
             /* I/O case */
             release_lock |= prepare_mmio_access(mr);
             l = memory_access_size(mr, l, addr1);
-            result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
+            result |= memory_region_dispatch_read(mr, addr1, &val,
+                                                  SIZE_MEMOP(l), attrs);
             stn_p(buf, l, val);
         } else {
             /* RAM case */
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
index acf865b..e073cf9 100644
--- a/memory_ldst.inc.c
+++ b/memory_ldst.inc.c
@@ -38,7 +38,7 @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 4, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(4), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap32(val);
@@ -114,7 +114,7 @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 8, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(8), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap64(val);
@@ -188,7 +188,7 @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 1, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -224,7 +224,7 @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 2, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(2), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap16(val);
@@ -300,7 +300,7 @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     if (l < 4 || !memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
         stl_p(ptr, val);
@@ -346,7 +346,7 @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
             val = bswap32(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -408,7 +408,7 @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
     mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (!memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);
-        r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -451,7 +451,7 @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
             val = bswap16(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 2, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(2), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -524,7 +524,7 @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
             val = bswap64(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 8, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(8), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 09/15] cputlb: Access MemoryRegion with MemOp
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:08       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:08 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 523be4c..a4a0bf7 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -906,8 +906,8 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset,
-                                    &val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, SIZE_MEMOP(size),
+                                    iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
@@ -947,8 +947,8 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset,
-                                     val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, SIZE_MEMOP(size),
+                                    iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 10/15] memory: Access MemoryRegion with MemOp semantics
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:09       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:09 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

To convert the MemoryRegion access interfaces, the MEMOP_SIZE and
SIZE_MEMOP no-op stubs were introduced so the syntax could change
while keeping the existing semantics.

Now that the interfaces are converted, fill in the stubs and use real
MemOp semantics.
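
As a sanity check, a minimal standalone sketch of the round trip the
filled-in macros provide (MO_SIZE and ctzl are stubbed locally here;
the real definitions live in include/exec/memop.h):

    /* Hypothetical test, not part of the patch: a byte count maps to
     * a MemOp size encoding and back unchanged. */
    #include <assert.h>
    #include <strings.h>                  /* ffs(), standing in for ctzl() */

    #define MO_SIZE         3             /* assumed size mask, as in MemOp */
    #define SIZE_MEMOP(ul)  (ffs(ul) - 1)            /* 1,2,4,8 -> 0,1,2,3 */
    #define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE))  /* 0,1,2,3 -> 1,2,4,8 */

    int main(void)
    {
        for (unsigned size = 1; size <= 8; size <<= 1) {
            assert(MEMOP_SIZE(SIZE_MEMOP(size)) == size);
        }
        return 0;
    }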

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/memop.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 09c8d20..f2847e8 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -106,8 +106,7 @@ typedef enum MemOp {
     MO_SSIZE = MO_SIZE | MO_SIGN,
 } MemOp;

-/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
-#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
-#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
+#define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE)) /* MemOp to size.  */
+#define SIZE_MEMOP(ul)  (ctzl(ul))              /* Size to MemOp.  */

 #endif
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 11/15] memory: Single byte swap along the I/O path
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:10       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:10 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Now that MemOp has been pushed down into the memory API, we can
collapse the two byte swaps, adjust_endianness and handle_bswap, into
the former.

Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g. the SPARC64 Invert Endian TTE bit, with
redundant byte swaps cancelling out.
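
The cancellation works because MO_BSWAP is a single bit toggled with
XOR; a standalone illustration (the MemOp values are assumed locally,
the real ones are in include/exec/memop.h):

    /* Toy demo: two endian inversions on the same access cancel,
     * leaving the MemOp, and hence the dispatched access, unswapped. */
    #include <assert.h>

    enum { MO_32 = 2, MO_BSWAP = 8 };    /* assumed encoding */

    int main(void)
    {
        unsigned op = MO_32;
        op ^= MO_BSWAP;     /* host/guest endian mismatch requests a swap */
        op ^= MO_BSWAP;     /* Invert Endian TTE bit requests another */
        assert(op == MO_32);             /* redundant swaps cancel out */
        return 0;
    }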

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c | 58 +++++++++++++++++++++++++-----------------------------
 memory.c           | 30 ++++++++++++++++------------
 2 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index a4a0bf7..e61b1eb 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,

 static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          int mmu_idx, target_ulong addr, uintptr_t retaddr,
-                         MMUAccessType access_type, int size)
+                         MMUAccessType access_type, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -906,14 +906,13 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset, &val, SIZE_MEMOP(size),
-                                    iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
     if (locked) {
@@ -925,7 +924,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,

 static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                       int mmu_idx, uint64_t val, target_ulong addr,
-                      uintptr_t retaddr, int size)
+                      uintptr_t retaddr, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -947,15 +946,15 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset, val, SIZE_MEMOP(size),
-                                    iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
-                               mmu_idx, iotlbentry->attrs, r, retaddr);
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op),
+                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
+                               retaddr);
     }
     if (locked) {
         qemu_mutex_unlock_iothread();
@@ -1210,26 +1209,13 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 #endif

 /*
- * Byte Swap Helper
+ * Byte Swap Checker
  *
- * This should all dead code away depending on the build host and
- * access type.
+ * Dead code should all go away depending on the build host and access type.
  */
-
-static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
+static inline bool need_bswap(bool big_endian)
 {
-    if ((big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP)) {
-        switch (size) {
-        case 1: return val;
-        case 2: return bswap16(val);
-        case 4: return bswap32(val);
-        case 8: return bswap64(val);
-        default:
-            g_assert_not_reached();
-        }
-    } else {
-        return val;
-    }
+    return (big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP);
 }

 /*
@@ -1260,6 +1246,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     void *haddr;
     uint64_t res;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1305,9 +1292,13 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             }
         }

-        res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
-                       mmu_idx, addr, retaddr, access_type, size);
-        return handle_bswap(res, size, big_endian);
+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
+        return io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
+                       mmu_idx, addr, retaddr, access_type, op);
     }

     /* Handle slow unaligned access (it spans two pages or IO).  */
@@ -1508,6 +1499,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     void *haddr;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1553,9 +1545,13 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             }
         }

+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
         io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
-                  handle_bswap(val, size, big_endian),
-                  addr, retaddr, size);
+                  val, addr, retaddr, op);
         return;
     }

diff --git a/memory.c b/memory.c
index 6982e19..0277d3d 100644
--- a/memory.c
+++ b/memory.c
@@ -352,7 +352,7 @@ static bool memory_region_big_endian(MemoryRegion *mr)
 #endif
 }

-static bool memory_region_wrong_endianness(MemoryRegion *mr)
+static bool memory_region_endianness_inverted(MemoryRegion *mr)
 {
 #ifdef TARGET_WORDS_BIGENDIAN
     return mr->ops->endianness == DEVICE_LITTLE_ENDIAN;
@@ -361,23 +361,27 @@ static bool memory_region_wrong_endianness(MemoryRegion *mr)
 #endif
 }

-static void adjust_endianness(MemoryRegion *mr, uint64_t *data, unsigned size)
+static void adjust_endianness(MemoryRegion *mr, uint64_t *data, MemOp op)
 {
-    if (memory_region_wrong_endianness(mr)) {
-        switch (size) {
-        case 1:
+    if (memory_region_endianness_inverted(mr)) {
+        op ^= MO_BSWAP;
+    }
+
+    if (op & MO_BSWAP) {
+        switch (op & MO_SIZE) {
+        case MO_8:
             break;
-        case 2:
+        case MO_16:
             *data = bswap16(*data);
             break;
-        case 4:
+        case MO_32:
             *data = bswap32(*data);
             break;
-        case 8:
+        case MO_64:
             *data = bswap64(*data);
             break;
         default:
-            abort();
+            g_assert_not_reached();
         }
     }
 }
@@ -1451,7 +1455,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
     }

     r = memory_region_dispatch_read1(mr, addr, pval, size, attrs);
-    adjust_endianness(mr, pval, size);
+    adjust_endianness(mr, pval, op);
     return r;
 }

@@ -1494,7 +1498,7 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
         return MEMTX_DECODE_ERROR;
     }

-    adjust_endianness(mr, &data, size);
+    adjust_endianness(mr, &data, op);

     if ((!kvm_eventfds_enabled()) &&
         memory_region_dispatch_write_eventfds(mr, addr, data, size, attrs)) {
@@ -2340,7 +2344,7 @@ void memory_region_add_eventfd(MemoryRegion *mr,
     }

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
@@ -2375,7 +2379,7 @@ void memory_region_del_eventfd(MemoryRegion *mr,
     unsigned i;

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 12/15] cpu: TLB_FLAGS_MASK bit to force memory slow path
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:10       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:10 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

The fast path is taken only when all TLB_FLAGS_MASK bits in a TLB
entry are zero.

TLB_FORCE_SLOW is simply a TLB_FLAGS_MASK bit that forces the slow
path; it has no other side effects.
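
To see why one new flag bit suffices, a standalone sketch of the
comparator trick the softmmu TLB relies on (page size and bit
positions are assumptions for the demo):

    /* Flags live below the page number in the comparator, so a single
     * equality test rejects both a wrong page and a flagged entry. */
    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define TARGET_PAGE_BITS 12          /* assumed for the demo */
    #define TARGET_PAGE_MASK (~((UINT64_C(1) << TARGET_PAGE_BITS) - 1))
    #define TLB_FORCE_SLOW   (UINT64_C(1) << (TARGET_PAGE_BITS - 5))

    static bool tlb_hit_fast(uint64_t entry_addr, uint64_t vaddr)
    {
        return entry_addr == (vaddr & TARGET_PAGE_MASK);
    }

    int main(void)
    {
        uint64_t vaddr = 0x1234;
        uint64_t entry = vaddr & TARGET_PAGE_MASK;

        assert(tlb_hit_fast(entry, vaddr));                   /* fast path */
        assert(!tlb_hit_fast(entry | TLB_FORCE_SLOW, vaddr)); /* slow path */
        return 0;
    }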

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/cpu-all.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 536ea58..e496f99 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -331,12 +331,18 @@ CPUArchState *cpu_copy(CPUArchState *env);
 #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
 /* Set if TLB entry must have MMU lookup repeated for every access */
 #define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
+/* Set if TLB entry must take the slow path.  */
+#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))

 /* Use this mask to check interception with an alignment mask
  * in a TCG backend.
  */
-#define TLB_FLAGS_MASK  (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
-                         | TLB_RECHECK)
+#define TLB_FLAGS_MASK \
+    (TLB_INVALID_MASK  \
+     | TLB_NOTDIRTY    \
+     | TLB_MMIO        \
+     | TLB_RECHECK     \
+     | TLB_FORCE_SLOW)

 /**
  * tlb_hit_page: return true if page aligned @addr is a hit against the
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 13/15] cputlb: Byte swap memory transaction attribute
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:11       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:11 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

On noticing the new byte_swap attribute, force the transaction through
the memory slow path.

This is required by architectures that can invert the endianness of a
memory transaction, e.g. SPARC64 with its Invert Endian TTE bit.
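
A standalone sketch of the mechanism the io_readx/io_writex hunks
below add (the QEMU types are stubbed locally; the MemOp values are
assumed):

    /* When the attribute is set, MO_BSWAP is XOR-ed into the MemOp
     * before dispatch, inverting the endianness of that access. */
    #include <assert.h>

    typedef struct MemTxAttrs {          /* local stand-in */
        unsigned int byte_swap : 1;
    } MemTxAttrs;

    enum { MO_32 = 2, MO_BSWAP = 8 };    /* assumed encoding */

    static unsigned apply_byte_swap(MemTxAttrs attrs, unsigned op)
    {
        if (attrs.byte_swap) {
            op ^= MO_BSWAP;
        }
        return op;
    }

    int main(void)
    {
        MemTxAttrs inverted = { .byte_swap = 1 };
        MemTxAttrs normal = { 0 };

        assert(apply_byte_swap(inverted, MO_32) == (MO_32 | MO_BSWAP));
        assert(apply_byte_swap(normal, MO_32) == MO_32);
        return 0;
    }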

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c      | 11 +++++++++++
 include/exec/memattrs.h |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e61b1eb..f292a87 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -738,6 +738,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
          */
         address |= TLB_RECHECK;
     }
+    if (attrs.byte_swap) {
+        address |= TLB_FORCE_SLOW;
+    }
     if (!memory_region_is_ram(section->mr) &&
         !memory_region_is_romd(section->mr)) {
         /* IO memory case */
@@ -891,6 +894,10 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;

+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -933,6 +940,10 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;

+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
index d4a3477..a0644eb 100644
--- a/include/exec/memattrs.h
+++ b/include/exec/memattrs.h
@@ -37,6 +37,8 @@ typedef struct MemTxAttrs {
     unsigned int user:1;
     /* Requester ID (for MSI for example) */
     unsigned int requester_id:16;
+    /* SPARC64: TTE invert endianness */
+    unsigned int byte_swap:1;
     /*
      * The following are target-specific page-table bits.  These are not
      * related to actual memory transactions at all.  However, this structure
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 14/15] target/sparc: Add TLB entry with attributes
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:11       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:11 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Append MemTxAttrs to interfaces so we can pass along the upcoming Invert
Endian TTE bit on SPARC64.
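
The shape of the change is plain out-parameter plumbing. Below is a toy
sketch of the pattern with hypothetical types and names, heavily
abridged from the real prototypes:

    #include <assert.h>
    #include <stdbool.h>

    typedef struct { bool byte_swap; } Attrs;   /* stand-in for MemTxAttrs */

    /* The walk fills the attrs out-parameter as a side effect... */
    static int translate(unsigned long addr, unsigned long *phys, Attrs *attrs)
    {
        *phys = addr;             /* identity mapping, toy only      */
        attrs->byte_swap = true;  /* ...e.g. when the TTE has IE set */
        return 0;
    }

    int main(void)
    {
        Attrs attrs = {0};        /* zero-initialised, like {} above */
        unsigned long phys;

        /* ...and the caller hands attrs on, as sparc_cpu_tlb_fill()
         * now does via tlb_set_page_with_attrs(). */
        assert(translate(0x1000, &phys, &attrs) == 0);
        assert(attrs.byte_swap);
        return 0;
    }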

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/sparc/mmu_helper.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index cbd1e91..826e14b 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -88,7 +88,7 @@ static const int perm_table[2][8] = {
 };

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -219,6 +219,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     target_ulong page_size;
     int error_code = 0, prot, access_index;
+    MemTxAttrs attrs = {};

     /*
      * TODO: If we ever need tlb_vaddr_to_host for this target,
@@ -229,7 +230,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     assert(!probe);

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);
     vaddr = address;
@@ -490,8 +491,8 @@ static inline int ultrasparc_tag_match(SparcTLBEntry *tlb,
     return 0;
 }

-static int get_physical_address_data(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int rw, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -608,8 +609,8 @@ static int get_physical_address_data(CPUSPARCState *env,
     return 1;
 }

-static int get_physical_address_code(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_code(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -686,7 +687,7 @@ static int get_physical_address_code(CPUSPARCState *env,
 }

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -716,11 +717,11 @@ static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
     }

     if (rw == 2) {
-        return get_physical_address_code(env, physical, prot, address,
+        return get_physical_address_code(env, physical, prot, attrs, address,
                                          mmu_idx);
     } else {
-        return get_physical_address_data(env, physical, prot, address, rw,
-                                         mmu_idx);
+        return get_physical_address_data(env, physical, prot, attrs, address,
+                                         rw, mmu_idx);
     }
 }

@@ -734,10 +735,11 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     hwaddr paddr;
     target_ulong page_size;
+    MemTxAttrs attrs = {};
     int error_code = 0, prot, access_index;

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);
     if (likely(error_code == 0)) {
@@ -747,7 +749,8 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                                    env->dmmu.mmu_primary_context,
                                    env->dmmu.mmu_secondary_context);

-        tlb_set_page(cs, vaddr, paddr, prot, mmu_idx, page_size);
+        tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx,
+                                page_size);
         return true;
     }
     if (probe) {
@@ -849,9 +852,10 @@ static int cpu_sparc_get_phys_page(CPUSPARCState *env, hwaddr *phys,
 {
     target_ulong page_size;
     int prot, access_index;
+    MemTxAttrs attrs = {};

-    return get_physical_address(env, phys, &prot, &access_index, addr, rw,
-                                mmu_idx, &page_size);
+    return get_physical_address(env, phys, &prot, &access_index, &attrs, addr,
+                                rw, mmu_idx, &page_size);
 }

 #if defined(TARGET_SPARC64)
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v3 15/15] target/sparc: sun4u Invert Endian TTE bit
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:12       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:12 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

This bit configures the endianness of PCI MMIO devices. It is used by
the Solaris and OpenBSD sunhme drivers.

Tested working on OpenBSD.

Unfortunately Solaris 10 had an unrelated keyboard issue blocking
testing... another inch towards Solaris 10 on SPARC64 =)
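
For reference, here is a tiny standalone check of the bit test this
patch adds. The bit positions come from the patch itself; the sample
TTE value is made up:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define TTE_VALID_BIT  (1ULL << 63)
    #define TTE_IE_BIT     (1ULL << 59)
    #define TTE_IS_IE(tte) ((tte) & TTE_IE_BIT)

    int main(void)
    {
        uint64_t tte = TTE_VALID_BIT | TTE_IE_BIT;  /* made-up TTE with IE */
        bool byte_swap = false;

        if (TTE_IS_IE(tte)) {
            byte_swap = true;  /* what get_physical_address_data() now sets */
        }
        assert(byte_swap);
        return 0;
    }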

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/sparc/cpu.h        | 2 ++
 target/sparc/mmu_helper.c | 8 +++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.h b/target/sparc/cpu.h
index 8ed2250..77e8e07 100644
--- a/target/sparc/cpu.h
+++ b/target/sparc/cpu.h
@@ -277,6 +277,7 @@ enum {

 #define TTE_VALID_BIT       (1ULL << 63)
 #define TTE_NFO_BIT         (1ULL << 60)
+#define TTE_IE_BIT          (1ULL << 59)
 #define TTE_USED_BIT        (1ULL << 41)
 #define TTE_LOCKED_BIT      (1ULL <<  6)
 #define TTE_SIDEEFFECT_BIT  (1ULL <<  3)
@@ -293,6 +294,7 @@ enum {

 #define TTE_IS_VALID(tte)   ((tte) & TTE_VALID_BIT)
 #define TTE_IS_NFO(tte)     ((tte) & TTE_NFO_BIT)
+#define TTE_IS_IE(tte)      ((tte) & TTE_IE_BIT)
 #define TTE_IS_USED(tte)    ((tte) & TTE_USED_BIT)
 #define TTE_IS_LOCKED(tte)  ((tte) & TTE_LOCKED_BIT)
 #define TTE_IS_SIDEEFFECT(tte) ((tte) & TTE_SIDEEFFECT_BIT)
diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index 826e14b..77dc86a 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -537,6 +537,10 @@ static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
         if (ultrasparc_tag_match(&env->dtlb[i], address, context, physical)) {
             int do_fault = 0;

+            if (TTE_IS_IE(env->dtlb[i].tte)) {
+                attrs->byte_swap = true;
+            }
+
             /* access ok? */
             /* multiple bits in SFSR.FT may be set on TT_DFAULT */
             if (TTE_IS_PRIV(env->dtlb[i].tte) && is_user) {
@@ -792,7 +796,7 @@ void dump_mmu(CPUSPARCState *env)
             }
             if (TTE_IS_VALID(env->dtlb[i].tte)) {
                 qemu_printf("[%02u] VA: %" PRIx64 ", PA: %llx"
-                            ", %s, %s, %s, %s, ctx %" PRId64 " %s\n",
+                            ", %s, %s, %s, %s, ie %s, ctx %" PRId64 " %s\n",
                             i,
                             env->dtlb[i].tag & (uint64_t)~0x1fffULL,
                             TTE_PA(env->dtlb[i].tte),
@@ -801,6 +805,8 @@ void dump_mmu(CPUSPARCState *env)
                             TTE_IS_W_OK(env->dtlb[i].tte) ? "RW" : "RO",
                             TTE_IS_LOCKED(env->dtlb[i].tte) ?
                             "locked" : "unlocked",
+                            TTE_IS_IE(env->dtlb[i].tte) ?
+                            "yes" : "no",
                             env->dtlb[i].tag & (uint64_t)0x1fffULL,
                             TTE_IS_GLOBAL(env->dtlb[i].tte) ?
                             "global" : "local");
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* Re: [Qemu-devel] [PATCH v3 00/15] Invert Endian bit in SPARCv9 MMU TTE
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:25       ` no-reply
  -1 siblings, 0 replies; 120+ messages in thread
From: no-reply @ 2019-07-25  7:25 UTC (permalink / raw)
  To: tony.nguyen
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	qemu-devel, Alistair.Francis, arikalo, david, pasic, borntraeger,
	rth, atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david,
	qemu-riscv, cohuck, alex.williamson, qemu-ppc, amarkovic,
	pbonzini, aurelien

Patchew URL: https://patchew.org/QEMU/1564038073754.91133@bt.com/



Hi,

This series seems to have some coding style problems. See output below for
more information:

Type: series
Subject: [Qemu-devel] [PATCH v3 00/15] Invert Endian bit in SPARCv9 MMU TTE
Message-id: 1564038073754.91133@bt.com

=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===

Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
From https://github.com/patchew-project/qemu
 * [new tag]         patchew/1564035256-11828-1-git-send-email-jing2.liu@linux.intel.com -> patchew/1564035256-11828-1-git-send-email-jing2.liu@linux.intel.com
 * [new tag]         patchew/1564038073754.91133@bt.com -> patchew/1564038073754.91133@bt.com
 - [tag update]      patchew/20190724070307.12568-1-richardw.yang@linux.intel.com -> patchew/20190724070307.12568-1-richardw.yang@linux.intel.com
Submodule 'capstone' (https://git.qemu.org/git/capstone.git) registered for path 'capstone'
Submodule 'dtc' (https://git.qemu.org/git/dtc.git) registered for path 'dtc'
Submodule 'roms/QemuMacDrivers' (https://git.qemu.org/git/QemuMacDrivers.git) registered for path 'roms/QemuMacDrivers'
Submodule 'roms/SLOF' (https://git.qemu.org/git/SLOF.git) registered for path 'roms/SLOF'
Submodule 'roms/edk2' (https://git.qemu.org/git/edk2.git) registered for path 'roms/edk2'
Submodule 'roms/ipxe' (https://git.qemu.org/git/ipxe.git) registered for path 'roms/ipxe'
Submodule 'roms/openbios' (https://git.qemu.org/git/openbios.git) registered for path 'roms/openbios'
Submodule 'roms/openhackware' (https://git.qemu.org/git/openhackware.git) registered for path 'roms/openhackware'
Submodule 'roms/opensbi' (https://git.qemu.org/git/opensbi.git) registered for path 'roms/opensbi'
Submodule 'roms/qemu-palcode' (https://git.qemu.org/git/qemu-palcode.git) registered for path 'roms/qemu-palcode'
Submodule 'roms/seabios' (https://git.qemu.org/git/seabios.git/) registered for path 'roms/seabios'
Submodule 'roms/seabios-hppa' (https://git.qemu.org/git/seabios-hppa.git) registered for path 'roms/seabios-hppa'
Submodule 'roms/sgabios' (https://git.qemu.org/git/sgabios.git) registered for path 'roms/sgabios'
Submodule 'roms/skiboot' (https://git.qemu.org/git/skiboot.git) registered for path 'roms/skiboot'
Submodule 'roms/u-boot' (https://git.qemu.org/git/u-boot.git) registered for path 'roms/u-boot'
Submodule 'roms/u-boot-sam460ex' (https://git.qemu.org/git/u-boot-sam460ex.git) registered for path 'roms/u-boot-sam460ex'
Submodule 'slirp' (https://git.qemu.org/git/libslirp.git) registered for path 'slirp'
Submodule 'tests/fp/berkeley-softfloat-3' (https://git.qemu.org/git/berkeley-softfloat-3.git) registered for path 'tests/fp/berkeley-softfloat-3'
Submodule 'tests/fp/berkeley-testfloat-3' (https://git.qemu.org/git/berkeley-testfloat-3.git) registered for path 'tests/fp/berkeley-testfloat-3'
Submodule 'ui/keycodemapdb' (https://git.qemu.org/git/keycodemapdb.git) registered for path 'ui/keycodemapdb'
Cloning into 'capstone'...
Submodule path 'capstone': checked out '22ead3e0bfdb87516656453336160e0a37b066bf'
Cloning into 'dtc'...
Submodule path 'dtc': checked out '88f18909db731a627456f26d779445f84e449536'
Cloning into 'roms/QemuMacDrivers'...
Submodule path 'roms/QemuMacDrivers': checked out '90c488d5f4a407342247b9ea869df1c2d9c8e266'
Cloning into 'roms/SLOF'...
Submodule path 'roms/SLOF': checked out 'ba1ab360eebe6338bb8d7d83a9220ccf7e213af3'
Cloning into 'roms/edk2'...
Submodule path 'roms/edk2': checked out '20d2e5a125e34fc8501026613a71549b2a1a3e54'
Submodule 'SoftFloat' (https://github.com/ucb-bar/berkeley-softfloat-3.git) registered for path 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'
Submodule 'CryptoPkg/Library/OpensslLib/openssl' (https://github.com/openssl/openssl) registered for path 'CryptoPkg/Library/OpensslLib/openssl'
Cloning into 'ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3'...
Submodule path 'roms/edk2/ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3': checked out 'b64af41c3276f97f0e181920400ee056b9c88037'
Cloning into 'CryptoPkg/Library/OpensslLib/openssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl': checked out '50eaac9f3337667259de725451f201e784599687'
Submodule 'boringssl' (https://boringssl.googlesource.com/boringssl) registered for path 'boringssl'
Submodule 'krb5' (https://github.com/krb5/krb5) registered for path 'krb5'
Submodule 'pyca.cryptography' (https://github.com/pyca/cryptography.git) registered for path 'pyca-cryptography'
Cloning into 'boringssl'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/boringssl': checked out '2070f8ad9151dc8f3a73bffaa146b5e6937a583f'
Cloning into 'krb5'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/krb5': checked out 'b9ad6c49505c96a088326b62a52568e3484f2168'
Cloning into 'pyca-cryptography'...
Submodule path 'roms/edk2/CryptoPkg/Library/OpensslLib/openssl/pyca-cryptography': checked out '09403100de2f6f1cdd0d484dcb8e620f1c335c8f'
Cloning into 'roms/ipxe'...
Submodule path 'roms/ipxe': checked out 'de4565cbe76ea9f7913a01f331be3ee901bb6e17'
Cloning into 'roms/openbios'...
Submodule path 'roms/openbios': checked out 'c79e0ecb84f4f1ee3f73f521622e264edd1bf174'
Cloning into 'roms/openhackware'...
Submodule path 'roms/openhackware': checked out 'c559da7c8eec5e45ef1f67978827af6f0b9546f5'
Cloning into 'roms/opensbi'...
Submodule path 'roms/opensbi': checked out 'ce228ee0919deb9957192d723eecc8aaae2697c6'
Cloning into 'roms/qemu-palcode'...
Submodule path 'roms/qemu-palcode': checked out 'bf0e13698872450164fa7040da36a95d2d4b326f'
Cloning into 'roms/seabios'...
Submodule path 'roms/seabios': checked out 'a5cab58e9a3fb6e168aba919c5669bea406573b4'
Cloning into 'roms/seabios-hppa'...
Submodule path 'roms/seabios-hppa': checked out '0f4fe84658165e96ce35870fd19fc634e182e77b'
Cloning into 'roms/sgabios'...
Submodule path 'roms/sgabios': checked out 'cbaee52287e5f32373181cff50a00b6c4ac9015a'
Cloning into 'roms/skiboot'...
Submodule path 'roms/skiboot': checked out '261ca8e779e5138869a45f174caa49be6a274501'
Cloning into 'roms/u-boot'...
Submodule path 'roms/u-boot': checked out 'd3689267f92c5956e09cc7d1baa4700141662bff'
Cloning into 'roms/u-boot-sam460ex'...
Submodule path 'roms/u-boot-sam460ex': checked out '60b3916f33e617a815973c5a6df77055b2e3a588'
Cloning into 'slirp'...
Submodule path 'slirp': checked out 'f0da6726207b740f6101028b2992f918477a4b08'
Cloning into 'tests/fp/berkeley-softfloat-3'...
Submodule path 'tests/fp/berkeley-softfloat-3': checked out 'b64af41c3276f97f0e181920400ee056b9c88037'
Cloning into 'tests/fp/berkeley-testfloat-3'...
Submodule path 'tests/fp/berkeley-testfloat-3': checked out '5a59dcec19327396a011a17fd924aed4fec416b3'
Cloning into 'ui/keycodemapdb'...
Submodule path 'ui/keycodemapdb': checked out '6b3d716e2b6472eb7189d3220552280ef3d832ce'
Switched to a new branch 'test'
137b6fa target/sparc: sun4u Invert Endian TTE bit
0ee3c7f target/sparc: Add TLB entry with attributes
7efad31 cputlb: Byte swap memory transaction attribute
596bddd cpu: TLB_FLAGS_MASK bit to force memory slow path
b940a07 memory: Single byte swap along the I/O path
9a175f1 memory: Access MemoryRegion with MemOp semantics
1a75b71 cputlb: Access MemoryRegion with MemOp
3594a95 exec: Access MemoryRegion with MemOp
283d4cd hw/vfio: Access MemoryRegion with MemOp
9a99848 hw/virtio: Access MemoryRegion with MemOp
9362a2f hw/intc/armv7m_nic: Access MemoryRegion with MemOp
05bdc5a hw/s390x: Access MemoryRegion with MemOp
dc42f13 target/mips: Access MemoryRegion with MemOp
723c418 memory: Access MemoryRegion with MemOp
f1836e6 tcg: TCGMemOp is now accelerator independent MemOp

=== OUTPUT BEGIN ===
1/15 Checking commit f1836e642442 (tcg: TCGMemOp is now accelerator independent MemOp)
WARNING: added, moved or deleted file(s), does MAINTAINERS need updating?
#29: 
new file mode 100644

ERROR: "foo* bar" should be "foo *bar"
#2132: FILE: tcg/s390/tcg-target.inc.c:1547:
+static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, MemOp opc,

total: 1 errors, 1 warnings, 2270 lines checked

Patch 1/15 has style problems, please review.  If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
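
(For reference, the fix checkpatch is asking for here is only a
pointer-spacing change, i.e. something like

    static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,

with the '*' attached to the parameter name.)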

2/15 Checking commit 723c41862d5c (memory: Access MemoryRegion with MemOp)
3/15 Checking commit dc42f1365003 (target/mips: Access MemoryRegion with MemOp)
4/15 Checking commit 05bdc5a6ff16 (hw/s390x: Access MemoryRegion with MemOp)
5/15 Checking commit 9362a2f9ad8a (hw/intc/armv7m_nic: Access MemoryRegion with MemOp)
6/15 Checking commit 9a99848400f4 (hw/virtio: Access MemoryRegion with MemOp)
7/15 Checking commit 283d4cd6d780 (hw/vfio: Access MemoryRegion with MemOp)
8/15 Checking commit 3594a95a4678 (exec: Access MemoryRegion with MemOp)
9/15 Checking commit 1a75b71d5fe1 (cputlb: Access MemoryRegion with MemOp)
10/15 Checking commit 9a175f1c451c (memory: Access MemoryRegion with MemOp semantics)
11/15 Checking commit b940a07b9327 (memory: Single byte swap along the I/O path)
12/15 Checking commit 596bddd814e7 (cpu: TLB_FLAGS_MASK bit to force memory slow path)
13/15 Checking commit 7efad311078c (cputlb: Byte swap memory transaction attribute)
14/15 Checking commit 0ee3c7f1c76a (target/sparc: Add TLB entry with attributes)
15/15 Checking commit 137b6fa8edff (target/sparc: sun4u Invert Endian TTE bit)
=== OUTPUT END ===

Test command exited with code: 1


The full log is available at
http://patchew.org/logs/1564038073754.91133@bt.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com

^ permalink raw reply	[flat|nested] 120+ messages in thread

* [Qemu-devel] [PATCH v4 00/15] Invert Endian bit in SPARCv9 MMU TTE
  2019-07-25  7:01     ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  7:58       ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:58 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

This patchset implements the IE (Invert Endian) bit in SPARCv9 MMU TTE.

It is an attempt at the implementation outlined by Richard Henderson to Mark
Cave-Ayland.

Tested with OpenBSD on sun4u. Solaris 10 is my actual goal, but unfortunately a
separate keyboard issue remains in the way.

On 01/11/17 19:15, Mark Cave-Ayland wrote:

>On 15/08/17 19:10, Richard Henderson wrote:
>
>> [CC Peter re MemTxAttrs below]
>>
>> On 08/15/2017 09:38 AM, Mark Cave-Ayland wrote:
>>> Working through an incorrect endian issue on qemu-system-sparc64, it has
>>> become apparent that at least one OS makes use of the IE (Invert Endian)
>>> bit in the SPARCv9 MMU TTE to map PCI memory space without the
>>> programmer having to manually endian-swap accesses.
>>>
>>> In other words, to quote the UltraSPARC specification: "if this bit is
>>> set, accesses to the associated page are processed with inverse
>>> endianness from what is specified by the instruction (big-for-little and
>>> little-for-big)".

A good explanation by Mark of why the IE bit is required.

>>>
>>> Looking through various bits of code, I'm trying to get a feel for the
>>> best way to implement this in an efficient manner. From what I can see
>>> this could be solved using an additional MMU index, however I'm not
>>> overly familiar with the memory and softmmu subsystems.
>>
>> No, it can't be solved with an MMU index.
>>
>>> Can anyone point me in the right direction as to what would be the best
>>> way to implement this feature within QEMU?
>>
>> It's definitely tricky.
>>
>> We definitely need some TLB_FLAGS_MASK bit set so that we're forced through
>> the
>> memory slow path.  There is no other way to bypass the endianness that we've
>> already encoded from the target instruction.
>>
>> Given the tlb_set_page_with_attrs interface, I would think that we need a new
>> bit in MemTxAttrs, so that the target/sparc tlb_fill (and subroutines) can
>> pass
>> along the TTE bit for the given page.
>>
>> We have an existing problem in softmmu_template.h,
>>
>>     /* ??? Note that the io helpers always read data in the target
>>        byte ordering.  We should push the LE/BE request down into io.  */
>>     res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
>>     res = TGT_BE(res);
>>
>> We do not want to add a third(!) byte swap along the i/o path.  We need to
>> collapse the two that we have already before considering this one.
>>
>> This probably takes the form of:
>>
>> (1) Replacing the "int size" argument with "TCGMemOp memop" for
>>       a) io_{read,write}x in accel/tcg/cputlb.c,
>>       b) memory_region_dispatch_{read,write} in memory.c,
>>       c) adjust_endianness in memory.c.
>>     This carries size+sign+endianness down to the next level.
>>
>> (2) In memory.c, adjust_endianness,
>>
>>      if (memory_region_wrong_endianness(mr)) {
>> -        switch (size) {
>> +        memop ^= MO_BSWAP;
>> +    }
>> +    if (memop & MO_BSWAP) {
>>
>>     For extra credit, re-arrange memory_region_wrong_endianness
>>     to something more explicit -- "wrong" isn't helpful.
>
>Finally I've had a bit of spare time to experiment with this approach,
>and from what I can see there are currently 2 issues:
>
>
>1) Using TCGMemOp in memory.c means it is no longer accelerator agnostic
>
>For the moment I've defined a separate MemOp in memory.h and provided a
>mapping function in io_{read,write}x to map from TCGMemOp to MemOp and
>then pass that into memory_region_dispatch_{read,write}.
>
>Other than not referencing TCGMemOp in the memory API, another reason
>for doing this was that I wasn't convinced that all the MO_ attributes
>were valid outside of TCG. I do, of course, strongly defer to other
>people's knowledge in this area though.
>
>
>2) The above changes to adjust_endianness() fail when
>memory_region_dispatch_{read,write} are called recursively
>
>Whilst booting qemu-system-sparc64 I see that
>memory_region_dispatch_{read,write} get called recursively - once via
>io_{read,write}x and then again via flatview_read_continue() in exec.c.
>
>The net effect of this is that we perform the bswap correctly at the
>tail of the recursion, but then as we travel back up the stack we hit
>memory_region_dispatch_{read,write} once again causing a second bswap
>which means the value is returned with the incorrect endian again.
>
>
>My understanding from your softmmu_template.h comment above is that the
>memory API should do the endian swapping internally allowing the removal
>of the final TGT_BE/TGT_LE applied to the result, or did I get this wrong?
>
>> (3) In tlb_set_page_with_attrs, notice attrs.byte_swap and set
>>     a new TLB_FORCE_SLOW bit within TLB_FLAGS_MASK.
>>
>> (4) In io_{read,write}x, if iotlbentry->attrs.byte_swap is set,
>>     then memop ^= MO_BSWAP.

Thanks all for the feedback.

v2:
- Moved size+sign+endianness attributes from TCGMemOp into MemOp.
  In v1 TCGMemOp was re-purposed entirely into MemOp.
- Replaced MemOp MO_{8|16|32|64} with TCGMemOp MO_{UB|UW|UL|UQ} alias.
  This is to avoid warnings on comparing and coercing different enums.
- Renamed get_memop to get_tcgmemop for clarity.
- MEMOP is now SIZE_MEMOP, which is just ctzl(size); see the sketch after
  this list.
- Split patch 3/4 so one memory_region_dispatch_{read|write} interface
  is converted per patch.
- Do not reuse TLB_RECHECK; use the new TLB_FORCE_SLOW instead.
- Split patch 4/4 so adding the MemTxAttrs parameters and converting
  tlb_set_page() to tlb_set_page_with_attrs() is separate from usage.
- CC'd maintainers.
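
A quick standalone check of the SIZE_MEMOP mapping mentioned in the v2
notes above, using the GCC builtin; the real macro is assumed to live in
the new include/exec/memop.h:

    #include <assert.h>

    int main(void)
    {
        /* ctzl of a power-of-two access size yields the MemOp size field. */
        assert(__builtin_ctzl(1) == 0);   /* 1 byte  */
        assert(__builtin_ctzl(2) == 1);   /* 2 bytes */
        assert(__builtin_ctzl(4) == 2);   /* 4 bytes */
        assert(__builtin_ctzl(8) == 3);   /* 8 bytes */
        return 0;
    }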

v3:
- Like v1, the entire TCGMemOp enum is now MemOp.
- MemOp target dependent attributes are conditional upon NEED_CPU_H

v4:
- Added Paolo Bonzini as include/exec/memop.h maintainer

Tony Nguyen (15):
  tcg: TCGMemOp is now accelerator independent MemOp
  memory: Access MemoryRegion with MemOp
  target/mips: Access MemoryRegion with MemOp
  hw/s390x: Access MemoryRegion with MemOp
  hw/intc/armv7m_nic: Access MemoryRegion with MemOp
  hw/virtio: Access MemoryRegion with MemOp
  hw/vfio: Access MemoryRegion with MemOp
  exec: Access MemoryRegion with MemOp
  cputlb: Access MemoryRegion with MemOp
  memory: Access MemoryRegion with MemOp semantics
  memory: Single byte swap along the I/O path
  cpu: TLB_FLAGS_MASK bit to force memory slow path
  cputlb: Byte swap memory transaction attribute
  target/sparc: Add TLB entry with attributes
  target/sparc: sun4u Invert Endian TTE bit

 MAINTAINERS                             |   1 +
 accel/tcg/cputlb.c                      |  71 +++++++++--------
 exec.c                                  |   6 +-
 hw/intc/armv7m_nvic.c                   |  12 ++-
 hw/s390x/s390-pci-inst.c                |   8 +-
 hw/vfio/pci-quirks.c                    |   5 +-
 hw/virtio/virtio-pci.c                  |   7 +-
 include/exec/cpu-all.h                  |  10 ++-
 include/exec/memattrs.h                 |   2 +
 include/exec/memop.h                    | 112 +++++++++++++++++++++++++++
 include/exec/memory.h                   |   9 ++-
 memory.c                                |  37 +++++----
 memory_ldst.inc.c                       |  18 ++---
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/op_helper.c                 |   5 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/cpu.h                      |   2 +
 target/sparc/mmu_helper.c               |  40 ++++++----
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 52 files changed, 562 insertions(+), 488 deletions(-)
 create mode 100644 include/exec/memop.h

--
1.8.3.1




^ permalink raw reply	[flat|nested] 120+ messages in thread

* [Qemu-riscv] [Qemu-devel] [PATCH v4 00/15] Invert Endian bit in SPARCv9 MMU TTE
@ 2019-07-25  7:58       ` tony.nguyen
  0 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  7:58 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

[-- Attachment #1: Type: text/plain, Size: 9508 bytes --]

This patchset implements the IE (Invert Endian) bit in SPARCv9 MMU TTE.

It is an attempt of the instructions outlined by Richard Henderson to Mark
Cave-Ayland.

Tested with OpenBSD on sun4u. Solaris 10 is my actual goal, but unfortunately a
separate keyboard issue remains in the way.

On 01/11/17 19:15, Mark Cave-Ayland wrote:

>On 15/08/17 19:10, Richard Henderson wrote:
>
>> [CC Peter re MemTxAttrs below]
>>
>> On 08/15/2017 09:38 AM, Mark Cave-Ayland wrote:
>>> Working through an incorrect endian issue on qemu-system-sparc64, it has
>>> become apparent that at least one OS makes use of the IE (Invert Endian)
>>> bit in the SPARCv9 MMU TTE to map PCI memory space without the
>>> programmer having to manually endian-swap accesses.
>>>
>>> In other words, to quote the UltraSPARC specification: "if this bit is
>>> set, accesses to the associated page are processed with inverse
>>> endianness from what is specified by the instruction (big-for-little and
>>> little-for-big)".

A good explanation by Mark why the IE bit is required.

>>>
>>> Looking through various bits of code, I'm trying to get a feel for the
>>> best way to implement this in an efficient manner. From what I can see
>>> this could be solved using an additional MMU index, however I'm not
>>> overly familiar with the memory and softmmu subsystems.
>>
>> No, it can't be solved with an MMU index.
>>
>>> Can anyone point me in the right direction as to what would be the best
>>> way to implement this feature within QEMU?
>>
>> It's definitely tricky.
>>
>> We definitely need some TLB_FLAGS_MASK bit set so that we're forced through
>> the
>> memory slow path.  There is no other way to bypass the endianness that we've
>> already encoded from the target instruction.
>>
>> Given the tlb_set_page_with_attrs interface, I would think that we need a new
>> bit in MemTxAttrs, so that the target/sparc tlb_fill (and subroutines) can
>> pass
>> along the TTE bit for the given page.
>>
>> We have an existing problem in softmmu_template.h,
>>
>>     /* ??? Note that the io helpers always read data in the target
>>        byte ordering.  We should push the LE/BE request down into io.  */
>>     res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
>>     res = TGT_BE(res);
>>
>> We do not want to add a third(!) byte swap along the i/o path.  We need to
>> collapse the two that we have already before considering this one.
>>
>> This probably takes the form of:
>>
>> (1) Replacing the "int size" argument with "TCGMemOp memop" for
>>       a) io_{read,write}x in accel/tcg/cputlb.c,
>>       b) memory_region_dispatch_{read,write} in memory.c,
>>       c) adjust_endianness in memory.c.
>>     This carries size+sign+endianness down to the next level.
>>
>> (2) In memory.c, adjust_endianness,
>>
>>      if (memory_region_wrong_endianness(mr)) {
>> -        switch (size) {
>> +        memop ^= MO_BSWAP;
>> +    }
>> +    if (memop & MO_BSWAP) {
>>
>>     For extra credit, re-arrange memory_region_wrong_endianness
>>     to something more explicit -- "wrong" isn't helpful.
>
>Finally I've had a bit of spare time to experiment with this approach,
>and from what I can see there are currently 2 issues:
>
>
>1) Using TCGMemOp in memory.c means it is no longer accelerator agnostic
>
>For the moment I've defined a separate MemOp in memory.h and provided a
>mapping function in io_{read,write}x to map from TCGMemOp to MemOp and
>then pass that into memory_region_dispatch_{read,write}.
>
>Other than not referencing TCGMemOp in the memory API, another reason
>for doing this was that I wasn't convinced that all the MO_ attributes
>were valid outside of TCG. I do, of course, strongly defer to other
>people's knowledge in this area though.
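
As an aside, the mapping function described above might be as thin as the
following sketch -- tcg_to_memop() is a hypothetical name, and it assumes
MemOp mirrors TCGMemOp's bit layout for size, sign and byte swap:

    /* Hypothetical bridge between the TCG-private and accelerator-
     * agnostic enums; keep only the attributes meaningful outside TCG
     * and drop the TCG-private alignment bits.
     */
    static inline MemOp tcg_to_memop(TCGMemOp op)
    {
        return (MemOp)(op & (MO_SIZE | MO_SIGN | MO_BSWAP));
    }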
>
>
>2) The above changes to adjust_endianness() fail when
>memory_region_dispatch_{read,write} are called recursively
>
>Whilst booting qemu-system-sparc64 I see that
>memory_region_dispatch_{read,write} get called recursively - once via
>io_{read,write}x and then again via flatview_read_continue() in exec.c.
>
>The net effect of this is that we perform the bswap correctly at the
>tail of the recursion, but then as we travel back up the stack we hit
>memory_region_dispatch_{read,write} once again causing a second bswap
>which means the value is returned with the incorrect endian again.
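
The cancellation is easy to demonstrate in isolation (a self-contained
illustration, not code from the tree):

    #include <assert.h>
    #include <stdint.h>

    static uint32_t bswap32(uint32_t x)
    {
        return ((x & 0x000000ffu) << 24) |
               ((x & 0x0000ff00u) <<  8) |
               ((x & 0x00ff0000u) >>  8) |
               ((x & 0xff000000u) >> 24);
    }

    int main(void)
    {
        /* Two swaps cancel: the inner dispatch swaps correctly, the
         * outer one swaps the value straight back again.
         */
        assert(bswap32(bswap32(0x12345678u)) == 0x12345678u);
        return 0;
    }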
>
>
>My understanding from your softmmu_template.h comment above is that the
>memory API should do the endian swapping internally allowing the removal
>of the final TGT_BE/TGT_LE applied to the result, or did I get this wrong?
>
>> (3) In tlb_set_page_with_attrs, notice attrs.byte_swap and set
>>     a new TLB_FORCE_SLOW bit within TLB_FLAGS_MASK.
>>
>> (4) In io_{read,write}x, if iotlbentry->attrs.byte_swap is set,
>>     then memop ^= MO_BSWAP.
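
Taken together, (3) and (4) reduce to a couple of lines at the hot spots.
A sketch based on the description above (not copied from the patches;
the field and flag names follow the discussion):

    /* (3) in tlb_set_page_with_attrs(): route accesses to pages with
     * inverted endianness through the slow path.
     */
    if (attrs.byte_swap) {
        address |= TLB_FORCE_SLOW;    /* new bit within TLB_FLAGS_MASK */
    }

    /* (4) in io_readx()/io_writex(): honour the attribute by flipping
     * the endianness of the memory op.
     */
    if (iotlbentry->attrs.byte_swap) {
        memop ^= MO_BSWAP;
    }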

Thanks all for the feedback.

v2:
- Moved size+sign+endianness attributes from TCGMemOp into MemOp.
  In v1 TCGMemOp was re-purposed entirely into MemOp.
- Replaced MemOp MO_{8|16|32|64} with TCGMemOp MO_{UB|UW|UL|UQ} aliases.
  This is to avoid warnings on comparing and coercing different enums.
- Renamed get_memop to get_tcgmemop for clarity.
- MEMOP is now SIZE_MEMOP, which is just ctzl(size).
- Split patch 3/4 so one memory_region_dispatch_{read|write} interface
  is converted per patch.
- Do not reuse TLB_RECHECK, use new TLB_FORCE_SLOW instead.
- Split patch 4/4 so adding the MemTxAddrs parameters and converting
  tlb_set_page() to tlb_set_page_with_attrs() is separate from usage.
- CC'd maintainers.

v3:
- Like v1, the entire TCGMemOp enum is now MemOp.
- MemOp target-dependent attributes are conditional upon NEED_CPU_H.

v4:
- Added Paolo Bonzini as include/exec/memop.h maintainer

Tony Nguyen (15):
  tcg: TCGMemOp is now accelerator independent MemOp
  memory: Access MemoryRegion with MemOp
  target/mips: Access MemoryRegion with MemOp
  hw/s390x: Access MemoryRegion with MemOp
  hw/intc/armv7m_nvic: Access MemoryRegion with MemOp
  hw/virtio: Access MemoryRegion with MemOp
  hw/vfio: Access MemoryRegion with MemOp
  exec: Access MemoryRegion with MemOp
  cputlb: Access MemoryRegion with MemOp
  memory: Access MemoryRegion with MemOp semantics
  memory: Single byte swap along the I/O path
  cpu: TLB_FLAGS_MASK bit to force memory slow path
  cputlb: Byte swap memory transaction attribute
  target/sparc: Add TLB entry with attributes
  target/sparc: sun4u Invert Endian TTE bit

 MAINTAINERS                             |   1 +
 accel/tcg/cputlb.c                      |  71 +++++++++--------
 exec.c                                  |   6 +-
 hw/intc/armv7m_nvic.c                   |  12 ++-
 hw/s390x/s390-pci-inst.c                |   8 +-
 hw/vfio/pci-quirks.c                    |   5 +-
 hw/virtio/virtio-pci.c                  |   7 +-
 include/exec/cpu-all.h                  |  10 ++-
 include/exec/memattrs.h                 |   2 +
 include/exec/memop.h                    | 112 +++++++++++++++++++++++++++
 include/exec/memory.h                   |   9 ++-
 memory.c                                |  37 +++++----
 memory_ldst.inc.c                       |  18 ++---
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/op_helper.c                 |   5 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/cpu.h                      |   2 +
 target/sparc/mmu_helper.c               |  40 ++++++----
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 52 files changed, 562 insertions(+), 488 deletions(-)
 create mode 100644 include/exec/memop.h

--
1.8.3.1






* [Qemu-devel] [PATCH v4 01/15] tcg: TCGMemOp is now accelerator independent MemOp
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:00         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 MAINTAINERS                             |   1 +
 accel/tcg/cputlb.c                      |   2 +-
 include/exec/memop.h                    | 109 ++++++++++++++++++++++++++
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 39 files changed, 420 insertions(+), 397 deletions(-)
 create mode 100644 include/exec/memop.h

diff --git a/MAINTAINERS b/MAINTAINERS
index cc9636b..3f148cd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1890,6 +1890,7 @@ M: Paolo Bonzini <pbonzini@redhat.com>
 S: Supported
 F: include/exec/ioport.h
 F: ioport.c
+F: include/exec/memop.h
 F: include/exec/memory.h
 F: include/exec/ram_addr.h
 F: memory.c
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bb9897b..523be4c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(tlbe);
-    TCGMemOp mop = get_memop(oi);
+    MemOp mop = get_memop(oi);
     int a_bits = get_alignment_bits(mop);
     int s_bits = mop & MO_SIZE;
     void *hostaddr;
diff --git a/include/exec/memop.h b/include/exec/memop.h
new file mode 100644
index 0000000..ac58066
--- /dev/null
+++ b/include/exec/memop.h
@@ -0,0 +1,109 @@
+/*
+ * Constants for memory operations
+ *
+ * Authors:
+ *  Richard Henderson <rth@twiddle.net>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MEMOP_H
+#define MEMOP_H
+
+typedef enum MemOp {
+    MO_8     = 0,
+    MO_16    = 1,
+    MO_32    = 2,
+    MO_64    = 3,
+    MO_SIZE  = 3,   /* Mask for the above.  */
+
+    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
+
+    MO_BSWAP = 8,   /* Host reverse endian.  */
+#ifdef HOST_WORDS_BIGENDIAN
+    MO_LE    = MO_BSWAP,
+    MO_BE    = 0,
+#else
+    MO_LE    = 0,
+    MO_BE    = MO_BSWAP,
+#endif
+#ifdef NEED_CPU_H
+#ifdef TARGET_WORDS_BIGENDIAN
+    MO_TE    = MO_BE,
+#else
+    MO_TE    = MO_LE,
+#endif
+#endif
+
+    /*
+     * MO_UNALN accesses are never checked for alignment.
+     * MO_ALIGN accesses will result in a call to the CPU's
+     * do_unaligned_access hook if the guest address is not aligned.
+     * The default depends on whether the target CPU defines ALIGNED_ONLY.
+     *
+     * Some architectures (e.g. ARMv8) need the address which is aligned
+     * to a size more than the size of the memory access.
+     * Some architectures (e.g. SPARCv9) need an address which is aligned,
+     * but less strictly than the natural alignment.
+     *
+     * MO_ALIGN supposes the alignment size is the size of a memory access.
+     *
+     * There are three options:
+     * - unaligned access permitted (MO_UNALN).
+     * - an alignment to the size of an access (MO_ALIGN);
+     * - an alignment to a specified size, which may be more or less than
+     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
+     */
+    MO_ASHIFT = 4,
+    MO_AMASK = 7 << MO_ASHIFT,
+#ifdef NEED_CPU_H
+#ifdef ALIGNED_ONLY
+    MO_ALIGN = 0,
+    MO_UNALN = MO_AMASK,
+#else
+    MO_ALIGN = MO_AMASK,
+    MO_UNALN = 0,
+#endif
+#endif
+    MO_ALIGN_2  = 1 << MO_ASHIFT,
+    MO_ALIGN_4  = 2 << MO_ASHIFT,
+    MO_ALIGN_8  = 3 << MO_ASHIFT,
+    MO_ALIGN_16 = 4 << MO_ASHIFT,
+    MO_ALIGN_32 = 5 << MO_ASHIFT,
+    MO_ALIGN_64 = 6 << MO_ASHIFT,
+
+    /* Combinations of the above, for ease of use.  */
+    MO_UB    = MO_8,
+    MO_UW    = MO_16,
+    MO_UL    = MO_32,
+    MO_SB    = MO_SIGN | MO_8,
+    MO_SW    = MO_SIGN | MO_16,
+    MO_SL    = MO_SIGN | MO_32,
+    MO_Q     = MO_64,
+
+    MO_LEUW  = MO_LE | MO_UW,
+    MO_LEUL  = MO_LE | MO_UL,
+    MO_LESW  = MO_LE | MO_SW,
+    MO_LESL  = MO_LE | MO_SL,
+    MO_LEQ   = MO_LE | MO_Q,
+
+    MO_BEUW  = MO_BE | MO_UW,
+    MO_BEUL  = MO_BE | MO_UL,
+    MO_BESW  = MO_BE | MO_SW,
+    MO_BESL  = MO_BE | MO_SL,
+    MO_BEQ   = MO_BE | MO_Q,
+
+#ifdef NEED_CPU_H
+    MO_TEUW  = MO_TE | MO_UW,
+    MO_TEUL  = MO_TE | MO_UL,
+    MO_TESW  = MO_TE | MO_SW,
+    MO_TESL  = MO_TE | MO_SL,
+    MO_TEQ   = MO_TE | MO_Q,
+#endif
+
+    MO_SSIZE = MO_SIZE | MO_SIGN,
+} MemOp;
+
+#endif
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index 2c9cccf..d5d4888 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -403,7 +403,7 @@ static inline void gen_store_mem(DisasContext *ctx,

 static DisasJumpType gen_store_conditional(DisasContext *ctx, int ra, int rb,
                                            int32_t disp16, int mem_idx,
-                                           TCGMemOp op)
+                                           MemOp op)
 {
     TCGLabel *lab_fail, *lab_done;
     TCGv addr, val;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d323147..b6c07d6 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -85,7 +85,7 @@ typedef void NeonGenOneOpFn(TCGv_i64, TCGv_i64);
 typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
 typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
-typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
+typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);

 /* initialize TCG globals.  */
 void a64_translate_init(void)
@@ -455,7 +455,7 @@ TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
  * Dn, Sn, Hn or Bn).
  * (Note that this is not the same mapping as for A32; see cpu.h)
  */
-static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
+static inline int fp_reg_offset(DisasContext *s, int regno, MemOp size)
 {
     return vec_reg_offset(s, regno, 0, size);
 }
@@ -871,7 +871,7 @@ static void do_gpr_ld_memidx(DisasContext *s,
                              bool iss_valid, unsigned int iss_srt,
                              bool iss_sf, bool iss_ar)
 {
-    TCGMemOp memop = s->be_data + size;
+    MemOp memop = s->be_data + size;

     g_assert(size <= 3);

@@ -948,7 +948,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
     TCGv_i64 tmphi;

     if (size < 4) {
-        TCGMemOp memop = s->be_data + size;
+        MemOp memop = s->be_data + size;
         tmphi = tcg_const_i64(0);
         tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
     } else {
@@ -989,7 +989,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)

 /* Get value of an element within a vector register */
 static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
-                             int element, TCGMemOp memop)
+                             int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1021,7 +1021,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 }

 static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
-                                 int element, TCGMemOp memop)
+                                 int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1048,7 +1048,7 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,

 /* Set value of an element within a vector register */
 static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
-                              int element, TCGMemOp memop)
+                              int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1070,7 +1070,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 }

 static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
-                                  int destidx, int element, TCGMemOp memop)
+                                  int destidx, int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1090,7 +1090,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,

 /* Store from vector register to memory */
 static void do_vec_st(DisasContext *s, int srcidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -1102,7 +1102,7 @@ static void do_vec_st(DisasContext *s, int srcidx, int element,

 /* Load from memory to vector register */
 static void do_vec_ld(DisasContext *s, int destidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -2200,7 +2200,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i64 addr, int size, bool is_pair)
 {
     int idx = get_mem_index(s);
-    TCGMemOp memop = s->be_data;
+    MemOp memop = s->be_data;

     g_assert(size <= 3);
     if (is_pair) {
@@ -3286,7 +3286,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
     TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
-    TCGMemOp endian = s->be_data;
+    MemOp endian = s->be_data;

     int ebytes;   /* bytes per element */
     int elements; /* elements per vector */
@@ -5455,7 +5455,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
     unsigned int mos, type, rm, cond, rn, rd;
     TCGv_i64 t_true, t_false, t_zero;
     DisasCompare64 c;
-    TCGMemOp sz;
+    MemOp sz;

     mos = extract32(insn, 29, 3);
     type = extract32(insn, 22, 2);
@@ -6267,7 +6267,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
     int mos = extract32(insn, 29, 3);
     uint64_t imm;
     TCGv_i64 tcg_res;
-    TCGMemOp sz;
+    MemOp sz;

     if (mos || imm5) {
         unallocated_encoding(s);
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        MemOp msize = esize == 16 ? MO_16 : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -8022,7 +8022,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
     int shift = (2 * esize) - immhb;
     int elements = is_scalar ? 1 : (64 / esize);
     bool round = extract32(opcode, 0, 1);
-    TCGMemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
+    MemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn, tcg_rd, tcg_round;
     TCGv_i32 tcg_rd_narrowed;
     TCGv_i64 tcg_final;
@@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
             }
         };
         NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
-        TCGMemOp memop = scalar ? size : MO_32;
+        MemOp memop = scalar ? size : MO_32;
         int maxpass = scalar ? 1 : is_q ? 4 : 2;

         for (pass = 0; pass < maxpass; pass++) {
@@ -8225,7 +8225,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
     TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
     TCGv_i32 tcg_shift = NULL;

-    TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
+    MemOp mop = size | (is_signed ? MO_SIGN : 0);
     int pass;

     if (fracbits || size == MO_64) {
@@ -10004,7 +10004,7 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
     int dsize = is_q ? 128 : 64;
     int esize = 8 << size;
     int elements = dsize/esize;
-    TCGMemOp memop = size | (is_u ? 0 : MO_SIGN);
+    MemOp memop = size | (is_u ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn = new_tmp_a64(s);
     TCGv_i64 tcg_rd = new_tmp_a64(s);
     TCGv_i64 tcg_round;
@@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_passres;
-            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
+            MemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);

             int elt = pass + is_q * 2;

@@ -11827,7 +11827,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,

     if (size == 2) {
         /* 32 + 32 -> 64 op */
-        TCGMemOp memop = size + (u ? 0 : MO_SIGN);
+        MemOp memop = size + (u ? 0 : MO_SIGN);

         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
@@ -12849,7 +12849,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     switch (is_fp) {
     case 1: /* normal fp */
-        /* convert insn encoded size to TCGMemOp size */
+        /* convert insn encoded size to MemOp size */
         switch (size) {
         case 0: /* half-precision */
             size = MO_16;
@@ -12897,7 +12897,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         return;
     }

-    /* Given TCGMemOp size, adjust register and indexing.  */
+    /* Given MemOp size, adjust register and indexing.  */
     switch (size) {
     case MO_16:
         index = h << 2 | l << 1 | m;
@@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_res[2];
         int pass;
         bool satop = extract32(opcode, 0, 1);
-        TCGMemOp memop = MO_32;
+        MemOp memop = MO_32;

         if (satop || !u) {
             memop |= MO_SIGN;
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 9ab4087..f1246b7 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -64,7 +64,7 @@ static inline void assert_fp_access_checked(DisasContext *s)
  * the FP/vector register Qn.
  */
 static inline int vec_reg_offset(DisasContext *s, int regno,
-                                 int element, TCGMemOp size)
+                                 int element, MemOp size)
 {
     int element_size = 1 << size;
     int offs = element * element_size;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fa068b0..5d7edd0 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4567,7 +4567,7 @@ static bool trans_STR_pri(DisasContext *s, arg_rri *a)
  */

 /* The memory mode of the dtype.  */
-static const TCGMemOp dtype_mop[16] = {
+static const MemOp dtype_mop[16] = {
     MO_UB, MO_UB, MO_UB, MO_UB,
     MO_SL, MO_UW, MO_UW, MO_UW,
     MO_SW, MO_SW, MO_UL, MO_UL,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462..d116c8c 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -114,7 +114,7 @@ typedef enum ISSInfo {
 } ISSInfo;

 /* Save the syndrome information for a Data Abort */
-static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo)
+static void disas_set_da_iss(DisasContext *s, MemOp memop, ISSInfo issinfo)
 {
     uint32_t syn;
     int sas = memop & MO_SIZE;
@@ -1079,7 +1079,7 @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
  * that the address argument is TCGv_i32 rather than TCGv.
  */

-static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
+static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
 {
     TCGv addr = tcg_temp_new();
     tcg_gen_extu_i32_tl(addr, a32);
@@ -1092,7 +1092,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
 }

 static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1107,7 +1107,7 @@ static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
 }

 static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1160,7 +1160,7 @@ static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
 }

 static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);
     tcg_gen_qemu_ld_i64(val, addr, index, opc);
@@ -1175,7 +1175,7 @@ static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
 }

 static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);

@@ -1400,7 +1400,7 @@ neon_reg_offset (int reg, int n)
  * where 0 is the least significant end of the register.
  */
 static inline long
-neon_element_offset(int reg, int element, TCGMemOp size)
+neon_element_offset(int reg, int element, MemOp size)
 {
     int element_size = 1 << size;
     int ofs = element * element_size;
@@ -1422,7 +1422,7 @@ static TCGv_i32 neon_load_reg(int reg, int pass)
     return tmp;
 }

-static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1441,7 +1441,7 @@ static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
     }
 }

-static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1469,7 +1469,7 @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
     tcg_temp_free_i32(var);
 }

-static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
+static void neon_store_element(int reg, int ele, MemOp size, TCGv_i32 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -1488,7 +1488,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     }
 }

-static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
+static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -3558,7 +3558,7 @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int n;
     int vec_size;
     int mmu_idx;
-    TCGMemOp endian;
+    MemOp endian;
     TCGv_i32 addr;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -6867,7 +6867,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             } else if ((insn & 0x380) == 0) {
                 /* VDUP */
                 int element;
-                TCGMemOp size;
+                MemOp size;

                 if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
                     return 1;
@@ -7435,7 +7435,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i32 addr, int size)
 {
     TCGv_i32 tmp = tcg_temp_new_i32();
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     s->is_ldex = true;

@@ -7489,7 +7489,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     TCGv taddr;
     TCGLabel *done_label;
     TCGLabel *fail_label;
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     /* if (env->exclusive_addr == addr && env->exclusive_val == [addr]) {
          [addr] = {Rt};
@@ -8603,7 +8603,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                         */

                         TCGv taddr;
-                        TCGMemOp opc = s->be_data;
+                        MemOp opc = s->be_data;

                         rm = (insn) & 0xf;

diff --git a/target/arm/translate.h b/target/arm/translate.h
index a20f6e2..284c510 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -21,7 +21,7 @@ typedef struct DisasContext {
     int condexec_cond;
     int thumb;
     int sctlr_b;
-    TCGMemOp be_data;
+    MemOp be_data;
 #if !defined(CONFIG_USER_ONLY)
     int user;
 #endif
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 188fe68..ff4802a 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1500,7 +1500,7 @@ static void form_gva(DisasContext *ctx, TCGv_tl *pgva, TCGv_reg *pofs,
  */
 static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1518,7 +1518,7 @@ static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,

 static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1536,7 +1536,7 @@ static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,

 static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1554,7 +1554,7 @@ static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,

 static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1580,7 +1580,7 @@ static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,

 static bool do_load(DisasContext *ctx, unsigned rt, unsigned rb,
                     unsigned rx, int scale, target_sreg disp,
-                    unsigned sp, int modify, TCGMemOp mop)
+                    unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg dest;

@@ -1653,7 +1653,7 @@ static bool trans_fldd(DisasContext *ctx, arg_ldst *a)

 static bool do_store(DisasContext *ctx, unsigned rt, unsigned rb,
                      target_sreg disp, unsigned sp,
-                     int modify, TCGMemOp mop)
+                     int modify, MemOp mop)
 {
     nullify_over(ctx);
     do_store_reg(ctx, load_gpr(ctx, rt), rb, 0, 0, disp, sp, modify, mop);
@@ -2940,7 +2940,7 @@ static bool trans_st(DisasContext *ctx, arg_ldst *a)

 static bool trans_ldc(DisasContext *ctx, arg_ldst *a)
 {
-    TCGMemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
+    MemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
     TCGv_reg zero, dest, ofs;
     TCGv_tl addr;

diff --git a/target/i386/translate.c b/target/i386/translate.c
index 03150a8..def9867 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -87,8 +87,8 @@ typedef struct DisasContext {
     /* current insn context */
     int override; /* -1 if no override */
     int prefix;
-    TCGMemOp aflag;
-    TCGMemOp dflag;
+    MemOp aflag;
+    MemOp dflag;
     target_ulong pc_start;
     target_ulong pc; /* pc = eip + cs_base */
     /* current block context */
@@ -149,7 +149,7 @@ static void gen_eob(DisasContext *s);
 static void gen_jr(DisasContext *s, TCGv dest);
 static void gen_jmp(DisasContext *s, target_ulong eip);
 static void gen_jmp_tb(DisasContext *s, target_ulong eip, int tb_num);
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d);
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d);

 /* i386 arith/logic operations */
 enum {
@@ -320,7 +320,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 }

 /* Select the size of a push/pop operation.  */
-static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
+static inline MemOp mo_pushpop(DisasContext *s, MemOp ot)
 {
     if (CODE64(s)) {
         return ot == MO_16 ? MO_16 : MO_64;
@@ -330,13 +330,13 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 }

 /* Select the size of the stack pointer.  */
-static inline TCGMemOp mo_stacksize(DisasContext *s)
+static inline MemOp mo_stacksize(DisasContext *s)
 {
     return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
-static inline TCGMemOp mo_64_32(TCGMemOp ot)
+static inline MemOp mo_64_32(MemOp ot)
 {
 #ifdef TARGET_X86_64
     return ot == MO_64 ? MO_64 : MO_32;
@@ -347,19 +347,19 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)

 /* Select size 8 if lsb of B is clear, else OT.  Used for decoding
    byte vs word opcodes.  */
-static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
+static inline MemOp mo_b_d(int b, MemOp ot)
 {
     return b & 1 ? ot : MO_8;
 }

 /* Select size 8 if lsb of B is clear, else OT capped at 32.
    Used for decoding operand size of port opcodes.  */
-static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
+static inline MemOp mo_b_d32(int b, MemOp ot)
 {
     return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
 }

-static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
+static void gen_op_mov_reg_v(DisasContext *s, MemOp ot, int reg, TCGv t0)
 {
     switch(ot) {
     case MO_8:
@@ -388,7 +388,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 }

 static inline
-void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
+void gen_op_mov_v_reg(DisasContext *s, MemOp ot, TCGv t0, int reg)
 {
     if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
         tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
@@ -411,13 +411,13 @@ static inline void gen_op_jmp_v(TCGv dest)
 }

 static inline
-void gen_op_add_reg_im(DisasContext *s, TCGMemOp size, int reg, int32_t val)
+void gen_op_add_reg_im(DisasContext *s, MemOp size, int reg, int32_t val)
 {
     tcg_gen_addi_tl(s->tmp0, cpu_regs[reg], val);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
 }

-static inline void gen_op_add_reg_T0(DisasContext *s, TCGMemOp size, int reg)
+static inline void gen_op_add_reg_T0(DisasContext *s, MemOp size, int reg)
 {
     tcg_gen_add_tl(s->tmp0, cpu_regs[reg], s->T0);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
@@ -451,7 +451,7 @@ static inline void gen_jmp_im(DisasContext *s, target_ulong pc)
 /* Compute SEG:REG into A0.  SEG is selected from the override segment
    (OVR_SEG) and the default segment (DEF_SEG).  OVR_SEG may be -1 to
    indicate no override.  */
-static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
+static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
                           int def_seg, int ovr_seg)
 {
     switch (aflag) {
@@ -514,13 +514,13 @@ static inline void gen_string_movl_A0_EDI(DisasContext *s)
     gen_lea_v_seg(s, s->aflag, cpu_regs[R_EDI], R_ES, -1);
 }

-static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
+static inline void gen_op_movl_T0_Dshift(DisasContext *s, MemOp ot)
 {
     tcg_gen_ld32s_tl(s->T0, cpu_env, offsetof(CPUX86State, df));
     tcg_gen_shli_tl(s->T0, s->T0, ot);
 };

-static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
+static TCGv gen_ext_tl(TCGv dst, TCGv src, MemOp size, bool sign)
 {
     switch (size) {
     case MO_8:
@@ -551,18 +551,18 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
     }
 }

-static void gen_extu(TCGMemOp ot, TCGv reg)
+static void gen_extu(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, false);
 }

-static void gen_exts(TCGMemOp ot, TCGv reg)
+static void gen_exts(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, true);
 }

 static inline
-void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jnz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
@@ -570,14 +570,14 @@ void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
 }

 static inline
-void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
     tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1);
 }

-static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
+static void gen_helper_in_func(MemOp ot, TCGv v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -594,7 +594,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     }
 }

-static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
+static void gen_helper_out_func(MemOp ot, TCGv_i32 v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -611,7 +611,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     }
 }

-static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
+static void gen_check_io(DisasContext *s, MemOp ot, target_ulong cur_eip,
                          uint32_t svm_flags)
 {
     target_ulong next_eip;
@@ -644,7 +644,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     }
 }

-static inline void gen_movs(DisasContext *s, TCGMemOp ot)
+static inline void gen_movs(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -840,7 +840,7 @@ static CCPrepare gen_prepare_eflags_s(DisasContext *s, TCGv reg)
         return (CCPrepare) { .cond = TCG_COND_NEVER, .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, true);
             return (CCPrepare) { .cond = TCG_COND_LT, .reg = t0, .mask = -1 };
         }
@@ -885,7 +885,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
                              .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, false);
             return (CCPrepare) { .cond = TCG_COND_EQ, .reg = t0, .mask = -1 };
         }
@@ -897,7 +897,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
 static CCPrepare gen_prepare_cc(DisasContext *s, int b, TCGv reg)
 {
     int inv, jcc_op, cond;
-    TCGMemOp size;
+    MemOp size;
     CCPrepare cc;
     TCGv t0;

@@ -1075,7 +1075,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)
     return l2;
 }

-static inline void gen_stos(DisasContext *s, TCGMemOp ot)
+static inline void gen_stos(DisasContext *s, MemOp ot)
 {
     gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
     gen_string_movl_A0_EDI(s);
@@ -1084,7 +1084,7 @@ static inline void gen_stos(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_lods(DisasContext *s, TCGMemOp ot)
+static inline void gen_lods(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -1093,7 +1093,7 @@ static inline void gen_lods(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_ESI);
 }

-static inline void gen_scas(DisasContext *s, TCGMemOp ot)
+static inline void gen_scas(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1102,7 +1102,7 @@ static inline void gen_scas(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_cmps(DisasContext *s, TCGMemOp ot)
+static inline void gen_cmps(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1126,7 +1126,7 @@ static void gen_bpt_io(DisasContext *s, TCGv_i32 t_port, int ot)
 }


-static inline void gen_ins(DisasContext *s, TCGMemOp ot)
+static inline void gen_ins(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1148,7 +1148,7 @@ static inline void gen_ins(DisasContext *s, TCGMemOp ot)
     }
 }

-static inline void gen_outs(DisasContext *s, TCGMemOp ot)
+static inline void gen_outs(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1171,7 +1171,7 @@ static inline void gen_outs(DisasContext *s, TCGMemOp ot)
 /* same method as Valgrind : we generate jumps to current or next
    instruction */
 #define GEN_REPZ(op)                                                          \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,              \
                                  target_ulong cur_eip, target_ulong next_eip) \
 {                                                                             \
     TCGLabel *l2;                                                             \
@@ -1187,7 +1187,7 @@ static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
 }

 #define GEN_REPZ2(op)                                                         \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,              \
                                    target_ulong cur_eip,                      \
                                    target_ulong next_eip,                     \
                                    int nz)                                    \
@@ -1284,7 +1284,7 @@ static void gen_illegal_opcode(DisasContext *s)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d)
 {
     if (d != OR_TMP0) {
         if (s1->prefix & PREFIX_LOCK) {
@@ -1395,7 +1395,7 @@ static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
+static void gen_inc(DisasContext *s1, MemOp ot, int d, int c)
 {
     if (s1->prefix & PREFIX_LOCK) {
         if (d != OR_TMP0) {
@@ -1421,7 +1421,7 @@ static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
     set_cc_op(s1, (c > 0 ? CC_OP_INCB : CC_OP_DECB) + ot);
 }

-static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
+static void gen_shift_flags(DisasContext *s, MemOp ot, TCGv result,
                             TCGv shm1, TCGv count, bool is_right)
 {
     TCGv_i32 z32, s32, oldop;
@@ -1466,7 +1466,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shift_rm_T1(DisasContext *s, MemOp ot, int op1,
                             int is_right, int is_arith)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1502,7 +1502,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     gen_shift_flags(s, ot, s->T0, s->tmp0, s->T1, is_right);
 }

-static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_shift_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                             int is_right, int is_arith)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1542,7 +1542,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
     }
 }

-static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
+static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
     TCGv_i32 t0, t1;
@@ -1627,7 +1627,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_rot_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                           int is_right)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1705,7 +1705,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
 }

 /* XXX: add faster immediate = 1 case */
-static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
                            int is_right)
 {
     gen_compute_eflags(s);
@@ -1761,7 +1761,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 }

 /* XXX: add faster immediate case */
-static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shiftd_rm_T1(DisasContext *s, MemOp ot, int op1,
                              bool is_right, TCGv count_in)
 {
     target_ulong mask = (ot == MO_64 ? 63 : 31);
@@ -1842,7 +1842,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_temp_free(count);
 }

-static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
+static void gen_shift(DisasContext *s1, int op, MemOp ot, int d, int s)
 {
     if (s != OR_TMP1)
         gen_op_mov_v_reg(s1, ot, s1->T1, s);
@@ -1872,7 +1872,7 @@ static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
     }
 }

-static void gen_shifti(DisasContext *s1, int op, TCGMemOp ot, int d, int c)
+static void gen_shifti(DisasContext *s1, int op, MemOp ot, int d, int c)
 {
     switch(op) {
     case OP_ROL:
@@ -2149,7 +2149,7 @@ static void gen_add_A0_ds_seg(DisasContext *s)
 /* generate modrm memory load or store of 'reg'. TMP0 is used if reg ==
    OR_TMP0 */
 static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
-                           TCGMemOp ot, int reg, int is_store)
+                           MemOp ot, int reg, int is_store)
 {
     int mod, rm;

@@ -2179,7 +2179,7 @@ static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
     }
 }

-static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
+static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot)
 {
     uint32_t ret;

@@ -2202,7 +2202,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     return ret;
 }

-static inline int insn_const_size(TCGMemOp ot)
+static inline int insn_const_size(MemOp ot)
 {
     if (ot <= MO_32) {
         return 1 << ot;
@@ -2266,7 +2266,7 @@ static inline void gen_jcc(DisasContext *s, int b,
     }
 }

-static void gen_cmovcc1(CPUX86State *env, DisasContext *s, TCGMemOp ot, int b,
+static void gen_cmovcc1(CPUX86State *env, DisasContext *s, MemOp ot, int b,
                         int modrm, int reg)
 {
     CCPrepare cc;
@@ -2363,8 +2363,8 @@ static inline void gen_stack_update(DisasContext *s, int addend)
 /* Generate a push. It depends on ss32, addseg and dflag.  */
 static void gen_push_v(DisasContext *s, TCGv val)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);
     int size = 1 << d_ot;
     TCGv new_esp = s->A0;

@@ -2383,9 +2383,9 @@ static void gen_push_v(DisasContext *s, TCGv val)
 }

 /* two step pop is necessary for precise exceptions */
-static TCGMemOp gen_pop_T0(DisasContext *s)
+static MemOp gen_pop_T0(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp d_ot = mo_pushpop(s, s->dflag);

     gen_lea_v_seg(s, mo_stacksize(s), cpu_regs[R_ESP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -2393,7 +2393,7 @@ static TCGMemOp gen_pop_T0(DisasContext *s)
     return d_ot;
 }

-static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)
+static inline void gen_pop_update(DisasContext *s, MemOp ot)
 {
     gen_stack_update(s, 1 << ot);
 }
@@ -2405,8 +2405,8 @@ static inline void gen_stack_A0(DisasContext *s)

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2421,8 +2421,8 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2442,8 +2442,8 @@ static void gen_popa(DisasContext *s)

 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -2482,8 +2482,8 @@ static void gen_enter(DisasContext *s, int esp_addend, int level)

 static void gen_leave(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);

     gen_lea_v_seg(s, a_ot, cpu_regs[R_EBP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -3045,7 +3045,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     SSEFunc_0_eppi sse_fn_eppi;
     SSEFunc_0_ppi sse_fn_ppi;
     SSEFunc_0_eppt sse_fn_eppt;
-    TCGMemOp ot;
+    MemOp ot;

     b &= 0xff;
     if (s->prefix & PREFIX_DATA)
@@ -4488,7 +4488,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     CPUX86State *env = cpu->env_ptr;
     int b, prefixes;
     int shift;
-    TCGMemOp ot, aflag, dflag;
+    MemOp ot, aflag, dflag;
     int modrm, reg, rm, mod, op, opreg, val;
     target_ulong next_eip, tval;
     int rex_w, rex_r;
@@ -5567,8 +5567,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1be: /* movsbS Gv, Eb */
     case 0x1bf: /* movswS Gv, Eb */
         {
-            TCGMemOp d_ot;
-            TCGMemOp s_ot;
+            MemOp d_ot;
+            MemOp s_ot;

             /* d_ot is the size of destination */
             d_ot = dflag;
diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index 60bcfb7..24c1dd3 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -2414,7 +2414,7 @@ DISAS_INSN(cas)
     uint16_t ext;
     TCGv load;
     TCGv cmp;
-    TCGMemOp opc;
+    MemOp opc;

     switch ((insn >> 9) & 3) {
     case 1:
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 9ce65f3..41d1b8b 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -919,7 +919,7 @@ static void dec_load(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
@@ -1035,7 +1035,7 @@ static void dec_store(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
diff --git a/target/mips/translate.c b/target/mips/translate.c
index ca62800..59b5d85 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -2526,7 +2526,7 @@ typedef struct DisasContext {
     int32_t CP0_Config5;
     /* Routine used to access memory */
     int mem_idx;
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
     uint32_t hflags, saved_hflags;
     target_ulong btarget;
     bool ulri;
@@ -3706,7 +3706,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,

 /* Store conditional */
 static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset,
-                        TCGMemOp tcg_mo, bool eva)
+                        MemOp tcg_mo, bool eva)
 {
     TCGv addr, t0, val;
     TCGLabel *l1 = gen_new_label();
@@ -4546,7 +4546,7 @@ static void gen_HILO(DisasContext *ctx, uint32_t opc, int acc, int reg)
 }

 static inline void gen_r6_ld(target_long addr, int reg, int memidx,
-                             TCGMemOp memop)
+                             MemOp memop)
 {
     TCGv t0 = tcg_const_tl(addr);
     tcg_gen_qemu_ld_tl(t0, t0, memidx, memop);
@@ -21828,7 +21828,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
                              extract32(ctx->opcode, 0, 8);
                     TCGv va = tcg_temp_new();
                     TCGv t1 = tcg_temp_new();
-                    TCGMemOp memop = (extract32(ctx->opcode, 8, 3)) ==
+                    MemOp memop = (extract32(ctx->opcode, 8, 3)) ==
                                       NM_P_LS_UAWM ? MO_UNALN : 0;

                     count = (count == 0) ? 8 : count;
diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index 4360ce4..b189c50 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -681,7 +681,7 @@ static bool trans_l_lwa(DisasContext *dc, arg_load *a)
     return true;
 }

-static void do_load(DisasContext *dc, arg_load *a, TCGMemOp mop)
+static void do_load(DisasContext *dc, arg_load *a, MemOp mop)
 {
     TCGv ea;

@@ -763,7 +763,7 @@ static bool trans_l_swa(DisasContext *dc, arg_store *a)
     return true;
 }

-static void do_store(DisasContext *dc, arg_store *a, TCGMemOp mop)
+static void do_store(DisasContext *dc, arg_store *a, MemOp mop)
 {
     TCGv t0 = tcg_temp_new();
     tcg_gen_addi_tl(t0, cpu_R[a->a], a->i);
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 4a5de28..31800ed 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -162,7 +162,7 @@ struct DisasContext {
     int mem_idx;
     int access_type;
     /* Translation flags */
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
 #if defined(TARGET_PPC64)
     bool sf_mode;
     bool has_cfar;
@@ -3142,7 +3142,7 @@ static void gen_isync(DisasContext *ctx)

 #define MEMOP_GET_SIZE(x)  (1 << ((x) & MO_SIZE))

-static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
+static void gen_load_locked(DisasContext *ctx, MemOp memop)
 {
     TCGv gpr = cpu_gpr[rD(ctx->opcode)];
     TCGv t0 = tcg_temp_new();
@@ -3167,7 +3167,7 @@ LARX(lbarx, DEF_MEMOP(MO_UB))
 LARX(lharx, DEF_MEMOP(MO_UW))
 LARX(lwarx, DEF_MEMOP(MO_UL))

-static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
+static void gen_fetch_inc_conditional(DisasContext *ctx, MemOp memop,
                                       TCGv EA, TCGCond cond, int addend)
 {
     TCGv t = tcg_temp_new();
@@ -3193,7 +3193,7 @@ static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
     tcg_temp_free(u);
 }

-static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_ld_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3306,7 +3306,7 @@ static void gen_ldat(DisasContext *ctx)
 }
 #endif

-static void gen_st_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_st_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3389,7 +3389,7 @@ static void gen_stdat(DisasContext *ctx)
 }
 #endif

-static void gen_conditional_store(DisasContext *ctx, TCGMemOp memop)
+static void gen_conditional_store(DisasContext *ctx, MemOp memop)
 {
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
diff --git a/target/riscv/insn_trans/trans_rva.inc.c b/target/riscv/insn_trans/trans_rva.inc.c
index fadd888..be8a9f0 100644
--- a/target/riscv/insn_trans/trans_rva.inc.c
+++ b/target/riscv/insn_trans/trans_rva.inc.c
@@ -18,7 +18,7 @@
  * this program.  If not, see <http://www.gnu.org/licenses/>.
  */

-static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     /* Put addr in load_res, data in load_val.  */
@@ -37,7 +37,7 @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
     return true;
 }

-static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
@@ -82,8 +82,8 @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
 }

 static bool gen_amo(DisasContext *ctx, arg_atomic *a,
-                    void(*func)(TCGv, TCGv, TCGv, TCGArg, TCGMemOp),
-                    TCGMemOp mop)
+                    void(*func)(TCGv, TCGv, TCGv, TCGArg, MemOp),
+                    MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
diff --git a/target/riscv/insn_trans/trans_rvi.inc.c b/target/riscv/insn_trans/trans_rvi.inc.c
index ea64731..cf440d1 100644
--- a/target/riscv/insn_trans/trans_rvi.inc.c
+++ b/target/riscv/insn_trans/trans_rvi.inc.c
@@ -135,7 +135,7 @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
     return gen_branch(ctx, a, TCG_COND_GEU);
 }

-static bool gen_load(DisasContext *ctx, arg_lb *a, TCGMemOp memop)
+static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv t1 = tcg_temp_new();
@@ -174,7 +174,7 @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
     return gen_load(ctx, a, MO_TEUW);
 }

-static bool gen_store(DisasContext *ctx, arg_sb *a, TCGMemOp memop)
+static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv dat = tcg_temp_new();
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index ac0d8b6..2927247 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -152,7 +152,7 @@ static inline int vec_full_reg_offset(uint8_t reg)
     return offsetof(CPUS390XState, vregs[reg][0]);
 }

-static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
+static inline int vec_reg_offset(uint8_t reg, uint8_t enr, MemOp es)
 {
     /* Convert element size (es) - e.g. MO_8 - to bytes */
     const uint8_t bytes = 1 << es;
@@ -2262,7 +2262,7 @@ static DisasJumpType op_csst(DisasContext *s, DisasOps *o)
 #ifndef CONFIG_USER_ONLY
 static DisasJumpType op_csp(DisasContext *s, DisasOps *o)
 {
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;
     TCGv_i64 addr, old, cc;
     TCGLabel *lab = gen_new_label();

@@ -3228,7 +3228,7 @@ static DisasJumpType op_lm64(DisasContext *s, DisasOps *o)
 static DisasJumpType op_lpd(DisasContext *s, DisasOps *o)
 {
     TCGv_i64 a1, a2;
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;

     /* In a parallel context, stop the world and single step.  */
     if (tb_cflags(s->base.tb) & CF_PARALLEL) {
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf8..4c56bbb 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -57,13 +57,13 @@
 #define FPF_LONG        3
 #define FPF_EXT         4

-static inline bool valid_vec_element(uint8_t enr, TCGMemOp es)
+static inline bool valid_vec_element(uint8_t enr, MemOp es)
 {
     return !(enr & ~(NUM_VEC_ELEMENTS(es) - 1));
 }

 static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -96,7 +96,7 @@ static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
 }

 static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -123,7 +123,7 @@ static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
 }

 static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -146,7 +146,7 @@ static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
 }

 static void write_vec_element_i32(TCGv_i32 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index 091bab5..bef9ce6 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -2019,7 +2019,7 @@ static inline void gen_ne_fop_QD(DisasContext *dc, int rd, int rs,
 }

 static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
-                     TCGv addr, int mmu_idx, TCGMemOp memop)
+                     TCGv addr, int mmu_idx, MemOp memop)
 {
     gen_address_mask(dc, addr);
     tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
@@ -2050,10 +2050,10 @@ typedef struct {
     ASIType type;
     int asi;
     int mem_idx;
-    TCGMemOp memop;
+    MemOp memop;
 } DisasASI;

-static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
+static DisasASI get_asi(DisasContext *dc, int insn, MemOp memop)
 {
     int asi = GET_FIELD(insn, 19, 26);
     ASIType type = GET_ASI_HELPER;
@@ -2267,7 +2267,7 @@ static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
 }

 static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2305,7 +2305,7 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
 }

 static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2511,7 +2511,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for lddfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

@@ -2625,7 +2625,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for stdfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

diff --git a/target/tilegx/translate.c b/target/tilegx/translate.c
index c46a4ab..68dd4aa 100644
--- a/target/tilegx/translate.c
+++ b/target/tilegx/translate.c
@@ -290,7 +290,7 @@ static void gen_cmul2(TCGv tdest, TCGv tsrca, TCGv tsrcb, int sh, int rd)
 }

 static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
-                              unsigned srcb, TCGMemOp memop, const char *name)
+                              unsigned srcb, MemOp memop, const char *name)
 {
     if (dest) {
         return TILEGX_EXCP_OPCODE_UNKNOWN;
@@ -305,7 +305,7 @@ static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
 }

 static TileExcp gen_st_add_opcode(DisasContext *dc, unsigned srca, unsigned srcb,
-                                  int imm, TCGMemOp memop, const char *name)
+                                  int imm, MemOp memop, const char *name)
 {
     TCGv tsrca = load_gr(dc, srca);
     TCGv tsrcb = load_gr(dc, srcb);
@@ -496,7 +496,7 @@ static TileExcp gen_rr_opcode(DisasContext *dc, unsigned opext,
 {
     TCGv tdest, tsrca;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     TileExcp ret = TILEGX_EXCP_NONE;
     bool prefetch_nofault = false;

@@ -1478,7 +1478,7 @@ static TileExcp gen_rri_opcode(DisasContext *dc, unsigned opext,
     TCGv tsrca = load_gr(dc, srca);
     bool prefetch_nofault = false;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     int i2, i3;
     TCGv t0;

@@ -2106,7 +2106,7 @@ static TileExcp decode_y2(DisasContext *dc, tilegx_bundle_bits bundle)
     unsigned srca = get_SrcA_Y2(bundle);
     unsigned srcbdest = get_SrcBDest_Y2(bundle);
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     bool prefetch_nofault = false;

     switch (OEY2(opc, mode)) {
diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index dc2a65f..87a5f50 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -227,7 +227,7 @@ static inline void generate_trap(DisasContext *ctx, int class, int tin);
 /* Functions for load/save to/from memory */

 static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -236,7 +236,7 @@ static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
 }

 static inline void gen_offset_st(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -284,7 +284,7 @@ static void gen_offset_ld_2regs(TCGv rh, TCGv rl, TCGv base, int16_t con,
 }

 static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
@@ -294,7 +294,7 @@ static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
 }

 static void gen_ld_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
diff --git a/tcg/README b/tcg/README
index 21fcdf7..b4382fa 100644
--- a/tcg/README
+++ b/tcg/README
@@ -512,7 +512,7 @@ Both t0 and t1 may be split into little-endian ordered pairs of registers
 if dealing with 64-bit quantities on a 32-bit host.

 The memidx selects the qemu tlb index to use (e.g. user or kernel access).
-The flags are the TCGMemOp bits, selecting the sign, width, and endianness
+The flags are the MemOp bits, selecting the sign, width, and endianness
 of the memory access.

 For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0713448..3f92101 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -1423,7 +1423,7 @@ static inline void tcg_out_rev16(TCGContext *s, TCGReg rd, TCGReg rn)
     tcg_out_insn(s, 3507, REV16, TCG_TYPE_I32, rd, rn);
 }

-static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
+static inline void tcg_out_sxt(TCGContext *s, TCGType ext, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes SXTB, SXTH, SXTW, of SBFM Xd, Xn, #0, #7|15|31 */
@@ -1431,7 +1431,7 @@ static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
     tcg_out_sbfm(s, ext, rd, rn, 0, bits);
 }

-static inline void tcg_out_uxt(TCGContext *s, TCGMemOp s_bits,
+static inline void tcg_out_uxt(TCGContext *s, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes UXTB, UXTH of UBFM Wd, Wn, #0, #7|15 */
@@ -1580,8 +1580,8 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1605,8 +1605,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1649,7 +1649,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
    slow path for the failure case, which will be patched later when finalizing
    the slow path. Generated code returns the host addend in X1,
    clobbers X0,X2,X3,TMP. */
-static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
+static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                              tcg_insn_unit **label_ptr, int mem_index,
                              bool is_read)
 {
@@ -1709,11 +1709,11 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,

 #endif /* CONFIG_SOFTMMU */

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SSIZE) {
     case MO_UB:
@@ -1765,11 +1765,11 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SIZE) {
     case MO_8:
@@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi, TCGType ext)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
@@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index ece88dc..94d80d7 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1233,7 +1233,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
    containing the addend of the tlb entry.  Clobbers R0, R1, R2, TMP.  */

 static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                               TCGMemOp opc, int mem_index, bool is_load)
+                               MemOp opc, int mem_index, bool is_load)
 {
     int cmp_off = (is_load ? offsetof(CPUTLBEntry, addr_read)
                    : offsetof(CPUTLBEntry, addr_write));
@@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     void *func;

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
@@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1453,11 +1453,11 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 }
 #endif /* SOFTMMU */

-static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_index(TCGContext *s, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1514,11 +1514,11 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1577,7 +1577,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
@@ -1614,11 +1614,11 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 #endif
 }

-static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
+static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1659,11 +1659,11 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1708,7 +1708,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 6ddeebf..9d8ed97 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1697,7 +1697,7 @@ static void * const qemu_st_helpers[16] = {
    First argument register is clobbered.  */

 static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                                    int mem_index, TCGMemOp opc,
+                                    int mem_index, MemOp opc,
                                     tcg_insn_unit **label_ptr, int which)
 {
     const TCGReg r0 = TCG_REG_L0;
@@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg data_reg;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
@@ -1895,8 +1895,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     TCGReg retaddr;

@@ -1995,10 +1995,10 @@ static inline int setup_guest_base_seg(void)

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, bool is64, TCGMemOp memop)
+                                   int seg, bool is64, MemOp memop)
 {
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int rexw = is64 * P_REXW;
     int movop = OPC_MOVL_GvEv;

@@ -2103,7 +2103,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
@@ -2137,15 +2137,15 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, TCGMemOp memop)
+                                   int seg, MemOp memop)
 {
     /* ??? Ideally we wouldn't need a scratch register.  For user-only,
        we could perform the bswap twice to restore the original value
        instead of moving to the scratch.  But as it is, the L constraint
        means that TCG_REG_L0 is definitely free here.  */
     const TCGReg scratch = TCG_REG_L0;
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int movop = OPC_MOVL_EvGv;

     if (have_movbe && real_bswap) {
@@ -2221,7 +2221,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 41bff32..5442167 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit *label_ptr[2], bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     int mem_index = get_mmuidx(oi);
@@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg v0;
     int i;

@@ -1363,8 +1363,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     int i;

     /* resolve label address */
@@ -1413,7 +1413,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
     case MO_UB:
@@ -1521,7 +1521,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
@@ -1558,7 +1558,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
     /* Don't clutter the code below with checks to avoid bswapping ZERO.  */
     if ((lo | hi) == 0) {
@@ -1624,7 +1624,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
diff --git a/tcg/optimize.c b/tcg/optimize.c
index d2424de..a89ffda 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s)
         CASE_OP_32_64(qemu_ld):
             {
                 TCGMemOpIdx oi = op->args[nb_oargs + nb_iargs];
-                TCGMemOp mop = get_memop(oi);
+                MemOp mop = get_memop(oi);
                 if (!(mop & MO_SIGN)) {
                     mask = (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1;
                 }
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 852b894..815edac 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1506,7 +1506,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -32768);
    in CR7, loads the addend of the TLB into R3, and returns the register
    containing the guest address (zero-extended into R4).  Clobbers R0 and R2. */

-static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, MemOp opc,
                                TCGReg addrlo, TCGReg addrhi,
                                int mem_index, bool is_read)
 {
@@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1680,8 +1680,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1744,7 +1744,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
@@ -1819,7 +1819,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 3e76bf5..7018509 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit **label_ptr, bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     tcg_target_long compare_mask;
@@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1077,8 +1077,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1121,9 +1121,9 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif /* CONFIG_SOFTMMU */

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1172,7 +1172,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
@@ -1208,9 +1208,9 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1243,7 +1243,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index fe42939..8aaa4ce 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1430,7 +1430,7 @@ static void tcg_out_call(TCGContext *s, tcg_insn_unit *dest)
     }
 }

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
@@ -1489,7 +1489,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SIZE | MO_BSWAP)) {
@@ -1544,7 +1544,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 19));

 /* Load and compare a TLB entry, leaving the flags set.  Loads the TLB
   addend into R2.  Returns a register with the sanitized guest address.  */
-static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                                int mem_index, bool is_ld)
 {
     unsigned s_bits = opc & MO_SIZE;
@@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGReg *addr_reg,
 static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
@@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 10b1cea..d7986cd 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -1081,7 +1081,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12));
    is in the returned register, maybe %o0.  The TLB addend is in %o1.  */

 static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index,
-                               TCGMemOp opc, int which)
+                               MemOp opc, int which)
 {
     int fast_off = TLB_MASK_TABLE_OFS(mem_index);
     int mask_off = fast_off + offsetof(CPUTLBDescFast, mask);
@@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] = {
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi, bool is_64)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
@@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 587d092..e87c327 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2714,7 +2714,7 @@ void tcg_gen_lookup_and_goto_ptr(void)
     }
 }

-static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
+static inline MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
 {
     /* Trigger the asserts within as early as possible.  */
     (void)get_alignment_bits(op);
@@ -2743,7 +2743,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
 }

 static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2758,7 +2758,7 @@ static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
 }

 static void gen_ldst_i64(TCGOpcode opc, TCGv_i64 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2788,9 +2788,9 @@ static void tcg_gen_req_mo(TCGBar type)
     }
 }

-void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     memop = tcg_canonicalize_memop(memop, 0, 0);
@@ -2825,7 +2825,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i32 swap = NULL;

@@ -2858,9 +2858,9 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
         tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
@@ -2911,7 +2911,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i64 swap = NULL;

@@ -2953,7 +2953,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
+static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -2974,7 +2974,7 @@ static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
     }
 }

-static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, TCGMemOp opc)
+static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -3034,7 +3034,7 @@ static void * const table_cmpxchg[16] = {
 };

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
-                                TCGv_i32 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i32 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 0, 0);

@@ -3078,7 +3078,7 @@ void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
 }

 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
-                                TCGv_i64 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i64 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3142,7 +3142,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
 }

 static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
 {
     TCGv_i32 t1 = tcg_temp_new_i32();
@@ -3160,7 +3160,7 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     gen_atomic_op_i32 gen;

@@ -3185,7 +3185,7 @@ static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64))
 {
     TCGv_i64 t1 = tcg_temp_new_i64();
@@ -3203,7 +3203,7 @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 }

 static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3257,7 +3257,7 @@ static void * const table_##NAME[16] = {                                \
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
 void tcg_gen_atomic_##NAME##_i32                                        \
-    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i32(ret, addr, val, idx, memop, table_##NAME);     \
@@ -3267,7 +3267,7 @@ void tcg_gen_atomic_##NAME##_i32                                        \
     }                                                                   \
 }                                                                       \
 void tcg_gen_atomic_##NAME##_i64                                        \
-    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i64(ret, addr, val, idx, memop, table_##NAME);     \
diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
index 2d4dd5c..e9cf172 100644
--- a/tcg/tcg-op.h
+++ b/tcg/tcg-op.h
@@ -851,10 +851,10 @@ void tcg_gen_lookup_and_goto_ptr(void);
 #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i64
 #endif

-void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
+void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, MemOp);

 static inline void tcg_gen_qemu_ld8u(TCGv ret, TCGv addr, int mem_index)
 {
@@ -912,46 +912,46 @@ static inline void tcg_gen_qemu_st64(TCGv_i64 arg, TCGv addr, int mem_index)
 }

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGv_i32,
-                                TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGv_i64,
-                                TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
+
+void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);

 void tcg_gen_mov_vec(TCGv_vec, TCGv_vec);
 void tcg_gen_dup_i32_vec(unsigned vece, TCGv_vec, TCGv_i32);
diff --git a/tcg/tcg.c b/tcg/tcg.c
index be2c33c..aa9931f 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
             case INDEX_op_qemu_st_i64:
                 {
                     TCGMemOpIdx oi = op->args[k++];
-                    TCGMemOp op = get_memop(oi);
+                    MemOp op = get_memop(oi);
                     unsigned ix = get_mmuidx(oi);

                     if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) {
diff --git a/tcg/tcg.h b/tcg/tcg.h
index b411e17..a37181c 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -26,6 +26,7 @@
 #define TCG_H

 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/tb-context.h"
 #include "qemu/bitops.h"
 #include "qemu/queue.h"
@@ -309,101 +310,13 @@ typedef enum TCGType {
 #endif
 } TCGType;

-/* Constants for qemu_ld and qemu_st for the Memory Operation field.  */
-typedef enum TCGMemOp {
-    MO_8     = 0,
-    MO_16    = 1,
-    MO_32    = 2,
-    MO_64    = 3,
-    MO_SIZE  = 3,   /* Mask for the above.  */
-
-    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
-
-    MO_BSWAP = 8,   /* Host reverse endian.  */
-#ifdef HOST_WORDS_BIGENDIAN
-    MO_LE    = MO_BSWAP,
-    MO_BE    = 0,
-#else
-    MO_LE    = 0,
-    MO_BE    = MO_BSWAP,
-#endif
-#ifdef TARGET_WORDS_BIGENDIAN
-    MO_TE    = MO_BE,
-#else
-    MO_TE    = MO_LE,
-#endif
-
-    /* MO_UNALN accesses are never checked for alignment.
-     * MO_ALIGN accesses will result in a call to the CPU's
-     * do_unaligned_access hook if the guest address is not aligned.
-     * The default depends on whether the target CPU defines ALIGNED_ONLY.
-     *
-     * Some architectures (e.g. ARMv8) need the address which is aligned
-     * to a size more than the size of the memory access.
-     * Some architectures (e.g. SPARCv9) need an address which is aligned,
-     * but less strictly than the natural alignment.
-     *
-     * MO_ALIGN supposes the alignment size is the size of a memory access.
-     *
-     * There are three options:
-     * - unaligned access permitted (MO_UNALN).
-     * - an alignment to the size of an access (MO_ALIGN);
-     * - an alignment to a specified size, which may be more or less than
-     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
-     */
-    MO_ASHIFT = 4,
-    MO_AMASK = 7 << MO_ASHIFT,
-#ifdef ALIGNED_ONLY
-    MO_ALIGN = 0,
-    MO_UNALN = MO_AMASK,
-#else
-    MO_ALIGN = MO_AMASK,
-    MO_UNALN = 0,
-#endif
-    MO_ALIGN_2  = 1 << MO_ASHIFT,
-    MO_ALIGN_4  = 2 << MO_ASHIFT,
-    MO_ALIGN_8  = 3 << MO_ASHIFT,
-    MO_ALIGN_16 = 4 << MO_ASHIFT,
-    MO_ALIGN_32 = 5 << MO_ASHIFT,
-    MO_ALIGN_64 = 6 << MO_ASHIFT,
-
-    /* Combinations of the above, for ease of use.  */
-    MO_UB    = MO_8,
-    MO_UW    = MO_16,
-    MO_UL    = MO_32,
-    MO_SB    = MO_SIGN | MO_8,
-    MO_SW    = MO_SIGN | MO_16,
-    MO_SL    = MO_SIGN | MO_32,
-    MO_Q     = MO_64,
-
-    MO_LEUW  = MO_LE | MO_UW,
-    MO_LEUL  = MO_LE | MO_UL,
-    MO_LESW  = MO_LE | MO_SW,
-    MO_LESL  = MO_LE | MO_SL,
-    MO_LEQ   = MO_LE | MO_Q,
-
-    MO_BEUW  = MO_BE | MO_UW,
-    MO_BEUL  = MO_BE | MO_UL,
-    MO_BESW  = MO_BE | MO_SW,
-    MO_BESL  = MO_BE | MO_SL,
-    MO_BEQ   = MO_BE | MO_Q,
-
-    MO_TEUW  = MO_TE | MO_UW,
-    MO_TEUL  = MO_TE | MO_UL,
-    MO_TESW  = MO_TE | MO_SW,
-    MO_TESL  = MO_TE | MO_SL,
-    MO_TEQ   = MO_TE | MO_Q,
-
-    MO_SSIZE = MO_SIZE | MO_SIGN,
-} TCGMemOp;
-
 /**
  * get_alignment_bits
- * @memop: TCGMemOp value
+ * @memop: MemOp value
  *
  * Extract the alignment size from the memop.
  */
-static inline unsigned get_alignment_bits(TCGMemOp memop)
+static inline unsigned get_alignment_bits(MemOp memop)
 {
     unsigned a = memop & MO_AMASK;

@@ -1184,7 +1097,7 @@ static inline size_t tcg_current_code_size(TCGContext *s)
     return tcg_ptr_byte_diff(s->code_ptr, s->code_buf);
 }

-/* Combine the TCGMemOp and mmu_idx parameters into a single value.  */
+/* Combine the MemOp and mmu_idx parameters into a single value.  */
 typedef uint32_t TCGMemOpIdx;

 /**
@@ -1194,7 +1107,7 @@ typedef uint32_t TCGMemOpIdx;
  *
  * Encode these values into a single parameter.
  */
-static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
+static inline TCGMemOpIdx make_memop_idx(MemOp op, unsigned idx)
 {
     tcg_debug_assert(idx <= 15);
     return (op << 4) | idx;
@@ -1206,7 +1119,7 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
  *
  * Extract the memory operation from the combined value.
  */
-static inline TCGMemOp get_memop(TCGMemOpIdx oi)
+static inline MemOp get_memop(TCGMemOpIdx oi)
 {
     return oi >> 4;
 }
diff --git a/trace/mem-internal.h b/trace/mem-internal.h
index f6efaf6..3444fbc 100644
--- a/trace/mem-internal.h
+++ b/trace/mem-internal.h
@@ -16,7 +16,7 @@
 #define TRACE_MEM_ST (1ULL << 5)    /* store (y/n) */

 static inline uint8_t trace_mem_build_info(
-    int size_shift, bool sign_extend, TCGMemOp endianness, bool store)
+    int size_shift, bool sign_extend, MemOp endianness, bool store)
 {
     uint8_t res;

@@ -33,7 +33,7 @@ static inline uint8_t trace_mem_build_info(
     return res;
 }

-static inline uint8_t trace_mem_get_info(TCGMemOp op, bool store)
+static inline uint8_t trace_mem_get_info(MemOp op, bool store)
 {
     return trace_mem_build_info(op & MO_SIZE, !!(op & MO_SIGN),
                                 op & MO_BSWAP, store);
diff --git a/trace/mem.h b/trace/mem.h
index 2b58196..8cf213d 100644
--- a/trace/mem.h
+++ b/trace/mem.h
@@ -18,7 +18,7 @@
  *
  * Return a value for the 'info' argument in guest memory access traces.
  */
-static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
+static uint8_t trace_mem_get_info(MemOp op, bool store);

 /**
  * trace_mem_build_info:
@@ -26,7 +26,7 @@ static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
  * Return a value for the 'info' argument in guest memory access traces.
  */
 static uint8_t trace_mem_build_info(int size_shift, bool sign_extend,
-                                    TCGMemOp endianness, bool store);
+                                    MemOp endianness, bool store);


 #include "trace/mem-internal.h"
--
1.8.3.1





* [Qemu-riscv] [Qemu-devel] [PATCH v4 01/15] tcg: TCGMemOp is now accelerator independent MemOp
@ 2019-07-25  8:00         ` tony.nguyen
  0 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.
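
To make the bit layout concrete, here is a small sketch (not part of the
patch; describe_memop and its variable names are invented for illustration,
and it assumes "exec/memop.h" is on the include path): a MemOp decomposes
into independent size, sign and endianness fields, exactly as TCGMemOp did.

    #include <stdbool.h>
    #include <stdio.h>
    #include "exec/memop.h"

    /* Sketch only: pull apart the independent bit fields of a MemOp. */
    static void describe_memop(MemOp op)
    {
        unsigned size_bytes = 1u << (op & MO_SIZE);  /* 1, 2, 4 or 8 bytes */
        bool sign_extend = (op & MO_SIGN) != 0;      /* sign- vs zero-extend */
        bool byte_swap = (op & MO_BSWAP) != 0;       /* host-reverse endian */

        printf("%u-byte %s access, %s\n", size_bytes,
               sign_extend ? "sign-extending" : "zero-extending",
               byte_swap ? "byte-swapped" : "host-order");
    }

Assuming the familiar MO_* combinations carry over unchanged, e.g.
describe_memop(MO_LEUL) reports a 4-byte zero-extending access that is
byte-swapped only on a big-endian host, since MO_LE and MO_BE are encoded
relative to the host byte order.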

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 MAINTAINERS                             |   1 +
 accel/tcg/cputlb.c                      |   2 +-
 include/exec/memop.h                    | 109 ++++++++++++++++++++++++++
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 39 files changed, 420 insertions(+), 397 deletions(-)
 create mode 100644 include/exec/memop.h

diff --git a/MAINTAINERS b/MAINTAINERS
index cc9636b..3f148cd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1890,6 +1890,7 @@ M: Paolo Bonzini <pbonzini@redhat.com>
 S: Supported
 F: include/exec/ioport.h
 F: ioport.c
+F: include/exec/memop.h
 F: include/exec/memory.h
 F: include/exec/ram_addr.h
 F: memory.c
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bb9897b..523be4c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(tlbe);
-    TCGMemOp mop = get_memop(oi);
+    MemOp mop = get_memop(oi);
     int a_bits = get_alignment_bits(mop);
     int s_bits = mop & MO_SIZE;
     void *hostaddr;
diff --git a/include/exec/memop.h b/include/exec/memop.h
new file mode 100644
index 0000000..ac58066
--- /dev/null
+++ b/include/exec/memop.h
@@ -0,0 +1,109 @@
+/*
+ * Constants for memory operations
+ *
+ * Authors:
+ *  Richard Henderson <rth@twiddle.net>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MEMOP_H
+#define MEMOP_H
+
+typedef enum MemOp {
+    MO_8     = 0,
+    MO_16    = 1,
+    MO_32    = 2,
+    MO_64    = 3,
+    MO_SIZE  = 3,   /* Mask for the above.  */
+
+    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
+
+    MO_BSWAP = 8,   /* Host reverse endian.  */
+#ifdef HOST_WORDS_BIGENDIAN
+    MO_LE    = MO_BSWAP,
+    MO_BE    = 0,
+#else
+    MO_LE    = 0,
+    MO_BE    = MO_BSWAP,
+#endif
+#ifdef NEED_CPU_H
+#ifdef TARGET_WORDS_BIGENDIAN
+    MO_TE    = MO_BE,
+#else
+    MO_TE    = MO_LE,
+#endif
+#endif
+
+    /*
+     * MO_UNALN accesses are never checked for alignment.
+     * MO_ALIGN accesses will result in a call to the CPU's
+     * do_unaligned_access hook if the guest address is not aligned.
+     * The default depends on whether the target CPU defines ALIGNED_ONLY.
+     *
+     * Some architectures (e.g. ARMv8) need the address which is aligned
+     * to a size more than the size of the memory access.
+     * Some architectures (e.g. SPARCv9) need an address which is aligned,
+     * but less strictly than the natural alignment.
+     *
+     * MO_ALIGN supposes the alignment size is the size of a memory access.
+     *
+     * There are three options:
+     * - unaligned access permitted (MO_UNALN).
+     * - an alignment to the size of an access (MO_ALIGN);
+     * - an alignment to a specified size, which may be more or less than
+     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
+     */
+    MO_ASHIFT = 4,
+    MO_AMASK = 7 << MO_ASHIFT,
+#ifdef NEED_CPU_H
+#ifdef ALIGNED_ONLY
+    MO_ALIGN = 0,
+    MO_UNALN = MO_AMASK,
+#else
+    MO_ALIGN = MO_AMASK,
+    MO_UNALN = 0,
+#endif
+#endif
+    MO_ALIGN_2  = 1 << MO_ASHIFT,
+    MO_ALIGN_4  = 2 << MO_ASHIFT,
+    MO_ALIGN_8  = 3 << MO_ASHIFT,
+    MO_ALIGN_16 = 4 << MO_ASHIFT,
+    MO_ALIGN_32 = 5 << MO_ASHIFT,
+    MO_ALIGN_64 = 6 << MO_ASHIFT,
+
+    /* Combinations of the above, for ease of use.  */
+    MO_UB    = MO_8,
+    MO_UW    = MO_16,
+    MO_UL    = MO_32,
+    MO_SB    = MO_SIGN | MO_8,
+    MO_SW    = MO_SIGN | MO_16,
+    MO_SL    = MO_SIGN | MO_32,
+    MO_Q     = MO_64,
+
+    MO_LEUW  = MO_LE | MO_UW,
+    MO_LEUL  = MO_LE | MO_UL,
+    MO_LESW  = MO_LE | MO_SW,
+    MO_LESL  = MO_LE | MO_SL,
+    MO_LEQ   = MO_LE | MO_Q,
+
+    MO_BEUW  = MO_BE | MO_UW,
+    MO_BEUL  = MO_BE | MO_UL,
+    MO_BESW  = MO_BE | MO_SW,
+    MO_BESL  = MO_BE | MO_SL,
+    MO_BEQ   = MO_BE | MO_Q,
+
+#ifdef NEED_CPU_H
+    MO_TEUW  = MO_TE | MO_UW,
+    MO_TEUL  = MO_TE | MO_UL,
+    MO_TESW  = MO_TE | MO_SW,
+    MO_TESL  = MO_TE | MO_SL,
+    MO_TEQ   = MO_TE | MO_Q,
+#endif
+
+    MO_SSIZE = MO_SIZE | MO_SIGN,
+} MemOp;
+
+#endif
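
(Aside, not part of the patch: the alignment field above packs three
cases into bits 4-6 of the enum. A sketch of decoding it, assuming the
!ALIGNED_ONLY definitions; QEMU's real decoder is get_alignment_bits(),
visible in the accel/tcg/cputlb.c hunk above:

    #include "exec/memop.h"

    /* Alignment in bytes implied by a MemOp, per the enum definitions. */
    static unsigned memop_alignment_bytes(MemOp op)
    {
        unsigned a = (op & MO_AMASK) >> MO_ASHIFT;

        if (a == 0) {
            return 1;                     /* MO_UNALN: never checked */
        } else if (a == (MO_AMASK >> MO_ASHIFT)) {
            return 1u << (op & MO_SIZE);  /* MO_ALIGN: natural alignment */
        }
        return 1u << a;                   /* MO_ALIGN_2 .. MO_ALIGN_64 */
    }

So MO_TEUL | MO_ALIGN_16, as used by the hppa ldc hunk below, demands
16-byte alignment for a 4-byte access.)
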
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index 2c9cccf..d5d4888 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -403,7 +403,7 @@ static inline void gen_store_mem(DisasContext *ctx,

 static DisasJumpType gen_store_conditional(DisasContext *ctx, int ra, int rb,
                                            int32_t disp16, int mem_idx,
-                                           TCGMemOp op)
+                                           MemOp op)
 {
     TCGLabel *lab_fail, *lab_done;
     TCGv addr, val;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d323147..b6c07d6 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -85,7 +85,7 @@ typedef void NeonGenOneOpFn(TCGv_i64, TCGv_i64);
 typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
 typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
-typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
+typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);

 /* initialize TCG globals.  */
 void a64_translate_init(void)
@@ -455,7 +455,7 @@ TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
  * Dn, Sn, Hn or Bn).
  * (Note that this is not the same mapping as for A32; see cpu.h)
  */
-static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
+static inline int fp_reg_offset(DisasContext *s, int regno, MemOp size)
 {
     return vec_reg_offset(s, regno, 0, size);
 }
@@ -871,7 +871,7 @@ static void do_gpr_ld_memidx(DisasContext *s,
                              bool iss_valid, unsigned int iss_srt,
                              bool iss_sf, bool iss_ar)
 {
-    TCGMemOp memop = s->be_data + size;
+    MemOp memop = s->be_data + size;

     g_assert(size <= 3);

@@ -948,7 +948,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
     TCGv_i64 tmphi;

     if (size < 4) {
-        TCGMemOp memop = s->be_data + size;
+        MemOp memop = s->be_data + size;
         tmphi = tcg_const_i64(0);
         tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
     } else {
@@ -989,7 +989,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)

 /* Get value of an element within a vector register */
 static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
-                             int element, TCGMemOp memop)
+                             int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1021,7 +1021,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 }

 static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
-                                 int element, TCGMemOp memop)
+                                 int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1048,7 +1048,7 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,

 /* Set value of an element within a vector register */
 static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
-                              int element, TCGMemOp memop)
+                              int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1070,7 +1070,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 }

 static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
-                                  int destidx, int element, TCGMemOp memop)
+                                  int destidx, int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1090,7 +1090,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,

 /* Store from vector register to memory */
 static void do_vec_st(DisasContext *s, int srcidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -1102,7 +1102,7 @@ static void do_vec_st(DisasContext *s, int srcidx, int element,

 /* Load from memory to vector register */
 static void do_vec_ld(DisasContext *s, int destidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -2200,7 +2200,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i64 addr, int size, bool is_pair)
 {
     int idx = get_mem_index(s);
-    TCGMemOp memop = s->be_data;
+    MemOp memop = s->be_data;

     g_assert(size <= 3);
     if (is_pair) {
@@ -3286,7 +3286,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
     TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
-    TCGMemOp endian = s->be_data;
+    MemOp endian = s->be_data;

     int ebytes;   /* bytes per element */
     int elements; /* elements per vector */
@@ -5455,7 +5455,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
     unsigned int mos, type, rm, cond, rn, rd;
     TCGv_i64 t_true, t_false, t_zero;
     DisasCompare64 c;
-    TCGMemOp sz;
+    MemOp sz;

     mos = extract32(insn, 29, 3);
     type = extract32(insn, 22, 2);
@@ -6267,7 +6267,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
     int mos = extract32(insn, 29, 3);
     uint64_t imm;
     TCGv_i64 tcg_res;
-    TCGMemOp sz;
+    MemOp sz;

     if (mos || imm5) {
         unallocated_encoding(s);
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        MemOp msize = esize == 16 ? MO_16 : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -8022,7 +8022,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
     int shift = (2 * esize) - immhb;
     int elements = is_scalar ? 1 : (64 / esize);
     bool round = extract32(opcode, 0, 1);
-    TCGMemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
+    MemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn, tcg_rd, tcg_round;
     TCGv_i32 tcg_rd_narrowed;
     TCGv_i64 tcg_final;
@@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
             }
         };
         NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
-        TCGMemOp memop = scalar ? size : MO_32;
+        MemOp memop = scalar ? size : MO_32;
         int maxpass = scalar ? 1 : is_q ? 4 : 2;

         for (pass = 0; pass < maxpass; pass++) {
@@ -8225,7 +8225,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
     TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
     TCGv_i32 tcg_shift = NULL;

-    TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
+    MemOp mop = size | (is_signed ? MO_SIGN : 0);
     int pass;

     if (fracbits || size == MO_64) {
@@ -10004,7 +10004,7 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
     int dsize = is_q ? 128 : 64;
     int esize = 8 << size;
     int elements = dsize/esize;
-    TCGMemOp memop = size | (is_u ? 0 : MO_SIGN);
+    MemOp memop = size | (is_u ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn = new_tmp_a64(s);
     TCGv_i64 tcg_rd = new_tmp_a64(s);
     TCGv_i64 tcg_round;
@@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_passres;
-            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
+            MemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);

             int elt = pass + is_q * 2;

@@ -11827,7 +11827,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,

     if (size == 2) {
         /* 32 + 32 -> 64 op */
-        TCGMemOp memop = size + (u ? 0 : MO_SIGN);
+        MemOp memop = size + (u ? 0 : MO_SIGN);

         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
@@ -12849,7 +12849,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     switch (is_fp) {
     case 1: /* normal fp */
-        /* convert insn encoded size to TCGMemOp size */
+        /* convert insn encoded size to MemOp size */
         switch (size) {
         case 0: /* half-precision */
             size = MO_16;
@@ -12897,7 +12897,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         return;
     }

-    /* Given TCGMemOp size, adjust register and indexing.  */
+    /* Given MemOp size, adjust register and indexing.  */
     switch (size) {
     case MO_16:
         index = h << 2 | l << 1 | m;
@@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_res[2];
         int pass;
         bool satop = extract32(opcode, 0, 1);
-        TCGMemOp memop = MO_32;
+        MemOp memop = MO_32;

         if (satop || !u) {
             memop |= MO_SIGN;
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 9ab4087..f1246b7 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -64,7 +64,7 @@ static inline void assert_fp_access_checked(DisasContext *s)
  * the FP/vector register Qn.
  */
 static inline int vec_reg_offset(DisasContext *s, int regno,
-                                 int element, TCGMemOp size)
+                                 int element, MemOp size)
 {
     int element_size = 1 << size;
     int offs = element * element_size;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fa068b0..5d7edd0 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4567,7 +4567,7 @@ static bool trans_STR_pri(DisasContext *s, arg_rri *a)
  */

 /* The memory mode of the dtype.  */
-static const TCGMemOp dtype_mop[16] = {
+static const MemOp dtype_mop[16] = {
     MO_UB, MO_UB, MO_UB, MO_UB,
     MO_SL, MO_UW, MO_UW, MO_UW,
     MO_SW, MO_SW, MO_UL, MO_UL,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462..d116c8c 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -114,7 +114,7 @@ typedef enum ISSInfo {
 } ISSInfo;

 /* Save the syndrome information for a Data Abort */
-static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo)
+static void disas_set_da_iss(DisasContext *s, MemOp memop, ISSInfo issinfo)
 {
     uint32_t syn;
     int sas = memop & MO_SIZE;
@@ -1079,7 +1079,7 @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
  * that the address argument is TCGv_i32 rather than TCGv.
  */

-static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
+static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
 {
     TCGv addr = tcg_temp_new();
     tcg_gen_extu_i32_tl(addr, a32);
@@ -1092,7 +1092,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
 }

 static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1107,7 +1107,7 @@ static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
 }

 static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1160,7 +1160,7 @@ static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
 }

 static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);
     tcg_gen_qemu_ld_i64(val, addr, index, opc);
@@ -1175,7 +1175,7 @@ static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
 }

 static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);

@@ -1400,7 +1400,7 @@ neon_reg_offset (int reg, int n)
  * where 0 is the least significant end of the register.
  */
 static inline long
-neon_element_offset(int reg, int element, TCGMemOp size)
+neon_element_offset(int reg, int element, MemOp size)
 {
     int element_size = 1 << size;
     int ofs = element * element_size;
@@ -1422,7 +1422,7 @@ static TCGv_i32 neon_load_reg(int reg, int pass)
     return tmp;
 }

-static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1441,7 +1441,7 @@ static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
     }
 }

-static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1469,7 +1469,7 @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
     tcg_temp_free_i32(var);
 }

-static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
+static void neon_store_element(int reg, int ele, MemOp size, TCGv_i32 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -1488,7 +1488,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     }
 }

-static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
+static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -3558,7 +3558,7 @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int n;
     int vec_size;
     int mmu_idx;
-    TCGMemOp endian;
+    MemOp endian;
     TCGv_i32 addr;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -6867,7 +6867,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             } else if ((insn & 0x380) == 0) {
                 /* VDUP */
                 int element;
-                TCGMemOp size;
+                MemOp size;

                 if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
                     return 1;
@@ -7435,7 +7435,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i32 addr, int size)
 {
     TCGv_i32 tmp = tcg_temp_new_i32();
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     s->is_ldex = true;

@@ -7489,7 +7489,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     TCGv taddr;
     TCGLabel *done_label;
     TCGLabel *fail_label;
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     /* if (env->exclusive_addr == addr && env->exclusive_val == [addr]) {
          [addr] = {Rt};
@@ -8603,7 +8603,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                         */

                         TCGv taddr;
-                        TCGMemOp opc = s->be_data;
+                        MemOp opc = s->be_data;

                         rm = (insn) & 0xf;

diff --git a/target/arm/translate.h b/target/arm/translate.h
index a20f6e2..284c510 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -21,7 +21,7 @@ typedef struct DisasContext {
     int condexec_cond;
     int thumb;
     int sctlr_b;
-    TCGMemOp be_data;
+    MemOp be_data;
 #if !defined(CONFIG_USER_ONLY)
     int user;
 #endif
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 188fe68..ff4802a 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1500,7 +1500,7 @@ static void form_gva(DisasContext *ctx, TCGv_tl *pgva, TCGv_reg *pofs,
  */
 static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1518,7 +1518,7 @@ static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,

 static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1536,7 +1536,7 @@ static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,

 static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1554,7 +1554,7 @@ static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,

 static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1580,7 +1580,7 @@ static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,

 static bool do_load(DisasContext *ctx, unsigned rt, unsigned rb,
                     unsigned rx, int scale, target_sreg disp,
-                    unsigned sp, int modify, TCGMemOp mop)
+                    unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg dest;

@@ -1653,7 +1653,7 @@ static bool trans_fldd(DisasContext *ctx, arg_ldst *a)

 static bool do_store(DisasContext *ctx, unsigned rt, unsigned rb,
                      target_sreg disp, unsigned sp,
-                     int modify, TCGMemOp mop)
+                     int modify, MemOp mop)
 {
     nullify_over(ctx);
     do_store_reg(ctx, load_gpr(ctx, rt), rb, 0, 0, disp, sp, modify, mop);
@@ -2940,7 +2940,7 @@ static bool trans_st(DisasContext *ctx, arg_ldst *a)

 static bool trans_ldc(DisasContext *ctx, arg_ldst *a)
 {
-    TCGMemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
+    MemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
     TCGv_reg zero, dest, ofs;
     TCGv_tl addr;

diff --git a/target/i386/translate.c b/target/i386/translate.c
index 03150a8..def9867 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -87,8 +87,8 @@ typedef struct DisasContext {
     /* current insn context */
     int override; /* -1 if no override */
     int prefix;
-    TCGMemOp aflag;
-    TCGMemOp dflag;
+    MemOp aflag;
+    MemOp dflag;
     target_ulong pc_start;
     target_ulong pc; /* pc = eip + cs_base */
     /* current block context */
@@ -149,7 +149,7 @@ static void gen_eob(DisasContext *s);
 static void gen_jr(DisasContext *s, TCGv dest);
 static void gen_jmp(DisasContext *s, target_ulong eip);
 static void gen_jmp_tb(DisasContext *s, target_ulong eip, int tb_num);
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d);
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d);

 /* i386 arith/logic operations */
 enum {
@@ -320,7 +320,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 }

 /* Select the size of a push/pop operation.  */
-static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
+static inline MemOp mo_pushpop(DisasContext *s, MemOp ot)
 {
     if (CODE64(s)) {
         return ot == MO_16 ? MO_16 : MO_64;
@@ -330,13 +330,13 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 }

 /* Select the size of the stack pointer.  */
-static inline TCGMemOp mo_stacksize(DisasContext *s)
+static inline MemOp mo_stacksize(DisasContext *s)
 {
     return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
-static inline TCGMemOp mo_64_32(TCGMemOp ot)
+static inline MemOp mo_64_32(MemOp ot)
 {
 #ifdef TARGET_X86_64
     return ot == MO_64 ? MO_64 : MO_32;
@@ -347,19 +347,19 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)

 /* Select size 8 if lsb of B is clear, else OT.  Used for decoding
    byte vs word opcodes.  */
-static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
+static inline MemOp mo_b_d(int b, MemOp ot)
 {
     return b & 1 ? ot : MO_8;
 }

 /* Select size 8 if lsb of B is clear, else OT capped at 32.
    Used for decoding operand size of port opcodes.  */
-static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
+static inline MemOp mo_b_d32(int b, MemOp ot)
 {
     return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
 }

-static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
+static void gen_op_mov_reg_v(DisasContext *s, MemOp ot, int reg, TCGv t0)
 {
     switch(ot) {
     case MO_8:
@@ -388,7 +388,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 }

 static inline
-void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
+void gen_op_mov_v_reg(DisasContext *s, MemOp ot, TCGv t0, int reg)
 {
     if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
         tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
@@ -411,13 +411,13 @@ static inline void gen_op_jmp_v(TCGv dest)
 }

 static inline
-void gen_op_add_reg_im(DisasContext *s, TCGMemOp size, int reg, int32_t val)
+void gen_op_add_reg_im(DisasContext *s, MemOp size, int reg, int32_t val)
 {
     tcg_gen_addi_tl(s->tmp0, cpu_regs[reg], val);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
 }

-static inline void gen_op_add_reg_T0(DisasContext *s, TCGMemOp size, int reg)
+static inline void gen_op_add_reg_T0(DisasContext *s, MemOp size, int reg)
 {
     tcg_gen_add_tl(s->tmp0, cpu_regs[reg], s->T0);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
@@ -451,7 +451,7 @@ static inline void gen_jmp_im(DisasContext *s, target_ulong pc)
 /* Compute SEG:REG into A0.  SEG is selected from the override segment
    (OVR_SEG) and the default segment (DEF_SEG).  OVR_SEG may be -1 to
    indicate no override.  */
-static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
+static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
                           int def_seg, int ovr_seg)
 {
     switch (aflag) {
@@ -514,13 +514,13 @@ static inline void gen_string_movl_A0_EDI(DisasContext *s)
     gen_lea_v_seg(s, s->aflag, cpu_regs[R_EDI], R_ES, -1);
 }

-static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
+static inline void gen_op_movl_T0_Dshift(DisasContext *s, MemOp ot)
 {
     tcg_gen_ld32s_tl(s->T0, cpu_env, offsetof(CPUX86State, df));
     tcg_gen_shli_tl(s->T0, s->T0, ot);
 };

-static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
+static TCGv gen_ext_tl(TCGv dst, TCGv src, MemOp size, bool sign)
 {
     switch (size) {
     case MO_8:
@@ -551,18 +551,18 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
     }
 }

-static void gen_extu(TCGMemOp ot, TCGv reg)
+static void gen_extu(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, false);
 }

-static void gen_exts(TCGMemOp ot, TCGv reg)
+static void gen_exts(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, true);
 }

 static inline
-void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jnz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
@@ -570,14 +570,14 @@ void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
 }

 static inline
-void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
     tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1);
 }

-static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
+static void gen_helper_in_func(MemOp ot, TCGv v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -594,7 +594,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     }
 }

-static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
+static void gen_helper_out_func(MemOp ot, TCGv_i32 v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -611,7 +611,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     }
 }

-static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
+static void gen_check_io(DisasContext *s, MemOp ot, target_ulong cur_eip,
                          uint32_t svm_flags)
 {
     target_ulong next_eip;
@@ -644,7 +644,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     }
 }

-static inline void gen_movs(DisasContext *s, TCGMemOp ot)
+static inline void gen_movs(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -840,7 +840,7 @@ static CCPrepare gen_prepare_eflags_s(DisasContext *s, TCGv reg)
         return (CCPrepare) { .cond = TCG_COND_NEVER, .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, true);
             return (CCPrepare) { .cond = TCG_COND_LT, .reg = t0, .mask = -1 };
         }
@@ -885,7 +885,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
                              .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, false);
             return (CCPrepare) { .cond = TCG_COND_EQ, .reg = t0, .mask = -1 };
         }
@@ -897,7 +897,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
 static CCPrepare gen_prepare_cc(DisasContext *s, int b, TCGv reg)
 {
     int inv, jcc_op, cond;
-    TCGMemOp size;
+    MemOp size;
     CCPrepare cc;
     TCGv t0;

@@ -1075,7 +1075,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)
     return l2;
 }

-static inline void gen_stos(DisasContext *s, TCGMemOp ot)
+static inline void gen_stos(DisasContext *s, MemOp ot)
 {
     gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
     gen_string_movl_A0_EDI(s);
@@ -1084,7 +1084,7 @@ static inline void gen_stos(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_lods(DisasContext *s, TCGMemOp ot)
+static inline void gen_lods(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -1093,7 +1093,7 @@ static inline void gen_lods(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_ESI);
 }

-static inline void gen_scas(DisasContext *s, TCGMemOp ot)
+static inline void gen_scas(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1102,7 +1102,7 @@ static inline void gen_scas(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_cmps(DisasContext *s, TCGMemOp ot)
+static inline void gen_cmps(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1126,7 +1126,7 @@ static void gen_bpt_io(DisasContext *s, TCGv_i32 t_port, int ot)
 }


-static inline void gen_ins(DisasContext *s, TCGMemOp ot)
+static inline void gen_ins(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1148,7 +1148,7 @@ static inline void gen_ins(DisasContext *s, TCGMemOp ot)
     }
 }

-static inline void gen_outs(DisasContext *s, TCGMemOp ot)
+static inline void gen_outs(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1171,7 +1171,7 @@ static inline void gen_outs(DisasContext *s, TCGMemOp ot)
 /* same method as Valgrind : we generate jumps to current or next
    instruction */
 #define GEN_REPZ(op)                                                          \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,              \
                                  target_ulong cur_eip, target_ulong next_eip) \
 {                                                                             \
     TCGLabel *l2;                                                             \
@@ -1187,7 +1187,7 @@ static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
 }

 #define GEN_REPZ2(op)                                                         \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,              \
                                    target_ulong cur_eip,                      \
                                    target_ulong next_eip,                     \
                                    int nz)                                    \
@@ -1284,7 +1284,7 @@ static void gen_illegal_opcode(DisasContext *s)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d)
 {
     if (d != OR_TMP0) {
         if (s1->prefix & PREFIX_LOCK) {
@@ -1395,7 +1395,7 @@ static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
+static void gen_inc(DisasContext *s1, MemOp ot, int d, int c)
 {
     if (s1->prefix & PREFIX_LOCK) {
         if (d != OR_TMP0) {
@@ -1421,7 +1421,7 @@ static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
     set_cc_op(s1, (c > 0 ? CC_OP_INCB : CC_OP_DECB) + ot);
 }

-static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
+static void gen_shift_flags(DisasContext *s, MemOp ot, TCGv result,
                             TCGv shm1, TCGv count, bool is_right)
 {
     TCGv_i32 z32, s32, oldop;
@@ -1466,7 +1466,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shift_rm_T1(DisasContext *s, MemOp ot, int op1,
                             int is_right, int is_arith)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1502,7 +1502,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     gen_shift_flags(s, ot, s->T0, s->tmp0, s->T1, is_right);
 }

-static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_shift_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                             int is_right, int is_arith)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1542,7 +1542,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
     }
 }

-static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
+static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
     TCGv_i32 t0, t1;
@@ -1627,7 +1627,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_rot_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                           int is_right)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1705,7 +1705,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
 }

 /* XXX: add faster immediate = 1 case */
-static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
                            int is_right)
 {
     gen_compute_eflags(s);
@@ -1761,7 +1761,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 }

 /* XXX: add faster immediate case */
-static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shiftd_rm_T1(DisasContext *s, MemOp ot, int op1,
                              bool is_right, TCGv count_in)
 {
     target_ulong mask = (ot == MO_64 ? 63 : 31);
@@ -1842,7 +1842,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_temp_free(count);
 }

-static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
+static void gen_shift(DisasContext *s1, int op, MemOp ot, int d, int s)
 {
     if (s != OR_TMP1)
         gen_op_mov_v_reg(s1, ot, s1->T1, s);
@@ -1872,7 +1872,7 @@ static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
     }
 }

-static void gen_shifti(DisasContext *s1, int op, TCGMemOp ot, int d, int c)
+static void gen_shifti(DisasContext *s1, int op, MemOp ot, int d, int c)
 {
     switch(op) {
     case OP_ROL:
@@ -2149,7 +2149,7 @@ static void gen_add_A0_ds_seg(DisasContext *s)
 /* generate modrm memory load or store of 'reg'. TMP0 is used if reg ==
    OR_TMP0 */
 static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
-                           TCGMemOp ot, int reg, int is_store)
+                           MemOp ot, int reg, int is_store)
 {
     int mod, rm;

@@ -2179,7 +2179,7 @@ static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
     }
 }

-static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
+static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot)
 {
     uint32_t ret;

@@ -2202,7 +2202,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     return ret;
 }

-static inline int insn_const_size(TCGMemOp ot)
+static inline int insn_const_size(MemOp ot)
 {
     if (ot <= MO_32) {
         return 1 << ot;
@@ -2266,7 +2266,7 @@ static inline void gen_jcc(DisasContext *s, int b,
     }
 }

-static void gen_cmovcc1(CPUX86State *env, DisasContext *s, TCGMemOp ot, int b,
+static void gen_cmovcc1(CPUX86State *env, DisasContext *s, MemOp ot, int b,
                         int modrm, int reg)
 {
     CCPrepare cc;
@@ -2363,8 +2363,8 @@ static inline void gen_stack_update(DisasContext *s, int addend)
 /* Generate a push. It depends on ss32, addseg and dflag.  */
 static void gen_push_v(DisasContext *s, TCGv val)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);
     int size = 1 << d_ot;
     TCGv new_esp = s->A0;

@@ -2383,9 +2383,9 @@ static void gen_push_v(DisasContext *s, TCGv val)
 }

 /* two step pop is necessary for precise exceptions */
-static TCGMemOp gen_pop_T0(DisasContext *s)
+static MemOp gen_pop_T0(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp d_ot = mo_pushpop(s, s->dflag);

     gen_lea_v_seg(s, mo_stacksize(s), cpu_regs[R_ESP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -2393,7 +2393,7 @@ static TCGMemOp gen_pop_T0(DisasContext *s)
     return d_ot;
 }

-static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)
+static inline void gen_pop_update(DisasContext *s, MemOp ot)
 {
     gen_stack_update(s, 1 << ot);
 }
@@ -2405,8 +2405,8 @@ static inline void gen_stack_A0(DisasContext *s)

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2421,8 +2421,8 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2442,8 +2442,8 @@ static void gen_popa(DisasContext *s)

 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -2482,8 +2482,8 @@ static void gen_enter(DisasContext *s, int esp_addend, int level)

 static void gen_leave(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);

     gen_lea_v_seg(s, a_ot, cpu_regs[R_EBP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -3045,7 +3045,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     SSEFunc_0_eppi sse_fn_eppi;
     SSEFunc_0_ppi sse_fn_ppi;
     SSEFunc_0_eppt sse_fn_eppt;
-    TCGMemOp ot;
+    MemOp ot;

     b &= 0xff;
     if (s->prefix & PREFIX_DATA)
@@ -4488,7 +4488,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     CPUX86State *env = cpu->env_ptr;
     int b, prefixes;
     int shift;
-    TCGMemOp ot, aflag, dflag;
+    MemOp ot, aflag, dflag;
     int modrm, reg, rm, mod, op, opreg, val;
     target_ulong next_eip, tval;
     int rex_w, rex_r;
@@ -5567,8 +5567,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1be: /* movsbS Gv, Eb */
     case 0x1bf: /* movswS Gv, Eb */
         {
-            TCGMemOp d_ot;
-            TCGMemOp s_ot;
+            MemOp d_ot;
+            MemOp s_ot;

             /* d_ot is the size of destination */
             d_ot = dflag;
diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index 60bcfb7..24c1dd3 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -2414,7 +2414,7 @@ DISAS_INSN(cas)
     uint16_t ext;
     TCGv load;
     TCGv cmp;
-    TCGMemOp opc;
+    MemOp opc;

     switch ((insn >> 9) & 3) {
     case 1:
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 9ce65f3..41d1b8b 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -919,7 +919,7 @@ static void dec_load(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
@@ -1035,7 +1035,7 @@ static void dec_store(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
diff --git a/target/mips/translate.c b/target/mips/translate.c
index ca62800..59b5d85 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -2526,7 +2526,7 @@ typedef struct DisasContext {
     int32_t CP0_Config5;
     /* Routine used to access memory */
     int mem_idx;
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
     uint32_t hflags, saved_hflags;
     target_ulong btarget;
     bool ulri;
@@ -3706,7 +3706,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,

 /* Store conditional */
 static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset,
-                        TCGMemOp tcg_mo, bool eva)
+                        MemOp tcg_mo, bool eva)
 {
     TCGv addr, t0, val;
     TCGLabel *l1 = gen_new_label();
@@ -4546,7 +4546,7 @@ static void gen_HILO(DisasContext *ctx, uint32_t opc, int acc, int reg)
 }

 static inline void gen_r6_ld(target_long addr, int reg, int memidx,
-                             TCGMemOp memop)
+                             MemOp memop)
 {
     TCGv t0 = tcg_const_tl(addr);
     tcg_gen_qemu_ld_tl(t0, t0, memidx, memop);
@@ -21828,7 +21828,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
                              extract32(ctx->opcode, 0, 8);
                     TCGv va = tcg_temp_new();
                     TCGv t1 = tcg_temp_new();
-                    TCGMemOp memop = (extract32(ctx->opcode, 8, 3)) ==
+                    MemOp memop = (extract32(ctx->opcode, 8, 3)) ==
                                       NM_P_LS_UAWM ? MO_UNALN : 0;

                     count = (count == 0) ? 8 : count;
diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index 4360ce4..b189c50 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -681,7 +681,7 @@ static bool trans_l_lwa(DisasContext *dc, arg_load *a)
     return true;
 }

-static void do_load(DisasContext *dc, arg_load *a, TCGMemOp mop)
+static void do_load(DisasContext *dc, arg_load *a, MemOp mop)
 {
     TCGv ea;

@@ -763,7 +763,7 @@ static bool trans_l_swa(DisasContext *dc, arg_store *a)
     return true;
 }

-static void do_store(DisasContext *dc, arg_store *a, TCGMemOp mop)
+static void do_store(DisasContext *dc, arg_store *a, MemOp mop)
 {
     TCGv t0 = tcg_temp_new();
     tcg_gen_addi_tl(t0, cpu_R[a->a], a->i);
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 4a5de28..31800ed 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -162,7 +162,7 @@ struct DisasContext {
     int mem_idx;
     int access_type;
     /* Translation flags */
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
 #if defined(TARGET_PPC64)
     bool sf_mode;
     bool has_cfar;
@@ -3142,7 +3142,7 @@ static void gen_isync(DisasContext *ctx)

 #define MEMOP_GET_SIZE(x)  (1 << ((x) & MO_SIZE))

-static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
+static void gen_load_locked(DisasContext *ctx, MemOp memop)
 {
     TCGv gpr = cpu_gpr[rD(ctx->opcode)];
     TCGv t0 = tcg_temp_new();
@@ -3167,7 +3167,7 @@ LARX(lbarx, DEF_MEMOP(MO_UB))
 LARX(lharx, DEF_MEMOP(MO_UW))
 LARX(lwarx, DEF_MEMOP(MO_UL))

-static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
+static void gen_fetch_inc_conditional(DisasContext *ctx, MemOp memop,
                                       TCGv EA, TCGCond cond, int addend)
 {
     TCGv t = tcg_temp_new();
@@ -3193,7 +3193,7 @@ static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
     tcg_temp_free(u);
 }

-static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_ld_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3306,7 +3306,7 @@ static void gen_ldat(DisasContext *ctx)
 }
 #endif

-static void gen_st_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_st_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3389,7 +3389,7 @@ static void gen_stdat(DisasContext *ctx)
 }
 #endif

-static void gen_conditional_store(DisasContext *ctx, TCGMemOp memop)
+static void gen_conditional_store(DisasContext *ctx, MemOp memop)
 {
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
diff --git a/target/riscv/insn_trans/trans_rva.inc.c b/target/riscv/insn_trans/trans_rva.inc.c
index fadd888..be8a9f0 100644
--- a/target/riscv/insn_trans/trans_rva.inc.c
+++ b/target/riscv/insn_trans/trans_rva.inc.c
@@ -18,7 +18,7 @@
  * this program.  If not, see <http://www.gnu.org/licenses/>.
  */

-static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     /* Put addr in load_res, data in load_val.  */
@@ -37,7 +37,7 @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
     return true;
 }

-static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
@@ -82,8 +82,8 @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
 }

 static bool gen_amo(DisasContext *ctx, arg_atomic *a,
-                    void(*func)(TCGv, TCGv, TCGv, TCGArg, TCGMemOp),
-                    TCGMemOp mop)
+                    void(*func)(TCGv, TCGv, TCGv, TCGArg, MemOp),
+                    MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
diff --git a/target/riscv/insn_trans/trans_rvi.inc.c b/target/riscv/insn_trans/trans_rvi.inc.c
index ea64731..cf440d1 100644
--- a/target/riscv/insn_trans/trans_rvi.inc.c
+++ b/target/riscv/insn_trans/trans_rvi.inc.c
@@ -135,7 +135,7 @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
     return gen_branch(ctx, a, TCG_COND_GEU);
 }

-static bool gen_load(DisasContext *ctx, arg_lb *a, TCGMemOp memop)
+static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv t1 = tcg_temp_new();
@@ -174,7 +174,7 @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
     return gen_load(ctx, a, MO_TEUW);
 }

-static bool gen_store(DisasContext *ctx, arg_sb *a, TCGMemOp memop)
+static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv dat = tcg_temp_new();
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index ac0d8b6..2927247 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -152,7 +152,7 @@ static inline int vec_full_reg_offset(uint8_t reg)
     return offsetof(CPUS390XState, vregs[reg][0]);
 }

-static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
+static inline int vec_reg_offset(uint8_t reg, uint8_t enr, MemOp es)
 {
     /* Convert element size (es) - e.g. MO_8 - to bytes */
     const uint8_t bytes = 1 << es;
@@ -2262,7 +2262,7 @@ static DisasJumpType op_csst(DisasContext *s, DisasOps *o)
 #ifndef CONFIG_USER_ONLY
 static DisasJumpType op_csp(DisasContext *s, DisasOps *o)
 {
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;
     TCGv_i64 addr, old, cc;
     TCGLabel *lab = gen_new_label();

@@ -3228,7 +3228,7 @@ static DisasJumpType op_lm64(DisasContext *s, DisasOps *o)
 static DisasJumpType op_lpd(DisasContext *s, DisasOps *o)
 {
     TCGv_i64 a1, a2;
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;

     /* In a parallel context, stop the world and single step.  */
     if (tb_cflags(s->base.tb) & CF_PARALLEL) {
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf8..4c56bbb 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -57,13 +57,13 @@
 #define FPF_LONG        3
 #define FPF_EXT         4

-static inline bool valid_vec_element(uint8_t enr, TCGMemOp es)
+static inline bool valid_vec_element(uint8_t enr, MemOp es)
 {
     return !(enr & ~(NUM_VEC_ELEMENTS(es) - 1));
 }

 static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -96,7 +96,7 @@ static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
 }

 static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -123,7 +123,7 @@ static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
 }

 static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -146,7 +146,7 @@ static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
 }

 static void write_vec_element_i32(TCGv_i32 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index 091bab5..bef9ce6 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -2019,7 +2019,7 @@ static inline void gen_ne_fop_QD(DisasContext *dc, int rd, int rs,
 }

 static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
-                     TCGv addr, int mmu_idx, TCGMemOp memop)
+                     TCGv addr, int mmu_idx, MemOp memop)
 {
     gen_address_mask(dc, addr);
     tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
@@ -2050,10 +2050,10 @@ typedef struct {
     ASIType type;
     int asi;
     int mem_idx;
-    TCGMemOp memop;
+    MemOp memop;
 } DisasASI;

-static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
+static DisasASI get_asi(DisasContext *dc, int insn, MemOp memop)
 {
     int asi = GET_FIELD(insn, 19, 26);
     ASIType type = GET_ASI_HELPER;
@@ -2267,7 +2267,7 @@ static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
 }

 static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2305,7 +2305,7 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
 }

 static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2511,7 +2511,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for lddfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

@@ -2625,7 +2625,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for stdfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

diff --git a/target/tilegx/translate.c b/target/tilegx/translate.c
index c46a4ab..68dd4aa 100644
--- a/target/tilegx/translate.c
+++ b/target/tilegx/translate.c
@@ -290,7 +290,7 @@ static void gen_cmul2(TCGv tdest, TCGv tsrca, TCGv tsrcb, int sh, int rd)
 }

 static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
-                              unsigned srcb, TCGMemOp memop, const char *name)
+                              unsigned srcb, MemOp memop, const char *name)
 {
     if (dest) {
         return TILEGX_EXCP_OPCODE_UNKNOWN;
@@ -305,7 +305,7 @@ static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
 }

 static TileExcp gen_st_add_opcode(DisasContext *dc, unsigned srca, unsigned srcb,
-                                  int imm, TCGMemOp memop, const char *name)
+                                  int imm, MemOp memop, const char *name)
 {
     TCGv tsrca = load_gr(dc, srca);
     TCGv tsrcb = load_gr(dc, srcb);
@@ -496,7 +496,7 @@ static TileExcp gen_rr_opcode(DisasContext *dc, unsigned opext,
 {
     TCGv tdest, tsrca;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     TileExcp ret = TILEGX_EXCP_NONE;
     bool prefetch_nofault = false;

@@ -1478,7 +1478,7 @@ static TileExcp gen_rri_opcode(DisasContext *dc, unsigned opext,
     TCGv tsrca = load_gr(dc, srca);
     bool prefetch_nofault = false;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     int i2, i3;
     TCGv t0;

@@ -2106,7 +2106,7 @@ static TileExcp decode_y2(DisasContext *dc, tilegx_bundle_bits bundle)
     unsigned srca = get_SrcA_Y2(bundle);
     unsigned srcbdest = get_SrcBDest_Y2(bundle);
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     bool prefetch_nofault = false;

     switch (OEY2(opc, mode)) {
diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index dc2a65f..87a5f50 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -227,7 +227,7 @@ static inline void generate_trap(DisasContext *ctx, int class, int tin);
 /* Functions for load/save to/from memory */

 static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -236,7 +236,7 @@ static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
 }

 static inline void gen_offset_st(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -284,7 +284,7 @@ static void gen_offset_ld_2regs(TCGv rh, TCGv rl, TCGv base, int16_t con,
 }

 static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
@@ -294,7 +294,7 @@ static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
 }

 static void gen_ld_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
diff --git a/tcg/README b/tcg/README
index 21fcdf7..b4382fa 100644
--- a/tcg/README
+++ b/tcg/README
@@ -512,7 +512,7 @@ Both t0 and t1 may be split into little-endian ordered pairs of registers
 if dealing with 64-bit quantities on a 32-bit host.

 The memidx selects the qemu tlb index to use (e.g. user or kernel access).
-The flags are the TCGMemOp bits, selecting the sign, width, and endianness
+The flags are the MemOp bits, selecting the sign, width, and endianness
 of the memory access.

 For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0713448..3f92101 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -1423,7 +1423,7 @@ static inline void tcg_out_rev16(TCGContext *s, TCGReg rd, TCGReg rn)
     tcg_out_insn(s, 3507, REV16, TCG_TYPE_I32, rd, rn);
 }

-static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
+static inline void tcg_out_sxt(TCGContext *s, TCGType ext, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes SXTB, SXTH, SXTW, of SBFM Xd, Xn, #0, #7|15|31 */
@@ -1431,7 +1431,7 @@ static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
     tcg_out_sbfm(s, ext, rd, rn, 0, bits);
 }

-static inline void tcg_out_uxt(TCGContext *s, TCGMemOp s_bits,
+static inline void tcg_out_uxt(TCGContext *s, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes UXTB, UXTH of UBFM Wd, Wn, #0, #7|15 */
@@ -1580,8 +1580,8 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1605,8 +1605,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1649,7 +1649,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
    slow path for the failure case, which will be patched later when finalizing
    the slow path. Generated code returns the host addend in X1,
    clobbers X0,X2,X3,TMP. */
-static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
+static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                              tcg_insn_unit **label_ptr, int mem_index,
                              bool is_read)
 {
@@ -1709,11 +1709,11 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,

 #endif /* CONFIG_SOFTMMU */

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SSIZE) {
     case MO_UB:
@@ -1765,11 +1765,11 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SIZE) {
     case MO_8:
@@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi, TCGType ext)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
@@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index ece88dc..94d80d7 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1233,7 +1233,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
    containing the addend of the tlb entry.  Clobbers R0, R1, R2, TMP.  */

 static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                               TCGMemOp opc, int mem_index, bool is_load)
+                               MemOp opc, int mem_index, bool is_load)
 {
     int cmp_off = (is_load ? offsetof(CPUTLBEntry, addr_read)
                    : offsetof(CPUTLBEntry, addr_write));
@@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     void *func;

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
@@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1453,11 +1453,11 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 }
 #endif /* SOFTMMU */

-static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_index(TCGContext *s, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1514,11 +1514,11 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1577,7 +1577,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
@@ -1614,11 +1614,11 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 #endif
 }

-static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
+static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1659,11 +1659,11 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1708,7 +1708,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 6ddeebf..9d8ed97 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1697,7 +1697,7 @@ static void * const qemu_st_helpers[16] = {
    First argument register is clobbered.  */

 static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                                    int mem_index, TCGMemOp opc,
+                                    int mem_index, MemOp opc,
                                     tcg_insn_unit **label_ptr, int which)
 {
     const TCGReg r0 = TCG_REG_L0;
@@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg data_reg;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
@@ -1895,8 +1895,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     TCGReg retaddr;

@@ -1995,10 +1995,10 @@ static inline int setup_guest_base_seg(void)

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, bool is64, TCGMemOp memop)
+                                   int seg, bool is64, MemOp memop)
 {
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int rexw = is64 * P_REXW;
     int movop = OPC_MOVL_GvEv;

@@ -2103,7 +2103,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
@@ -2137,15 +2137,15 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, TCGMemOp memop)
+                                   int seg, MemOp memop)
 {
     /* ??? Ideally we wouldn't need a scratch register.  For user-only,
        we could perform the bswap twice to restore the original value
        instead of moving to the scratch.  But as it is, the L constraint
        means that TCG_REG_L0 is definitely free here.  */
     const TCGReg scratch = TCG_REG_L0;
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int movop = OPC_MOVL_EvGv;

     if (have_movbe && real_bswap) {
@@ -2221,7 +2221,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 41bff32..5442167 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit *label_ptr[2], bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     int mem_index = get_mmuidx(oi);
@@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg v0;
     int i;

@@ -1363,8 +1363,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     int i;

     /* resolve label address */
@@ -1413,7 +1413,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
     case MO_UB:
@@ -1521,7 +1521,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
@@ -1558,7 +1558,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
     /* Don't clutter the code below with checks to avoid bswapping ZERO.  */
     if ((lo | hi) == 0) {
@@ -1624,7 +1624,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
diff --git a/tcg/optimize.c b/tcg/optimize.c
index d2424de..a89ffda 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s)
         CASE_OP_32_64(qemu_ld):
             {
                 TCGMemOpIdx oi = op->args[nb_oargs + nb_iargs];
-                TCGMemOp mop = get_memop(oi);
+                MemOp mop = get_memop(oi);
                 if (!(mop & MO_SIGN)) {
                     mask = (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1;
                 }
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 852b894..815edac 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1506,7 +1506,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -32768);
    in CR7, loads the addend of the TLB into R3, and returns the register
    containing the guest address (zero-extended into R4).  Clobbers R0 and R2. */

-static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, MemOp opc,
                                TCGReg addrlo, TCGReg addrhi,
                                int mem_index, bool is_read)
 {
@@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1680,8 +1680,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1744,7 +1744,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
@@ -1819,7 +1819,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 3e76bf5..7018509 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit **label_ptr, bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     tcg_target_long compare_mask;
@@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1077,8 +1077,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1121,9 +1121,9 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif /* CONFIG_SOFTMMU */

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1172,7 +1172,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
@@ -1208,9 +1208,9 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1243,7 +1243,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index fe42939..8aaa4ce 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1430,7 +1430,7 @@ static void tcg_out_call(TCGContext *s, tcg_insn_unit *dest)
     }
 }

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
@@ -1489,7 +1489,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SIZE | MO_BSWAP)) {
@@ -1544,7 +1544,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 19));

 /* Load and compare a TLB entry, leaving the flags set.  Loads the TLB
    addend into R2.  Returns a register with the sanitized guest address.  */
-static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                                int mem_index, bool is_ld)
 {
     unsigned s_bits = opc & MO_SIZE;
@@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGReg *addr_reg,
 static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
@@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 10b1cea..d7986cd 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -1081,7 +1081,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12));
    is in the returned register, maybe %o0.  The TLB addend is in %o1.  */

 static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index,
-                               TCGMemOp opc, int which)
+                               MemOp opc, int which)
 {
     int fast_off = TLB_MASK_TABLE_OFS(mem_index);
     int mask_off = fast_off + offsetof(CPUTLBDescFast, mask);
@@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] = {
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi, bool is_64)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
@@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 587d092..e87c327 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2714,7 +2714,7 @@ void tcg_gen_lookup_and_goto_ptr(void)
     }
 }

-static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
+static inline MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
 {
     /* Trigger the asserts within as early as possible.  */
     (void)get_alignment_bits(op);
@@ -2743,7 +2743,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
 }

 static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2758,7 +2758,7 @@ static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
 }

 static void gen_ldst_i64(TCGOpcode opc, TCGv_i64 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2788,9 +2788,9 @@ static void tcg_gen_req_mo(TCGBar type)
     }
 }

-void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     memop = tcg_canonicalize_memop(memop, 0, 0);
@@ -2825,7 +2825,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i32 swap = NULL;

@@ -2858,9 +2858,9 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
         tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
@@ -2911,7 +2911,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i64 swap = NULL;

@@ -2953,7 +2953,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
+static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -2974,7 +2974,7 @@ static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
     }
 }

-static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, TCGMemOp opc)
+static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -3034,7 +3034,7 @@ static void * const table_cmpxchg[16] = {
 };

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
-                                TCGv_i32 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i32 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 0, 0);

@@ -3078,7 +3078,7 @@ void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
 }

 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
-                                TCGv_i64 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i64 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3142,7 +3142,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
 }

 static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
 {
     TCGv_i32 t1 = tcg_temp_new_i32();
@@ -3160,7 +3160,7 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     gen_atomic_op_i32 gen;

@@ -3185,7 +3185,7 @@ static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64))
 {
     TCGv_i64 t1 = tcg_temp_new_i64();
@@ -3203,7 +3203,7 @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 }

 static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3257,7 +3257,7 @@ static void * const table_##NAME[16] = {                                \
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
 void tcg_gen_atomic_##NAME##_i32                                        \
-    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i32(ret, addr, val, idx, memop, table_##NAME);     \
@@ -3267,7 +3267,7 @@ void tcg_gen_atomic_##NAME##_i32                                        \
     }                                                                   \
 }                                                                       \
 void tcg_gen_atomic_##NAME##_i64                                        \
-    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i64(ret, addr, val, idx, memop, table_##NAME);     \
diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
index 2d4dd5c..e9cf172 100644
--- a/tcg/tcg-op.h
+++ b/tcg/tcg-op.h
@@ -851,10 +851,10 @@ void tcg_gen_lookup_and_goto_ptr(void);
 #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i64
 #endif

-void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
+void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, MemOp);

 static inline void tcg_gen_qemu_ld8u(TCGv ret, TCGv addr, int mem_index)
 {
@@ -912,46 +912,46 @@ static inline void tcg_gen_qemu_st64(TCGv_i64 arg, TCGv addr, int mem_index)
 }

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGv_i32,
-                                TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGv_i64,
-                                TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
+
+void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);

 void tcg_gen_mov_vec(TCGv_vec, TCGv_vec);
 void tcg_gen_dup_i32_vec(unsigned vece, TCGv_vec, TCGv_i32);
diff --git a/tcg/tcg.c b/tcg/tcg.c
index be2c33c..aa9931f 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
             case INDEX_op_qemu_st_i64:
                 {
                     TCGMemOpIdx oi = op->args[k++];
-                    TCGMemOp op = get_memop(oi);
+                    MemOp op = get_memop(oi);
                     unsigned ix = get_mmuidx(oi);

                     if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) {
diff --git a/tcg/tcg.h b/tcg/tcg.h
index b411e17..a37181c 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -26,6 +26,7 @@
 #define TCG_H

 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/tb-context.h"
 #include "qemu/bitops.h"
 #include "qemu/queue.h"
@@ -309,101 +310,13 @@ typedef enum TCGType {
 #endif
 } TCGType;

-/* Constants for qemu_ld and qemu_st for the Memory Operation field.  */
-typedef enum TCGMemOp {
-    MO_8     = 0,
-    MO_16    = 1,
-    MO_32    = 2,
-    MO_64    = 3,
-    MO_SIZE  = 3,   /* Mask for the above.  */
-
-    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
-
-    MO_BSWAP = 8,   /* Host reverse endian.  */
-#ifdef HOST_WORDS_BIGENDIAN
-    MO_LE    = MO_BSWAP,
-    MO_BE    = 0,
-#else
-    MO_LE    = 0,
-    MO_BE    = MO_BSWAP,
-#endif
-#ifdef TARGET_WORDS_BIGENDIAN
-    MO_TE    = MO_BE,
-#else
-    MO_TE    = MO_LE,
-#endif
-
-    /* MO_UNALN accesses are never checked for alignment.
-     * MO_ALIGN accesses will result in a call to the CPU's
-     * do_unaligned_access hook if the guest address is not aligned.
-     * The default depends on whether the target CPU defines ALIGNED_ONLY.
-     *
-     * Some architectures (e.g. ARMv8) need the address which is aligned
-     * to a size more than the size of the memory access.
-     * Some architectures (e.g. SPARCv9) need an address which is aligned,
-     * but less strictly than the natural alignment.
-     *
-     * MO_ALIGN supposes the alignment size is the size of a memory access.
-     *
-     * There are three options:
-     * - unaligned access permitted (MO_UNALN).
-     * - an alignment to the size of an access (MO_ALIGN);
-     * - an alignment to a specified size, which may be more or less than
-     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
-     */
-    MO_ASHIFT = 4,
-    MO_AMASK = 7 << MO_ASHIFT,
-#ifdef ALIGNED_ONLY
-    MO_ALIGN = 0,
-    MO_UNALN = MO_AMASK,
-#else
-    MO_ALIGN = MO_AMASK,
-    MO_UNALN = 0,
-#endif
-    MO_ALIGN_2  = 1 << MO_ASHIFT,
-    MO_ALIGN_4  = 2 << MO_ASHIFT,
-    MO_ALIGN_8  = 3 << MO_ASHIFT,
-    MO_ALIGN_16 = 4 << MO_ASHIFT,
-    MO_ALIGN_32 = 5 << MO_ASHIFT,
-    MO_ALIGN_64 = 6 << MO_ASHIFT,
-
-    /* Combinations of the above, for ease of use.  */
-    MO_UB    = MO_8,
-    MO_UW    = MO_16,
-    MO_UL    = MO_32,
-    MO_SB    = MO_SIGN | MO_8,
-    MO_SW    = MO_SIGN | MO_16,
-    MO_SL    = MO_SIGN | MO_32,
-    MO_Q     = MO_64,
-
-    MO_LEUW  = MO_LE | MO_UW,
-    MO_LEUL  = MO_LE | MO_UL,
-    MO_LESW  = MO_LE | MO_SW,
-    MO_LESL  = MO_LE | MO_SL,
-    MO_LEQ   = MO_LE | MO_Q,
-
-    MO_BEUW  = MO_BE | MO_UW,
-    MO_BEUL  = MO_BE | MO_UL,
-    MO_BESW  = MO_BE | MO_SW,
-    MO_BESL  = MO_BE | MO_SL,
-    MO_BEQ   = MO_BE | MO_Q,
-
-    MO_TEUW  = MO_TE | MO_UW,
-    MO_TEUL  = MO_TE | MO_UL,
-    MO_TESW  = MO_TE | MO_SW,
-    MO_TESL  = MO_TE | MO_SL,
-    MO_TEQ   = MO_TE | MO_Q,
-
-    MO_SSIZE = MO_SIZE | MO_SIGN,
-} TCGMemOp;
-
 /**
  * get_alignment_bits
- * @memop: TCGMemOp value
+ * @memop: MemOp value
  *
  * Extract the alignment size from the memop.
  */
-static inline unsigned get_alignment_bits(TCGMemOp memop)
+static inline unsigned get_alignment_bits(MemOp memop)
 {
     unsigned a = memop & MO_AMASK;

@@ -1184,7 +1097,7 @@ static inline size_t tcg_current_code_size(TCGContext *s)
     return tcg_ptr_byte_diff(s->code_ptr, s->code_buf);
 }

-/* Combine the TCGMemOp and mmu_idx parameters into a single value.  */
+/* Combine the MemOp and mmu_idx parameters into a single value.  */
 typedef uint32_t TCGMemOpIdx;

 /**
@@ -1194,7 +1107,7 @@ typedef uint32_t TCGMemOpIdx;
  *
  * Encode these values into a single parameter.
  */
-static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
+static inline TCGMemOpIdx make_memop_idx(MemOp op, unsigned idx)
 {
     tcg_debug_assert(idx <= 15);
     return (op << 4) | idx;
@@ -1206,7 +1119,7 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
  *
  * Extract the memory operation from the combined value.
  */
-static inline TCGMemOp get_memop(TCGMemOpIdx oi)
+static inline MemOp get_memop(TCGMemOpIdx oi)
 {
     return oi >> 4;
 }
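
As an aside on the TCGMemOpIdx packing above: a MemOp and an mmu_idx
round-trip through make_memop_idx/get_memop/get_mmuidx. A minimal
sketch (the MO_LEUL operand and mmu_idx value are illustrative):

    /* Pack a little-endian unsigned 32-bit load with mmu_idx 1;
       the mmu_idx occupies the low 4 bits, the MemOp the rest.  */
    TCGMemOpIdx oi = make_memop_idx(MO_LEUL, 1);
    MemOp op = get_memop(oi);       /* recovers MO_LEUL */
    unsigned idx = get_mmuidx(oi);  /* recovers 1 */
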
diff --git a/trace/mem-internal.h b/trace/mem-internal.h
index f6efaf6..3444fbc 100644
--- a/trace/mem-internal.h
+++ b/trace/mem-internal.h
@@ -16,7 +16,7 @@
 #define TRACE_MEM_ST (1ULL << 5)    /* store (y/n) */

 static inline uint8_t trace_mem_build_info(
-    int size_shift, bool sign_extend, TCGMemOp endianness, bool store)
+    int size_shift, bool sign_extend, MemOp endianness, bool store)
 {
     uint8_t res;

@@ -33,7 +33,7 @@ static inline uint8_t trace_mem_build_info(
     return res;
 }

-static inline uint8_t trace_mem_get_info(TCGMemOp op, bool store)
+static inline uint8_t trace_mem_get_info(MemOp op, bool store)
 {
     return trace_mem_build_info(op & MO_SIZE, !!(op & MO_SIGN),
                                 op & MO_BSWAP, store);
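
For callers, trace_mem_get_info derives the trace 'info' byte directly
from a MemOp. A minimal usage sketch (MO_TEUL chosen for illustration):

    /* Trace info for a target-endian unsigned 32-bit guest load.  */
    uint8_t info = trace_mem_get_info(MO_TEUL, false);
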
diff --git a/trace/mem.h b/trace/mem.h
index 2b58196..8cf213d 100644
--- a/trace/mem.h
+++ b/trace/mem.h
@@ -18,7 +18,7 @@
  *
  * Return a value for the 'info' argument in guest memory access traces.
  */
-static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
+static uint8_t trace_mem_get_info(MemOp op, bool store);

 /**
  * trace_mem_build_info:
@@ -26,7 +26,7 @@ static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
  * Return a value for the 'info' argument in guest memory access traces.
  */
 static uint8_t trace_mem_build_info(int size_shift, bool sign_extend,
-                                    TCGMemOp endianness, bool store);
+                                    MemOp endianness, bool store);


 #include "trace/mem-internal.h"
--
1.8.3.1

* [Qemu-devel] [PATCH v4 02/15] memory: Access MemoryRegion with MemOp
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:00         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:00 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Replacing the access size with size+sign+endianness (MemOp) will enable
us to collapse the two byte swaps, adjust_endianness and handle_bswap,
along the I/O path.

While the interfaces are being converted, callers will have their
existing unsigned size coerced into a MemOp, and the callee will treat
that MemOp as an unsigned size.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/memop.h  | 4 ++++
 include/exec/memory.h | 9 +++++----
 memory.c              | 7 +++++--
 3 files changed, 14 insertions(+), 6 deletions(-)
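
While the macros below are identity mappings during the conversion, a
converted interface still interoperates with unconverted callers. A
minimal sketch of the intermediate state (mr, addr and data assumed to
be in scope):

    /* Caller: wrap the existing byte count into a MemOp (a no-op for now). */
    memory_region_dispatch_write(mr, addr, data, SIZE_MEMOP(8),
                                 MEMTXATTRS_UNSPECIFIED);

    /* Callee: unwrap the MemOp back into a byte count. */
    unsigned size = MEMOP_SIZE(op);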

diff --git a/include/exec/memop.h b/include/exec/memop.h
index ac58066..09c8d20 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -106,4 +106,8 @@ typedef enum MemOp {
     MO_SSIZE = MO_SIZE | MO_SIGN,
 } MemOp;

+/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
+#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
+#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
+
 #endif
diff --git a/include/exec/memory.h b/include/exec/memory.h
index bb0961d..30b1c58 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -19,6 +19,7 @@
 #include "exec/cpu-common.h"
 #include "exec/hwaddr.h"
 #include "exec/memattrs.h"
+#include "exec/memop.h"
 #include "exec/ramlist.h"
 #include "qemu/queue.h"
 #include "qemu/int128.h"
@@ -1731,13 +1732,13 @@ void mtree_info(bool flatview, bool dispatch_tree, bool owner);
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @pval: pointer to uint64_t which the data is written to
- * @size: size of the access in bytes
+ * @op: encodes size of the access in bytes
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs);
 /**
  * memory_region_dispatch_write: perform a write directly to the specified
@@ -1746,13 +1747,13 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @data: data to write
- * @size: size of the access in bytes
+ * @op: encodes size of the access in bytes
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs);

 /**
diff --git a/memory.c b/memory.c
index 5d8c9a9..6982e19 100644
--- a/memory.c
+++ b/memory.c
@@ -1439,10 +1439,11 @@ static MemTxResult memory_region_dispatch_read1(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs)
 {
     MemTxResult r;
+    unsigned size = MEMOP_SIZE(op);

     if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
         *pval = unassigned_mem_read(mr, addr, size);
@@ -1483,9 +1484,11 @@ static bool memory_region_dispatch_write_eventfds(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs)
 {
+    unsigned size = MEMOP_SIZE(op);
+
     if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
         unassigned_mem_write(mr, addr, data, size);
         return MEMTX_DECODE_ERROR;
--
1.8.3.1

* [Qemu-devel] [PATCH v4 03/15] target/mips: Access MemoryRegion with MemOp
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:01         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/mips/op_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/mips/op_helper.c b/target/mips/op_helper.c
index 9e2e02f..dccb8df 100644
--- a/target/mips/op_helper.c
+++ b/target/mips/op_helper.c
@@ -24,6 +24,7 @@
 #include "exec/helper-proto.h"
 #include "exec/exec-all.h"
 #include "exec/cpu_ldst.h"
+#include "exec/memop.h"
 #include "sysemu/kvm.h"

 /*****************************************************************************/
@@ -4740,11 +4741,11 @@ void helper_cache(CPUMIPSState *env, target_ulong addr, uint32_t op)
     if (op == 9) {
         /* Index Store Tag */
         memory_region_dispatch_write(env->itc_tag, index, env->CP0_TagLo,
-                                     8, MEMTXATTRS_UNSPECIFIED);
+                                     SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     } else if (op == 5) {
         /* Index Load Tag */
         memory_region_dispatch_read(env->itc_tag, index, &env->CP0_TagLo,
-                                    8, MEMTXATTRS_UNSPECIFIED);
+                                    SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     }
 #endif
 }
--
1.8.3.1





* [Qemu-devel] [PATCH v4 04/15] hw/s390x: Access MemoryRegion with MemOp
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:01         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:01 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/s390x/s390-pci-inst.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index 0023514..c126bcc 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -15,6 +15,7 @@
 #include "cpu.h"
 #include "s390-pci-inst.h"
 #include "s390-pci-bus.h"
+#include "exec/memop.h"
 #include "exec/memory-internal.h"
 #include "qemu/error-report.h"
 #include "sysemu/hw_accel.h"
@@ -372,7 +373,7 @@ static MemTxResult zpci_read_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_read(mr, offset, data, len,
+    return memory_region_dispatch_read(mr, offset, data, SIZE_MEMOP(len),
                                        MEMTXATTRS_UNSPECIFIED);
 }

@@ -471,7 +472,7 @@ static MemTxResult zpci_write_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_write(mr, offset, data, len,
+    return memory_region_dispatch_write(mr, offset, data, SIZE_MEMOP(len),
                                         MEMTXATTRS_UNSPECIFIED);
 }

@@ -780,7 +781,8 @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,

     for (i = 0; i < len / 8; i++) {
         result = memory_region_dispatch_write(mr, offset + i * 8,
-                                              ldq_p(buffer + i * 8), 8,
+                                              ldq_p(buffer + i * 8),
+                                              SIZE_MEMOP(8),
                                               MEMTXATTRS_UNSPECIFIED);
         if (result != MEMTX_OK) {
             s390_program_interrupt(env, PGM_OPERAND, 6, ra);
--
1.8.3.1





* [Qemu-devel] [PATCH v4 05/15] hw/intc/armv7m_nvic: Access MemoryRegion with MemOp
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:02         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/intc/armv7m_nvic.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index 9f8f0d3..25bb88a 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -18,6 +18,7 @@
 #include "hw/intc/armv7m_nvic.h"
 #include "target/arm/cpu.h"
 #include "exec/exec-all.h"
+#include "exec/memop.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
 #include "trace.h"
@@ -2345,7 +2346,8 @@ static MemTxResult nvic_sysreg_ns_write(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_write(mr, addr, value, size, attrs);
+        return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                            attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2364,7 +2366,8 @@ static MemTxResult nvic_sysreg_ns_read(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_read(mr, addr, data, size, attrs);
+        return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size),
+                                           attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2390,7 +2393,8 @@ static MemTxResult nvic_systick_write(void *opaque, hwaddr addr,

     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_write(mr, addr, value, size, attrs);
+    return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                        attrs);
 }

 static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,
@@ -2402,7 +2406,7 @@ static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,

     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_read(mr, addr, data, size, attrs);
+    return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size), attrs);
 }

 static const MemoryRegionOps nvic_systick_ops = {
--
1.8.3.1





* [Qemu-devel] [PATCH v4 06/15] hw/virtio: Access MemoryRegion with MemOp
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:02         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:02 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/virtio/virtio-pci.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index ce928f2..265f066 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -17,6 +17,7 @@

 #include "qemu/osdep.h"

+#include "exec/memop.h"
 #include "standard-headers/linux/virtio_pci.h"
 #include "hw/virtio/virtio.h"
 #include "hw/pci/pci.h"
@@ -550,7 +551,8 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy, hwaddr addr,
         /* As length is under guest control, handle illegal values. */
         return;
     }
-    memory_region_dispatch_write(mr, addr, val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_write(mr, addr, val, SIZE_MEMOP(len),
+                                 MEMTXATTRS_UNSPECIFIED);
 }

 static void
@@ -573,7 +575,8 @@ virtio_address_space_read(VirtIOPCIProxy *proxy, hwaddr addr,
     /* Make sure caller aligned buf properly */
     assert(!(((uintptr_t)buf) & (len - 1)));

-    memory_region_dispatch_read(mr, addr, &val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_read(mr, addr, &val, SIZE_MEMOP(len),
+                                MEMTXATTRS_UNSPECIFIED);
     switch (len) {
     case 1:
         pci_set_byte(buf, val);
--
1.8.3.1





* [Qemu-devel] [PATCH v4 07/15] hw/vfio: Access MemoryRegion with MemOp
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:03         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/vfio/pci-quirks.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index b35a640..3240afa 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1071,7 +1071,7 @@ static void vfio_rtl8168_quirk_address_write(void *opaque, hwaddr addr,

                 /* Write to the proper guest MSI-X table instead */
                 memory_region_dispatch_write(&vdev->pdev.msix_table_mmio,
-                                             offset, val, size,
+                                             offset, val, SIZE_MEMOP(size),
                                              MEMTXATTRS_UNSPECIFIED);
             }
             return; /* Do not write guest MSI-X data to hardware */
@@ -1102,7 +1102,8 @@ static uint64_t vfio_rtl8168_quirk_data_read(void *opaque,
     if (rtl->enabled && (vdev->pdev.cap_present & QEMU_PCI_CAP_MSIX)) {
         hwaddr offset = rtl->addr & 0xfff;
         memory_region_dispatch_read(&vdev->pdev.msix_table_mmio, offset,
-                                    &data, size, MEMTXATTRS_UNSPECIFIED);
+                                    &data, SIZE_MEMOP(size),
+                                    MEMTXATTRS_UNSPECIFIED);
         trace_vfio_quirk_rtl8168_msix_read(vdev->vbasedev.name, offset, data);
     }

--
1.8.3.1





* [Qemu-devel] [PATCH v4 08/15] exec: Access MemoryRegion with MemOp
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:03         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:03 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 exec.c            |  6 ++++--
 memory_ldst.inc.c | 18 +++++++++---------
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/exec.c b/exec.c
index 3e78de3..5013864 100644
--- a/exec.c
+++ b/exec.c
@@ -3334,7 +3334,8 @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             /* XXX: could force current_cpu to NULL to avoid
                potential bugs */
             val = ldn_p(buf, l);
-            result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
+            result |= memory_region_dispatch_write(mr, addr1, val,
+                                                   SIZE_MEMOP(l), attrs);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
@@ -3395,7 +3396,8 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
             /* I/O case */
             release_lock |= prepare_mmio_access(mr);
             l = memory_access_size(mr, l, addr1);
-            result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
+            result |= memory_region_dispatch_read(mr, addr1, &val,
+                                                  SIZE_MEMOP(l), attrs);
             stn_p(buf, l, val);
         } else {
             /* RAM case */
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
index acf865b..e073cf9 100644
--- a/memory_ldst.inc.c
+++ b/memory_ldst.inc.c
@@ -38,7 +38,7 @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 4, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(4), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap32(val);
@@ -114,7 +114,7 @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 8, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(8), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap64(val);
@@ -188,7 +188,7 @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 1, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -224,7 +224,7 @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 2, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(2), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap16(val);
@@ -300,7 +300,7 @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     if (l < 4 || !memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
         stl_p(ptr, val);
@@ -346,7 +346,7 @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
             val = bswap32(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -408,7 +408,7 @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
     mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (!memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);
-        r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -451,7 +451,7 @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
             val = bswap16(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 2, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(2), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -524,7 +524,7 @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
             val = bswap64(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 8, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(8), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
--
1.8.3.1





* [Qemu-devel] [PATCH v4 09/15] cputlb: Access MemoryRegion with MemOp
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:04         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:04 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 523be4c..a4a0bf7 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -906,8 +906,8 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset,
-                                    &val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, SIZE_MEMOP(size),
+                                    iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
@@ -947,8 +947,8 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset,
-                                     val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, SIZE_MEMOP(size),
+                                     iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
--
1.8.3.1





* [Qemu-devel] [PATCH v4 10/15] memory: Access MemoryRegion with MemOp semantics
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:04         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:04 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

To convert the MemoryRegion access interfaces to MemOp, the MEMOP_SIZE
and SIZE_MEMOP no-op stubs were introduced to change the syntax while
keeping the existing semantics.

Now that the interfaces are converted, fill in the stubs and use real
MemOp semantics.
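
As a sketch only (not part of this patch), the filled-in macros now
translate between a byte count and the log2-encoded MemOp size, so the
two directions round-trip for power-of-two sizes:

    /* Illustrative only; assumes a QEMU build so that exec/memop.h
     * provides MemOp, MO_8, MO_64 and the macros above.
     */
    #include <assert.h>
    #include "exec/memop.h"

    static void memop_size_round_trip(void)
    {
        assert(SIZE_MEMOP(1) == MO_8);           /* ctzl(1) == 0 */
        assert(SIZE_MEMOP(8) == MO_64);          /* ctzl(8) == 3 */
        assert(MEMOP_SIZE(SIZE_MEMOP(4)) == 4);  /* 1 << 2 == 4  */
    }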

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/memop.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 09c8d20..f2847e8 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -106,8 +106,7 @@ typedef enum MemOp {
     MO_SSIZE = MO_SIZE | MO_SIGN,
 } MemOp;

-/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
-#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
-#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
+#define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE)) /* MemOp to size.  */
+#define SIZE_MEMOP(ul)  (ctzl(ul))              /* Size to MemOp.  */

 #endif
--
1.8.3.1





* [Qemu-devel] [PATCH v4 11/15] memory: Single byte swap along the I/O path
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:04         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:04 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Now that MemOp has been pushed down into the memory API, we can
collapse the two byte-swap sites, adjust_endianness and handle_bswap,
into the former.

Collapsing the byte swaps along the I/O path makes room for additional
endian-inversion logic, e.g. the SPARC64 Invert Endian TTE bit, since
redundant swaps now cancel out.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c | 58 +++++++++++++++++++++++++-----------------------------
 memory.c           | 30 ++++++++++++++++------------
 2 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index a4a0bf7..e61b1eb 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,

 static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          int mmu_idx, target_ulong addr, uintptr_t retaddr,
-                         MMUAccessType access_type, int size)
+                         MMUAccessType access_type, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -906,14 +906,13 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset, &val, SIZE_MEMOP(size),
-                                    iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
     if (locked) {
@@ -925,7 +924,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,

 static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                       int mmu_idx, uint64_t val, target_ulong addr,
-                      uintptr_t retaddr, int size)
+                      uintptr_t retaddr, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -947,15 +946,15 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset, val, SIZE_MEMOP(size),
-                                     iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
-                               mmu_idx, iotlbentry->attrs, r, retaddr);
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op),
+                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
+                               retaddr);
     }
     if (locked) {
         qemu_mutex_unlock_iothread();
@@ -1210,26 +1209,13 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 #endif

 /*
- * Byte Swap Helper
+ * Byte Swap Checker
  *
- * This should all dead code away depending on the build host and
- * access type.
+ * Dead code should all go away depending on the build host and access type.
  */
-
-static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
+static inline bool need_bswap(bool big_endian)
 {
-    if ((big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP)) {
-        switch (size) {
-        case 1: return val;
-        case 2: return bswap16(val);
-        case 4: return bswap32(val);
-        case 8: return bswap64(val);
-        default:
-            g_assert_not_reached();
-        }
-    } else {
-        return val;
-    }
+    return (big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP);
 }

 /*
@@ -1260,6 +1246,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     void *haddr;
     uint64_t res;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1305,9 +1292,13 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             }
         }

-        res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
-                       mmu_idx, addr, retaddr, access_type, size);
-        return handle_bswap(res, size, big_endian);
+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
+        return io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
+                        mmu_idx, addr, retaddr, access_type, op);
     }

     /* Handle slow unaligned access (it spans two pages or IO).  */
@@ -1508,6 +1499,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     void *haddr;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1553,9 +1545,13 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             }
         }

+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
         io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
-                  handle_bswap(val, size, big_endian),
-                  addr, retaddr, size);
+                  val, addr, retaddr, op);
         return;
     }

diff --git a/memory.c b/memory.c
index 6982e19..0277d3d 100644
--- a/memory.c
+++ b/memory.c
@@ -352,7 +352,7 @@ static bool memory_region_big_endian(MemoryRegion *mr)
 #endif
 }

-static bool memory_region_wrong_endianness(MemoryRegion *mr)
+static bool memory_region_endianness_inverted(MemoryRegion *mr)
 {
 #ifdef TARGET_WORDS_BIGENDIAN
     return mr->ops->endianness == DEVICE_LITTLE_ENDIAN;
@@ -361,23 +361,27 @@ static bool memory_region_wrong_endianness(MemoryRegion *mr)
 #endif
 }

-static void adjust_endianness(MemoryRegion *mr, uint64_t *data, unsigned size)
+static void adjust_endianness(MemoryRegion *mr, uint64_t *data, MemOp op)
 {
-    if (memory_region_wrong_endianness(mr)) {
-        switch (size) {
-        case 1:
+    if (memory_region_endianness_inverted(mr)) {
+        op ^= MO_BSWAP;
+    }
+
+    if (op & MO_BSWAP) {
+        switch (op & MO_SIZE) {
+        case MO_8:
             break;
-        case 2:
+        case MO_16:
             *data = bswap16(*data);
             break;
-        case 4:
+        case MO_32:
             *data = bswap32(*data);
             break;
-        case 8:
+        case MO_64:
             *data = bswap64(*data);
             break;
         default:
-            abort();
+            g_assert_not_reached();
         }
     }
 }
@@ -1451,7 +1455,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
     }

     r = memory_region_dispatch_read1(mr, addr, pval, size, attrs);
-    adjust_endianness(mr, pval, size);
+    adjust_endianness(mr, pval, op);
     return r;
 }

@@ -1494,7 +1498,7 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
         return MEMTX_DECODE_ERROR;
     }

-    adjust_endianness(mr, &data, size);
+    adjust_endianness(mr, &data, op);

     if ((!kvm_eventfds_enabled()) &&
         memory_region_dispatch_write_eventfds(mr, addr, data, size, attrs)) {
@@ -2340,7 +2344,7 @@ void memory_region_add_eventfd(MemoryRegion *mr,
     }

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
@@ -2375,7 +2379,7 @@ void memory_region_del_eventfd(MemoryRegion *mr,
     unsigned i;

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 120+ messages in thread

* [Qemu-riscv] [Qemu-devel] [PATCH v4 11/15] memory: Single byte swap along the I/O path
@ 2019-07-25  8:04         ` tony.nguyen
  0 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:04 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, alex.williamson, qemu-arm, david,
	qemu-riscv, cohuck, qemu-s390x, qemu-ppc, amarkovic, pbonzini,
	aurelien

[-- Attachment #1: Type: text/plain, Size: 8585 bytes --]

Now that MemOp has been pushed down into the memory API, we can
collapse the two byte swaps adjust_endianness and handle_bswap into
the former.

Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g. SPARC64 Invert Endian TTE bit, with redundant
byte swaps cancelling out.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c | 58 +++++++++++++++++++++++++-----------------------------
 memory.c           | 30 ++++++++++++++++------------
 2 files changed, 44 insertions(+), 44 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index a4a0bf7..e61b1eb 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,

 static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          int mmu_idx, target_ulong addr, uintptr_t retaddr,
-                         MMUAccessType access_type, int size)
+                         MMUAccessType access_type, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -906,14 +906,13 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset, &val, SIZE_MEMOP(size),
-                                    iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
     if (locked) {
@@ -925,7 +924,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,

 static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                       int mmu_idx, uint64_t val, target_ulong addr,
-                      uintptr_t retaddr, int size)
+                      uintptr_t retaddr, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -947,15 +946,15 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset, val, SIZE_MEMOP(size),
-                                    iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
-                               mmu_idx, iotlbentry->attrs, r, retaddr);
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op),
+                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
+                               retaddr);
     }
     if (locked) {
         qemu_mutex_unlock_iothread();
@@ -1210,26 +1209,13 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 #endif

 /*
- * Byte Swap Helper
+ * Byte Swap Checker
  *
- * This should all dead code away depending on the build host and
- * access type.
+ * Dead code should all go away depending on the build host and access type.
  */
-
-static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
+static inline bool need_bswap(bool big_endian)
 {
-    if ((big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP)) {
-        switch (size) {
-        case 1: return val;
-        case 2: return bswap16(val);
-        case 4: return bswap32(val);
-        case 8: return bswap64(val);
-        default:
-            g_assert_not_reached();
-        }
-    } else {
-        return val;
-    }
+    return (big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP);
 }

 /*
@@ -1260,6 +1246,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     void *haddr;
     uint64_t res;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1305,9 +1292,13 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             }
         }

-        res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
-                       mmu_idx, addr, retaddr, access_type, size);
-        return handle_bswap(res, size, big_endian);
+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
+        return io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
+                       mmu_idx, addr, retaddr, access_type, op);
     }

     /* Handle slow unaligned access (it spans two pages or IO).  */
@@ -1508,6 +1499,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     void *haddr;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1553,9 +1545,13 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             }
         }

+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
         io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
-                  handle_bswap(val, size, big_endian),
-                  addr, retaddr, size);
+                  val, addr, retaddr, op);
         return;
     }

diff --git a/memory.c b/memory.c
index 6982e19..0277d3d 100644
--- a/memory.c
+++ b/memory.c
@@ -352,7 +352,7 @@ static bool memory_region_big_endian(MemoryRegion *mr)
 #endif
 }

-static bool memory_region_wrong_endianness(MemoryRegion *mr)
+static bool memory_region_endianness_inverted(MemoryRegion *mr)
 {
 #ifdef TARGET_WORDS_BIGENDIAN
     return mr->ops->endianness == DEVICE_LITTLE_ENDIAN;
@@ -361,23 +361,27 @@ static bool memory_region_wrong_endianness(MemoryRegion *mr)
 #endif
 }

-static void adjust_endianness(MemoryRegion *mr, uint64_t *data, unsigned size)
+static void adjust_endianness(MemoryRegion *mr, uint64_t *data, MemOp op)
 {
-    if (memory_region_wrong_endianness(mr)) {
-        switch (size) {
-        case 1:
+    if (memory_region_endianness_inverted(mr)) {
+        op ^= MO_BSWAP;
+    }
+
+    if (op & MO_BSWAP) {
+        switch (op & MO_SIZE) {
+        case MO_8:
             break;
-        case 2:
+        case MO_16:
             *data = bswap16(*data);
             break;
-        case 4:
+        case MO_32:
             *data = bswap32(*data);
             break;
-        case 8:
+        case MO_64:
             *data = bswap64(*data);
             break;
         default:
-            abort();
+            g_assert_not_reached();
         }
     }
 }
@@ -1451,7 +1455,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
     }

     r = memory_region_dispatch_read1(mr, addr, pval, size, attrs);
-    adjust_endianness(mr, pval, size);
+    adjust_endianness(mr, pval, op);
     return r;
 }

@@ -1494,7 +1498,7 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
         return MEMTX_DECODE_ERROR;
     }

-    adjust_endianness(mr, &data, size);
+    adjust_endianness(mr, &data, op);

     if ((!kvm_eventfds_enabled()) &&
         memory_region_dispatch_write_eventfds(mr, addr, data, size, attrs)) {
@@ -2340,7 +2344,7 @@ void memory_region_add_eventfd(MemoryRegion *mr,
     }

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
@@ -2375,7 +2379,7 @@ void memory_region_del_eventfd(MemoryRegion *mr,
     unsigned i;

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
--
1.8.3.1
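
The MemOp encoding these hunks rely on can be illustrated with a short
standalone sketch: the low bits of the op hold log2 of the access size,
and MO_BSWAP is a single flag that XOR can toggle, which is what makes
the adjust_endianness() rewrite above so compact. The constants and the
SIZE_MEMOP()/MEMOP_SIZE() helpers below are local stand-ins with
made-up bit positions, not QEMU's definitions, and the __builtin_*
calls assume a GCC or Clang host.

#include <stdint.h>
#include <stdio.h>

#define MO_8     0
#define MO_16    1
#define MO_32    2
#define MO_64    3
#define MO_SIZE  3            /* mask covering the size bits */
#define MO_BSWAP (1 << 3)     /* illustrative bit position */

/* Hypothetical stand-ins for SIZE_MEMOP()/MEMOP_SIZE(): bytes <-> log2. */
#define SIZE_MEMOP(bytes) (__builtin_ctz(bytes))
#define MEMOP_SIZE(op)    (1 << ((op) & MO_SIZE))

static uint64_t apply_memop(uint64_t val, int op)
{
    if (!(op & MO_BSWAP)) {
        return val;           /* no swap requested */
    }
    switch (op & MO_SIZE) {
    case MO_8:  return val;
    case MO_16: return __builtin_bswap16(val);
    case MO_32: return __builtin_bswap32(val);
    case MO_64: return __builtin_bswap64(val);
    default:    return val;   /* unreachable for valid ops */
    }
}

int main(void)
{
    int op = SIZE_MEMOP(4);                 /* 4-byte access, host order */
    op ^= MO_BSWAP;                         /* region endianness inverted */
    printf("%08x\n", (unsigned)apply_memop(0x11223344, op)); /* 44332211 */
    printf("size=%d\n", MEMOP_SIZE(op));                     /* size=4 */
    return 0;
}

Toggling with XOR rather than setting the flag matters: if the
instruction already encoded a swap and the region is also inverted, the
two cancel out, which is exactly the single-swap behaviour this series
is after.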





* [Qemu-devel] [PATCH v4 12/15] cpu: TLB_FLAGS_MASK bit to force memory slow path
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:05         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

The fast path is taken only when none of the TLB_FLAGS_MASK bits are
set in the TLB entry.

TLB_FORCE_SLOW is simply a new TLB_FLAGS_MASK bit that forces the slow
path; it has no other side effects.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/cpu-all.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 536ea58..e496f99 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -331,12 +331,18 @@ CPUArchState *cpu_copy(CPUArchState *env);
 #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
 /* Set if TLB entry must have MMU lookup repeated for every access */
 #define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
+/* Set if TLB entry must take the slow path.  */
+#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))

 /* Use this mask to check interception with an alignment mask
  * in a TCG backend.
  */
-#define TLB_FLAGS_MASK  (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
-                         | TLB_RECHECK)
+#define TLB_FLAGS_MASK \
+    (TLB_INVALID_MASK  \
+     | TLB_NOTDIRTY    \
+     | TLB_MMIO        \
+     | TLB_RECHECK     \
+     | TLB_FORCE_SLOW)

 /**
  * tlb_hit_page: return true if page aligned @addr is a hit against the
--
1.8.3.1
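
Why a single flag bit is enough to force the slow path: the TCG fast
path compares the page-aligned access address against the cached TLB
address, and the flag bits live in the low, page-offset bits of that
cached value, so any set flag makes the compare fail. A minimal sketch
of that comparison follows, with made-up values and none of the
alignment handling the real code also folds in:

#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_MASK (~((1u << TARGET_PAGE_BITS) - 1))
#define TLB_FORCE_SLOW   (1u << (TARGET_PAGE_BITS - 5))

/* Fast path only if the tags match and no flag bits are set. */
static int is_fast_path(uint32_t tlb_addr, uint32_t addr)
{
    return tlb_addr == (addr & TARGET_PAGE_MASK);
}

int main(void)
{
    uint32_t addr  = 0x12345678;
    uint32_t entry = addr & TARGET_PAGE_MASK;          /* clean entry */
    printf("%d\n", is_fast_path(entry, addr));                  /* 1 */
    printf("%d\n", is_fast_path(entry | TLB_FORCE_SLOW, addr)); /* 0 */
    return 0;
}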





* [Qemu-devel] [PATCH v4 13/15] cputlb: Byte swap memory transaction attribute
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:05         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:05 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Notice the new byte-swap memory transaction attribute and force such
transactions through the memory slow path.

This is required by architectures that can invert the endianness of a
memory transaction, e.g. SPARC64 with its Invert Endian TTE bit.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c      | 11 +++++++++++
 include/exec/memattrs.h |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e61b1eb..f292a87 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -738,6 +738,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
          */
         address |= TLB_RECHECK;
     }
+    if (attrs.byte_swap) {
+        address |= TLB_FORCE_SLOW;
+    }
     if (!memory_region_is_ram(section->mr) &&
         !memory_region_is_romd(section->mr)) {
         /* IO memory case */
@@ -891,6 +894,10 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;

+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -933,6 +940,10 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;

+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
index d4a3477..a0644eb 100644
--- a/include/exec/memattrs.h
+++ b/include/exec/memattrs.h
@@ -37,6 +37,8 @@ typedef struct MemTxAttrs {
     unsigned int user:1;
     /* Requester ID (for MSI for example) */
     unsigned int requester_id:16;
+    /* SPARC64: TTE invert endianness */
+    unsigned int byte_swap:1;
     /*
      * The following are target-specific page-table bits.  These are not
      * related to actual memory transactions at all.  However, this structure
--
1.8.3.1
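
The moving parts here are small enough to model standalone: a one-bit
transaction attribute recorded at TLB-fill time, then consumed on the
I/O path by toggling MO_BSWAP on the MemOp. The struct and constant
below are local stand-ins for include/exec/memattrs.h and the MemOp
flags, with an illustrative bit position:

#include <stdbool.h>
#include <stdio.h>

typedef struct MemTxAttrs {
    unsigned int byte_swap:1;   /* SPARC64: TTE invert endianness */
} MemTxAttrs;

#define MO_BSWAP (1 << 3)       /* illustrative bit position */

/* Mirrors the io_readx()/io_writex() hunks: invert whatever the
 * instruction encoded when the page's attribute asks for it. */
static int io_access_op(MemTxAttrs attrs, int op)
{
    if (attrs.byte_swap) {
        op ^= MO_BSWAP;
    }
    return op;
}

int main(void)
{
    MemTxAttrs attrs = { .byte_swap = true };
    printf("%#x\n", io_access_op(attrs, 0x2));            /* gains MO_BSWAP */
    printf("%#x\n", io_access_op(attrs, 0x2 | MO_BSWAP)); /* loses it */
    return 0;
}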





* [Qemu-devel] [PATCH v4 14/15] target/sparc: Add TLB entry with attributes
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:06         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Append MemTxAttrs to these interfaces so we can pass along the
upcoming Invert Endian TTE bit on SPARC64.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/sparc/mmu_helper.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index cbd1e91..826e14b 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -88,7 +88,7 @@ static const int perm_table[2][8] = {
 };

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -219,6 +219,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     target_ulong page_size;
     int error_code = 0, prot, access_index;
+    MemTxAttrs attrs = {};

     /*
      * TODO: If we ever need tlb_vaddr_to_host for this target,
@@ -229,7 +230,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     assert(!probe);

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);
     vaddr = address;
@@ -490,8 +491,8 @@ static inline int ultrasparc_tag_match(SparcTLBEntry *tlb,
     return 0;
 }

-static int get_physical_address_data(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int rw, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -608,8 +609,8 @@ static int get_physical_address_data(CPUSPARCState *env,
     return 1;
 }

-static int get_physical_address_code(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_code(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -686,7 +687,7 @@ static int get_physical_address_code(CPUSPARCState *env,
 }

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -716,11 +717,11 @@ static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
     }

     if (rw == 2) {
-        return get_physical_address_code(env, physical, prot, address,
+        return get_physical_address_code(env, physical, prot, attrs, address,
                                          mmu_idx);
     } else {
-        return get_physical_address_data(env, physical, prot, address, rw,
-                                         mmu_idx);
+        return get_physical_address_data(env, physical, prot, attrs, address,
+                                         rw, mmu_idx);
     }
 }

@@ -734,10 +735,11 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     hwaddr paddr;
     target_ulong page_size;
+    MemTxAttrs attrs = {};
     int error_code = 0, prot, access_index;

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);
     if (likely(error_code == 0)) {
@@ -747,7 +749,8 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                                    env->dmmu.mmu_primary_context,
                                    env->dmmu.mmu_secondary_context);

-        tlb_set_page(cs, vaddr, paddr, prot, mmu_idx, page_size);
+        tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx,
+                                page_size);
         return true;
     }
     if (probe) {
@@ -849,9 +852,10 @@ static int cpu_sparc_get_phys_page(CPUSPARCState *env, hwaddr *phys,
 {
     target_ulong page_size;
     int prot, access_index;
+    MemTxAttrs attrs = {};

-    return get_physical_address(env, phys, &prot, &access_index, addr, rw,
-                                mmu_idx, &page_size);
+    return get_physical_address(env, phys, &prot, &access_index, &attrs, addr,
+                                rw, mmu_idx, &page_size);
 }

 #if defined(TARGET_SPARC64)
--
1.8.3.1
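
A sketch of the plumbing pattern the patch applies, with all names
made up: the caller zero-initializes a MemTxAttrs, each level of the
translation takes a MemTxAttrs * out-parameter, and the lowest level
sets bits that the caller finally hands to the TLB:

#include <stdio.h>

typedef struct MemTxAttrs { unsigned int byte_swap:1; } MemTxAttrs;

/* Hypothetical stand-in for get_physical_address_data(): sets an
 * attribute bit as a side effect of a successful lookup. */
static int translate(unsigned long addr, MemTxAttrs *attrs)
{
    if (addr & 1) {            /* pretend this page's TTE has IE set */
        attrs->byte_swap = 1;
    }
    return 0;                  /* 0 == success */
}

int main(void)
{
    MemTxAttrs attrs = {};     /* all attributes start cleared */
    if (translate(0x1001, &attrs) == 0) {
        /* the real code now calls tlb_set_page_with_attrs(..., attrs, ...) */
        printf("byte_swap=%d\n", attrs.byte_swap);    /* byte_swap=1 */
    }
    return 0;
}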





* [Qemu-devel] [PATCH v4 15/15] target/sparc: sun4u Invert Endian TTE bit
  2019-07-25  7:58       ` [Qemu-riscv] " tony.nguyen
@ 2019-07-25  8:06         ` tony.nguyen
  -1 siblings, 0 replies; 120+ messages in thread
From: tony.nguyen @ 2019-07-25  8:06 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, mst, palmer, mark.cave-ayland,
	Alistair.Francis, arikalo, david, pasic, borntraeger, rth,
	atar4qemu, ehabkost, sw, qemu-s390x, qemu-arm, david, qemu-riscv,
	cohuck, alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

This bit configures the endianness of PCI MMIO devices. It is used by
the Solaris and OpenBSD sunhme drivers.

Tested working on OpenBSD.

Unfortunately Solaris 10 has an unrelated keyboard issue blocking
testing... another inch towards Solaris 10 on SPARC64 =)

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/sparc/cpu.h        | 2 ++
 target/sparc/mmu_helper.c | 8 +++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.h b/target/sparc/cpu.h
index 8ed2250..77e8e07 100644
--- a/target/sparc/cpu.h
+++ b/target/sparc/cpu.h
@@ -277,6 +277,7 @@ enum {

 #define TTE_VALID_BIT       (1ULL << 63)
 #define TTE_NFO_BIT         (1ULL << 60)
+#define TTE_IE_BIT          (1ULL << 59)
 #define TTE_USED_BIT        (1ULL << 41)
 #define TTE_LOCKED_BIT      (1ULL <<  6)
 #define TTE_SIDEEFFECT_BIT  (1ULL <<  3)
@@ -293,6 +294,7 @@ enum {

 #define TTE_IS_VALID(tte)   ((tte) & TTE_VALID_BIT)
 #define TTE_IS_NFO(tte)     ((tte) & TTE_NFO_BIT)
+#define TTE_IS_IE(tte)      ((tte) & TTE_IE_BIT)
 #define TTE_IS_USED(tte)    ((tte) & TTE_USED_BIT)
 #define TTE_IS_LOCKED(tte)  ((tte) & TTE_LOCKED_BIT)
 #define TTE_IS_SIDEEFFECT(tte) ((tte) & TTE_SIDEEFFECT_BIT)
diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index 826e14b..77dc86a 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -537,6 +537,10 @@ static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
         if (ultrasparc_tag_match(&env->dtlb[i], address, context, physical)) {
             int do_fault = 0;

+            if (TTE_IS_IE(env->dtlb[i].tte)) {
+                attrs->byte_swap = true;
+            }
+
             /* access ok? */
             /* multiple bits in SFSR.FT may be set on TT_DFAULT */
             if (TTE_IS_PRIV(env->dtlb[i].tte) && is_user) {
@@ -792,7 +796,7 @@ void dump_mmu(CPUSPARCState *env)
             }
             if (TTE_IS_VALID(env->dtlb[i].tte)) {
                 qemu_printf("[%02u] VA: %" PRIx64 ", PA: %llx"
-                            ", %s, %s, %s, %s, ctx %" PRId64 " %s\n",
+                            ", %s, %s, %s, %s, ie %s, ctx %" PRId64 " %s\n",
                             i,
                             env->dtlb[i].tag & (uint64_t)~0x1fffULL,
                             TTE_PA(env->dtlb[i].tte),
@@ -801,6 +805,8 @@ void dump_mmu(CPUSPARCState *env)
                             TTE_IS_W_OK(env->dtlb[i].tte) ? "RW" : "RO",
                             TTE_IS_LOCKED(env->dtlb[i].tte) ?
                             "locked" : "unlocked",
+                            TTE_IS_IE(env->dtlb[i].tte) ?
+                            "yes" : "no",
                             env->dtlb[i].tag & (uint64_t)0x1fffULL,
                             TTE_IS_GLOBAL(env->dtlb[i].tte) ?
                             "global" : "local");
--
1.8.3.1
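
For completeness, the TTE bit test itself is plain bit arithmetic; a
standalone sketch with the bit positions from this patch and a made-up
sample entry:

#include <stdint.h>
#include <stdio.h>

#define TTE_VALID_BIT (1ULL << 63)
#define TTE_IE_BIT    (1ULL << 59)

#define TTE_IS_VALID(tte) ((tte) & TTE_VALID_BIT)
#define TTE_IS_IE(tte)    ((tte) & TTE_IE_BIT)

int main(void)
{
    uint64_t tte = TTE_VALID_BIT | TTE_IE_BIT;   /* made-up sample entry */
    if (TTE_IS_VALID(tte)) {
        printf("ie %s\n", TTE_IS_IE(tte) ? "yes" : "no");   /* ie yes */
    }
    return 0;
}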




