* [Qemu-devel] [PATCH v5 00/15] Invert Endian bit in SPARCv9 MMU TTE
From: tony.nguyen @ 2019-07-26  6:42 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, david, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, arikalo, mst, pasic,
	borntraeger, rth, atar4qemu, ehabkost, alex.williamson, qemu-arm,
	stefanha, shorne, david, qemu-riscv, qemu-s390x, kbastian,
	cohuck, laurent, qemu-ppc, amarkovic, pbonzini, aurelien

This patchset implements the IE (Invert Endian) bit in SPARCv9 MMU TTE.

It is an attempt at implementing the approach outlined by Richard Henderson
to Mark Cave-Ayland.

Tested with OpenBSD on sun4u. Solaris 10 is my actual goal, but unfortunately
a separate keyboard issue still stands in the way.

On 01/11/17 19:15, Mark Cave-Ayland wrote:

>On 15/08/17 19:10, Richard Henderson wrote:
>
>> [CC Peter re MemTxAttrs below]
>>
>> On 08/15/2017 09:38 AM, Mark Cave-Ayland wrote:
>>> Working through an incorrect endian issue on qemu-system-sparc64, it has
>>> become apparent that at least one OS makes use of the IE (Invert Endian)
>>> bit in the SPARCv9 MMU TTE to map PCI memory space without the
>>> programmer having to manually endian-swap accesses.
>>>
>>> In other words, to quote the UltraSPARC specification: "if this bit is
>>> set, accesses to the associated page are processed with inverse
>>> endianness from what is specified by the instruction (big-for-little and
>>> little-for-big)".

A good explanation by Mark of why the IE bit is required.
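
For concreteness, a minimal sketch of the check involved (assuming the
UltraSPARC I/II TTE data format, where IE is bit 59 of the TTE data word;
the macro and helper names here are illustrative, not necessarily what this
series adds):

    #include <stdbool.h>
    #include <stdint.h>

    /* IE (Invert Endian): bit 59 of the TTE data word.  Accesses through
       a page whose TTE has IE set are byte-swapped relative to the
       endianness requested by the instruction. */
    #define TTE_IE_MASK (1ULL << 59)

    static inline bool tte_is_invert_endian(uint64_t tte_data)
    {
        return (tte_data & TTE_IE_MASK) != 0;
    }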

>>>
>>> Looking through various bits of code, I'm trying to get a feel for the
>>> best way to implement this in an efficient manner. From what I can see
>>> this could be solved using an additional MMU index, however I'm not
>>> overly familiar with the memory and softmmu subsystems.
>>
>> No, it can't be solved with an MMU index.
>>
>>> Can anyone point me in the right direction as to what would be the best
>>> way to implement this feature within QEMU?
>>
>> It's definitely tricky.
>>
>> We definitely need some TLB_FLAGS_MASK bit set so that we're forced through
>> the memory slow path.  There is no other way to bypass the endianness that
>> we've already encoded from the target instruction.
>>
>> Given the tlb_set_page_with_attrs interface, I would think that we need a new
>> bit in MemTxAttrs, so that the target/sparc tlb_fill (and subroutines) can
>> pass along the TTE bit for the given page.
>>
>> We have an existing problem in softmmu_template.h,
>>
>>     /* ??? Note that the io helpers always read data in the target
>>        byte ordering.  We should push the LE/BE request down into io.  */
>>     res = glue(io_read, SUFFIX)(env, mmu_idx, index, addr, retaddr);
>>     res = TGT_BE(res);
>>
>> We do not want to add a third(!) byte swap along the i/o path.  We need to
>> collapse the two that we have already before considering this one.
>>
>> This probably takes the form of:
>>
>> (1) Replacing the "int size" argument with "TCGMemOp memop" for
>>       a) io_{read,write}x in accel/tcg/cputlb.c,
>>       b) memory_region_dispatch_{read,write} in memory.c,
>>       c) adjust_endianness in memory.c.
>>     This carries size+sign+endianness down to the next level.
>>
>> (2) In memory.c, adjust_endianness,
>>
>>      if (memory_region_wrong_endianness(mr)) {
>> -        switch (size) {
>> +        memop ^= MO_BSWAP;
>> +    }
>> +    if (memop & MO_BSWAP) {
>>
>>     For extra credit, re-arrange memory_region_wrong_endianness
>>     to something more explicit -- "wrong" isn't helpful.
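
For illustration, the shape adjust_endianness might take after (1) and (2)
(a sketch assuming QEMU's bswap16/32/64 helpers from "qemu/bswap.h"; not
necessarily the exact code that lands in memory.c):

    static void adjust_endianness(MemoryRegion *mr, uint64_t *data, MemOp op)
    {
        if (memory_region_wrong_endianness(mr)) {
            op ^= MO_BSWAP;              /* fold the region's endianness in */
        }
        if (op & MO_BSWAP) {
            switch (op & MO_SIZE) {
            case MO_16:
                *data = bswap16(*data);
                break;
            case MO_32:
                *data = bswap32(*data);
                break;
            case MO_64:
                *data = bswap64(*data);
                break;
            default:
                break;                   /* MO_8: nothing to swap */
            }
        }
    }
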
>
>Finally I've had a bit of spare time to experiment with this approach,
>and from what I can see there are currently 2 issues:
>
>
>1) Using TCGMemOp in memory.c means it is no longer accelerator agnostic
>
>For the moment I've defined a separate MemOp in memory.h and provided a
>mapping function in io_{read,write}x to map from TCGMemOp to MemOp and
>then pass that into memory_region_dispatch_{read,write}.
>
>Other than not referencing TCGMemOp in the memory API, another reason
>for doing this was that I wasn't convinced that all the MO_ attributes
>were valid outside of TCG. I do, of course, strongly defer to other
>people's knowledge in this area though.
>
>
>2) The above changes to adjust_endianness() fail when
>memory_region_dispatch_{read,write} are called recursively
>
>Whilst booting qemu-system-sparc64 I see that
>memory_region_dispatch_{read,write} get called recursively - once via
>io_{read,write}x and then again via flatview_read_continue() in exec.c.
>
>The net effect of this is that we perform the bswap correctly at the
>tail of the recursion, but then as we travel back up the stack we hit
>memory_region_dispatch_{read,write} once again causing a second bswap
>which means the value is returned with the incorrect endian again.
>
>
>My understanding from your softmmu_template.h comment above is that the
>memory API should do the endian swapping internally allowing the removal
>of the final TGT_BE/TGT_LE applied to the result, or did I get this wrong?
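
The failure mode is easy to reproduce in isolation: two byte swaps cancel,
so the swap done correctly at the tail of the recursion is undone by the
second dispatch on the way back up.  A trivial standalone sketch (using the
GCC/Clang __builtin_bswap32 builtin):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t v = 0x11223344;
        uint32_t once = __builtin_bswap32(v);    /* 0x44332211, correct */
        assert(__builtin_bswap32(once) == v);    /* second swap undoes it */
        return 0;
    }
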
>
>> (3) In tlb_set_page_with_attrs, notice attrs.byte_swap and set
>>     a new TLB_FORCE_SLOW bit within TLB_FLAGS_MASK.
>>
>> (4) In io_{read,write}x, if iotlbentry->attrs.byte_swap is set,
>>     then memop ^= MO_BSWAP.
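
Steps (3) and (4) reduce to two small guards; a sketch of both, with the
surrounding code abbreviated (the flag and attribute names follow the plan
above and the patches in this series):

    /* (3) in tlb_set_page_with_attrs(): force byte-swapped pages through
       the memory slow path so io_{read,write}x can see the attribute. */
    if (attrs.byte_swap) {
        address |= TLB_FORCE_SLOW;
    }

    /* (4) in io_readx()/io_writex(): apply the per-page inversion. */
    if (iotlbentry->attrs.byte_swap) {
        op ^= MO_BSWAP;
    }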

Thanks all for the feedback. Learnt a lot =)

v2:
- Moved size+sign+endianness attributes from TCGMemOp into MemOp.
  In v1 TCGMemOp was re-purposed entirely into MemOp.
- Replaced MemOp MO_{8|16|32|64} with TCGMemOp MO_{UB|UW|UL|UQ} alias.
  This is to avoid warnings on comparing and coercing different enums.
- Renamed get_memop to get_tcgmemop for clarity.
- MEMOP is now SIZE_MEMOP, which is just ctzl(size); see the sketch after
  this list.
- Split patch 3/4 so one memory_region_dispatch_{read|write} interface
  is converted per patch.
- Do not reuse TLB_RECHECK, use new TLB_FORCE_SLOW instead.
- Split patch 4/4 so adding the MemTxAddrs parameters and converting
  tlb_set_page() to tlb_set_page_with_attrs() is separate from usage.
- CC'd maintainers.
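
For reference, the size-to-MemOp conversion mentioned above amounts to a
log2 of the access size in bytes (a sketch; ctzl() is QEMU's
count-trailing-zeros helper from "qemu/host-utils.h"):

    /* 1, 2, 4, 8 bytes -> MO_8, MO_16, MO_32, MO_64 */
    #define SIZE_MEMOP(size) (ctzl(size))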

v3:
- Like v1, the entire TCGMemOp enum is now MemOp.
- MemOp target-dependent attributes are conditional upon NEED_CPU_H.

v4:
- Added Paolo Bonzini as include/exec/memop.h maintainer

v5:
- Improved commit messages to clarify how the interface to access
  MemoryRegion will be converted from "unsigned size" to "MemOp op".
- Moved cpu_transaction_failed() MemOp conversion from patch #11 to #9
  to make review easier.

Tony Nguyen (15):
  tcg: TCGMemOp is now accelerator independent MemOp
  memory: Access MemoryRegion with MemOp
  target/mips: Access MemoryRegion with MemOp
  hw/s390x: Access MemoryRegion with MemOp
  hw/intc/armv7m_nvic: Access MemoryRegion with MemOp
  hw/virtio: Access MemoryRegion with MemOp
  hw/vfio: Access MemoryRegion with MemOp
  exec: Access MemoryRegion with MemOp
  cputlb: Access MemoryRegion with MemOp
  memory: Access MemoryRegion with MemOp semantics
  memory: Single byte swap along the I/O path
  cpu: TLB_FLAGS_MASK bit to force memory slow path
  cputlb: Byte swap memory transaction attribute
  target/sparc: Add TLB entry with attributes
  target/sparc: sun4u Invert Endian TTE bit

 MAINTAINERS                             |   1 +
 accel/tcg/cputlb.c                      |  71 +++++++++--------
 exec.c                                  |   6 +-
 hw/intc/armv7m_nvic.c                   |  12 ++-
 hw/s390x/s390-pci-inst.c                |   8 +-
 hw/vfio/pci-quirks.c                    |   5 +-
 hw/virtio/virtio-pci.c                  |   7 +-
 include/exec/cpu-all.h                  |  10 ++-
 include/exec/memattrs.h                 |   2 +
 include/exec/memop.h                    | 112 +++++++++++++++++++++++++++
 include/exec/memory.h                   |   9 ++-
 memory.c                                |  37 +++++----
 memory_ldst.inc.c                       |  18 ++---
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/op_helper.c                 |   5 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/cpu.h                      |   2 +
 target/sparc/mmu_helper.c               |  40 ++++++----
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 52 files changed, 562 insertions(+), 488 deletions(-)
 create mode 100644 include/exec/memop.h

-- 
1.8.3.1


* [Qemu-devel] [PATCH v5 01/15] tcg: TCGMemOp is now accelerator independent MemOp
From: tony.nguyen @ 2019-07-26  6:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 MAINTAINERS                             |   1 +
 accel/tcg/cputlb.c                      |   2 +-
 include/exec/memop.h                    | 109 ++++++++++++++++++++++++++
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 39 files changed, 420 insertions(+), 397 deletions(-)
 create mode 100644 include/exec/memop.h

diff --git a/MAINTAINERS b/MAINTAINERS
index cc9636b..3f148cd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1890,6 +1890,7 @@ M: Paolo Bonzini <pbonzini@redhat.com>
 S: Supported
 F: include/exec/ioport.h
 F: ioport.c
+F: include/exec/memop.h
 F: include/exec/memory.h
 F: include/exec/ram_addr.h
 F: memory.c
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bb9897b..523be4c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(tlbe);
-    TCGMemOp mop = get_memop(oi);
+    MemOp mop = get_memop(oi);
     int a_bits = get_alignment_bits(mop);
     int s_bits = mop & MO_SIZE;
     void *hostaddr;
diff --git a/include/exec/memop.h b/include/exec/memop.h
new file mode 100644
index 0000000..ac58066
--- /dev/null
+++ b/include/exec/memop.h
@@ -0,0 +1,109 @@
+/*
+ * Constants for memory operations
+ *
+ * Authors:
+ *  Richard Henderson <rth@twiddle.net>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MEMOP_H
+#define MEMOP_H
+
+typedef enum MemOp {
+    MO_8     = 0,
+    MO_16    = 1,
+    MO_32    = 2,
+    MO_64    = 3,
+    MO_SIZE  = 3,   /* Mask for the above.  */
+
+    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
+
+    MO_BSWAP = 8,   /* Host reverse endian.  */
+#ifdef HOST_WORDS_BIGENDIAN
+    MO_LE    = MO_BSWAP,
+    MO_BE    = 0,
+#else
+    MO_LE    = 0,
+    MO_BE    = MO_BSWAP,
+#endif
+#ifdef NEED_CPU_H
+#ifdef TARGET_WORDS_BIGENDIAN
+    MO_TE    = MO_BE,
+#else
+    MO_TE    = MO_LE,
+#endif
+#endif
+
+    /*
+     * MO_UNALN accesses are never checked for alignment.
+     * MO_ALIGN accesses will result in a call to the CPU's
+     * do_unaligned_access hook if the guest address is not aligned.
+     * The default depends on whether the target CPU defines ALIGNED_ONLY.
+     *
+     * Some architectures (e.g. ARMv8) need the address which is aligned
+     * to a size more than the size of the memory access.
+     * Some architectures (e.g. SPARCv9) need an address which is aligned,
+     * but less strictly than the natural alignment.
+     *
+     * MO_ALIGN supposes the alignment size is the size of a memory access.
+     *
+     * There are three options:
+     * - unaligned access permitted (MO_UNALN).
+     * - an alignment to the size of an access (MO_ALIGN);
+     * - an alignment to a specified size, which may be more or less than
+     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
+     */
+    MO_ASHIFT = 4,
+    MO_AMASK = 7 << MO_ASHIFT,
+#ifdef NEED_CPU_H
+#ifdef ALIGNED_ONLY
+    MO_ALIGN = 0,
+    MO_UNALN = MO_AMASK,
+#else
+    MO_ALIGN = MO_AMASK,
+    MO_UNALN = 0,
+#endif
+#endif
+    MO_ALIGN_2  = 1 << MO_ASHIFT,
+    MO_ALIGN_4  = 2 << MO_ASHIFT,
+    MO_ALIGN_8  = 3 << MO_ASHIFT,
+    MO_ALIGN_16 = 4 << MO_ASHIFT,
+    MO_ALIGN_32 = 5 << MO_ASHIFT,
+    MO_ALIGN_64 = 6 << MO_ASHIFT,
+
+    /* Combinations of the above, for ease of use.  */
+    MO_UB    = MO_8,
+    MO_UW    = MO_16,
+    MO_UL    = MO_32,
+    MO_SB    = MO_SIGN | MO_8,
+    MO_SW    = MO_SIGN | MO_16,
+    MO_SL    = MO_SIGN | MO_32,
+    MO_Q     = MO_64,
+
+    MO_LEUW  = MO_LE | MO_UW,
+    MO_LEUL  = MO_LE | MO_UL,
+    MO_LESW  = MO_LE | MO_SW,
+    MO_LESL  = MO_LE | MO_SL,
+    MO_LEQ   = MO_LE | MO_Q,
+
+    MO_BEUW  = MO_BE | MO_UW,
+    MO_BEUL  = MO_BE | MO_UL,
+    MO_BESW  = MO_BE | MO_SW,
+    MO_BESL  = MO_BE | MO_SL,
+    MO_BEQ   = MO_BE | MO_Q,
+
+#ifdef NEED_CPU_H
+    MO_TEUW  = MO_TE | MO_UW,
+    MO_TEUL  = MO_TE | MO_UL,
+    MO_TESW  = MO_TE | MO_SW,
+    MO_TESL  = MO_TE | MO_SL,
+    MO_TEQ   = MO_TE | MO_Q,
+#endif
+
+    MO_SSIZE = MO_SIZE | MO_SIGN,
+} MemOp;
+
+#endif
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index 2c9cccf..d5d4888 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -403,7 +403,7 @@ static inline void gen_store_mem(DisasContext *ctx,

 static DisasJumpType gen_store_conditional(DisasContext *ctx, int ra, int rb,
                                            int32_t disp16, int mem_idx,
-                                           TCGMemOp op)
+                                           MemOp op)
 {
     TCGLabel *lab_fail, *lab_done;
     TCGv addr, val;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d323147..b6c07d6 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -85,7 +85,7 @@ typedef void NeonGenOneOpFn(TCGv_i64, TCGv_i64);
 typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
 typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
-typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
+typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);

 /* initialize TCG globals.  */
 void a64_translate_init(void)
@@ -455,7 +455,7 @@ TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
  * Dn, Sn, Hn or Bn).
  * (Note that this is not the same mapping as for A32; see cpu.h)
  */
-static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
+static inline int fp_reg_offset(DisasContext *s, int regno, MemOp size)
 {
     return vec_reg_offset(s, regno, 0, size);
 }
@@ -871,7 +871,7 @@ static void do_gpr_ld_memidx(DisasContext *s,
                              bool iss_valid, unsigned int iss_srt,
                              bool iss_sf, bool iss_ar)
 {
-    TCGMemOp memop = s->be_data + size;
+    MemOp memop = s->be_data + size;

     g_assert(size <= 3);

@@ -948,7 +948,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
     TCGv_i64 tmphi;

     if (size < 4) {
-        TCGMemOp memop = s->be_data + size;
+        MemOp memop = s->be_data + size;
         tmphi = tcg_const_i64(0);
         tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
     } else {
@@ -989,7 +989,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)

 /* Get value of an element within a vector register */
 static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
-                             int element, TCGMemOp memop)
+                             int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1021,7 +1021,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 }

 static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
-                                 int element, TCGMemOp memop)
+                                 int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1048,7 +1048,7 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,

 /* Set value of an element within a vector register */
 static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
-                              int element, TCGMemOp memop)
+                              int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1070,7 +1070,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 }

 static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
-                                  int destidx, int element, TCGMemOp memop)
+                                  int destidx, int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1090,7 +1090,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,

 /* Store from vector register to memory */
 static void do_vec_st(DisasContext *s, int srcidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -1102,7 +1102,7 @@ static void do_vec_st(DisasContext *s, int srcidx, int element,

 /* Load from memory to vector register */
 static void do_vec_ld(DisasContext *s, int destidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -2200,7 +2200,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i64 addr, int size, bool is_pair)
 {
     int idx = get_mem_index(s);
-    TCGMemOp memop = s->be_data;
+    MemOp memop = s->be_data;

     g_assert(size <= 3);
     if (is_pair) {
@@ -3286,7 +3286,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
     TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
-    TCGMemOp endian = s->be_data;
+    MemOp endian = s->be_data;

     int ebytes;   /* bytes per element */
     int elements; /* elements per vector */
@@ -5455,7 +5455,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
     unsigned int mos, type, rm, cond, rn, rd;
     TCGv_i64 t_true, t_false, t_zero;
     DisasCompare64 c;
-    TCGMemOp sz;
+    MemOp sz;

     mos = extract32(insn, 29, 3);
     type = extract32(insn, 22, 2);
@@ -6267,7 +6267,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
     int mos = extract32(insn, 29, 3);
     uint64_t imm;
     TCGv_i64 tcg_res;
-    TCGMemOp sz;
+    MemOp sz;

     if (mos || imm5) {
         unallocated_encoding(s);
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        MemOp msize = esize == 16 ? MO_16 : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -8022,7 +8022,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
     int shift = (2 * esize) - immhb;
     int elements = is_scalar ? 1 : (64 / esize);
     bool round = extract32(opcode, 0, 1);
-    TCGMemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
+    MemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn, tcg_rd, tcg_round;
     TCGv_i32 tcg_rd_narrowed;
     TCGv_i64 tcg_final;
@@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
             }
         };
         NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
-        TCGMemOp memop = scalar ? size : MO_32;
+        MemOp memop = scalar ? size : MO_32;
         int maxpass = scalar ? 1 : is_q ? 4 : 2;

         for (pass = 0; pass < maxpass; pass++) {
@@ -8225,7 +8225,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
     TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
     TCGv_i32 tcg_shift = NULL;

-    TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
+    MemOp mop = size | (is_signed ? MO_SIGN : 0);
     int pass;

     if (fracbits || size == MO_64) {
@@ -10004,7 +10004,7 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
     int dsize = is_q ? 128 : 64;
     int esize = 8 << size;
     int elements = dsize/esize;
-    TCGMemOp memop = size | (is_u ? 0 : MO_SIGN);
+    MemOp memop = size | (is_u ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn = new_tmp_a64(s);
     TCGv_i64 tcg_rd = new_tmp_a64(s);
     TCGv_i64 tcg_round;
@@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_passres;
-            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
+            MemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);

             int elt = pass + is_q * 2;

@@ -11827,7 +11827,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,

     if (size == 2) {
         /* 32 + 32 -> 64 op */
-        TCGMemOp memop = size + (u ? 0 : MO_SIGN);
+        MemOp memop = size + (u ? 0 : MO_SIGN);

         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
@@ -12849,7 +12849,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     switch (is_fp) {
     case 1: /* normal fp */
-        /* convert insn encoded size to TCGMemOp size */
+        /* convert insn encoded size to MemOp size */
         switch (size) {
         case 0: /* half-precision */
             size = MO_16;
@@ -12897,7 +12897,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         return;
     }

-    /* Given TCGMemOp size, adjust register and indexing.  */
+    /* Given MemOp size, adjust register and indexing.  */
     switch (size) {
     case MO_16:
         index = h << 2 | l << 1 | m;
@@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_res[2];
         int pass;
         bool satop = extract32(opcode, 0, 1);
-        TCGMemOp memop = MO_32;
+        MemOp memop = MO_32;

         if (satop || !u) {
             memop |= MO_SIGN;
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 9ab4087..f1246b7 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -64,7 +64,7 @@ static inline void assert_fp_access_checked(DisasContext *s)
  * the FP/vector register Qn.
  */
 static inline int vec_reg_offset(DisasContext *s, int regno,
-                                 int element, TCGMemOp size)
+                                 int element, MemOp size)
 {
     int element_size = 1 << size;
     int offs = element * element_size;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fa068b0..5d7edd0 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4567,7 +4567,7 @@ static bool trans_STR_pri(DisasContext *s, arg_rri *a)
  */

 /* The memory mode of the dtype.  */
-static const TCGMemOp dtype_mop[16] = {
+static const MemOp dtype_mop[16] = {
     MO_UB, MO_UB, MO_UB, MO_UB,
     MO_SL, MO_UW, MO_UW, MO_UW,
     MO_SW, MO_SW, MO_UL, MO_UL,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462..d116c8c 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -114,7 +114,7 @@ typedef enum ISSInfo {
 } ISSInfo;

 /* Save the syndrome information for a Data Abort */
-static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo)
+static void disas_set_da_iss(DisasContext *s, MemOp memop, ISSInfo issinfo)
 {
     uint32_t syn;
     int sas = memop & MO_SIZE;
@@ -1079,7 +1079,7 @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
  * that the address argument is TCGv_i32 rather than TCGv.
  */

-static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
+static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
 {
     TCGv addr = tcg_temp_new();
     tcg_gen_extu_i32_tl(addr, a32);
@@ -1092,7 +1092,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
 }

 static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1107,7 +1107,7 @@ static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
 }

 static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1160,7 +1160,7 @@ static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
 }

 static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);
     tcg_gen_qemu_ld_i64(val, addr, index, opc);
@@ -1175,7 +1175,7 @@ static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
 }

 static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);

@@ -1400,7 +1400,7 @@ neon_reg_offset (int reg, int n)
  * where 0 is the least significant end of the register.
  */
 static inline long
-neon_element_offset(int reg, int element, TCGMemOp size)
+neon_element_offset(int reg, int element, MemOp size)
 {
     int element_size = 1 << size;
     int ofs = element * element_size;
@@ -1422,7 +1422,7 @@ static TCGv_i32 neon_load_reg(int reg, int pass)
     return tmp;
 }

-static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1441,7 +1441,7 @@ static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
     }
 }

-static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1469,7 +1469,7 @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
     tcg_temp_free_i32(var);
 }

-static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
+static void neon_store_element(int reg, int ele, MemOp size, TCGv_i32 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -1488,7 +1488,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     }
 }

-static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
+static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -3558,7 +3558,7 @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int n;
     int vec_size;
     int mmu_idx;
-    TCGMemOp endian;
+    MemOp endian;
     TCGv_i32 addr;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -6867,7 +6867,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             } else if ((insn & 0x380) == 0) {
                 /* VDUP */
                 int element;
-                TCGMemOp size;
+                MemOp size;

                 if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
                     return 1;
@@ -7435,7 +7435,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i32 addr, int size)
 {
     TCGv_i32 tmp = tcg_temp_new_i32();
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     s->is_ldex = true;

@@ -7489,7 +7489,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     TCGv taddr;
     TCGLabel *done_label;
     TCGLabel *fail_label;
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     /* if (env->exclusive_addr == addr && env->exclusive_val == [addr]) {
          [addr] = {Rt};
@@ -8603,7 +8603,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                         */

                         TCGv taddr;
-                        TCGMemOp opc = s->be_data;
+                        MemOp opc = s->be_data;

                         rm = (insn) & 0xf;

diff --git a/target/arm/translate.h b/target/arm/translate.h
index a20f6e2..284c510 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -21,7 +21,7 @@ typedef struct DisasContext {
     int condexec_cond;
     int thumb;
     int sctlr_b;
-    TCGMemOp be_data;
+    MemOp be_data;
 #if !defined(CONFIG_USER_ONLY)
     int user;
 #endif
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 188fe68..ff4802a 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1500,7 +1500,7 @@ static void form_gva(DisasContext *ctx, TCGv_tl *pgva, TCGv_reg *pofs,
  */
 static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1518,7 +1518,7 @@ static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,

 static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1536,7 +1536,7 @@ static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,

 static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1554,7 +1554,7 @@ static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,

 static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1580,7 +1580,7 @@ static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,

 static bool do_load(DisasContext *ctx, unsigned rt, unsigned rb,
                     unsigned rx, int scale, target_sreg disp,
-                    unsigned sp, int modify, TCGMemOp mop)
+                    unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg dest;

@@ -1653,7 +1653,7 @@ static bool trans_fldd(DisasContext *ctx, arg_ldst *a)

 static bool do_store(DisasContext *ctx, unsigned rt, unsigned rb,
                      target_sreg disp, unsigned sp,
-                     int modify, TCGMemOp mop)
+                     int modify, MemOp mop)
 {
     nullify_over(ctx);
     do_store_reg(ctx, load_gpr(ctx, rt), rb, 0, 0, disp, sp, modify, mop);
@@ -2940,7 +2940,7 @@ static bool trans_st(DisasContext *ctx, arg_ldst *a)

 static bool trans_ldc(DisasContext *ctx, arg_ldst *a)
 {
-    TCGMemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
+    MemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
     TCGv_reg zero, dest, ofs;
     TCGv_tl addr;

diff --git a/target/i386/translate.c b/target/i386/translate.c
index 03150a8..def9867 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -87,8 +87,8 @@ typedef struct DisasContext {
     /* current insn context */
     int override; /* -1 if no override */
     int prefix;
-    TCGMemOp aflag;
-    TCGMemOp dflag;
+    MemOp aflag;
+    MemOp dflag;
     target_ulong pc_start;
     target_ulong pc; /* pc = eip + cs_base */
     /* current block context */
@@ -149,7 +149,7 @@ static void gen_eob(DisasContext *s);
 static void gen_jr(DisasContext *s, TCGv dest);
 static void gen_jmp(DisasContext *s, target_ulong eip);
 static void gen_jmp_tb(DisasContext *s, target_ulong eip, int tb_num);
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d);
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d);

 /* i386 arith/logic operations */
 enum {
@@ -320,7 +320,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 }

 /* Select the size of a push/pop operation.  */
-static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
+static inline MemOp mo_pushpop(DisasContext *s, MemOp ot)
 {
     if (CODE64(s)) {
         return ot == MO_16 ? MO_16 : MO_64;
@@ -330,13 +330,13 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 }

 /* Select the size of the stack pointer.  */
-static inline TCGMemOp mo_stacksize(DisasContext *s)
+static inline MemOp mo_stacksize(DisasContext *s)
 {
     return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
-static inline TCGMemOp mo_64_32(TCGMemOp ot)
+static inline MemOp mo_64_32(MemOp ot)
 {
 #ifdef TARGET_X86_64
     return ot == MO_64 ? MO_64 : MO_32;
@@ -347,19 +347,19 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)

 /* Select size 8 if lsb of B is clear, else OT.  Used for decoding
    byte vs word opcodes.  */
-static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
+static inline MemOp mo_b_d(int b, MemOp ot)
 {
     return b & 1 ? ot : MO_8;
 }

 /* Select size 8 if lsb of B is clear, else OT capped at 32.
    Used for decoding operand size of port opcodes.  */
-static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
+static inline MemOp mo_b_d32(int b, MemOp ot)
 {
     return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
 }

-static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
+static void gen_op_mov_reg_v(DisasContext *s, MemOp ot, int reg, TCGv t0)
 {
     switch(ot) {
     case MO_8:
@@ -388,7 +388,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 }

 static inline
-void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
+void gen_op_mov_v_reg(DisasContext *s, MemOp ot, TCGv t0, int reg)
 {
     if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
         tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
@@ -411,13 +411,13 @@ static inline void gen_op_jmp_v(TCGv dest)
 }

 static inline
-void gen_op_add_reg_im(DisasContext *s, TCGMemOp size, int reg, int32_t val)
+void gen_op_add_reg_im(DisasContext *s, MemOp size, int reg, int32_t val)
 {
     tcg_gen_addi_tl(s->tmp0, cpu_regs[reg], val);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
 }

-static inline void gen_op_add_reg_T0(DisasContext *s, TCGMemOp size, int reg)
+static inline void gen_op_add_reg_T0(DisasContext *s, MemOp size, int reg)
 {
     tcg_gen_add_tl(s->tmp0, cpu_regs[reg], s->T0);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
@@ -451,7 +451,7 @@ static inline void gen_jmp_im(DisasContext *s, target_ulong pc)
 /* Compute SEG:REG into A0.  SEG is selected from the override segment
    (OVR_SEG) and the default segment (DEF_SEG).  OVR_SEG may be -1 to
    indicate no override.  */
-static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
+static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
                           int def_seg, int ovr_seg)
 {
     switch (aflag) {
@@ -514,13 +514,13 @@ static inline void gen_string_movl_A0_EDI(DisasContext *s)
     gen_lea_v_seg(s, s->aflag, cpu_regs[R_EDI], R_ES, -1);
 }

-static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
+static inline void gen_op_movl_T0_Dshift(DisasContext *s, MemOp ot)
 {
     tcg_gen_ld32s_tl(s->T0, cpu_env, offsetof(CPUX86State, df));
     tcg_gen_shli_tl(s->T0, s->T0, ot);
 };

-static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
+static TCGv gen_ext_tl(TCGv dst, TCGv src, MemOp size, bool sign)
 {
     switch (size) {
     case MO_8:
@@ -551,18 +551,18 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
     }
 }

-static void gen_extu(TCGMemOp ot, TCGv reg)
+static void gen_extu(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, false);
 }

-static void gen_exts(TCGMemOp ot, TCGv reg)
+static void gen_exts(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, true);
 }

 static inline
-void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jnz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
@@ -570,14 +570,14 @@ void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
 }

 static inline
-void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
     tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1);
 }

-static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
+static void gen_helper_in_func(MemOp ot, TCGv v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -594,7 +594,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     }
 }

-static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
+static void gen_helper_out_func(MemOp ot, TCGv_i32 v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -611,7 +611,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     }
 }

-static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
+static void gen_check_io(DisasContext *s, MemOp ot, target_ulong cur_eip,
                          uint32_t svm_flags)
 {
     target_ulong next_eip;
@@ -644,7 +644,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     }
 }

-static inline void gen_movs(DisasContext *s, TCGMemOp ot)
+static inline void gen_movs(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -840,7 +840,7 @@ static CCPrepare gen_prepare_eflags_s(DisasContext *s, TCGv reg)
         return (CCPrepare) { .cond = TCG_COND_NEVER, .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, true);
             return (CCPrepare) { .cond = TCG_COND_LT, .reg = t0, .mask = -1 };
         }
@@ -885,7 +885,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
                              .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, false);
             return (CCPrepare) { .cond = TCG_COND_EQ, .reg = t0, .mask = -1 };
         }
@@ -897,7 +897,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
 static CCPrepare gen_prepare_cc(DisasContext *s, int b, TCGv reg)
 {
     int inv, jcc_op, cond;
-    TCGMemOp size;
+    MemOp size;
     CCPrepare cc;
     TCGv t0;

@@ -1075,7 +1075,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)
     return l2;
 }

-static inline void gen_stos(DisasContext *s, TCGMemOp ot)
+static inline void gen_stos(DisasContext *s, MemOp ot)
 {
     gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
     gen_string_movl_A0_EDI(s);
@@ -1084,7 +1084,7 @@ static inline void gen_stos(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_lods(DisasContext *s, TCGMemOp ot)
+static inline void gen_lods(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -1093,7 +1093,7 @@ static inline void gen_lods(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_ESI);
 }

-static inline void gen_scas(DisasContext *s, TCGMemOp ot)
+static inline void gen_scas(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1102,7 +1102,7 @@ static inline void gen_scas(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_cmps(DisasContext *s, TCGMemOp ot)
+static inline void gen_cmps(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1126,7 +1126,7 @@ static void gen_bpt_io(DisasContext *s, TCGv_i32 t_port, int ot)
 }


-static inline void gen_ins(DisasContext *s, TCGMemOp ot)
+static inline void gen_ins(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1148,7 +1148,7 @@ static inline void gen_ins(DisasContext *s, TCGMemOp ot)
     }
 }

-static inline void gen_outs(DisasContext *s, TCGMemOp ot)
+static inline void gen_outs(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1171,7 +1171,7 @@ static inline void gen_outs(DisasContext *s, TCGMemOp ot)
 /* same method as Valgrind : we generate jumps to current or next
    instruction */
 #define GEN_REPZ(op)                                                          \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,              \
                                  target_ulong cur_eip, target_ulong next_eip) \
 {                                                                             \
     TCGLabel *l2;                                                             \
@@ -1187,7 +1187,7 @@ static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
 }

 #define GEN_REPZ2(op)                                                         \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,              \
                                    target_ulong cur_eip,                      \
                                    target_ulong next_eip,                     \
                                    int nz)                                    \
@@ -1284,7 +1284,7 @@ static void gen_illegal_opcode(DisasContext *s)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d)
 {
     if (d != OR_TMP0) {
         if (s1->prefix & PREFIX_LOCK) {
@@ -1395,7 +1395,7 @@ static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
+static void gen_inc(DisasContext *s1, MemOp ot, int d, int c)
 {
     if (s1->prefix & PREFIX_LOCK) {
         if (d != OR_TMP0) {
@@ -1421,7 +1421,7 @@ static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
     set_cc_op(s1, (c > 0 ? CC_OP_INCB : CC_OP_DECB) + ot);
 }

-static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
+static void gen_shift_flags(DisasContext *s, MemOp ot, TCGv result,
                             TCGv shm1, TCGv count, bool is_right)
 {
     TCGv_i32 z32, s32, oldop;
@@ -1466,7 +1466,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shift_rm_T1(DisasContext *s, MemOp ot, int op1,
                             int is_right, int is_arith)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1502,7 +1502,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     gen_shift_flags(s, ot, s->T0, s->tmp0, s->T1, is_right);
 }

-static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_shift_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                             int is_right, int is_arith)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1542,7 +1542,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
     }
 }

-static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
+static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
     TCGv_i32 t0, t1;
@@ -1627,7 +1627,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_rot_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                           int is_right)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1705,7 +1705,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
 }

 /* XXX: add faster immediate = 1 case */
-static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
                            int is_right)
 {
     gen_compute_eflags(s);
@@ -1761,7 +1761,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 }

 /* XXX: add faster immediate case */
-static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shiftd_rm_T1(DisasContext *s, MemOp ot, int op1,
                              bool is_right, TCGv count_in)
 {
     target_ulong mask = (ot == MO_64 ? 63 : 31);
@@ -1842,7 +1842,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_temp_free(count);
 }

-static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
+static void gen_shift(DisasContext *s1, int op, MemOp ot, int d, int s)
 {
     if (s != OR_TMP1)
         gen_op_mov_v_reg(s1, ot, s1->T1, s);
@@ -1872,7 +1872,7 @@ static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
     }
 }

-static void gen_shifti(DisasContext *s1, int op, TCGMemOp ot, int d, int c)
+static void gen_shifti(DisasContext *s1, int op, MemOp ot, int d, int c)
 {
     switch(op) {
     case OP_ROL:
@@ -2149,7 +2149,7 @@ static void gen_add_A0_ds_seg(DisasContext *s)
 /* generate modrm memory load or store of 'reg'. TMP0 is used if reg ==
    OR_TMP0 */
 static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
-                           TCGMemOp ot, int reg, int is_store)
+                           MemOp ot, int reg, int is_store)
 {
     int mod, rm;

@@ -2179,7 +2179,7 @@ static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
     }
 }

-static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
+static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot)
 {
     uint32_t ret;

@@ -2202,7 +2202,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     return ret;
 }

-static inline int insn_const_size(TCGMemOp ot)
+static inline int insn_const_size(MemOp ot)
 {
     if (ot <= MO_32) {
         return 1 << ot;
@@ -2266,7 +2266,7 @@ static inline void gen_jcc(DisasContext *s, int b,
     }
 }

-static void gen_cmovcc1(CPUX86State *env, DisasContext *s, TCGMemOp ot, int b,
+static void gen_cmovcc1(CPUX86State *env, DisasContext *s, MemOp ot, int b,
                         int modrm, int reg)
 {
     CCPrepare cc;
@@ -2363,8 +2363,8 @@ static inline void gen_stack_update(DisasContext *s, int addend)
 /* Generate a push. It depends on ss32, addseg and dflag.  */
 static void gen_push_v(DisasContext *s, TCGv val)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);
     int size = 1 << d_ot;
     TCGv new_esp = s->A0;

@@ -2383,9 +2383,9 @@ static void gen_push_v(DisasContext *s, TCGv val)
 }

 /* two step pop is necessary for precise exceptions */
-static TCGMemOp gen_pop_T0(DisasContext *s)
+static MemOp gen_pop_T0(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp d_ot = mo_pushpop(s, s->dflag);

     gen_lea_v_seg(s, mo_stacksize(s), cpu_regs[R_ESP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -2393,7 +2393,7 @@ static TCGMemOp gen_pop_T0(DisasContext *s)
     return d_ot;
 }

-static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)
+static inline void gen_pop_update(DisasContext *s, MemOp ot)
 {
     gen_stack_update(s, 1 << ot);
 }
@@ -2405,8 +2405,8 @@ static inline void gen_stack_A0(DisasContext *s)

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2421,8 +2421,8 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2442,8 +2442,8 @@ static void gen_popa(DisasContext *s)

 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -2482,8 +2482,8 @@ static void gen_enter(DisasContext *s, int esp_addend, int level)

 static void gen_leave(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);

     gen_lea_v_seg(s, a_ot, cpu_regs[R_EBP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -3045,7 +3045,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     SSEFunc_0_eppi sse_fn_eppi;
     SSEFunc_0_ppi sse_fn_ppi;
     SSEFunc_0_eppt sse_fn_eppt;
-    TCGMemOp ot;
+    MemOp ot;

     b &= 0xff;
     if (s->prefix & PREFIX_DATA)
@@ -4488,7 +4488,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     CPUX86State *env = cpu->env_ptr;
     int b, prefixes;
     int shift;
-    TCGMemOp ot, aflag, dflag;
+    MemOp ot, aflag, dflag;
     int modrm, reg, rm, mod, op, opreg, val;
     target_ulong next_eip, tval;
     int rex_w, rex_r;
@@ -5567,8 +5567,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1be: /* movsbS Gv, Eb */
     case 0x1bf: /* movswS Gv, Eb */
         {
-            TCGMemOp d_ot;
-            TCGMemOp s_ot;
+            MemOp d_ot;
+            MemOp s_ot;

             /* d_ot is the size of destination */
             d_ot = dflag;
diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index 60bcfb7..24c1dd3 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -2414,7 +2414,7 @@ DISAS_INSN(cas)
     uint16_t ext;
     TCGv load;
     TCGv cmp;
-    TCGMemOp opc;
+    MemOp opc;

     switch ((insn >> 9) & 3) {
     case 1:
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 9ce65f3..41d1b8b 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -919,7 +919,7 @@ static void dec_load(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
@@ -1035,7 +1035,7 @@ static void dec_store(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
diff --git a/target/mips/translate.c b/target/mips/translate.c
index ca62800..59b5d85 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -2526,7 +2526,7 @@ typedef struct DisasContext {
     int32_t CP0_Config5;
     /* Routine used to access memory */
     int mem_idx;
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
     uint32_t hflags, saved_hflags;
     target_ulong btarget;
     bool ulri;
@@ -3706,7 +3706,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,

 /* Store conditional */
 static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset,
-                        TCGMemOp tcg_mo, bool eva)
+                        MemOp tcg_mo, bool eva)
 {
     TCGv addr, t0, val;
     TCGLabel *l1 = gen_new_label();
@@ -4546,7 +4546,7 @@ static void gen_HILO(DisasContext *ctx, uint32_t opc, int acc, int reg)
 }

 static inline void gen_r6_ld(target_long addr, int reg, int memidx,
-                             TCGMemOp memop)
+                             MemOp memop)
 {
     TCGv t0 = tcg_const_tl(addr);
     tcg_gen_qemu_ld_tl(t0, t0, memidx, memop);
@@ -21828,7 +21828,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
                              extract32(ctx->opcode, 0, 8);
                     TCGv va = tcg_temp_new();
                     TCGv t1 = tcg_temp_new();
-                    TCGMemOp memop = (extract32(ctx->opcode, 8, 3)) ==
+                    MemOp memop = (extract32(ctx->opcode, 8, 3)) ==
                                       NM_P_LS_UAWM ? MO_UNALN : 0;

                     count = (count == 0) ? 8 : count;
diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index 4360ce4..b189c50 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -681,7 +681,7 @@ static bool trans_l_lwa(DisasContext *dc, arg_load *a)
     return true;
 }

-static void do_load(DisasContext *dc, arg_load *a, TCGMemOp mop)
+static void do_load(DisasContext *dc, arg_load *a, MemOp mop)
 {
     TCGv ea;

@@ -763,7 +763,7 @@ static bool trans_l_swa(DisasContext *dc, arg_store *a)
     return true;
 }

-static void do_store(DisasContext *dc, arg_store *a, TCGMemOp mop)
+static void do_store(DisasContext *dc, arg_store *a, MemOp mop)
 {
     TCGv t0 = tcg_temp_new();
     tcg_gen_addi_tl(t0, cpu_R[a->a], a->i);
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 4a5de28..31800ed 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -162,7 +162,7 @@ struct DisasContext {
     int mem_idx;
     int access_type;
     /* Translation flags */
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
 #if defined(TARGET_PPC64)
     bool sf_mode;
     bool has_cfar;
@@ -3142,7 +3142,7 @@ static void gen_isync(DisasContext *ctx)

 #define MEMOP_GET_SIZE(x)  (1 << ((x) & MO_SIZE))

-static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
+static void gen_load_locked(DisasContext *ctx, MemOp memop)
 {
     TCGv gpr = cpu_gpr[rD(ctx->opcode)];
     TCGv t0 = tcg_temp_new();
@@ -3167,7 +3167,7 @@ LARX(lbarx, DEF_MEMOP(MO_UB))
 LARX(lharx, DEF_MEMOP(MO_UW))
 LARX(lwarx, DEF_MEMOP(MO_UL))

-static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
+static void gen_fetch_inc_conditional(DisasContext *ctx, MemOp memop,
                                       TCGv EA, TCGCond cond, int addend)
 {
     TCGv t = tcg_temp_new();
@@ -3193,7 +3193,7 @@ static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
     tcg_temp_free(u);
 }

-static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_ld_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3306,7 +3306,7 @@ static void gen_ldat(DisasContext *ctx)
 }
 #endif

-static void gen_st_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_st_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3389,7 +3389,7 @@ static void gen_stdat(DisasContext *ctx)
 }
 #endif

-static void gen_conditional_store(DisasContext *ctx, TCGMemOp memop)
+static void gen_conditional_store(DisasContext *ctx, MemOp memop)
 {
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
diff --git a/target/riscv/insn_trans/trans_rva.inc.c b/target/riscv/insn_trans/trans_rva.inc.c
index fadd888..be8a9f0 100644
--- a/target/riscv/insn_trans/trans_rva.inc.c
+++ b/target/riscv/insn_trans/trans_rva.inc.c
@@ -18,7 +18,7 @@
  * this program.  If not, see <http://www.gnu.org/licenses/>.
  */

-static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     /* Put addr in load_res, data in load_val.  */
@@ -37,7 +37,7 @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
     return true;
 }

-static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
@@ -82,8 +82,8 @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
 }

 static bool gen_amo(DisasContext *ctx, arg_atomic *a,
-                    void(*func)(TCGv, TCGv, TCGv, TCGArg, TCGMemOp),
-                    TCGMemOp mop)
+                    void(*func)(TCGv, TCGv, TCGv, TCGArg, MemOp),
+                    MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
diff --git a/target/riscv/insn_trans/trans_rvi.inc.c b/target/riscv/insn_trans/trans_rvi.inc.c
index ea64731..cf440d1 100644
--- a/target/riscv/insn_trans/trans_rvi.inc.c
+++ b/target/riscv/insn_trans/trans_rvi.inc.c
@@ -135,7 +135,7 @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
     return gen_branch(ctx, a, TCG_COND_GEU);
 }

-static bool gen_load(DisasContext *ctx, arg_lb *a, TCGMemOp memop)
+static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv t1 = tcg_temp_new();
@@ -174,7 +174,7 @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
     return gen_load(ctx, a, MO_TEUW);
 }

-static bool gen_store(DisasContext *ctx, arg_sb *a, TCGMemOp memop)
+static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv dat = tcg_temp_new();
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index ac0d8b6..2927247 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -152,7 +152,7 @@ static inline int vec_full_reg_offset(uint8_t reg)
     return offsetof(CPUS390XState, vregs[reg][0]);
 }

-static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
+static inline int vec_reg_offset(uint8_t reg, uint8_t enr, MemOp es)
 {
     /* Convert element size (es) - e.g. MO_8 - to bytes */
     const uint8_t bytes = 1 << es;
@@ -2262,7 +2262,7 @@ static DisasJumpType op_csst(DisasContext *s, DisasOps *o)
 #ifndef CONFIG_USER_ONLY
 static DisasJumpType op_csp(DisasContext *s, DisasOps *o)
 {
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;
     TCGv_i64 addr, old, cc;
     TCGLabel *lab = gen_new_label();

@@ -3228,7 +3228,7 @@ static DisasJumpType op_lm64(DisasContext *s, DisasOps *o)
 static DisasJumpType op_lpd(DisasContext *s, DisasOps *o)
 {
     TCGv_i64 a1, a2;
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;

     /* In a parallel context, stop the world and single step.  */
     if (tb_cflags(s->base.tb) & CF_PARALLEL) {
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf8..4c56bbb 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -57,13 +57,13 @@
 #define FPF_LONG        3
 #define FPF_EXT         4

-static inline bool valid_vec_element(uint8_t enr, TCGMemOp es)
+static inline bool valid_vec_element(uint8_t enr, MemOp es)
 {
     return !(enr & ~(NUM_VEC_ELEMENTS(es) - 1));
 }

 static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -96,7 +96,7 @@ static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
 }

 static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -123,7 +123,7 @@ static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
 }

 static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -146,7 +146,7 @@ static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
 }

 static void write_vec_element_i32(TCGv_i32 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index 091bab5..bef9ce6 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -2019,7 +2019,7 @@ static inline void gen_ne_fop_QD(DisasContext *dc, int rd, int rs,
 }

 static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
-                     TCGv addr, int mmu_idx, TCGMemOp memop)
+                     TCGv addr, int mmu_idx, MemOp memop)
 {
     gen_address_mask(dc, addr);
     tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
@@ -2050,10 +2050,10 @@ typedef struct {
     ASIType type;
     int asi;
     int mem_idx;
-    TCGMemOp memop;
+    MemOp memop;
 } DisasASI;

-static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
+static DisasASI get_asi(DisasContext *dc, int insn, MemOp memop)
 {
     int asi = GET_FIELD(insn, 19, 26);
     ASIType type = GET_ASI_HELPER;
@@ -2267,7 +2267,7 @@ static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
 }

 static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2305,7 +2305,7 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
 }

 static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2511,7 +2511,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for lddfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

@@ -2625,7 +2625,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for stdfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

diff --git a/target/tilegx/translate.c b/target/tilegx/translate.c
index c46a4ab..68dd4aa 100644
--- a/target/tilegx/translate.c
+++ b/target/tilegx/translate.c
@@ -290,7 +290,7 @@ static void gen_cmul2(TCGv tdest, TCGv tsrca, TCGv tsrcb, int sh, int rd)
 }

 static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
-                              unsigned srcb, TCGMemOp memop, const char *name)
+                              unsigned srcb, MemOp memop, const char *name)
 {
     if (dest) {
         return TILEGX_EXCP_OPCODE_UNKNOWN;
@@ -305,7 +305,7 @@ static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
 }

 static TileExcp gen_st_add_opcode(DisasContext *dc, unsigned srca, unsigned srcb,
-                                  int imm, TCGMemOp memop, const char *name)
+                                  int imm, MemOp memop, const char *name)
 {
     TCGv tsrca = load_gr(dc, srca);
     TCGv tsrcb = load_gr(dc, srcb);
@@ -496,7 +496,7 @@ static TileExcp gen_rr_opcode(DisasContext *dc, unsigned opext,
 {
     TCGv tdest, tsrca;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     TileExcp ret = TILEGX_EXCP_NONE;
     bool prefetch_nofault = false;

@@ -1478,7 +1478,7 @@ static TileExcp gen_rri_opcode(DisasContext *dc, unsigned opext,
     TCGv tsrca = load_gr(dc, srca);
     bool prefetch_nofault = false;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     int i2, i3;
     TCGv t0;

@@ -2106,7 +2106,7 @@ static TileExcp decode_y2(DisasContext *dc, tilegx_bundle_bits bundle)
     unsigned srca = get_SrcA_Y2(bundle);
     unsigned srcbdest = get_SrcBDest_Y2(bundle);
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     bool prefetch_nofault = false;

     switch (OEY2(opc, mode)) {
diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index dc2a65f..87a5f50 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -227,7 +227,7 @@ static inline void generate_trap(DisasContext *ctx, int class, int tin);
 /* Functions for load/save to/from memory */

 static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -236,7 +236,7 @@ static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
 }

 static inline void gen_offset_st(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -284,7 +284,7 @@ static void gen_offset_ld_2regs(TCGv rh, TCGv rl, TCGv base, int16_t con,
 }

 static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
@@ -294,7 +294,7 @@ static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
 }

 static void gen_ld_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
diff --git a/tcg/README b/tcg/README
index 21fcdf7..b4382fa 100644
--- a/tcg/README
+++ b/tcg/README
@@ -512,7 +512,7 @@ Both t0 and t1 may be split into little-endian ordered pairs of registers
 if dealing with 64-bit quantities on a 32-bit host.

 The memidx selects the qemu tlb index to use (e.g. user or kernel access).
-The flags are the TCGMemOp bits, selecting the sign, width, and endianness
+The flags are the MemOp bits, selecting the sign, width, and endianness
 of the memory access.

 For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a
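
A side note on the README wording above: the MemOp bits decompose into a
size code plus orthogonal sign and byte-swap flags. A minimal sketch of
that layout, assuming the values in tcg/tcg.h of this era (the
MO_LE/MO_BE/MO_TE* aliases additionally fold host and target endianness
into MO_BSWAP):

    /* Illustrative sketch only, not the authoritative header.  */
    typedef enum MemOp {
        MO_8     = 0,                  /* access is 1 << code bytes wide */
        MO_16    = 1,
        MO_32    = 2,
        MO_64    = 3,
        MO_SIZE  = 3,                  /* mask covering the size code   */
        MO_SIGN  = 4,                  /* sign-extend the loaded value  */
        MO_BSWAP = 8,                  /* swap bytes vs. host order     */
        MO_SSIZE = MO_SIZE | MO_SIGN,  /* size and sign together        */
    } MemOp;

    /* Access width in bytes; cf. MEMOP_GET_SIZE in target/ppc above.  */
    static inline unsigned memop_size(MemOp op)
    {
        return 1u << (op & MO_SIZE);
    }

This is why the backends below can switch on "opc & MO_SSIZE" and pick
width and extension behaviour in one test.
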
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0713448..3f92101 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -1423,7 +1423,7 @@ static inline void tcg_out_rev16(TCGContext *s, TCGReg rd, TCGReg rn)
     tcg_out_insn(s, 3507, REV16, TCG_TYPE_I32, rd, rn);
 }

-static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
+static inline void tcg_out_sxt(TCGContext *s, TCGType ext, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes SXTB, SXTH, SXTW, of SBFM Xd, Xn, #0, #7|15|31 */
@@ -1431,7 +1431,7 @@ static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
     tcg_out_sbfm(s, ext, rd, rn, 0, bits);
 }

-static inline void tcg_out_uxt(TCGContext *s, TCGMemOp s_bits,
+static inline void tcg_out_uxt(TCGContext *s, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes UXTB, UXTH of UBFM Wd, Wn, #0, #7|15 */
@@ -1580,8 +1580,8 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1605,8 +1605,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1649,7 +1649,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
    slow path for the failure case, which will be patched later when finalizing
    the slow path. Generated code returns the host addend in X1,
    clobbers X0,X2,X3,TMP. */
-static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
+static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                              tcg_insn_unit **label_ptr, int mem_index,
                              bool is_read)
 {
@@ -1709,11 +1709,11 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,

 #endif /* CONFIG_SOFTMMU */

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SSIZE) {
     case MO_UB:
@@ -1765,11 +1765,11 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SIZE) {
     case MO_8:
@@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi, TCGType ext)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
@@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index ece88dc..94d80d7 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1233,7 +1233,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
    containing the addend of the tlb entry.  Clobbers R0, R1, R2, TMP.  */

 static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                               TCGMemOp opc, int mem_index, bool is_load)
+                               MemOp opc, int mem_index, bool is_load)
 {
     int cmp_off = (is_load ? offsetof(CPUTLBEntry, addr_read)
                    : offsetof(CPUTLBEntry, addr_write));
@@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     void *func;

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
@@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1453,11 +1453,11 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 }
 #endif /* SOFTMMU */

-static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_index(TCGContext *s, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1514,11 +1514,11 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1577,7 +1577,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
@@ -1614,11 +1614,11 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 #endif
 }

-static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
+static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1659,11 +1659,11 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1708,7 +1708,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 6ddeebf..9d8ed97 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1697,7 +1697,7 @@ static void * const qemu_st_helpers[16] = {
    First argument register is clobbered.  */

 static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                                    int mem_index, TCGMemOp opc,
+                                    int mem_index, MemOp opc,
                                     tcg_insn_unit **label_ptr, int which)
 {
     const TCGReg r0 = TCG_REG_L0;
@@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg data_reg;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
@@ -1895,8 +1895,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     TCGReg retaddr;

@@ -1995,10 +1995,10 @@ static inline int setup_guest_base_seg(void)

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, bool is64, TCGMemOp memop)
+                                   int seg, bool is64, MemOp memop)
 {
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int rexw = is64 * P_REXW;
     int movop = OPC_MOVL_GvEv;

@@ -2103,7 +2103,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
@@ -2137,15 +2137,15 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, TCGMemOp memop)
+                                   int seg, MemOp memop)
 {
     /* ??? Ideally we wouldn't need a scratch register.  For user-only,
        we could perform the bswap twice to restore the original value
        instead of moving to the scratch.  But as it is, the L constraint
        means that TCG_REG_L0 is definitely free here.  */
     const TCGReg scratch = TCG_REG_L0;
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int movop = OPC_MOVL_EvGv;

     if (have_movbe && real_bswap) {
@@ -2221,7 +2221,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 41bff32..5442167 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit *label_ptr[2], bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     int mem_index = get_mmuidx(oi);
@@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg v0;
     int i;

@@ -1363,8 +1363,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     int i;

     /* resolve label address */
@@ -1413,7 +1413,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
     case MO_UB:
@@ -1521,7 +1521,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
@@ -1558,7 +1558,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
     /* Don't clutter the code below with checks to avoid bswapping ZERO.  */
     if ((lo | hi) == 0) {
@@ -1624,7 +1624,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
diff --git a/tcg/optimize.c b/tcg/optimize.c
index d2424de..a89ffda 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s)
         CASE_OP_32_64(qemu_ld):
             {
                 TCGMemOpIdx oi = op->args[nb_oargs + nb_iargs];
-                TCGMemOp mop = get_memop(oi);
+                MemOp mop = get_memop(oi);
                 if (!(mop & MO_SIGN)) {
                     mask = (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1;
                 }
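
Worth spelling out the arithmetic in that mask, since it reads oddly at
first: "8 << (mop & MO_SIZE)" is the access width in bits, and writing
the mask as "(2ULL << (bits - 1)) - 1" rather than "(1ULL << bits) - 1"
keeps the shift count at 63 or below; a shift by 64 in the MO_64 case
would be undefined behaviour in C. Per size code:

    MO_8  : 8 << 0 =  8 bits, (2ULL <<  7) - 1 = 0x00000000000000ff
    MO_16 : 8 << 1 = 16 bits, (2ULL << 15) - 1 = 0x000000000000ffff
    MO_32 : 8 << 2 = 32 bits, (2ULL << 31) - 1 = 0x00000000ffffffff
    MO_64 : 8 << 3 = 64 bits, (2ULL << 63) - 1 = 0xffffffffffffffff

(In the last row, 2ULL << 63 wraps to zero as an unsigned value, so the
subtraction yields all ones.)
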
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 852b894..815edac 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1506,7 +1506,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -32768);
    in CR7, loads the addend of the TLB into R3, and returns the register
    containing the guest address (zero-extended into R4).  Clobbers R0 and R2. */

-static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, MemOp opc,
                                TCGReg addrlo, TCGReg addrhi,
                                int mem_index, bool is_read)
 {
@@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1680,8 +1680,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1744,7 +1744,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
@@ -1819,7 +1819,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 3e76bf5..7018509 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit **label_ptr, bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     tcg_target_long compare_mask;
@@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1077,8 +1077,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1121,9 +1121,9 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif /* CONFIG_SOFTMMU */

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping; assert */
     g_assert(!bswap);
@@ -1172,7 +1172,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
@@ -1208,9 +1208,9 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping; assert */
     g_assert(!bswap);
@@ -1243,7 +1243,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index fe42939..8aaa4ce 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1430,7 +1430,7 @@ static void tcg_out_call(TCGContext *s, tcg_insn_unit *dest)
     }
 }

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
@@ -1489,7 +1489,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SIZE | MO_BSWAP)) {
@@ -1544,7 +1544,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 19));

 /* Load and compare a TLB entry, leaving the flags set.  Loads the TLB
    addend into R2.  Returns a register with the sanitized guest address.  */
-static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                                int mem_index, bool is_ld)
 {
     unsigned s_bits = opc & MO_SIZE;
@@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGReg *addr_reg,
 static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
@@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 10b1cea..d7986cd 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -1081,7 +1081,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12));
    is in the returned register, maybe %o0.  The TLB addend is in %o1.  */

 static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index,
-                               TCGMemOp opc, int which)
+                               MemOp opc, int which)
 {
     int fast_off = TLB_MASK_TABLE_OFS(mem_index);
     int mask_off = fast_off + offsetof(CPUTLBDescFast, mask);
@@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] = {
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi, bool is_64)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
@@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 587d092..e87c327 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2714,7 +2714,7 @@ void tcg_gen_lookup_and_goto_ptr(void)
     }
 }

-static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
+static inline MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
 {
     /* Trigger the asserts within as early as possible.  */
     (void)get_alignment_bits(op);
@@ -2743,7 +2743,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
 }

 static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2758,7 +2758,7 @@ static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
 }

 static void gen_ldst_i64(TCGOpcode opc, TCGv_i64 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2788,9 +2788,9 @@ static void tcg_gen_req_mo(TCGBar type)
     }
 }

-void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     memop = tcg_canonicalize_memop(memop, 0, 0);
@@ -2825,7 +2825,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i32 swap = NULL;

@@ -2858,9 +2858,9 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
         tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
@@ -2911,7 +2911,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i64 swap = NULL;

@@ -2953,7 +2953,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
+static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -2974,7 +2974,7 @@ static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
     }
 }

-static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, TCGMemOp opc)
+static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -3034,7 +3034,7 @@ static void * const table_cmpxchg[16] = {
 };

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
-                                TCGv_i32 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i32 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 0, 0);

@@ -3078,7 +3078,7 @@ void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
 }

 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
-                                TCGv_i64 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i64 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3142,7 +3142,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
 }

 static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
 {
     TCGv_i32 t1 = tcg_temp_new_i32();
@@ -3160,7 +3160,7 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     gen_atomic_op_i32 gen;

@@ -3185,7 +3185,7 @@ static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64))
 {
     TCGv_i64 t1 = tcg_temp_new_i64();
@@ -3203,7 +3203,7 @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 }

 static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3257,7 +3257,7 @@ static void * const table_##NAME[16] = {                                \
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
 void tcg_gen_atomic_##NAME##_i32                                        \
-    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i32(ret, addr, val, idx, memop, table_##NAME);     \
@@ -3267,7 +3267,7 @@ void tcg_gen_atomic_##NAME##_i32                                        \
     }                                                                   \
 }                                                                       \
 void tcg_gen_atomic_##NAME##_i64                                        \
-    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i64(ret, addr, val, idx, memop, table_##NAME);     \
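
For readers skimming the atomic-helper macro above, the expansion is
easier to follow on a concrete instance. A sketch of what
tcg_gen_atomic_fetch_add_i32 comes out as, assuming the else-branch
elided between the two hunks calls the do_nonatomic_op_i32 helper
converted earlier in this file (the exact argument list there is my
assumption, not part of this patch):

    void tcg_gen_atomic_fetch_add_i32(TCGv_i32 ret, TCGv addr,
                                      TCGv_i32 val, TCGArg idx, MemOp memop)
    {
        if (tcg_ctx->tb_cflags & CF_PARALLEL) {
            /* Parallel TCG: call a genuinely atomic out-of-line helper,
               selected from table_fetch_add by memop's size/endian bits. */
            do_atomic_op_i32(ret, addr, val, idx, memop, table_fetch_add);
        } else {
            /* Serial TCG: a plain load/add/store round trip is enough;
               'false' asks for the fetched (old) value back in ret.  */
            do_nonatomic_op_i32(ret, addr, val, idx, memop, false,
                                tcg_gen_add_i32);
        }
    }

Either way, the memop argument, now typed MemOp, carries the size, sign
and endianness of the guest access down to the helper.
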
diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
index 2d4dd5c..e9cf172 100644
--- a/tcg/tcg-op.h
+++ b/tcg/tcg-op.h
@@ -851,10 +851,10 @@ void tcg_gen_lookup_and_goto_ptr(void);
 #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i64
 #endif

-void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
+void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, MemOp);

 static inline void tcg_gen_qemu_ld8u(TCGv ret, TCGv addr, int mem_index)
 {
@@ -912,46 +912,46 @@ static inline void tcg_gen_qemu_st64(TCGv_i64 arg, TCGv addr, int mem_index)
 }

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGv_i32,
-                                TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGv_i64,
-                                TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
+
+void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);

 void tcg_gen_mov_vec(TCGv_vec, TCGv_vec);
 void tcg_gen_dup_i32_vec(unsigned vece, TCGv_vec, TCGv_i32);
diff --git a/tcg/tcg.c b/tcg/tcg.c
index be2c33c..aa9931f 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
             case INDEX_op_qemu_st_i64:
                 {
                     TCGMemOpIdx oi = op->args[k++];
-                    TCGMemOp op = get_memop(oi);
+                    MemOp op = get_memop(oi);
                     unsigned ix = get_mmuidx(oi);

                     if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) {
diff --git a/tcg/tcg.h b/tcg/tcg.h
index b411e17..a37181c 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -26,6 +26,7 @@
 #define TCG_H

 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/tb-context.h"
 #include "qemu/bitops.h"
 #include "qemu/queue.h"
@@ -309,101 +310,13 @@ typedef enum TCGType {
 #endif
 } TCGType;

-/* Constants for qemu_ld and qemu_st for the Memory Operation field.  */
-typedef enum TCGMemOp {
-    MO_8     = 0,
-    MO_16    = 1,
-    MO_32    = 2,
-    MO_64    = 3,
-    MO_SIZE  = 3,   /* Mask for the above.  */
-
-    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
-
-    MO_BSWAP = 8,   /* Host reverse endian.  */
-#ifdef HOST_WORDS_BIGENDIAN
-    MO_LE    = MO_BSWAP,
-    MO_BE    = 0,
-#else
-    MO_LE    = 0,
-    MO_BE    = MO_BSWAP,
-#endif
-#ifdef TARGET_WORDS_BIGENDIAN
-    MO_TE    = MO_BE,
-#else
-    MO_TE    = MO_LE,
-#endif
-
-    /* MO_UNALN accesses are never checked for alignment.
-     * MO_ALIGN accesses will result in a call to the CPU's
-     * do_unaligned_access hook if the guest address is not aligned.
-     * The default depends on whether the target CPU defines ALIGNED_ONLY.
-     *
-     * Some architectures (e.g. ARMv8) need the address which is aligned
-     * to a size more than the size of the memory access.
-     * Some architectures (e.g. SPARCv9) need an address which is aligned,
-     * but less strictly than the natural alignment.
-     *
-     * MO_ALIGN supposes the alignment size is the size of a memory access.
-     *
-     * There are three options:
-     * - unaligned access permitted (MO_UNALN).
-     * - an alignment to the size of an access (MO_ALIGN);
-     * - an alignment to a specified size, which may be more or less than
-     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
-     */
-    MO_ASHIFT = 4,
-    MO_AMASK = 7 << MO_ASHIFT,
-#ifdef ALIGNED_ONLY
-    MO_ALIGN = 0,
-    MO_UNALN = MO_AMASK,
-#else
-    MO_ALIGN = MO_AMASK,
-    MO_UNALN = 0,
-#endif
-    MO_ALIGN_2  = 1 << MO_ASHIFT,
-    MO_ALIGN_4  = 2 << MO_ASHIFT,
-    MO_ALIGN_8  = 3 << MO_ASHIFT,
-    MO_ALIGN_16 = 4 << MO_ASHIFT,
-    MO_ALIGN_32 = 5 << MO_ASHIFT,
-    MO_ALIGN_64 = 6 << MO_ASHIFT,
-
-    /* Combinations of the above, for ease of use.  */
-    MO_UB    = MO_8,
-    MO_UW    = MO_16,
-    MO_UL    = MO_32,
-    MO_SB    = MO_SIGN | MO_8,
-    MO_SW    = MO_SIGN | MO_16,
-    MO_SL    = MO_SIGN | MO_32,
-    MO_Q     = MO_64,
-
-    MO_LEUW  = MO_LE | MO_UW,
-    MO_LEUL  = MO_LE | MO_UL,
-    MO_LESW  = MO_LE | MO_SW,
-    MO_LESL  = MO_LE | MO_SL,
-    MO_LEQ   = MO_LE | MO_Q,
-
-    MO_BEUW  = MO_BE | MO_UW,
-    MO_BEUL  = MO_BE | MO_UL,
-    MO_BESW  = MO_BE | MO_SW,
-    MO_BESL  = MO_BE | MO_SL,
-    MO_BEQ   = MO_BE | MO_Q,
-
-    MO_TEUW  = MO_TE | MO_UW,
-    MO_TEUL  = MO_TE | MO_UL,
-    MO_TESW  = MO_TE | MO_SW,
-    MO_TESL  = MO_TE | MO_SL,
-    MO_TEQ   = MO_TE | MO_Q,
-
-    MO_SSIZE = MO_SIZE | MO_SIGN,
-} TCGMemOp;
-
 /**
  * get_alignment_bits
- * @memop: TCGMemOp value
+ * @memop: MemOp value
  *
  * Extract the alignment size from the memop.
  */
-static inline unsigned get_alignment_bits(TCGMemOp memop)
+static inline unsigned get_alignment_bits(MemOp memop)
 {
     unsigned a = memop & MO_AMASK;

@@ -1184,7 +1097,7 @@ static inline size_t tcg_current_code_size(TCGContext *s)
     return tcg_ptr_byte_diff(s->code_ptr, s->code_buf);
 }

-/* Combine the TCGMemOp and mmu_idx parameters into a single value.  */
+/* Combine the MemOp and mmu_idx parameters into a single value.  */
 typedef uint32_t TCGMemOpIdx;

 /**
@@ -1194,7 +1107,7 @@ typedef uint32_t TCGMemOpIdx;
  *
  * Encode these values into a single parameter.
  */
-static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
+static inline TCGMemOpIdx make_memop_idx(MemOp op, unsigned idx)
 {
     tcg_debug_assert(idx <= 15);
     return (op << 4) | idx;
@@ -1206,7 +1119,7 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
  *
  * Extract the memory operation from the combined value.
  */
-static inline TCGMemOp get_memop(TCGMemOpIdx oi)
+static inline MemOp get_memop(TCGMemOpIdx oi)
 {
     return oi >> 4;
 }
diff --git a/trace/mem-internal.h b/trace/mem-internal.h
index f6efaf6..3444fbc 100644
--- a/trace/mem-internal.h
+++ b/trace/mem-internal.h
@@ -16,7 +16,7 @@
 #define TRACE_MEM_ST (1ULL << 5)    /* store (y/n) */

 static inline uint8_t trace_mem_build_info(
-    int size_shift, bool sign_extend, TCGMemOp endianness, bool store)
+    int size_shift, bool sign_extend, MemOp endianness, bool store)
 {
     uint8_t res;

@@ -33,7 +33,7 @@ static inline uint8_t trace_mem_build_info(
     return res;
 }

-static inline uint8_t trace_mem_get_info(TCGMemOp op, bool store)
+static inline uint8_t trace_mem_get_info(MemOp op, bool store)
 {
     return trace_mem_build_info(op & MO_SIZE, !!(op & MO_SIGN),
                                 op & MO_BSWAP, store);
diff --git a/trace/mem.h b/trace/mem.h
index 2b58196..8cf213d 100644
--- a/trace/mem.h
+++ b/trace/mem.h
@@ -18,7 +18,7 @@
  *
  * Return a value for the 'info' argument in guest memory access traces.
  */
-static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
+static uint8_t trace_mem_get_info(MemOp op, bool store);

 /**
  * trace_mem_build_info:
@@ -26,7 +26,7 @@ static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
  * Return a value for the 'info' argument in guest memory access traces.
  */
 static uint8_t trace_mem_build_info(int size_shift, bool sign_extend,
-                                    TCGMemOp endianness, bool store);
+                                    MemOp endianness, bool store);


 #include "trace/mem-internal.h"
--
1.8.3.1
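
The tcg.h hunk above retypes make_memop_idx() and get_memop() without
changing the TCGMemOpIdx packing: the MemOp sits in the high bits and the
4-bit mmu_idx in the low bits. A standalone sketch of that encoding, not
part of the patch (get_mmuidx() is not shown in the hunk; the `oi & 15`
body below is inferred from the packing, so treat it as an assumption):

    #include <assert.h>
    #include <stdint.h>

    typedef uint32_t TCGMemOpIdx;

    /* As in tcg.h: MemOp in the high bits, mmu_idx in the low four. */
    static inline TCGMemOpIdx make_memop_idx(unsigned op, unsigned idx)
    {
        assert(idx <= 15);
        return (op << 4) | idx;
    }

    static inline unsigned get_memop(TCGMemOpIdx oi) { return oi >> 4; }
    static inline unsigned get_mmuidx(TCGMemOpIdx oi) { return oi & 15; }

    int main(void)
    {
        TCGMemOpIdx oi = make_memop_idx(2 /* MO_32 */, 3);
        assert(get_memop(oi) == 2);
        assert(get_mmuidx(oi) == 3);
        return 0;
    }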





* [Qemu-riscv] [Qemu-devel] [PATCH v5 01/15] tcg: TCGMemOp is now accelerator independent MemOp
@ 2019-07-26  6:43   ` tony.nguyen
  0 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, david, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, arikalo, mst, pasic,
	borntraeger, rth, atar4qemu, ehabkost, alex.williamson, qemu-arm,
	stefanha, shorne, david, qemu-riscv, qemu-s390x, kbastian,
	cohuck, laurent, qemu-ppc, amarkovic, pbonzini, aurelien

Preparation for collapsing the two byte swaps, adjust_endianness and
handle_bswap, along the I/O path.

Target-dependent attributes are conditionalized upon NEED_CPU_H.
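
As a minimal sketch of what the NEED_CPU_H guard means for code including
the new header (the helper below is hypothetical, for illustration only):
files compiled per target define NEED_CPU_H and so can use the
target-endian MO_TE* aliases, while accelerator-independent code does not
see them and must name an endianness explicitly.

    #include "exec/memop.h"

    /* Hypothetical helper, illustration only. */
    static MemOp default_word_op(void)
    {
    #ifdef NEED_CPU_H
        return MO_TEUL;        /* target-endian 32-bit; per-target code only */
    #else
        return MO_LE | MO_UL;  /* common code picks LE or BE explicitly */
    #endif
    }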

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 MAINTAINERS                             |   1 +
 accel/tcg/cputlb.c                      |   2 +-
 include/exec/memop.h                    | 109 ++++++++++++++++++++++++++
 target/alpha/translate.c                |   2 +-
 target/arm/translate-a64.c              |  48 ++++++------
 target/arm/translate-a64.h              |   2 +-
 target/arm/translate-sve.c              |   2 +-
 target/arm/translate.c                  |  32 ++++----
 target/arm/translate.h                  |   2 +-
 target/hppa/translate.c                 |  14 ++--
 target/i386/translate.c                 | 132 ++++++++++++++++----------------
 target/m68k/translate.c                 |   2 +-
 target/microblaze/translate.c           |   4 +-
 target/mips/translate.c                 |   8 +-
 target/openrisc/translate.c             |   4 +-
 target/ppc/translate.c                  |  12 +--
 target/riscv/insn_trans/trans_rva.inc.c |   8 +-
 target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
 target/s390x/translate.c                |   6 +-
 target/s390x/translate_vx.inc.c         |  10 +--
 target/sparc/translate.c                |  14 ++--
 target/tilegx/translate.c               |  10 +--
 target/tricore/translate.c              |   8 +-
 tcg/README                              |   2 +-
 tcg/aarch64/tcg-target.inc.c            |  26 +++----
 tcg/arm/tcg-target.inc.c                |  26 +++----
 tcg/i386/tcg-target.inc.c               |  24 +++---
 tcg/mips/tcg-target.inc.c               |  16 ++--
 tcg/optimize.c                          |   2 +-
 tcg/ppc/tcg-target.inc.c                |  12 +--
 tcg/riscv/tcg-target.inc.c              |  20 ++---
 tcg/s390/tcg-target.inc.c               |  14 ++--
 tcg/sparc/tcg-target.inc.c              |   6 +-
 tcg/tcg-op.c                            |  38 ++++-----
 tcg/tcg-op.h                            |  86 ++++++++++-----------
 tcg/tcg.c                               |   2 +-
 tcg/tcg.h                               |  99 ++----------------------
 trace/mem-internal.h                    |   4 +-
 trace/mem.h                             |   4 +-
 39 files changed, 420 insertions(+), 397 deletions(-)
 create mode 100644 include/exec/memop.h

diff --git a/MAINTAINERS b/MAINTAINERS
index cc9636b..3f148cd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1890,6 +1890,7 @@ M: Paolo Bonzini <pbonzini@redhat.com>
 S: Supported
 F: include/exec/ioport.h
 F: ioport.c
+F: include/exec/memop.h
 F: include/exec/memory.h
 F: include/exec/ram_addr.h
 F: memory.c
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index bb9897b..523be4c 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
     uintptr_t index = tlb_index(env, mmu_idx, addr);
     CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
     target_ulong tlb_addr = tlb_addr_write(tlbe);
-    TCGMemOp mop = get_memop(oi);
+    MemOp mop = get_memop(oi);
     int a_bits = get_alignment_bits(mop);
     int s_bits = mop & MO_SIZE;
     void *hostaddr;
diff --git a/include/exec/memop.h b/include/exec/memop.h
new file mode 100644
index 0000000..ac58066
--- /dev/null
+++ b/include/exec/memop.h
@@ -0,0 +1,109 @@
+/*
+ * Constants for memory operations
+ *
+ * Authors:
+ *  Richard Henderson <rth@twiddle.net>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+
+#ifndef MEMOP_H
+#define MEMOP_H
+
+typedef enum MemOp {
+    MO_8     = 0,
+    MO_16    = 1,
+    MO_32    = 2,
+    MO_64    = 3,
+    MO_SIZE  = 3,   /* Mask for the above.  */
+
+    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
+
+    MO_BSWAP = 8,   /* Host reverse endian.  */
+#ifdef HOST_WORDS_BIGENDIAN
+    MO_LE    = MO_BSWAP,
+    MO_BE    = 0,
+#else
+    MO_LE    = 0,
+    MO_BE    = MO_BSWAP,
+#endif
+#ifdef NEED_CPU_H
+#ifdef TARGET_WORDS_BIGENDIAN
+    MO_TE    = MO_BE,
+#else
+    MO_TE    = MO_LE,
+#endif
+#endif
+
+    /*
+     * MO_UNALN accesses are never checked for alignment.
+     * MO_ALIGN accesses will result in a call to the CPU's
+     * do_unaligned_access hook if the guest address is not aligned.
+     * The default depends on whether the target CPU defines ALIGNED_ONLY.
+     *
+     * Some architectures (e.g. ARMv8) need the address which is aligned
+     * to a size more than the size of the memory access.
+     * Some architectures (e.g. SPARCv9) need an address which is aligned,
+     * but less strictly than the natural alignment.
+     *
+     * MO_ALIGN supposes the alignment size is the size of a memory access.
+     *
+     * There are three options:
+     * - unaligned access permitted (MO_UNALN).
+     * - an alignment to the size of an access (MO_ALIGN);
+     * - an alignment to a specified size, which may be more or less than
+     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
+     */
+    MO_ASHIFT = 4,
+    MO_AMASK = 7 << MO_ASHIFT,
+#ifdef NEED_CPU_H
+#ifdef ALIGNED_ONLY
+    MO_ALIGN = 0,
+    MO_UNALN = MO_AMASK,
+#else
+    MO_ALIGN = MO_AMASK,
+    MO_UNALN = 0,
+#endif
+#endif
+    MO_ALIGN_2  = 1 << MO_ASHIFT,
+    MO_ALIGN_4  = 2 << MO_ASHIFT,
+    MO_ALIGN_8  = 3 << MO_ASHIFT,
+    MO_ALIGN_16 = 4 << MO_ASHIFT,
+    MO_ALIGN_32 = 5 << MO_ASHIFT,
+    MO_ALIGN_64 = 6 << MO_ASHIFT,
+
+    /* Combinations of the above, for ease of use.  */
+    MO_UB    = MO_8,
+    MO_UW    = MO_16,
+    MO_UL    = MO_32,
+    MO_SB    = MO_SIGN | MO_8,
+    MO_SW    = MO_SIGN | MO_16,
+    MO_SL    = MO_SIGN | MO_32,
+    MO_Q     = MO_64,
+
+    MO_LEUW  = MO_LE | MO_UW,
+    MO_LEUL  = MO_LE | MO_UL,
+    MO_LESW  = MO_LE | MO_SW,
+    MO_LESL  = MO_LE | MO_SL,
+    MO_LEQ   = MO_LE | MO_Q,
+
+    MO_BEUW  = MO_BE | MO_UW,
+    MO_BEUL  = MO_BE | MO_UL,
+    MO_BESW  = MO_BE | MO_SW,
+    MO_BESL  = MO_BE | MO_SL,
+    MO_BEQ   = MO_BE | MO_Q,
+
+#ifdef NEED_CPU_H
+    MO_TEUW  = MO_TE | MO_UW,
+    MO_TEUL  = MO_TE | MO_UL,
+    MO_TESW  = MO_TE | MO_SW,
+    MO_TESL  = MO_TE | MO_SL,
+    MO_TEQ   = MO_TE | MO_Q,
+#endif
+
+    MO_SSIZE = MO_SIZE | MO_SIGN,
+} MemOp;
+
+#endif
diff --git a/target/alpha/translate.c b/target/alpha/translate.c
index 2c9cccf..d5d4888 100644
--- a/target/alpha/translate.c
+++ b/target/alpha/translate.c
@@ -403,7 +403,7 @@ static inline void gen_store_mem(DisasContext *ctx,

 static DisasJumpType gen_store_conditional(DisasContext *ctx, int ra, int rb,
                                            int32_t disp16, int mem_idx,
-                                           TCGMemOp op)
+                                           MemOp op)
 {
     TCGLabel *lab_fail, *lab_done;
     TCGv addr, val;
diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
index d323147..b6c07d6 100644
--- a/target/arm/translate-a64.c
+++ b/target/arm/translate-a64.c
@@ -85,7 +85,7 @@ typedef void NeonGenOneOpFn(TCGv_i64, TCGv_i64);
 typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
 typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
 typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
-typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
+typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);

 /* initialize TCG globals.  */
 void a64_translate_init(void)
@@ -455,7 +455,7 @@ TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
  * Dn, Sn, Hn or Bn).
  * (Note that this is not the same mapping as for A32; see cpu.h)
  */
-static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
+static inline int fp_reg_offset(DisasContext *s, int regno, MemOp size)
 {
     return vec_reg_offset(s, regno, 0, size);
 }
@@ -871,7 +871,7 @@ static void do_gpr_ld_memidx(DisasContext *s,
                              bool iss_valid, unsigned int iss_srt,
                              bool iss_sf, bool iss_ar)
 {
-    TCGMemOp memop = s->be_data + size;
+    MemOp memop = s->be_data + size;

     g_assert(size <= 3);

@@ -948,7 +948,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
     TCGv_i64 tmphi;

     if (size < 4) {
-        TCGMemOp memop = s->be_data + size;
+        MemOp memop = s->be_data + size;
         tmphi = tcg_const_i64(0);
         tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
     } else {
@@ -989,7 +989,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)

 /* Get value of an element within a vector register */
 static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
-                             int element, TCGMemOp memop)
+                             int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1021,7 +1021,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
 }

 static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
-                                 int element, TCGMemOp memop)
+                                 int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1048,7 +1048,7 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,

 /* Set value of an element within a vector register */
 static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
-                              int element, TCGMemOp memop)
+                              int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1070,7 +1070,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
 }

 static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
-                                  int destidx, int element, TCGMemOp memop)
+                                  int destidx, int element, MemOp memop)
 {
     int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
     switch (memop) {
@@ -1090,7 +1090,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,

 /* Store from vector register to memory */
 static void do_vec_st(DisasContext *s, int srcidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -1102,7 +1102,7 @@ static void do_vec_st(DisasContext *s, int srcidx, int element,

 /* Load from memory to vector register */
 static void do_vec_ld(DisasContext *s, int destidx, int element,
-                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
+                      TCGv_i64 tcg_addr, int size, MemOp endian)
 {
     TCGv_i64 tcg_tmp = tcg_temp_new_i64();

@@ -2200,7 +2200,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i64 addr, int size, bool is_pair)
 {
     int idx = get_mem_index(s);
-    TCGMemOp memop = s->be_data;
+    MemOp memop = s->be_data;

     g_assert(size <= 3);
     if (is_pair) {
@@ -3286,7 +3286,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
     bool is_postidx = extract32(insn, 23, 1);
     bool is_q = extract32(insn, 30, 1);
     TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
-    TCGMemOp endian = s->be_data;
+    MemOp endian = s->be_data;

     int ebytes;   /* bytes per element */
     int elements; /* elements per vector */
@@ -5455,7 +5455,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
     unsigned int mos, type, rm, cond, rn, rd;
     TCGv_i64 t_true, t_false, t_zero;
     DisasCompare64 c;
-    TCGMemOp sz;
+    MemOp sz;

     mos = extract32(insn, 29, 3);
     type = extract32(insn, 22, 2);
@@ -6267,7 +6267,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
     int mos = extract32(insn, 29, 3);
     uint64_t imm;
     TCGv_i64 tcg_res;
-    TCGMemOp sz;
+    MemOp sz;

     if (mos || imm5) {
         unallocated_encoding(s);
@@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
 {
     if (esize == size) {
         int element;
-        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
+        MemOp msize = esize == 16 ? MO_16 : MO_32;
         TCGv_i32 tcg_elem;

         /* We should have one register left here */
@@ -8022,7 +8022,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
     int shift = (2 * esize) - immhb;
     int elements = is_scalar ? 1 : (64 / esize);
     bool round = extract32(opcode, 0, 1);
-    TCGMemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
+    MemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn, tcg_rd, tcg_round;
     TCGv_i32 tcg_rd_narrowed;
     TCGv_i64 tcg_final;
@@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
             }
         };
         NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
-        TCGMemOp memop = scalar ? size : MO_32;
+        MemOp memop = scalar ? size : MO_32;
         int maxpass = scalar ? 1 : is_q ? 4 : 2;

         for (pass = 0; pass < maxpass; pass++) {
@@ -8225,7 +8225,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
     TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
     TCGv_i32 tcg_shift = NULL;

-    TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
+    MemOp mop = size | (is_signed ? MO_SIGN : 0);
     int pass;

     if (fracbits || size == MO_64) {
@@ -10004,7 +10004,7 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
     int dsize = is_q ? 128 : 64;
     int esize = 8 << size;
     int elements = dsize/esize;
-    TCGMemOp memop = size | (is_u ? 0 : MO_SIGN);
+    MemOp memop = size | (is_u ? 0 : MO_SIGN);
     TCGv_i64 tcg_rn = new_tmp_a64(s);
     TCGv_i64 tcg_rd = new_tmp_a64(s);
     TCGv_i64 tcg_round;
@@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
             TCGv_i64 tcg_op2 = tcg_temp_new_i64();
             TCGv_i64 tcg_passres;
-            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
+            MemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);

             int elt = pass + is_q * 2;

@@ -11827,7 +11827,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,

     if (size == 2) {
         /* 32 + 32 -> 64 op */
-        TCGMemOp memop = size + (u ? 0 : MO_SIGN);
+        MemOp memop = size + (u ? 0 : MO_SIGN);

         for (pass = 0; pass < maxpass; pass++) {
             TCGv_i64 tcg_op1 = tcg_temp_new_i64();
@@ -12849,7 +12849,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)

     switch (is_fp) {
     case 1: /* normal fp */
-        /* convert insn encoded size to TCGMemOp size */
+        /* convert insn encoded size to MemOp size */
         switch (size) {
         case 0: /* half-precision */
             size = MO_16;
@@ -12897,7 +12897,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         return;
     }

-    /* Given TCGMemOp size, adjust register and indexing.  */
+    /* Given MemOp size, adjust register and indexing.  */
     switch (size) {
     case MO_16:
         index = h << 2 | l << 1 | m;
@@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
         TCGv_i64 tcg_res[2];
         int pass;
         bool satop = extract32(opcode, 0, 1);
-        TCGMemOp memop = MO_32;
+        MemOp memop = MO_32;

         if (satop || !u) {
             memop |= MO_SIGN;
diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
index 9ab4087..f1246b7 100644
--- a/target/arm/translate-a64.h
+++ b/target/arm/translate-a64.h
@@ -64,7 +64,7 @@ static inline void assert_fp_access_checked(DisasContext *s)
  * the FP/vector register Qn.
  */
 static inline int vec_reg_offset(DisasContext *s, int regno,
-                                 int element, TCGMemOp size)
+                                 int element, MemOp size)
 {
     int element_size = 1 << size;
     int offs = element * element_size;
diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
index fa068b0..5d7edd0 100644
--- a/target/arm/translate-sve.c
+++ b/target/arm/translate-sve.c
@@ -4567,7 +4567,7 @@ static bool trans_STR_pri(DisasContext *s, arg_rri *a)
  */

 /* The memory mode of the dtype.  */
-static const TCGMemOp dtype_mop[16] = {
+static const MemOp dtype_mop[16] = {
     MO_UB, MO_UB, MO_UB, MO_UB,
     MO_SL, MO_UW, MO_UW, MO_UW,
     MO_SW, MO_SW, MO_UL, MO_UL,
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462..d116c8c 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -114,7 +114,7 @@ typedef enum ISSInfo {
 } ISSInfo;

 /* Save the syndrome information for a Data Abort */
-static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo)
+static void disas_set_da_iss(DisasContext *s, MemOp memop, ISSInfo issinfo)
 {
     uint32_t syn;
     int sas = memop & MO_SIZE;
@@ -1079,7 +1079,7 @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
  * that the address argument is TCGv_i32 rather than TCGv.
  */

-static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
+static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
 {
     TCGv addr = tcg_temp_new();
     tcg_gen_extu_i32_tl(addr, a32);
@@ -1092,7 +1092,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
 }

 static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1107,7 +1107,7 @@ static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
 }

 static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr;

@@ -1160,7 +1160,7 @@ static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
 }

 static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);
     tcg_gen_qemu_ld_i64(val, addr, index, opc);
@@ -1175,7 +1175,7 @@ static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
 }

 static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
-                            int index, TCGMemOp opc)
+                            int index, MemOp opc)
 {
     TCGv addr = gen_aa32_addr(s, a32, opc);

@@ -1400,7 +1400,7 @@ neon_reg_offset (int reg, int n)
  * where 0 is the least significant end of the register.
  */
 static inline long
-neon_element_offset(int reg, int element, TCGMemOp size)
+neon_element_offset(int reg, int element, MemOp size)
 {
     int element_size = 1 << size;
     int ofs = element * element_size;
@@ -1422,7 +1422,7 @@ static TCGv_i32 neon_load_reg(int reg, int pass)
     return tmp;
 }

-static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1441,7 +1441,7 @@ static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
     }
 }

-static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
+static void neon_load_element64(TCGv_i64 var, int reg, int ele, MemOp mop)
 {
     long offset = neon_element_offset(reg, ele, mop & MO_SIZE);

@@ -1469,7 +1469,7 @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
     tcg_temp_free_i32(var);
 }

-static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
+static void neon_store_element(int reg, int ele, MemOp size, TCGv_i32 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -1488,7 +1488,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
     }
 }

-static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
+static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
 {
     long offset = neon_element_offset(reg, ele, size);

@@ -3558,7 +3558,7 @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
     int n;
     int vec_size;
     int mmu_idx;
-    TCGMemOp endian;
+    MemOp endian;
     TCGv_i32 addr;
     TCGv_i32 tmp;
     TCGv_i32 tmp2;
@@ -6867,7 +6867,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
             } else if ((insn & 0x380) == 0) {
                 /* VDUP */
                 int element;
-                TCGMemOp size;
+                MemOp size;

                 if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
                     return 1;
@@ -7435,7 +7435,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
                                TCGv_i32 addr, int size)
 {
     TCGv_i32 tmp = tcg_temp_new_i32();
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     s->is_ldex = true;

@@ -7489,7 +7489,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
     TCGv taddr;
     TCGLabel *done_label;
     TCGLabel *fail_label;
-    TCGMemOp opc = size | MO_ALIGN | s->be_data;
+    MemOp opc = size | MO_ALIGN | s->be_data;

     /* if (env->exclusive_addr == addr && env->exclusive_val == [addr]) {
          [addr] = {Rt};
@@ -8603,7 +8603,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
                         */

                         TCGv taddr;
-                        TCGMemOp opc = s->be_data;
+                        MemOp opc = s->be_data;

                         rm = (insn) & 0xf;

diff --git a/target/arm/translate.h b/target/arm/translate.h
index a20f6e2..284c510 100644
--- a/target/arm/translate.h
+++ b/target/arm/translate.h
@@ -21,7 +21,7 @@ typedef struct DisasContext {
     int condexec_cond;
     int thumb;
     int sctlr_b;
-    TCGMemOp be_data;
+    MemOp be_data;
 #if !defined(CONFIG_USER_ONLY)
     int user;
 #endif
diff --git a/target/hppa/translate.c b/target/hppa/translate.c
index 188fe68..ff4802a 100644
--- a/target/hppa/translate.c
+++ b/target/hppa/translate.c
@@ -1500,7 +1500,7 @@ static void form_gva(DisasContext *ctx, TCGv_tl *pgva, TCGv_reg *pofs,
  */
 static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1518,7 +1518,7 @@ static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,

 static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
                        unsigned rx, int scale, target_sreg disp,
-                       unsigned sp, int modify, TCGMemOp mop)
+                       unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1536,7 +1536,7 @@ static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,

 static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1554,7 +1554,7 @@ static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,

 static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
                         unsigned rx, int scale, target_sreg disp,
-                        unsigned sp, int modify, TCGMemOp mop)
+                        unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg ofs;
     TCGv_tl addr;
@@ -1580,7 +1580,7 @@ static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,

 static bool do_load(DisasContext *ctx, unsigned rt, unsigned rb,
                     unsigned rx, int scale, target_sreg disp,
-                    unsigned sp, int modify, TCGMemOp mop)
+                    unsigned sp, int modify, MemOp mop)
 {
     TCGv_reg dest;

@@ -1653,7 +1653,7 @@ static bool trans_fldd(DisasContext *ctx, arg_ldst *a)

 static bool do_store(DisasContext *ctx, unsigned rt, unsigned rb,
                      target_sreg disp, unsigned sp,
-                     int modify, TCGMemOp mop)
+                     int modify, MemOp mop)
 {
     nullify_over(ctx);
     do_store_reg(ctx, load_gpr(ctx, rt), rb, 0, 0, disp, sp, modify, mop);
@@ -2940,7 +2940,7 @@ static bool trans_st(DisasContext *ctx, arg_ldst *a)

 static bool trans_ldc(DisasContext *ctx, arg_ldst *a)
 {
-    TCGMemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
+    MemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
     TCGv_reg zero, dest, ofs;
     TCGv_tl addr;

diff --git a/target/i386/translate.c b/target/i386/translate.c
index 03150a8..def9867 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -87,8 +87,8 @@ typedef struct DisasContext {
     /* current insn context */
     int override; /* -1 if no override */
     int prefix;
-    TCGMemOp aflag;
-    TCGMemOp dflag;
+    MemOp aflag;
+    MemOp dflag;
     target_ulong pc_start;
     target_ulong pc; /* pc = eip + cs_base */
     /* current block context */
@@ -149,7 +149,7 @@ static void gen_eob(DisasContext *s);
 static void gen_jr(DisasContext *s, TCGv dest);
 static void gen_jmp(DisasContext *s, target_ulong eip);
 static void gen_jmp_tb(DisasContext *s, target_ulong eip, int tb_num);
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d);
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d);

 /* i386 arith/logic operations */
 enum {
@@ -320,7 +320,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
 }

 /* Select the size of a push/pop operation.  */
-static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
+static inline MemOp mo_pushpop(DisasContext *s, MemOp ot)
 {
     if (CODE64(s)) {
         return ot == MO_16 ? MO_16 : MO_64;
@@ -330,13 +330,13 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
 }

 /* Select the size of the stack pointer.  */
-static inline TCGMemOp mo_stacksize(DisasContext *s)
+static inline MemOp mo_stacksize(DisasContext *s)
 {
     return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
 }

 /* Select only size 64 else 32.  Used for SSE operand sizes.  */
-static inline TCGMemOp mo_64_32(TCGMemOp ot)
+static inline MemOp mo_64_32(MemOp ot)
 {
 #ifdef TARGET_X86_64
     return ot == MO_64 ? MO_64 : MO_32;
@@ -347,19 +347,19 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)

 /* Select size 8 if lsb of B is clear, else OT.  Used for decoding
    byte vs word opcodes.  */
-static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
+static inline MemOp mo_b_d(int b, MemOp ot)
 {
     return b & 1 ? ot : MO_8;
 }

 /* Select size 8 if lsb of B is clear, else OT capped at 32.
    Used for decoding operand size of port opcodes.  */
-static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
+static inline MemOp mo_b_d32(int b, MemOp ot)
 {
     return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
 }

-static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
+static void gen_op_mov_reg_v(DisasContext *s, MemOp ot, int reg, TCGv t0)
 {
     switch(ot) {
     case MO_8:
@@ -388,7 +388,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
 }

 static inline
-void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
+void gen_op_mov_v_reg(DisasContext *s, MemOp ot, TCGv t0, int reg)
 {
     if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
         tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
@@ -411,13 +411,13 @@ static inline void gen_op_jmp_v(TCGv dest)
 }

 static inline
-void gen_op_add_reg_im(DisasContext *s, TCGMemOp size, int reg, int32_t val)
+void gen_op_add_reg_im(DisasContext *s, MemOp size, int reg, int32_t val)
 {
     tcg_gen_addi_tl(s->tmp0, cpu_regs[reg], val);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
 }

-static inline void gen_op_add_reg_T0(DisasContext *s, TCGMemOp size, int reg)
+static inline void gen_op_add_reg_T0(DisasContext *s, MemOp size, int reg)
 {
     tcg_gen_add_tl(s->tmp0, cpu_regs[reg], s->T0);
     gen_op_mov_reg_v(s, size, reg, s->tmp0);
@@ -451,7 +451,7 @@ static inline void gen_jmp_im(DisasContext *s, target_ulong pc)
 /* Compute SEG:REG into A0.  SEG is selected from the override segment
    (OVR_SEG) and the default segment (DEF_SEG).  OVR_SEG may be -1 to
    indicate no override.  */
-static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
+static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
                           int def_seg, int ovr_seg)
 {
     switch (aflag) {
@@ -514,13 +514,13 @@ static inline void gen_string_movl_A0_EDI(DisasContext *s)
     gen_lea_v_seg(s, s->aflag, cpu_regs[R_EDI], R_ES, -1);
 }

-static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
+static inline void gen_op_movl_T0_Dshift(DisasContext *s, MemOp ot)
 {
     tcg_gen_ld32s_tl(s->T0, cpu_env, offsetof(CPUX86State, df));
     tcg_gen_shli_tl(s->T0, s->T0, ot);
 };

-static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
+static TCGv gen_ext_tl(TCGv dst, TCGv src, MemOp size, bool sign)
 {
     switch (size) {
     case MO_8:
@@ -551,18 +551,18 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
     }
 }

-static void gen_extu(TCGMemOp ot, TCGv reg)
+static void gen_extu(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, false);
 }

-static void gen_exts(TCGMemOp ot, TCGv reg)
+static void gen_exts(MemOp ot, TCGv reg)
 {
     gen_ext_tl(reg, reg, ot, true);
 }

 static inline
-void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jnz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
@@ -570,14 +570,14 @@ void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
 }

 static inline
-void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
+void gen_op_jz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
 {
     tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
     gen_extu(size, s->tmp0);
     tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1);
 }

-static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
+static void gen_helper_in_func(MemOp ot, TCGv v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -594,7 +594,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
     }
 }

-static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
+static void gen_helper_out_func(MemOp ot, TCGv_i32 v, TCGv_i32 n)
 {
     switch (ot) {
     case MO_8:
@@ -611,7 +611,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
     }
 }

-static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
+static void gen_check_io(DisasContext *s, MemOp ot, target_ulong cur_eip,
                          uint32_t svm_flags)
 {
     target_ulong next_eip;
@@ -644,7 +644,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
     }
 }

-static inline void gen_movs(DisasContext *s, TCGMemOp ot)
+static inline void gen_movs(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -840,7 +840,7 @@ static CCPrepare gen_prepare_eflags_s(DisasContext *s, TCGv reg)
         return (CCPrepare) { .cond = TCG_COND_NEVER, .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, true);
             return (CCPrepare) { .cond = TCG_COND_LT, .reg = t0, .mask = -1 };
         }
@@ -885,7 +885,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
                              .mask = -1 };
     default:
         {
-            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
+            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
             TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, false);
             return (CCPrepare) { .cond = TCG_COND_EQ, .reg = t0, .mask = -1 };
         }
@@ -897,7 +897,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
 static CCPrepare gen_prepare_cc(DisasContext *s, int b, TCGv reg)
 {
     int inv, jcc_op, cond;
-    TCGMemOp size;
+    MemOp size;
     CCPrepare cc;
     TCGv t0;

@@ -1075,7 +1075,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)
     return l2;
 }

-static inline void gen_stos(DisasContext *s, TCGMemOp ot)
+static inline void gen_stos(DisasContext *s, MemOp ot)
 {
     gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
     gen_string_movl_A0_EDI(s);
@@ -1084,7 +1084,7 @@ static inline void gen_stos(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_lods(DisasContext *s, TCGMemOp ot)
+static inline void gen_lods(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_ESI(s);
     gen_op_ld_v(s, ot, s->T0, s->A0);
@@ -1093,7 +1093,7 @@ static inline void gen_lods(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_ESI);
 }

-static inline void gen_scas(DisasContext *s, TCGMemOp ot)
+static inline void gen_scas(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1102,7 +1102,7 @@ static inline void gen_scas(DisasContext *s, TCGMemOp ot)
     gen_op_add_reg_T0(s, s->aflag, R_EDI);
 }

-static inline void gen_cmps(DisasContext *s, TCGMemOp ot)
+static inline void gen_cmps(DisasContext *s, MemOp ot)
 {
     gen_string_movl_A0_EDI(s);
     gen_op_ld_v(s, ot, s->T1, s->A0);
@@ -1126,7 +1126,7 @@ static void gen_bpt_io(DisasContext *s, TCGv_i32 t_port, int ot)
 }


-static inline void gen_ins(DisasContext *s, TCGMemOp ot)
+static inline void gen_ins(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1148,7 +1148,7 @@ static inline void gen_ins(DisasContext *s, TCGMemOp ot)
     }
 }

-static inline void gen_outs(DisasContext *s, TCGMemOp ot)
+static inline void gen_outs(DisasContext *s, MemOp ot)
 {
     if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
         gen_io_start();
@@ -1171,7 +1171,7 @@ static inline void gen_outs(DisasContext *s, TCGMemOp ot)
 /* same method as Valgrind : we generate jumps to current or next
    instruction */
 #define GEN_REPZ(op)                                                          \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
                                  target_ulong cur_eip, target_ulong next_eip) \
 {                                                                             \
     TCGLabel *l2;                                                             \
@@ -1187,7 +1187,7 @@ static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
 }

 #define GEN_REPZ2(op)                                                         \
-static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
+static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
                                    target_ulong cur_eip,                      \
                                    target_ulong next_eip,                     \
                                    int nz)                                    \
@@ -1284,7 +1284,7 @@ static void gen_illegal_opcode(DisasContext *s)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
+static void gen_op(DisasContext *s1, int op, MemOp ot, int d)
 {
     if (d != OR_TMP0) {
         if (s1->prefix & PREFIX_LOCK) {
@@ -1395,7 +1395,7 @@ static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
 }

 /* if d == OR_TMP0, it means memory operand (address in A0) */
-static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
+static void gen_inc(DisasContext *s1, MemOp ot, int d, int c)
 {
     if (s1->prefix & PREFIX_LOCK) {
         if (d != OR_TMP0) {
@@ -1421,7 +1421,7 @@ static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
     set_cc_op(s1, (c > 0 ? CC_OP_INCB : CC_OP_DECB) + ot);
 }

-static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
+static void gen_shift_flags(DisasContext *s, MemOp ot, TCGv result,
                             TCGv shm1, TCGv count, bool is_right)
 {
     TCGv_i32 z32, s32, oldop;
@@ -1466,7 +1466,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shift_rm_T1(DisasContext *s, MemOp ot, int op1,
                             int is_right, int is_arith)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1502,7 +1502,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     gen_shift_flags(s, ot, s->T0, s->tmp0, s->T1, is_right);
 }

-static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_shift_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                             int is_right, int is_arith)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1542,7 +1542,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
     }
 }

-static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
+static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
 {
     target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
     TCGv_i32 t0, t1;
@@ -1627,7 +1627,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
     set_cc_op(s, CC_OP_DYNAMIC);
 }

-static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
+static void gen_rot_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
                           int is_right)
 {
     int mask = (ot == MO_64 ? 0x3f : 0x1f);
@@ -1705,7 +1705,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
 }

 /* XXX: add faster immediate = 1 case */
-static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
                            int is_right)
 {
     gen_compute_eflags(s);
@@ -1761,7 +1761,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
 }

 /* XXX: add faster immediate case */
-static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
+static void gen_shiftd_rm_T1(DisasContext *s, MemOp ot, int op1,
                              bool is_right, TCGv count_in)
 {
     target_ulong mask = (ot == MO_64 ? 63 : 31);
@@ -1842,7 +1842,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
     tcg_temp_free(count);
 }

-static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
+static void gen_shift(DisasContext *s1, int op, MemOp ot, int d, int s)
 {
     if (s != OR_TMP1)
         gen_op_mov_v_reg(s1, ot, s1->T1, s);
@@ -1872,7 +1872,7 @@ static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
     }
 }

-static void gen_shifti(DisasContext *s1, int op, TCGMemOp ot, int d, int c)
+static void gen_shifti(DisasContext *s1, int op, MemOp ot, int d, int c)
 {
     switch(op) {
     case OP_ROL:
@@ -2149,7 +2149,7 @@ static void gen_add_A0_ds_seg(DisasContext *s)
 /* generate modrm memory load or store of 'reg'. TMP0 is used if reg ==
    OR_TMP0 */
 static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
-                           TCGMemOp ot, int reg, int is_store)
+                           MemOp ot, int reg, int is_store)
 {
     int mod, rm;

@@ -2179,7 +2179,7 @@ static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
     }
 }

-static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
+static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot)
 {
     uint32_t ret;

@@ -2202,7 +2202,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
     return ret;
 }

-static inline int insn_const_size(TCGMemOp ot)
+static inline int insn_const_size(MemOp ot)
 {
     if (ot <= MO_32) {
         return 1 << ot;
@@ -2266,7 +2266,7 @@ static inline void gen_jcc(DisasContext *s, int b,
     }
 }

-static void gen_cmovcc1(CPUX86State *env, DisasContext *s, TCGMemOp ot, int b,
+static void gen_cmovcc1(CPUX86State *env, DisasContext *s, MemOp ot, int b,
                         int modrm, int reg)
 {
     CCPrepare cc;
@@ -2363,8 +2363,8 @@ static inline void gen_stack_update(DisasContext *s, int addend)
 /* Generate a push. It depends on ss32, addseg and dflag.  */
 static void gen_push_v(DisasContext *s, TCGv val)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);
     int size = 1 << d_ot;
     TCGv new_esp = s->A0;

@@ -2383,9 +2383,9 @@ static void gen_push_v(DisasContext *s, TCGv val)
 }

 /* two step pop is necessary for precise exceptions */
-static TCGMemOp gen_pop_T0(DisasContext *s)
+static MemOp gen_pop_T0(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp d_ot = mo_pushpop(s, s->dflag);

     gen_lea_v_seg(s, mo_stacksize(s), cpu_regs[R_ESP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -2393,7 +2393,7 @@ static TCGMemOp gen_pop_T0(DisasContext *s)
     return d_ot;
 }

-static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)
+static inline void gen_pop_update(DisasContext *s, MemOp ot)
 {
     gen_stack_update(s, 1 << ot);
 }
@@ -2405,8 +2405,8 @@ static inline void gen_stack_A0(DisasContext *s)

 static void gen_pusha(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2421,8 +2421,8 @@ static void gen_pusha(DisasContext *s)

 static void gen_popa(DisasContext *s)
 {
-    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
-    TCGMemOp d_ot = s->dflag;
+    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = s->dflag;
     int size = 1 << d_ot;
     int i;

@@ -2442,8 +2442,8 @@ static void gen_popa(DisasContext *s)

 static void gen_enter(DisasContext *s, int esp_addend, int level)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
     int size = 1 << d_ot;

     /* Push BP; compute FrameTemp into T1.  */
@@ -2482,8 +2482,8 @@ static void gen_enter(DisasContext *s, int esp_addend, int level)

 static void gen_leave(DisasContext *s)
 {
-    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
-    TCGMemOp a_ot = mo_stacksize(s);
+    MemOp d_ot = mo_pushpop(s, s->dflag);
+    MemOp a_ot = mo_stacksize(s);

     gen_lea_v_seg(s, a_ot, cpu_regs[R_EBP], R_SS, -1);
     gen_op_ld_v(s, d_ot, s->T0, s->A0);
@@ -3045,7 +3045,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
     SSEFunc_0_eppi sse_fn_eppi;
     SSEFunc_0_ppi sse_fn_ppi;
     SSEFunc_0_eppt sse_fn_eppt;
-    TCGMemOp ot;
+    MemOp ot;

     b &= 0xff;
     if (s->prefix & PREFIX_DATA)
@@ -4488,7 +4488,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     CPUX86State *env = cpu->env_ptr;
     int b, prefixes;
     int shift;
-    TCGMemOp ot, aflag, dflag;
+    MemOp ot, aflag, dflag;
     int modrm, reg, rm, mod, op, opreg, val;
     target_ulong next_eip, tval;
     int rex_w, rex_r;
@@ -5567,8 +5567,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x1be: /* movsbS Gv, Eb */
     case 0x1bf: /* movswS Gv, Eb */
         {
-            TCGMemOp d_ot;
-            TCGMemOp s_ot;
+            MemOp d_ot;
+            MemOp s_ot;

             /* d_ot is the size of destination */
             d_ot = dflag;
diff --git a/target/m68k/translate.c b/target/m68k/translate.c
index 60bcfb7..24c1dd3 100644
--- a/target/m68k/translate.c
+++ b/target/m68k/translate.c
@@ -2414,7 +2414,7 @@ DISAS_INSN(cas)
     uint16_t ext;
     TCGv load;
     TCGv cmp;
-    TCGMemOp opc;
+    MemOp opc;

     switch ((insn >> 9) & 3) {
     case 1:
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index 9ce65f3..41d1b8b 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -919,7 +919,7 @@ static void dec_load(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
@@ -1035,7 +1035,7 @@ static void dec_store(DisasContext *dc)
     unsigned int size;
     bool rev = false, ex = false, ea = false;
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
-    TCGMemOp mop;
+    MemOp mop;

     mop = dc->opcode & 3;
     size = 1 << mop;
diff --git a/target/mips/translate.c b/target/mips/translate.c
index ca62800..59b5d85 100644
--- a/target/mips/translate.c
+++ b/target/mips/translate.c
@@ -2526,7 +2526,7 @@ typedef struct DisasContext {
     int32_t CP0_Config5;
     /* Routine used to access memory */
     int mem_idx;
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
     uint32_t hflags, saved_hflags;
     target_ulong btarget;
     bool ulri;
@@ -3706,7 +3706,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,

 /* Store conditional */
 static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset,
-                        TCGMemOp tcg_mo, bool eva)
+                        MemOp tcg_mo, bool eva)
 {
     TCGv addr, t0, val;
     TCGLabel *l1 = gen_new_label();
@@ -4546,7 +4546,7 @@ static void gen_HILO(DisasContext *ctx, uint32_t opc, int acc, int reg)
 }

 static inline void gen_r6_ld(target_long addr, int reg, int memidx,
-                             TCGMemOp memop)
+                             MemOp memop)
 {
     TCGv t0 = tcg_const_tl(addr);
     tcg_gen_qemu_ld_tl(t0, t0, memidx, memop);
@@ -21828,7 +21828,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
                              extract32(ctx->opcode, 0, 8);
                     TCGv va = tcg_temp_new();
                     TCGv t1 = tcg_temp_new();
-                    TCGMemOp memop = (extract32(ctx->opcode, 8, 3)) ==
+                    MemOp memop = (extract32(ctx->opcode, 8, 3)) ==
                                       NM_P_LS_UAWM ? MO_UNALN : 0;

                     count = (count == 0) ? 8 : count;
diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
index 4360ce4..b189c50 100644
--- a/target/openrisc/translate.c
+++ b/target/openrisc/translate.c
@@ -681,7 +681,7 @@ static bool trans_l_lwa(DisasContext *dc, arg_load *a)
     return true;
 }

-static void do_load(DisasContext *dc, arg_load *a, TCGMemOp mop)
+static void do_load(DisasContext *dc, arg_load *a, MemOp mop)
 {
     TCGv ea;

@@ -763,7 +763,7 @@ static bool trans_l_swa(DisasContext *dc, arg_store *a)
     return true;
 }

-static void do_store(DisasContext *dc, arg_store *a, TCGMemOp mop)
+static void do_store(DisasContext *dc, arg_store *a, MemOp mop)
 {
     TCGv t0 = tcg_temp_new();
     tcg_gen_addi_tl(t0, cpu_R[a->a], a->i);
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 4a5de28..31800ed 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -162,7 +162,7 @@ struct DisasContext {
     int mem_idx;
     int access_type;
     /* Translation flags */
-    TCGMemOp default_tcg_memop_mask;
+    MemOp default_tcg_memop_mask;
 #if defined(TARGET_PPC64)
     bool sf_mode;
     bool has_cfar;
@@ -3142,7 +3142,7 @@ static void gen_isync(DisasContext *ctx)

 #define MEMOP_GET_SIZE(x)  (1 << ((x) & MO_SIZE))

-static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
+static void gen_load_locked(DisasContext *ctx, MemOp memop)
 {
     TCGv gpr = cpu_gpr[rD(ctx->opcode)];
     TCGv t0 = tcg_temp_new();
@@ -3167,7 +3167,7 @@ LARX(lbarx, DEF_MEMOP(MO_UB))
 LARX(lharx, DEF_MEMOP(MO_UW))
 LARX(lwarx, DEF_MEMOP(MO_UL))

-static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
+static void gen_fetch_inc_conditional(DisasContext *ctx, MemOp memop,
                                       TCGv EA, TCGCond cond, int addend)
 {
     TCGv t = tcg_temp_new();
@@ -3193,7 +3193,7 @@ static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
     tcg_temp_free(u);
 }

-static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_ld_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3306,7 +3306,7 @@ static void gen_ldat(DisasContext *ctx)
 }
 #endif

-static void gen_st_atomic(DisasContext *ctx, TCGMemOp memop)
+static void gen_st_atomic(DisasContext *ctx, MemOp memop)
 {
     uint32_t gpr_FC = FC(ctx->opcode);
     TCGv EA = tcg_temp_new();
@@ -3389,7 +3389,7 @@ static void gen_stdat(DisasContext *ctx)
 }
 #endif

-static void gen_conditional_store(DisasContext *ctx, TCGMemOp memop)
+static void gen_conditional_store(DisasContext *ctx, MemOp memop)
 {
     TCGLabel *l1 = gen_new_label();
     TCGLabel *l2 = gen_new_label();
diff --git a/target/riscv/insn_trans/trans_rva.inc.c b/target/riscv/insn_trans/trans_rva.inc.c
index fadd888..be8a9f0 100644
--- a/target/riscv/insn_trans/trans_rva.inc.c
+++ b/target/riscv/insn_trans/trans_rva.inc.c
@@ -18,7 +18,7 @@
  * this program.  If not, see <http://www.gnu.org/licenses/>.
  */

-static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     /* Put addr in load_res, data in load_val.  */
@@ -37,7 +37,7 @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
     return true;
 }

-static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
+static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
@@ -82,8 +82,8 @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
 }

 static bool gen_amo(DisasContext *ctx, arg_atomic *a,
-                    void(*func)(TCGv, TCGv, TCGv, TCGArg, TCGMemOp),
-                    TCGMemOp mop)
+                    void(*func)(TCGv, TCGv, TCGv, TCGArg, MemOp),
+                    MemOp mop)
 {
     TCGv src1 = tcg_temp_new();
     TCGv src2 = tcg_temp_new();
diff --git a/target/riscv/insn_trans/trans_rvi.inc.c b/target/riscv/insn_trans/trans_rvi.inc.c
index ea64731..cf440d1 100644
--- a/target/riscv/insn_trans/trans_rvi.inc.c
+++ b/target/riscv/insn_trans/trans_rvi.inc.c
@@ -135,7 +135,7 @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
     return gen_branch(ctx, a, TCG_COND_GEU);
 }

-static bool gen_load(DisasContext *ctx, arg_lb *a, TCGMemOp memop)
+static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv t1 = tcg_temp_new();
@@ -174,7 +174,7 @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
     return gen_load(ctx, a, MO_TEUW);
 }

-static bool gen_store(DisasContext *ctx, arg_sb *a, TCGMemOp memop)
+static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
 {
     TCGv t0 = tcg_temp_new();
     TCGv dat = tcg_temp_new();
diff --git a/target/s390x/translate.c b/target/s390x/translate.c
index ac0d8b6..2927247 100644
--- a/target/s390x/translate.c
+++ b/target/s390x/translate.c
@@ -152,7 +152,7 @@ static inline int vec_full_reg_offset(uint8_t reg)
     return offsetof(CPUS390XState, vregs[reg][0]);
 }

-static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
+static inline int vec_reg_offset(uint8_t reg, uint8_t enr, MemOp es)
 {
     /* Convert element size (es) - e.g. MO_8 - to bytes */
     const uint8_t bytes = 1 << es;
@@ -2262,7 +2262,7 @@ static DisasJumpType op_csst(DisasContext *s, DisasOps *o)
 #ifndef CONFIG_USER_ONLY
 static DisasJumpType op_csp(DisasContext *s, DisasOps *o)
 {
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;
     TCGv_i64 addr, old, cc;
     TCGLabel *lab = gen_new_label();

@@ -3228,7 +3228,7 @@ static DisasJumpType op_lm64(DisasContext *s, DisasOps *o)
 static DisasJumpType op_lpd(DisasContext *s, DisasOps *o)
 {
     TCGv_i64 a1, a2;
-    TCGMemOp mop = s->insn->data;
+    MemOp mop = s->insn->data;

     /* In a parallel context, stop the world and single step.  */
     if (tb_cflags(s->base.tb) & CF_PARALLEL) {
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf8..4c56bbb 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -57,13 +57,13 @@
 #define FPF_LONG        3
 #define FPF_EXT         4

-static inline bool valid_vec_element(uint8_t enr, TCGMemOp es)
+static inline bool valid_vec_element(uint8_t enr, MemOp es)
 {
     return !(enr & ~(NUM_VEC_ELEMENTS(es) - 1));
 }

 static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -96,7 +96,7 @@ static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
 }

 static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
-                                 TCGMemOp memop)
+                                 MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -123,7 +123,7 @@ static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
 }

 static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

@@ -146,7 +146,7 @@ static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
 }

 static void write_vec_element_i32(TCGv_i32 src, int reg, uint8_t enr,
-                                  TCGMemOp memop)
+                                  MemOp memop)
 {
     const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);

diff --git a/target/sparc/translate.c b/target/sparc/translate.c
index 091bab5..bef9ce6 100644
--- a/target/sparc/translate.c
+++ b/target/sparc/translate.c
@@ -2019,7 +2019,7 @@ static inline void gen_ne_fop_QD(DisasContext *dc, int rd, int rs,
 }

 static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
-                     TCGv addr, int mmu_idx, TCGMemOp memop)
+                     TCGv addr, int mmu_idx, MemOp memop)
 {
     gen_address_mask(dc, addr);
     tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
@@ -2050,10 +2050,10 @@ typedef struct {
     ASIType type;
     int asi;
     int mem_idx;
-    TCGMemOp memop;
+    MemOp memop;
 } DisasASI;

-static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
+static DisasASI get_asi(DisasContext *dc, int insn, MemOp memop)
 {
     int asi = GET_FIELD(insn, 19, 26);
     ASIType type = GET_ASI_HELPER;
@@ -2267,7 +2267,7 @@ static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
 }

 static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2305,7 +2305,7 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
 }

 static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
-                       int insn, TCGMemOp memop)
+                       int insn, MemOp memop)
 {
     DisasASI da = get_asi(dc, insn, memop);

@@ -2511,7 +2511,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for lddfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

@@ -2625,7 +2625,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
     case GET_ASI_BLOCK:
         /* Valid for stdfa on aligned registers only.  */
         if (size == 8 && (rd & 7) == 0) {
-            TCGMemOp memop;
+            MemOp memop;
             TCGv eight;
             int i;

diff --git a/target/tilegx/translate.c b/target/tilegx/translate.c
index c46a4ab..68dd4aa 100644
--- a/target/tilegx/translate.c
+++ b/target/tilegx/translate.c
@@ -290,7 +290,7 @@ static void gen_cmul2(TCGv tdest, TCGv tsrca, TCGv tsrcb, int sh, int rd)
 }

 static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
-                              unsigned srcb, TCGMemOp memop, const char *name)
+                              unsigned srcb, MemOp memop, const char *name)
 {
     if (dest) {
         return TILEGX_EXCP_OPCODE_UNKNOWN;
@@ -305,7 +305,7 @@ static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
 }

 static TileExcp gen_st_add_opcode(DisasContext *dc, unsigned srca, unsigned srcb,
-                                  int imm, TCGMemOp memop, const char *name)
+                                  int imm, MemOp memop, const char *name)
 {
     TCGv tsrca = load_gr(dc, srca);
     TCGv tsrcb = load_gr(dc, srcb);
@@ -496,7 +496,7 @@ static TileExcp gen_rr_opcode(DisasContext *dc, unsigned opext,
 {
     TCGv tdest, tsrca;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     TileExcp ret = TILEGX_EXCP_NONE;
     bool prefetch_nofault = false;

@@ -1478,7 +1478,7 @@ static TileExcp gen_rri_opcode(DisasContext *dc, unsigned opext,
     TCGv tsrca = load_gr(dc, srca);
     bool prefetch_nofault = false;
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     int i2, i3;
     TCGv t0;

@@ -2106,7 +2106,7 @@ static TileExcp decode_y2(DisasContext *dc, tilegx_bundle_bits bundle)
     unsigned srca = get_SrcA_Y2(bundle);
     unsigned srcbdest = get_SrcBDest_Y2(bundle);
     const char *mnemonic;
-    TCGMemOp memop;
+    MemOp memop;
     bool prefetch_nofault = false;

     switch (OEY2(opc, mode)) {
diff --git a/target/tricore/translate.c b/target/tricore/translate.c
index dc2a65f..87a5f50 100644
--- a/target/tricore/translate.c
+++ b/target/tricore/translate.c
@@ -227,7 +227,7 @@ static inline void generate_trap(DisasContext *ctx, int class, int tin);
 /* Functions for load/save to/from memory */

 static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -236,7 +236,7 @@ static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
 }

 static inline void gen_offset_st(DisasContext *ctx, TCGv r1, TCGv r2,
-                                 int16_t con, TCGMemOp mop)
+                                 int16_t con, MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, con);
@@ -284,7 +284,7 @@ static void gen_offset_ld_2regs(TCGv rh, TCGv rl, TCGv base, int16_t con,
 }

 static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
@@ -294,7 +294,7 @@ static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
 }

 static void gen_ld_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
-                           TCGMemOp mop)
+                           MemOp mop)
 {
     TCGv temp = tcg_temp_new();
     tcg_gen_addi_tl(temp, r2, off);
diff --git a/tcg/README b/tcg/README
index 21fcdf7..b4382fa 100644
--- a/tcg/README
+++ b/tcg/README
@@ -512,7 +512,7 @@ Both t0 and t1 may be split into little-endian ordered pairs of registers
 if dealing with 64-bit quantities on a 32-bit host.

 The memidx selects the qemu tlb index to use (e.g. user or kernel access).
-The flags are the TCGMemOp bits, selecting the sign, width, and endianness
+The flags are the MemOp bits, selecting the sign, width, and endianness
 of the memory access.

 For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a
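(Aside, to make the README text above concrete: a MemOp is just an OR of
size, sign and endianness flags, so the predefined combinations decompose
as in this sketch -- all MO_* names are the existing constants, nothing new.)

    /* MO_TESW is a target-endian, sign-extended 16-bit access.  */
    MemOp op = MO_16 | MO_SIGN | MO_TE;      /* == MO_TESW */
    unsigned bytes = 1 << (op & MO_SIZE);    /* 2 */
    bool sign_extend = (op & MO_SIGN) != 0;  /* true */
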
diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
index 0713448..3f92101 100644
--- a/tcg/aarch64/tcg-target.inc.c
+++ b/tcg/aarch64/tcg-target.inc.c
@@ -1423,7 +1423,7 @@ static inline void tcg_out_rev16(TCGContext *s, TCGReg rd, TCGReg rn)
     tcg_out_insn(s, 3507, REV16, TCG_TYPE_I32, rd, rn);
 }

-static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
+static inline void tcg_out_sxt(TCGContext *s, TCGType ext, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes SXTB, SXTH, SXTW, of SBFM Xd, Xn, #0, #7|15|31 */
@@ -1431,7 +1431,7 @@ static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
     tcg_out_sbfm(s, ext, rd, rn, 0, bits);
 }

-static inline void tcg_out_uxt(TCGContext *s, TCGMemOp s_bits,
+static inline void tcg_out_uxt(TCGContext *s, MemOp s_bits,
                                TCGReg rd, TCGReg rn)
 {
     /* Using ALIASes UXTB, UXTH of UBFM Wd, Wn, #0, #7|15 */
@@ -1580,8 +1580,8 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1605,8 +1605,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp size = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp size = opc & MO_SIZE;

     if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1649,7 +1649,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
    slow path for the failure case, which will be patched later when finalizing
    the slow path. Generated code returns the host addend in X1,
    clobbers X0,X2,X3,TMP. */
-static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
+static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                              tcg_insn_unit **label_ptr, int mem_index,
                              bool is_read)
 {
@@ -1709,11 +1709,11 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,

 #endif /* CONFIG_SOFTMMU */

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SSIZE) {
     case MO_UB:
@@ -1765,11 +1765,11 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
                                    TCGReg data_r, TCGReg addr_r,
                                    TCGType otype, TCGReg off_r)
 {
-    const TCGMemOp bswap = memop & MO_BSWAP;
+    const MemOp bswap = memop & MO_BSWAP;

     switch (memop & MO_SIZE) {
     case MO_8:
@@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi, TCGType ext)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
@@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
     const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
index ece88dc..94d80d7 100644
--- a/tcg/arm/tcg-target.inc.c
+++ b/tcg/arm/tcg-target.inc.c
@@ -1233,7 +1233,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
    containing the addend of the tlb entry.  Clobbers R0, R1, R2, TMP.  */

 static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                               TCGMemOp opc, int mem_index, bool is_load)
+                               MemOp opc, int mem_index, bool is_load)
 {
     int cmp_off = (is_load ? offsetof(CPUTLBEntry, addr_read)
                    : offsetof(CPUTLBEntry, addr_write));
@@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     void *func;

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
@@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGReg argreg, datalo, datahi;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
         return false;
@@ -1453,11 +1453,11 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 }
 #endif /* SOFTMMU */

-static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_index(TCGContext *s, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1514,11 +1514,11 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SSIZE) {
     case MO_UB:
@@ -1577,7 +1577,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
@@ -1614,11 +1614,11 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
 #endif
 }

-static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
+static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, MemOp opc,
                                          TCGReg datalo, TCGReg datahi,
                                          TCGReg addrlo, TCGReg addend)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1659,11 +1659,11 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
     }
 }

-static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
+static inline void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc,
                                           TCGReg datalo, TCGReg datahi,
                                           TCGReg addrlo)
 {
-    TCGMemOp bswap = opc & MO_BSWAP;
+    MemOp bswap = opc & MO_BSWAP;

     switch (opc & MO_SIZE) {
     case MO_8:
@@ -1708,7 +1708,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
 {
     TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     TCGReg addend;
diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
index 6ddeebf..9d8ed97 100644
--- a/tcg/i386/tcg-target.inc.c
+++ b/tcg/i386/tcg-target.inc.c
@@ -1697,7 +1697,7 @@ static void * const qemu_st_helpers[16] = {
    First argument register is clobbered.  */

 static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
-                                    int mem_index, TCGMemOp opc,
+                                    int mem_index, MemOp opc,
                                     tcg_insn_unit **label_ptr, int which)
 {
     const TCGReg r0 = TCG_REG_L0;
@@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg data_reg;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
@@ -1895,8 +1895,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     tcg_insn_unit **label_ptr = &l->label_ptr[0];
     TCGReg retaddr;

@@ -1995,10 +1995,10 @@ static inline int setup_guest_base_seg(void)

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, bool is64, TCGMemOp memop)
+                                   int seg, bool is64, MemOp memop)
 {
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int rexw = is64 * P_REXW;
     int movop = OPC_MOVL_GvEv;

@@ -2103,7 +2103,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
@@ -2137,15 +2137,15 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
                                    TCGReg base, int index, intptr_t ofs,
-                                   int seg, TCGMemOp memop)
+                                   int seg, MemOp memop)
 {
     /* ??? Ideally we wouldn't need a scratch register.  For user-only,
        we could perform the bswap twice to restore the original value
        instead of moving to the scratch.  But as it is, the L constraint
        means that TCG_REG_L0 is definitely free here.  */
     const TCGReg scratch = TCG_REG_L0;
-    const TCGMemOp real_bswap = memop & MO_BSWAP;
-    TCGMemOp bswap = real_bswap;
+    const MemOp real_bswap = memop & MO_BSWAP;
+    MemOp bswap = real_bswap;
     int movop = OPC_MOVL_EvGv;

     if (have_movbe && real_bswap) {
@@ -2221,7 +2221,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
     TCGReg datalo, datahi, addrlo;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     int mem_index;
     tcg_insn_unit *label_ptr[2];
diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
index 41bff32..5442167 100644
--- a/tcg/mips/tcg-target.inc.c
+++ b/tcg/mips/tcg-target.inc.c
@@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit *label_ptr[2], bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     int mem_index = get_mmuidx(oi);
@@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg v0;
     int i;

@@ -1363,8 +1363,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     int i;

     /* resolve label address */
@@ -1413,7 +1413,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
     case MO_UB:
@@ -1521,7 +1521,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
@@ -1558,7 +1558,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
     /* Don't clutter the code below with checks to avoid bswapping ZERO.  */
     if ((lo | hi) == 0) {
@@ -1624,7 +1624,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[2];
 #endif
diff --git a/tcg/optimize.c b/tcg/optimize.c
index d2424de..a89ffda 100644
--- a/tcg/optimize.c
+++ b/tcg/optimize.c
@@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s)
         CASE_OP_32_64(qemu_ld):
             {
                 TCGMemOpIdx oi = op->args[nb_oargs + nb_iargs];
-                TCGMemOp mop = get_memop(oi);
+                MemOp mop = get_memop(oi);
                 if (!(mop & MO_SIGN)) {
                     mask = (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1;
                 }
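(Aside: as a worked example of the mask computation above, take an
unsigned 16-bit load, where mop & MO_SIZE == MO_16 == 1.)

    /* 8 << 1 == 16 bits, so
       mask = (2ULL << (16 - 1)) - 1 == 0xffff,
       i.e. only the low 16 bits of the loaded value can be nonzero.  */
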
diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
index 852b894..815edac 100644
--- a/tcg/ppc/tcg-target.inc.c
+++ b/tcg/ppc/tcg-target.inc.c
@@ -1506,7 +1506,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -32768);
    in CR7, loads the addend of the TLB into R3, and returns the register
    containing the guest address (zero-extended into R4).  Clobbers R0 and R2. */

-static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, MemOp opc,
                                TCGReg addrlo, TCGReg addrhi,
                                int mem_index, bool is_read)
 {
@@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1680,8 +1680,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
 {
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg hi, lo, arg = TCG_REG_R3;

     if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
@@ -1744,7 +1744,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
@@ -1819,7 +1819,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg datalo, datahi, addrlo, rbase;
     TCGReg addrhi __attribute__((unused));
     TCGMemOpIdx oi;
-    TCGMemOp opc, s_bits;
+    MemOp opc, s_bits;
 #ifdef CONFIG_SOFTMMU
     int mem_index;
     tcg_insn_unit *label_ptr;
diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
index 3e76bf5..7018509 100644
--- a/tcg/riscv/tcg-target.inc.c
+++ b/tcg/riscv/tcg-target.inc.c
@@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
                              TCGReg addrh, TCGMemOpIdx oi,
                              tcg_insn_unit **label_ptr, bool is_load)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     unsigned s_bits = opc & MO_SIZE;
     unsigned a_bits = get_alignment_bits(opc);
     tcg_target_long compare_mask;
@@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
 static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1077,8 +1077,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 {
     TCGMemOpIdx oi = l->oi;
-    TCGMemOp opc = get_memop(oi);
-    TCGMemOp s_bits = opc & MO_SIZE;
+    MemOp opc = get_memop(oi);
+    MemOp s_bits = opc & MO_SIZE;
     TCGReg a0 = tcg_target_call_iarg_regs[0];
     TCGReg a1 = tcg_target_call_iarg_regs[1];
     TCGReg a2 = tcg_target_call_iarg_regs[2];
@@ -1121,9 +1121,9 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
 #endif /* CONFIG_SOFTMMU */

 static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc, bool is_64)
+                                   TCGReg base, MemOp opc, bool is_64)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1172,7 +1172,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
@@ -1208,9 +1208,9 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
 }

 static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
-                                   TCGReg base, TCGMemOp opc)
+                                   TCGReg base, MemOp opc)
 {
-    const TCGMemOp bswap = opc & MO_BSWAP;
+    const MemOp bswap = opc & MO_BSWAP;

     /* We don't yet handle byteswapping, assert */
     g_assert(!bswap);
@@ -1243,7 +1243,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
     TCGReg addr_regl, addr_regh __attribute__((unused));
     TCGReg data_regl, data_regh;
     TCGMemOpIdx oi;
-    TCGMemOp opc;
+    MemOp opc;
 #if defined(CONFIG_SOFTMMU)
     tcg_insn_unit *label_ptr[1];
 #endif
diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
index fe42939..8aaa4ce 100644
--- a/tcg/s390/tcg-target.inc.c
+++ b/tcg/s390/tcg-target.inc.c
@@ -1430,7 +1430,7 @@ static void tcg_out_call(TCGContext *s, tcg_insn_unit *dest)
     }
 }

-static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SSIZE | MO_BSWAP)) {
@@ -1489,7 +1489,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
     }
 }

-static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
+static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
                                    TCGReg base, TCGReg index, int disp)
 {
     switch (opc & (MO_SIZE | MO_BSWAP)) {
@@ -1544,7 +1544,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 19));

 /* Load and compare a TLB entry, leaving the flags set.  Loads the TLB
    addend into R2.  Returns a register with the sanitized guest address.  */
-static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, TCGMemOp opc,
+static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
                                int mem_index, bool is_ld)
 {
     unsigned s_bits = opc & MO_SIZE;
@@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
     TCGReg addr_reg = lb->addrlo_reg;
     TCGReg data_reg = lb->datalo_reg;
     TCGMemOpIdx oi = lb->oi;
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);

     if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
                      (intptr_t)s->code_ptr, 2)) {
@@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGReg *addr_reg,
 static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
@@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
 static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp opc = get_memop(oi);
+    MemOp opc = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned mem_index = get_mmuidx(oi);
     tcg_insn_unit *label_ptr;
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 10b1cea..d7986cd 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -1081,7 +1081,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12));
    is in the returned register, maybe %o0.  The TLB addend is in %o1.  */

 static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index,
-                               TCGMemOp opc, int which)
+                               MemOp opc, int which)
 {
     int fast_off = TLB_MASK_TABLE_OFS(mem_index);
     int mask_off = fast_off + offsetof(CPUTLBDescFast, mask);
@@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] = {
 static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi, bool is_64)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
@@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
 static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
                             TCGMemOpIdx oi)
 {
-    TCGMemOp memop = get_memop(oi);
+    MemOp memop = get_memop(oi);
 #ifdef CONFIG_SOFTMMU
     unsigned memi = get_mmuidx(oi);
     TCGReg addrz, param;
diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
index 587d092..e87c327 100644
--- a/tcg/tcg-op.c
+++ b/tcg/tcg-op.c
@@ -2714,7 +2714,7 @@ void tcg_gen_lookup_and_goto_ptr(void)
     }
 }

-static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
+static inline MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
 {
     /* Trigger the asserts within as early as possible.  */
     (void)get_alignment_bits(op);
@@ -2743,7 +2743,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
 }

 static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2758,7 +2758,7 @@ static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
 }

 static void gen_ldst_i64(TCGOpcode opc, TCGv_i64 val, TCGv addr,
-                         TCGMemOp memop, TCGArg idx)
+                         MemOp memop, TCGArg idx)
 {
     TCGMemOpIdx oi = make_memop_idx(memop, idx);
 #if TARGET_LONG_BITS == 32
@@ -2788,9 +2788,9 @@ static void tcg_gen_req_mo(TCGBar type)
     }
 }

-void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
     memop = tcg_canonicalize_memop(memop, 0, 0);
@@ -2825,7 +2825,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i32 swap = NULL;

@@ -2858,9 +2858,9 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
-    TCGMemOp orig_memop;
+    MemOp orig_memop;

     if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
         tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
@@ -2911,7 +2911,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
+void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
 {
     TCGv_i64 swap = NULL;

@@ -2953,7 +2953,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
     }
 }

-static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
+static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -2974,7 +2974,7 @@ static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
     }
 }

-static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, TCGMemOp opc)
+static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, MemOp opc)
 {
     switch (opc & MO_SSIZE) {
     case MO_SB:
@@ -3034,7 +3034,7 @@ static void * const table_cmpxchg[16] = {
 };

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
-                                TCGv_i32 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i32 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 0, 0);

@@ -3078,7 +3078,7 @@ void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
 }

 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
-                                TCGv_i64 newv, TCGArg idx, TCGMemOp memop)
+                                TCGv_i64 newv, TCGArg idx, MemOp memop)
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3142,7 +3142,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
 }

 static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
 {
     TCGv_i32 t1 = tcg_temp_new_i32();
@@ -3160,7 +3160,7 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     gen_atomic_op_i32 gen;

@@ -3185,7 +3185,7 @@ static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
 }

 static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                                TCGArg idx, TCGMemOp memop, bool new_val,
+                                TCGArg idx, MemOp memop, bool new_val,
                                 void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64))
 {
     TCGv_i64 t1 = tcg_temp_new_i64();
@@ -3203,7 +3203,7 @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
 }

 static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
-                             TCGArg idx, TCGMemOp memop, void * const table[])
+                             TCGArg idx, MemOp memop, void * const table[])
 {
     memop = tcg_canonicalize_memop(memop, 1, 0);

@@ -3257,7 +3257,7 @@ static void * const table_##NAME[16] = {                                \
     WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
 };                                                                      \
 void tcg_gen_atomic_##NAME##_i32                                        \
-    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i32(ret, addr, val, idx, memop, table_##NAME);     \
@@ -3267,7 +3267,7 @@ void tcg_gen_atomic_##NAME##_i32                                        \
     }                                                                   \
 }                                                                       \
 void tcg_gen_atomic_##NAME##_i64                                        \
-    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, TCGMemOp memop) \
+    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, MemOp memop)    \
 {                                                                       \
     if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
         do_atomic_op_i64(ret, addr, val, idx, memop, table_##NAME);     \
diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
index 2d4dd5c..e9cf172 100644
--- a/tcg/tcg-op.h
+++ b/tcg/tcg-op.h
@@ -851,10 +851,10 @@ void tcg_gen_lookup_and_goto_ptr(void);
 #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i64
 #endif

-void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
-void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
+void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, MemOp);
+void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, MemOp);

 static inline void tcg_gen_qemu_ld8u(TCGv ret, TCGv addr, int mem_index)
 {
@@ -912,46 +912,46 @@ static inline void tcg_gen_qemu_st64(TCGv_i64 arg, TCGv addr, int mem_index)
 }

 void tcg_gen_atomic_cmpxchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGv_i32,
-                                TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
 void tcg_gen_atomic_cmpxchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGv_i64,
-                                TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-
-void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
-void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
+                                TCGArg, MemOp);
+
+void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+
+void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
+void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);

 void tcg_gen_mov_vec(TCGv_vec, TCGv_vec);
 void tcg_gen_dup_i32_vec(unsigned vece, TCGv_vec, TCGv_i32);
diff --git a/tcg/tcg.c b/tcg/tcg.c
index be2c33c..aa9931f 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
             case INDEX_op_qemu_st_i64:
                 {
                     TCGMemOpIdx oi = op->args[k++];
-                    TCGMemOp op = get_memop(oi);
+                    MemOp op = get_memop(oi);
                     unsigned ix = get_mmuidx(oi);

                     if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) {
diff --git a/tcg/tcg.h b/tcg/tcg.h
index b411e17..a37181c 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -26,6 +26,7 @@
 #define TCG_H

 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/tb-context.h"
 #include "qemu/bitops.h"
 #include "qemu/queue.h"
@@ -309,101 +310,13 @@ typedef enum TCGType {
 #endif
 } TCGType;

-/* Constants for qemu_ld and qemu_st for the Memory Operation field.  */
-typedef enum TCGMemOp {
-    MO_8     = 0,
-    MO_16    = 1,
-    MO_32    = 2,
-    MO_64    = 3,
-    MO_SIZE  = 3,   /* Mask for the above.  */
-
-    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
-
-    MO_BSWAP = 8,   /* Host reverse endian.  */
-#ifdef HOST_WORDS_BIGENDIAN
-    MO_LE    = MO_BSWAP,
-    MO_BE    = 0,
-#else
-    MO_LE    = 0,
-    MO_BE    = MO_BSWAP,
-#endif
-#ifdef TARGET_WORDS_BIGENDIAN
-    MO_TE    = MO_BE,
-#else
-    MO_TE    = MO_LE,
-#endif
-
-    /* MO_UNALN accesses are never checked for alignment.
-     * MO_ALIGN accesses will result in a call to the CPU's
-     * do_unaligned_access hook if the guest address is not aligned.
-     * The default depends on whether the target CPU defines ALIGNED_ONLY.
-     *
-     * Some architectures (e.g. ARMv8) need the address which is aligned
-     * to a size more than the size of the memory access.
-     * Some architectures (e.g. SPARCv9) need an address which is aligned,
-     * but less strictly than the natural alignment.
-     *
-     * MO_ALIGN supposes the alignment size is the size of a memory access.
-     *
-     * There are three options:
-     * - unaligned access permitted (MO_UNALN).
-     * - an alignment to the size of an access (MO_ALIGN);
-     * - an alignment to a specified size, which may be more or less than
-     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
-     */
-    MO_ASHIFT = 4,
-    MO_AMASK = 7 << MO_ASHIFT,
-#ifdef ALIGNED_ONLY
-    MO_ALIGN = 0,
-    MO_UNALN = MO_AMASK,
-#else
-    MO_ALIGN = MO_AMASK,
-    MO_UNALN = 0,
-#endif
-    MO_ALIGN_2  = 1 << MO_ASHIFT,
-    MO_ALIGN_4  = 2 << MO_ASHIFT,
-    MO_ALIGN_8  = 3 << MO_ASHIFT,
-    MO_ALIGN_16 = 4 << MO_ASHIFT,
-    MO_ALIGN_32 = 5 << MO_ASHIFT,
-    MO_ALIGN_64 = 6 << MO_ASHIFT,
-
-    /* Combinations of the above, for ease of use.  */
-    MO_UB    = MO_8,
-    MO_UW    = MO_16,
-    MO_UL    = MO_32,
-    MO_SB    = MO_SIGN | MO_8,
-    MO_SW    = MO_SIGN | MO_16,
-    MO_SL    = MO_SIGN | MO_32,
-    MO_Q     = MO_64,
-
-    MO_LEUW  = MO_LE | MO_UW,
-    MO_LEUL  = MO_LE | MO_UL,
-    MO_LESW  = MO_LE | MO_SW,
-    MO_LESL  = MO_LE | MO_SL,
-    MO_LEQ   = MO_LE | MO_Q,
-
-    MO_BEUW  = MO_BE | MO_UW,
-    MO_BEUL  = MO_BE | MO_UL,
-    MO_BESW  = MO_BE | MO_SW,
-    MO_BESL  = MO_BE | MO_SL,
-    MO_BEQ   = MO_BE | MO_Q,
-
-    MO_TEUW  = MO_TE | MO_UW,
-    MO_TEUL  = MO_TE | MO_UL,
-    MO_TESW  = MO_TE | MO_SW,
-    MO_TESL  = MO_TE | MO_SL,
-    MO_TEQ   = MO_TE | MO_Q,
-
-    MO_SSIZE = MO_SIZE | MO_SIGN,
-} TCGMemOp;
-
 /**
  * get_alignment_bits
- * @memop: TCGMemOp value
+ * @memop: MemOp value
  *
  * Extract the alignment size from the memop.
  */
-static inline unsigned get_alignment_bits(TCGMemOp memop)
+static inline unsigned get_alignment_bits(MemOp memop)
 {
     unsigned a = memop & MO_AMASK;

@@ -1184,7 +1097,7 @@ static inline size_t tcg_current_code_size(TCGContext *s)
     return tcg_ptr_byte_diff(s->code_ptr, s->code_buf);
 }

-/* Combine the TCGMemOp and mmu_idx parameters into a single value.  */
+/* Combine the MemOp and mmu_idx parameters into a single value.  */
 typedef uint32_t TCGMemOpIdx;

 /**
@@ -1194,7 +1107,7 @@ typedef uint32_t TCGMemOpIdx;
  *
  * Encode these values into a single parameter.
  */
-static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
+static inline TCGMemOpIdx make_memop_idx(MemOp op, unsigned idx)
 {
     tcg_debug_assert(idx <= 15);
     return (op << 4) | idx;
@@ -1206,7 +1119,7 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
  *
  * Extract the memory operation from the combined value.
  */
-static inline TCGMemOp get_memop(TCGMemOpIdx oi)
+static inline MemOp get_memop(TCGMemOpIdx oi)
 {
     return oi >> 4;
 }
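
For orientation, a minimal usage sketch of these helpers (MO_TEUL and
mmu_idx are illustrative values; get_mmuidx() lives in the same header):

    TCGMemOpIdx oi = make_memop_idx(MO_TEUL, mmu_idx); /* (op << 4) | idx */
    MemOp op       = get_memop(oi);                    /* oi >> 4         */
    unsigned idx   = get_mmuidx(oi);                   /* oi & 15         */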
diff --git a/trace/mem-internal.h b/trace/mem-internal.h
index f6efaf6..3444fbc 100644
--- a/trace/mem-internal.h
+++ b/trace/mem-internal.h
@@ -16,7 +16,7 @@
 #define TRACE_MEM_ST (1ULL << 5)    /* store (y/n) */

 static inline uint8_t trace_mem_build_info(
-    int size_shift, bool sign_extend, TCGMemOp endianness, bool store)
+    int size_shift, bool sign_extend, MemOp endianness, bool store)
 {
     uint8_t res;

@@ -33,7 +33,7 @@ static inline uint8_t trace_mem_build_info(
     return res;
 }

-static inline uint8_t trace_mem_get_info(TCGMemOp op, bool store)
+static inline uint8_t trace_mem_get_info(MemOp op, bool store)
 {
     return trace_mem_build_info(op & MO_SIZE, !!(op & MO_SIGN),
                                 op & MO_BSWAP, store);
diff --git a/trace/mem.h b/trace/mem.h
index 2b58196..8cf213d 100644
--- a/trace/mem.h
+++ b/trace/mem.h
@@ -18,7 +18,7 @@
  *
  * Return a value for the 'info' argument in guest memory access traces.
  */
-static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
+static uint8_t trace_mem_get_info(MemOp op, bool store);

 /**
  * trace_mem_build_info:
@@ -26,7 +26,7 @@ static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
  * Return a value for the 'info' argument in guest memory access traces.
  */
 static uint8_t trace_mem_build_info(int size_shift, bool sign_extend,
-                                    TCGMemOp endianness, bool store);
+                                    MemOp endianness, bool store);


 #include "trace/mem-internal.h"
--
1.8.3.1





^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 02/15] memory: Access MemoryRegion with MemOp
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:43   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:43 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Change memory_region_dispatch_{read|write} parameter "unsigned size"
to "MemOp op".

The endianness encoded in MemOp will enable the collapse of two byte
swaps, adjust_endianness and handle_bswap, along the I/O path.

The interfaces will be converted in two steps: first syntactically,
then semantically.

The syntactic change is the use of the no-op MEMOP_SIZE and SIZE_MEMOP
macros. Being no-ops, they introduce no logical change, and we rely on
the coercion between unsigned and MemOp.

The semantic change will implement MEMOP_SIZE and SIZE_MEMOP to
genuinely convert an unsigned size to and from a size+sign+endianness
encoded MemOp.
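
A minimal sketch of where the semantic step could land -- illustrative
only, assuming MO_SIZE's log2 size encoding (MO_8=0 ... MO_64=3) and
QEMU's existing ctz32() helper; the final definitions may differ:

    /* MemOp to size in bytes: 1 << (op & MO_SIZE) */
    #define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE))
    /* Size in bytes to MemOp: log2 of the size */
    #define SIZE_MEMOP(ul)  (ctz32(ul))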

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/memop.h  | 4 ++++
 include/exec/memory.h | 9 +++++----
 memory.c              | 7 +++++--
 3 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index ac58066..09c8d20 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -106,4 +106,8 @@ typedef enum MemOp {
     MO_SSIZE = MO_SIZE | MO_SIGN,
 } MemOp;

+/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
+#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
+#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
+
 #endif
diff --git a/include/exec/memory.h b/include/exec/memory.h
index bb0961d..0ea4843 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -19,6 +19,7 @@
 #include "exec/cpu-common.h"
 #include "exec/hwaddr.h"
 #include "exec/memattrs.h"
+#include "exec/memop.h"
 #include "exec/ramlist.h"
 #include "qemu/queue.h"
 #include "qemu/int128.h"
@@ -1731,13 +1732,13 @@ void mtree_info(bool flatview, bool dispatch_tree, bool owner);
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @pval: pointer to uint64_t which the data is written to
- * @size: size of the access in bytes
+ * @op: size of the access encoded as a MemOp
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs);
 /**
  * memory_region_dispatch_write: perform a write directly to the specified
@@ -1746,13 +1747,13 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @data: data to write
- * @size: size of the access in bytes
+ * @op: size of the access encoded as a MemOp
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs);

 /**
diff --git a/memory.c b/memory.c
index 5d8c9a9..6982e19 100644
--- a/memory.c
+++ b/memory.c
@@ -1439,10 +1439,11 @@ static MemTxResult memory_region_dispatch_read1(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
                                         hwaddr addr,
                                         uint64_t *pval,
-                                        unsigned size,
+                                        MemOp op,
                                         MemTxAttrs attrs)
 {
     MemTxResult r;
+    unsigned size = MEMOP_SIZE(op);

     if (!memory_region_access_valid(mr, addr, size, false, attrs)) {
         *pval = unassigned_mem_read(mr, addr, size);
@@ -1483,9 +1484,11 @@ static bool memory_region_dispatch_write_eventfds(MemoryRegion *mr,
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
                                          hwaddr addr,
                                          uint64_t data,
-                                         unsigned size,
+                                         MemOp op,
                                          MemTxAttrs attrs)
 {
+    unsigned size = MEMOP_SIZE(op);
+
     if (!memory_region_access_valid(mr, addr, size, true, attrs)) {
         unassigned_mem_write(mr, addr, data, size);
         return MEMTX_DECODE_ERROR;
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 03/15] target/mips: Access MemoryRegion with MemOp
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:44   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

The no-op SIZE_MEMOP macro will later allow us to easily convert the
memory_region_dispatch_{read|write} parameter "unsigned size" into a
size+sign+endianness encoded "MemOp op".

As the macro is a no-op, this patch does not introduce any logical
change.
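
Concretely, a minimal sketch using this patch's own call site: with
today's no-op definition, SIZE_MEMOP(8) expands to (8), so the
converted call compiles identically to the old one:

    memory_region_dispatch_write(env->itc_tag, index, env->CP0_TagLo,
                                 SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);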

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 target/mips/op_helper.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/target/mips/op_helper.c b/target/mips/op_helper.c
index 9e2e02f..dccb8df 100644
--- a/target/mips/op_helper.c
+++ b/target/mips/op_helper.c
@@ -24,6 +24,7 @@
 #include "exec/helper-proto.h"
 #include "exec/exec-all.h"
 #include "exec/cpu_ldst.h"
+#include "exec/memop.h"
 #include "sysemu/kvm.h"

 /*****************************************************************************/
@@ -4740,11 +4741,11 @@ void helper_cache(CPUMIPSState *env, target_ulong addr, uint32_t op)
     if (op == 9) {
         /* Index Store Tag */
         memory_region_dispatch_write(env->itc_tag, index, env->CP0_TagLo,
-                                     8, MEMTXATTRS_UNSPECIFIED);
+                                     SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     } else if (op == 5) {
         /* Index Load Tag */
         memory_region_dispatch_read(env->itc_tag, index, &env->CP0_TagLo,
-                                    8, MEMTXATTRS_UNSPECIFIED);
+                                    SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);
     }
 #endif
 }
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 04/15] hw/s390x: Access MemoryRegion with MemOp
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:44   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:44 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

The no-op SIZE_MEMOP macro will later allow us to easily convert the
memory_region_dispatch_{read|write} parameter "unsigned size" into a
size+sign+endianness encoded "MemOp op".

As the macro is a no-op, this patch does not introduce any logical
change.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/s390x/s390-pci-inst.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/hw/s390x/s390-pci-inst.c b/hw/s390x/s390-pci-inst.c
index 0023514..c126bcc 100644
--- a/hw/s390x/s390-pci-inst.c
+++ b/hw/s390x/s390-pci-inst.c
@@ -15,6 +15,7 @@
 #include "cpu.h"
 #include "s390-pci-inst.h"
 #include "s390-pci-bus.h"
+#include "exec/memop.h"
 #include "exec/memory-internal.h"
 #include "qemu/error-report.h"
 #include "sysemu/hw_accel.h"
@@ -372,7 +373,7 @@ static MemTxResult zpci_read_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_read(mr, offset, data, len,
+    return memory_region_dispatch_read(mr, offset, data, SIZE_MEMOP(len),
                                        MEMTXATTRS_UNSPECIFIED);
 }

@@ -471,7 +472,7 @@ static MemTxResult zpci_write_bar(S390PCIBusDevice *pbdev, uint8_t pcias,
     mr = pbdev->pdev->io_regions[pcias].memory;
     mr = s390_get_subregion(mr, offset, len);
     offset -= mr->addr;
-    return memory_region_dispatch_write(mr, offset, data, len,
+    return memory_region_dispatch_write(mr, offset, data, SIZE_MEMOP(len),
                                         MEMTXATTRS_UNSPECIFIED);
 }

@@ -780,7 +781,8 @@ int pcistb_service_call(S390CPU *cpu, uint8_t r1, uint8_t r3, uint64_t gaddr,

     for (i = 0; i < len / 8; i++) {
         result = memory_region_dispatch_write(mr, offset + i * 8,
-                                              ldq_p(buffer + i * 8), 8,
+                                              ldq_p(buffer + i * 8),
+                                              SIZE_MEMOP(8),
                                               MEMTXATTRS_UNSPECIFIED);
         if (result != MEMTX_OK) {
             s390_program_interrupt(env, PGM_OPERAND, 6, ra);
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 05/15] hw/intc/armv7m_nvic: Access MemoryRegion with MemOp
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:45   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

The no-op SIZE_MEMOP macro will later allow us to easily convert the
memory_region_dispatch_{read|write} parameter "unsigned size" into a
size+sign+endianness encoded "MemOp op".

As the macro is a no-op, this patch does not introduce any logical
change.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/intc/armv7m_nvic.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/hw/intc/armv7m_nvic.c b/hw/intc/armv7m_nvic.c
index 9f8f0d3..25bb88a 100644
--- a/hw/intc/armv7m_nvic.c
+++ b/hw/intc/armv7m_nvic.c
@@ -18,6 +18,7 @@
 #include "hw/intc/armv7m_nvic.h"
 #include "target/arm/cpu.h"
 #include "exec/exec-all.h"
+#include "exec/memop.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
 #include "trace.h"
@@ -2345,7 +2346,8 @@ static MemTxResult nvic_sysreg_ns_write(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_write(mr, addr, value, size, attrs);
+        return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                            attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2364,7 +2366,8 @@ static MemTxResult nvic_sysreg_ns_read(void *opaque, hwaddr addr,
     if (attrs.secure) {
         /* S accesses to the alias act like NS accesses to the real region */
         attrs.secure = 0;
-        return memory_region_dispatch_read(mr, addr, data, size, attrs);
+        return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size),
+                                           attrs);
     } else {
         /* NS attrs are RAZ/WI for privileged, and BusFault for user */
         if (attrs.user) {
@@ -2390,7 +2393,8 @@ static MemTxResult nvic_systick_write(void *opaque, hwaddr addr,

     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_write(mr, addr, value, size, attrs);
+    return memory_region_dispatch_write(mr, addr, value, SIZE_MEMOP(size),
+                                        attrs);
 }

 static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,
@@ -2402,7 +2406,7 @@ static MemTxResult nvic_systick_read(void *opaque, hwaddr addr,

     /* Direct the access to the correct systick */
     mr = sysbus_mmio_get_region(SYS_BUS_DEVICE(&s->systick[attrs.secure]), 0);
-    return memory_region_dispatch_read(mr, addr, data, size, attrs);
+    return memory_region_dispatch_read(mr, addr, data, SIZE_MEMOP(size), attrs);
 }

 static const MemoryRegionOps nvic_systick_ops = {
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 06/15] hw/virtio: Access MemoryRegion with MemOp
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:45   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:45 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

The no-op SIZE_MEMOP macro will later allow us to easily convert the
memory_region_dispatch_{read|write} parameter "unsigned size" into a
size+sign+endianness encoded "MemOp op".

As the macro is a no-op, this patch does not introduce any logical
change.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/virtio/virtio-pci.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index ce928f2..265f066 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -17,6 +17,7 @@

 #include "qemu/osdep.h"

+#include "exec/memop.h"
 #include "standard-headers/linux/virtio_pci.h"
 #include "hw/virtio/virtio.h"
 #include "hw/pci/pci.h"
@@ -550,7 +551,8 @@ void virtio_address_space_write(VirtIOPCIProxy *proxy, hwaddr addr,
         /* As length is under guest control, handle illegal values. */
         return;
     }
-    memory_region_dispatch_write(mr, addr, val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_write(mr, addr, val, SIZE_MEMOP(len),
+                                 MEMTXATTRS_UNSPECIFIED);
 }

 static void
@@ -573,7 +575,8 @@ virtio_address_space_read(VirtIOPCIProxy *proxy, hwaddr addr,
     /* Make sure caller aligned buf properly */
     assert(!(((uintptr_t)buf) & (len - 1)));

-    memory_region_dispatch_read(mr, addr, &val, len, MEMTXATTRS_UNSPECIFIED);
+    memory_region_dispatch_read(mr, addr, &val, SIZE_MEMOP(len),
+                                MEMTXATTRS_UNSPECIFIED);
     switch (len) {
     case 1:
         pci_set_byte(buf, val);
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 07/15] hw/vfio: Access MemoryRegion with MemOp
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:46   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

The no-op SIZE_MEMOP macro will later allow us to easily convert the
memory_region_dispatch_{read|write} parameter "unsigned size" into a
size+sign+endianness encoded "MemOp op".

As the macro is a no-op, this patch does not introduce any logical
change.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 hw/vfio/pci-quirks.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index b35a640..3240afa 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1071,7 +1071,7 @@ static void vfio_rtl8168_quirk_address_write(void *opaque, hwaddr addr,

                 /* Write to the proper guest MSI-X table instead */
                 memory_region_dispatch_write(&vdev->pdev.msix_table_mmio,
-                                             offset, val, size,
+                                             offset, val, SIZE_MEMOP(size),
                                              MEMTXATTRS_UNSPECIFIED);
             }
             return; /* Do not write guest MSI-X data to hardware */
@@ -1102,7 +1102,8 @@ static uint64_t vfio_rtl8168_quirk_data_read(void *opaque,
     if (rtl->enabled && (vdev->pdev.cap_present & QEMU_PCI_CAP_MSIX)) {
         hwaddr offset = rtl->addr & 0xfff;
         memory_region_dispatch_read(&vdev->pdev.msix_table_mmio, offset,
-                                    &data, size, MEMTXATTRS_UNSPECIFIED);
+                                    &data, SIZE_MEMOP(size),
+                                    MEMTXATTRS_UNSPECIFIED);
         trace_vfio_quirk_rtl8168_msix_read(vdev->vbasedev.name, offset, data);
     }

--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 08/15] exec: Access MemoryRegion with MemOp
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:46   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

The no-op SIZE_MEMOP macro will later allow us to easily convert the
memory_region_dispatch_{read|write} parameter "unsigned size" into a
size+sign+endianness encoded "MemOp op".

As the macro is a no-op, this patch does not introduce any logical
change.

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 exec.c            |  6 ++++--
 memory_ldst.inc.c | 18 +++++++++---------
 2 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/exec.c b/exec.c
index 3e78de3..5013864 100644
--- a/exec.c
+++ b/exec.c
@@ -3334,7 +3334,8 @@ static MemTxResult flatview_write_continue(FlatView *fv, hwaddr addr,
             /* XXX: could force current_cpu to NULL to avoid
                potential bugs */
             val = ldn_p(buf, l);
-            result |= memory_region_dispatch_write(mr, addr1, val, l, attrs);
+            result |= memory_region_dispatch_write(mr, addr1, val,
+                                                   SIZE_MEMOP(l), attrs);
         } else {
             /* RAM case */
             ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l, false);
@@ -3395,7 +3396,8 @@ MemTxResult flatview_read_continue(FlatView *fv, hwaddr addr,
             /* I/O case */
             release_lock |= prepare_mmio_access(mr);
             l = memory_access_size(mr, l, addr1);
-            result |= memory_region_dispatch_read(mr, addr1, &val, l, attrs);
+            result |= memory_region_dispatch_read(mr, addr1, &val,
+                                                  SIZE_MEMOP(l), attrs);
             stn_p(buf, l, val);
         } else {
             /* RAM case */
diff --git a/memory_ldst.inc.c b/memory_ldst.inc.c
index acf865b..e073cf9 100644
--- a/memory_ldst.inc.c
+++ b/memory_ldst.inc.c
@@ -38,7 +38,7 @@ static inline uint32_t glue(address_space_ldl_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 4, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(4), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap32(val);
@@ -114,7 +114,7 @@ static inline uint64_t glue(address_space_ldq_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 8, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(8), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap64(val);
@@ -188,7 +188,7 @@ uint32_t glue(address_space_ldub, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 1, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -224,7 +224,7 @@ static inline uint32_t glue(address_space_lduw_internal, SUFFIX)(ARG1_DECL,
         release_lock |= prepare_mmio_access(mr);

         /* I/O case */
-        r = memory_region_dispatch_read(mr, addr1, &val, 2, attrs);
+        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(2), attrs);
 #if defined(TARGET_WORDS_BIGENDIAN)
         if (endian == DEVICE_LITTLE_ENDIAN) {
             val = bswap16(val);
@@ -300,7 +300,7 @@ void glue(address_space_stl_notdirty, SUFFIX)(ARG1_DECL,
     if (l < 4 || !memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);

-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
         stl_p(ptr, val);
@@ -346,7 +346,7 @@ static inline void glue(address_space_stl_internal, SUFFIX)(ARG1_DECL,
             val = bswap32(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 4, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(4), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -408,7 +408,7 @@ void glue(address_space_stb, SUFFIX)(ARG1_DECL,
     mr = TRANSLATE(addr, &addr1, &l, true, attrs);
     if (!memory_access_is_direct(mr, true)) {
         release_lock |= prepare_mmio_access(mr);
-        r = memory_region_dispatch_write(mr, addr1, val, 1, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(1), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -451,7 +451,7 @@ static inline void glue(address_space_stw_internal, SUFFIX)(ARG1_DECL,
             val = bswap16(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 2, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(2), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
@@ -524,7 +524,7 @@ static void glue(address_space_stq_internal, SUFFIX)(ARG1_DECL,
             val = bswap64(val);
         }
 #endif
-        r = memory_region_dispatch_write(mr, addr1, val, 8, attrs);
+        r = memory_region_dispatch_write(mr, addr1, val, SIZE_MEMOP(8), attrs);
     } else {
         /* RAM case */
         ptr = qemu_map_ram_ptr(mr->ram_block, addr1);
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:46   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

The no-op MEMOP_SIZE and SIZE_MEMOP macros will later allow us to
easily convert the memory_region_dispatch_{read|write} parameter
"unsigned size" into a size+sign+endianness encoded "MemOp op".

As the macros are no-ops, this patch does not introduce any logical
change.
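
A minimal sketch of the caller side after this patch (abbreviated from
the diff below): the byte size is wrapped at the boundary, while
handle_bswap still swaps separately; folding that swap into "op" is the
goal of later patches in this series.

    res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
                   mmu_idx, addr, retaddr, access_type, SIZE_MEMOP(size));
    return handle_bswap(res, size, big_endian);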

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 523be4c..5d88cec 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,

 static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                          int mmu_idx, target_ulong addr, uintptr_t retaddr,
-                         MMUAccessType access_type, int size)
+                         MMUAccessType access_type, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -906,14 +906,13 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_read(mr, mr_offset,
-                                    &val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
                                mmu_idx, iotlbentry->attrs, r, retaddr);
     }
     if (locked) {
@@ -925,7 +924,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,

 static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                       int mmu_idx, uint64_t val, target_ulong addr,
-                      uintptr_t retaddr, int size)
+                      uintptr_t retaddr, MemOp op)
 {
     CPUState *cpu = env_cpu(env);
     hwaddr mr_offset;
@@ -947,15 +946,15 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
         qemu_mutex_lock_iothread();
         locked = true;
     }
-    r = memory_region_dispatch_write(mr, mr_offset,
-                                     val, size, iotlbentry->attrs);
+    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
     if (r != MEMTX_OK) {
         hwaddr physaddr = mr_offset +
             section->offset_within_address_space -
             section->offset_within_region;

-        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
-                               mmu_idx, iotlbentry->attrs, r, retaddr);
+        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op),
+                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
+                               retaddr);
     }
     if (locked) {
         qemu_mutex_unlock_iothread();
@@ -1306,7 +1305,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
         }

         res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
-                       mmu_idx, addr, retaddr, access_type, size);
+                       mmu_idx, addr, retaddr, access_type, SIZE_MEMOP(size));
         return handle_bswap(res, size, big_endian);
     }

@@ -1555,7 +1554,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,

         io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
                   handle_bswap(val, size, big_endian),
-                  addr, retaddr, size);
+                  addr, retaddr, SIZE_MEMOP(size));
         return;
     }

--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 10/15] memory: Access MemoryRegion with MemOp semantics
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:47   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

To convert the MemoryRegion access interfaces, the MEMOP_SIZE and
SIZE_MEMOP no-op stubs were introduced to change the syntax while
keeping the existing semantics.

Now that the interfaces are converted, we fill in the stubs and use
MemOp semantics.
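
A quick sanity sketch of the round trip for a 4-byte access, using the
MemOp values from earlier in this series:

    assert(SIZE_MEMOP(4) == MO_32);    /* ctzl(4) == 2 == MO_32 */
    assert(MEMOP_SIZE(MO_32) == 4);    /* 1 << 2 == 4 bytes */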

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/memop.h  | 5 ++---
 include/exec/memory.h | 4 ++--
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/include/exec/memop.h b/include/exec/memop.h
index 09c8d20..f2847e8 100644
--- a/include/exec/memop.h
+++ b/include/exec/memop.h
@@ -106,8 +106,7 @@ typedef enum MemOp {
     MO_SSIZE = MO_SIZE | MO_SIGN,
 } MemOp;

-/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
-#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
-#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
+#define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE)) /* MemOp to size.  */
+#define SIZE_MEMOP(ul)  (ctzl(ul))              /* Size to MemOp.  */

 #endif
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 0ea4843..975b86a 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -1732,7 +1732,7 @@ void mtree_info(bool flatview, bool dispatch_tree, bool owner);
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @pval: pointer to uint64_t which the data is written to
- * @op: size of the access in bytes
+ * @op: size, sign, and endianness of the memory operation
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
@@ -1747,7 +1747,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
  * @mr: #MemoryRegion to access
  * @addr: address within that region
  * @data: data to write
- * @op: size of the access in bytes
+ * @op: size, sign, and endianness of the memory operation
  * @attrs: memory transaction attributes to use for the access
  */
 MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 11/15] memory: Single byte swap along the I/O path
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:47   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:47 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Now that MemOp has been pushed down into the memory API, we can
collapse the two byte swaps, adjust_endianness and handle_bswap, into
the former.

Collapsing byte swaps along the I/O path enables additional endian
inversion logic, e.g. the SPARC64 Invert Endian TTE bit, with redundant
byte swaps cancelling out.
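
The cancellation falls out of XOR-toggling MO_BSWAP; a minimal sketch:

    MemOp op = MO_BEUL;
    op ^= MO_BSWAP;    /* endian inversion, e.g. Invert Endian TTE bit */
    op ^= MO_BSWAP;    /* device endianness opposite to the target */
    /* op == MO_BEUL again: the redundant swaps cancelled out */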

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c | 41 +++++++++++++++++++----------------------
 memory.c           | 30 +++++++++++++++++-------------
 2 files changed, 36 insertions(+), 35 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 5d88cec..e61b1eb 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1209,26 +1209,13 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
 #endif

 /*
- * Byte Swap Helper
+ * Byte Swap Checker
  *
- * This should all dead code away depending on the build host and
- * access type.
+ * Dead code should all go away depending on the build host and access type.
  */
-
-static inline uint64_t handle_bswap(uint64_t val, int size, bool big_endian)
+static inline bool need_bswap(bool big_endian)
 {
-    if ((big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP)) {
-        switch (size) {
-        case 1: return val;
-        case 2: return bswap16(val);
-        case 4: return bswap32(val);
-        case 8: return bswap64(val);
-        default:
-            g_assert_not_reached();
-        }
-    } else {
-        return val;
-    }
+    return (big_endian && NEED_BE_BSWAP) || (!big_endian && NEED_LE_BSWAP);
 }

 /*
@@ -1259,6 +1246,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     void *haddr;
     uint64_t res;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1304,9 +1292,13 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
             }
         }

-        res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
-                       mmu_idx, addr, retaddr, access_type, SIZE_MEMOP(size));
-        return handle_bswap(res, size, big_endian);
+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
+        return io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
+                       mmu_idx, addr, retaddr, access_type, op);
     }

     /* Handle slow unaligned access (it spans two pages or IO).  */
@@ -1507,6 +1499,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
     const size_t tlb_off = offsetof(CPUTLBEntry, addr_write);
     unsigned a_bits = get_alignment_bits(get_memop(oi));
     void *haddr;
+    MemOp op;

     /* Handle CPU specific unaligned behaviour */
     if (addr & ((1 << a_bits) - 1)) {
@@ -1552,9 +1545,13 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
             }
         }

+        op = SIZE_MEMOP(size);
+        if (need_bswap(big_endian)) {
+            op ^= MO_BSWAP;
+        }
+
         io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
-                  handle_bswap(val, size, big_endian),
-                  addr, retaddr, SIZE_MEMOP(size));
+                  val, addr, retaddr, op);
         return;
     }

diff --git a/memory.c b/memory.c
index 6982e19..0277d3d 100644
--- a/memory.c
+++ b/memory.c
@@ -352,7 +352,7 @@ static bool memory_region_big_endian(MemoryRegion *mr)
 #endif
 }

-static bool memory_region_wrong_endianness(MemoryRegion *mr)
+static bool memory_region_endianness_inverted(MemoryRegion *mr)
 {
 #ifdef TARGET_WORDS_BIGENDIAN
     return mr->ops->endianness == DEVICE_LITTLE_ENDIAN;
@@ -361,23 +361,27 @@ static bool memory_region_wrong_endianness(MemoryRegion *mr)
 #endif
 }

-static void adjust_endianness(MemoryRegion *mr, uint64_t *data, unsigned size)
+static void adjust_endianness(MemoryRegion *mr, uint64_t *data, MemOp op)
 {
-    if (memory_region_wrong_endianness(mr)) {
-        switch (size) {
-        case 1:
+    if (memory_region_endianness_inverted(mr)) {
+        op ^= MO_BSWAP;
+    }
+
+    if (op & MO_BSWAP) {
+        switch (op & MO_SIZE) {
+        case MO_8:
             break;
-        case 2:
+        case MO_16:
             *data = bswap16(*data);
             break;
-        case 4:
+        case MO_32:
             *data = bswap32(*data);
             break;
-        case 8:
+        case MO_64:
             *data = bswap64(*data);
             break;
         default:
-            abort();
+            g_assert_not_reached();
         }
     }
 }
@@ -1451,7 +1455,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
     }

     r = memory_region_dispatch_read1(mr, addr, pval, size, attrs);
-    adjust_endianness(mr, pval, size);
+    adjust_endianness(mr, pval, op);
     return r;
 }

@@ -1494,7 +1498,7 @@ MemTxResult memory_region_dispatch_write(MemoryRegion *mr,
         return MEMTX_DECODE_ERROR;
     }

-    adjust_endianness(mr, &data, size);
+    adjust_endianness(mr, &data, op);

     if ((!kvm_eventfds_enabled()) &&
         memory_region_dispatch_write_eventfds(mr, addr, data, size, attrs)) {
@@ -2340,7 +2344,7 @@ void memory_region_add_eventfd(MemoryRegion *mr,
     }

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
@@ -2375,7 +2379,7 @@ void memory_region_del_eventfd(MemoryRegion *mr,
     unsigned i;

     if (size) {
-        adjust_endianness(mr, &mrfd.data, size);
+        adjust_endianness(mr, &mrfd.data, SIZE_MEMOP(size));
     }
     memory_region_transaction_begin();
     for (i = 0; i < mr->ioeventfd_nb; ++i) {
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 12/15] cpu: TLB_FLAGS_MASK bit to force memory slow path
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:48   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

The fast path is taken when all TLB_FLAGS_MASK bits are clear.

TLB_FORCE_SLOW is simply a TLB_FLAGS_MASK bit that forces the slow
path; it has no other side effects.
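
Conceptually, the dispatch looks like this (a sketch, not the exact
cputlb code):

    if (tlb_addr & TLB_FLAGS_MASK) {
        /* slow path: invalid, notdirty, MMIO, recheck or forced slow */
    } else {
        /* fast path: direct RAM access */
    }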

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 include/exec/cpu-all.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 536ea58..e496f99 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -331,12 +331,18 @@ CPUArchState *cpu_copy(CPUArchState *env);
 #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
 /* Set if TLB entry must have MMU lookup repeated for every access */
 #define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
+/* Set if TLB entry must take the slow path.  */
+#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))

 /* Use this mask to check interception with an alignment mask
  * in a TCG backend.
  */
-#define TLB_FLAGS_MASK  (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
-                         | TLB_RECHECK)
+#define TLB_FLAGS_MASK \
+    (TLB_INVALID_MASK  \
+     | TLB_NOTDIRTY    \
+     | TLB_MMIO        \
+     | TLB_RECHECK     \
+     | TLB_FORCE_SLOW)

 /**
  * tlb_hit_page: return true if page aligned @addr is a hit against the
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 13/15] cputlb: Byte swap memory transaction attribute
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:48   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Notice the new byte-swap memory transaction attribute and force such
transactions through the memory slow path.

This is required by architectures that can invert the endianness of a
memory transaction, e.g. SPARC64 with its Invert Endian TTE bit.
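
The consumer side is in io_readx/io_writex below, which fold the
attribute into the MemOp; the producer side, added for SPARC64 later in
this series, is simply:

    if (TTE_IS_IE(env->dtlb[i].tte)) {
        attrs->byte_swap = true;
    }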

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 accel/tcg/cputlb.c      | 11 +++++++++++
 include/exec/memattrs.h |  2 ++
 2 files changed, 13 insertions(+)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e61b1eb..f292a87 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -738,6 +738,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
          */
         address |= TLB_RECHECK;
     }
+    if (attrs.byte_swap) {
+        address |= TLB_FORCE_SLOW;
+    }
     if (!memory_region_is_ram(section->mr) &&
         !memory_region_is_romd(section->mr)) {
         /* IO memory case */
@@ -891,6 +894,10 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;

+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
@@ -933,6 +940,10 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
     bool locked = false;
     MemTxResult r;

+    if (iotlbentry->attrs.byte_swap) {
+        op ^= MO_BSWAP;
+    }
+
     section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
     mr = section->mr;
     mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
index d4a3477..a0644eb 100644
--- a/include/exec/memattrs.h
+++ b/include/exec/memattrs.h
@@ -37,6 +37,8 @@ typedef struct MemTxAttrs {
     unsigned int user:1;
     /* Requester ID (for MSI for example) */
     unsigned int requester_id:16;
+    /* SPARC64: TTE invert endianness */
+    unsigned int byte_swap:1;
     /*
      * The following are target-specific page-table bits.  These are not
      * related to actual memory transactions at all.  However, this structure
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 14/15] target/sparc: Add TLB entry with attributes
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:48   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:48 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

Append MemTxAttrs to these interfaces so we can pass along the
upcoming Invert Endian TTE bit on SPARC64.
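
Callers start from an empty attribute set, as in the hunks below:

    MemTxAttrs attrs = {};    /* all attributes clear by default */
    error_code = get_physical_address(env, &paddr, &prot, &access_index,
                                      &attrs, address, access_type,
                                      mmu_idx, &page_size);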

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/sparc/mmu_helper.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index cbd1e91..826e14b 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -88,7 +88,7 @@ static const int perm_table[2][8] = {
 };

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -219,6 +219,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     target_ulong page_size;
     int error_code = 0, prot, access_index;
+    MemTxAttrs attrs = {};

     /*
      * TODO: If we ever need tlb_vaddr_to_host for this target,
@@ -229,7 +230,7 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     assert(!probe);

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);
     vaddr = address;
@@ -490,8 +491,8 @@ static inline int ultrasparc_tag_match(SparcTLBEntry *tlb,
     return 0;
 }

-static int get_physical_address_data(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int rw, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -608,8 +609,8 @@ static int get_physical_address_data(CPUSPARCState *env,
     return 1;
 }

-static int get_physical_address_code(CPUSPARCState *env,
-                                     hwaddr *physical, int *prot,
+static int get_physical_address_code(CPUSPARCState *env, hwaddr *physical,
+                                     int *prot, MemTxAttrs *attrs,
                                      target_ulong address, int mmu_idx)
 {
     CPUState *cs = env_cpu(env);
@@ -686,7 +687,7 @@ static int get_physical_address_code(CPUSPARCState *env,
 }

 static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
-                                int *prot, int *access_index,
+                                int *prot, int *access_index, MemTxAttrs *attrs,
                                 target_ulong address, int rw, int mmu_idx,
                                 target_ulong *page_size)
 {
@@ -716,11 +717,11 @@ static int get_physical_address(CPUSPARCState *env, hwaddr *physical,
     }

     if (rw == 2) {
-        return get_physical_address_code(env, physical, prot, address,
+        return get_physical_address_code(env, physical, prot, attrs, address,
                                          mmu_idx);
     } else {
-        return get_physical_address_data(env, physical, prot, address, rw,
-                                         mmu_idx);
+        return get_physical_address_data(env, physical, prot, attrs, address,
+                                         rw, mmu_idx);
     }
 }

@@ -734,10 +735,11 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
     target_ulong vaddr;
     hwaddr paddr;
     target_ulong page_size;
+    MemTxAttrs attrs = {};
     int error_code = 0, prot, access_index;

     address &= TARGET_PAGE_MASK;
-    error_code = get_physical_address(env, &paddr, &prot, &access_index,
+    error_code = get_physical_address(env, &paddr, &prot, &access_index, &attrs,
                                       address, access_type,
                                       mmu_idx, &page_size);
     if (likely(error_code == 0)) {
@@ -747,7 +749,8 @@ bool sparc_cpu_tlb_fill(CPUState *cs, vaddr address, int size,
                                    env->dmmu.mmu_primary_context,
                                    env->dmmu.mmu_secondary_context);

-        tlb_set_page(cs, vaddr, paddr, prot, mmu_idx, page_size);
+        tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx,
+                                page_size);
         return true;
     }
     if (probe) {
@@ -849,9 +852,10 @@ static int cpu_sparc_get_phys_page(CPUSPARCState *env, hwaddr *phys,
 {
     target_ulong page_size;
     int prot, access_index;
+    MemTxAttrs attrs = {};

-    return get_physical_address(env, phys, &prot, &access_index, addr, rw,
-                                mmu_idx, &page_size);
+    return get_physical_address(env, phys, &prot, &access_index, &attrs, addr,
+                                rw, mmu_idx, &page_size);
 }

 #if defined(TARGET_SPARC64)
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* [Qemu-devel] [PATCH v5 15/15] target/sparc: sun4u Invert Endian TTE bit
  2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  6:49   ` tony.nguyen
  -1 siblings, 0 replies; 78+ messages in thread
From: tony.nguyen @ 2019-07-26  6:49 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

This bit configures the endianness of PCI MMIO devices. It is used by
the Solaris and OpenBSD sunhme drivers.

Tested working on OpenBSD.

Unfortunately Solaris 10 has an unrelated keyboard issue blocking
testing... another inch towards Solaris 10 on SPARC64 =)
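
Putting the series together, the path of a load through an IE-mapped
page (all names as introduced in earlier patches):

    /* 1. get_physical_address_data: TTE_IS_IE(tte) -> attrs->byte_swap
     * 2. tlb_set_page_with_attrs:   attrs.byte_swap -> TLB_FORCE_SLOW
     * 3. load_helper:               TLB_FORCE_SLOW  -> io_readx(..., op)
     * 4. io_readx:                  attrs.byte_swap -> op ^= MO_BSWAP
     * 5. adjust_endianness:         op & MO_BSWAP   -> bswap the value
     */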

Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
---
 target/sparc/cpu.h        | 2 ++
 target/sparc/mmu_helper.c | 8 +++++++-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/target/sparc/cpu.h b/target/sparc/cpu.h
index 8ed2250..77e8e07 100644
--- a/target/sparc/cpu.h
+++ b/target/sparc/cpu.h
@@ -277,6 +277,7 @@ enum {

 #define TTE_VALID_BIT       (1ULL << 63)
 #define TTE_NFO_BIT         (1ULL << 60)
+#define TTE_IE_BIT          (1ULL << 59)
 #define TTE_USED_BIT        (1ULL << 41)
 #define TTE_LOCKED_BIT      (1ULL <<  6)
 #define TTE_SIDEEFFECT_BIT  (1ULL <<  3)
@@ -293,6 +294,7 @@ enum {

 #define TTE_IS_VALID(tte)   ((tte) & TTE_VALID_BIT)
 #define TTE_IS_NFO(tte)     ((tte) & TTE_NFO_BIT)
+#define TTE_IS_IE(tte)      ((tte) & TTE_IE_BIT)
 #define TTE_IS_USED(tte)    ((tte) & TTE_USED_BIT)
 #define TTE_IS_LOCKED(tte)  ((tte) & TTE_LOCKED_BIT)
 #define TTE_IS_SIDEEFFECT(tte) ((tte) & TTE_SIDEEFFECT_BIT)
diff --git a/target/sparc/mmu_helper.c b/target/sparc/mmu_helper.c
index 826e14b..77dc86a 100644
--- a/target/sparc/mmu_helper.c
+++ b/target/sparc/mmu_helper.c
@@ -537,6 +537,10 @@ static int get_physical_address_data(CPUSPARCState *env, hwaddr *physical,
         if (ultrasparc_tag_match(&env->dtlb[i], address, context, physical)) {
             int do_fault = 0;

+            if (TTE_IS_IE(env->dtlb[i].tte)) {
+                attrs->byte_swap = true;
+            }
+
             /* access ok? */
             /* multiple bits in SFSR.FT may be set on TT_DFAULT */
             if (TTE_IS_PRIV(env->dtlb[i].tte) && is_user) {
@@ -792,7 +796,7 @@ void dump_mmu(CPUSPARCState *env)
             }
             if (TTE_IS_VALID(env->dtlb[i].tte)) {
                 qemu_printf("[%02u] VA: %" PRIx64 ", PA: %llx"
-                            ", %s, %s, %s, %s, ctx %" PRId64 " %s\n",
+                            ", %s, %s, %s, %s, ie %s, ctx %" PRId64 " %s\n",
                             i,
                             env->dtlb[i].tag & (uint64_t)~0x1fffULL,
                             TTE_PA(env->dtlb[i].tte),
@@ -801,6 +805,8 @@ void dump_mmu(CPUSPARCState *env)
                             TTE_IS_W_OK(env->dtlb[i].tte) ? "RW" : "RO",
                             TTE_IS_LOCKED(env->dtlb[i].tte) ?
                             "locked" : "unlocked",
+                            TTE_IS_IE(env->dtlb[i].tte) ?
+                            "yes" : "no",
                             env->dtlb[i].tag & (uint64_t)0x1fffULL,
                             TTE_IS_GLOBAL(env->dtlb[i].tte) ?
                             "global" : "local");
--
1.8.3.1




^ permalink raw reply related	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 01/15] tcg: TCGMemOp is now accelerator independent MemOp
  2019-07-26  6:43   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  7:43     ` David Gibson
  -1 siblings, 0 replies; 78+ messages in thread
From: David Gibson @ 2019-07-26  7:43 UTC (permalink / raw)
  To: tony.nguyen
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	qemu-devel, laurent, Alistair.Francis, edgar.iglesias, arikalo,
	david, pasic, borntraeger, atar4qemu, ehabkost, alex.williamson,
	qemu-arm, stefanha, shorne, rth, qemu-riscv, kbastian, cohuck,
	qemu-s390x, qemu-ppc, amarkovic, pbonzini, aurelien

On Fri, Jul 26, 2019 at 06:43:27AM +0000, tony.nguyen@bt.com wrote:
> Preparation for collapsing the two byte swaps, adjust_endianness and
> handle_bswap, along the I/O path.
> 
> Target-dependent attributes are conditionalized upon NEED_CPU_H.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>

ppc parts
Acked-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  MAINTAINERS                             |   1 +
>  accel/tcg/cputlb.c                      |   2 +-
>  include/exec/memop.h                    | 109 ++++++++++++++++++++++++++
>  target/alpha/translate.c                |   2 +-
>  target/arm/translate-a64.c              |  48 ++++++------
>  target/arm/translate-a64.h              |   2 +-
>  target/arm/translate-sve.c              |   2 +-
>  target/arm/translate.c                  |  32 ++++----
>  target/arm/translate.h                  |   2 +-
>  target/hppa/translate.c                 |  14 ++--
>  target/i386/translate.c                 | 132 ++++++++++++++++----------------
>  target/m68k/translate.c                 |   2 +-
>  target/microblaze/translate.c           |   4 +-
>  target/mips/translate.c                 |   8 +-
>  target/openrisc/translate.c             |   4 +-
>  target/ppc/translate.c                  |  12 +--
>  target/riscv/insn_trans/trans_rva.inc.c |   8 +-
>  target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
>  target/s390x/translate.c                |   6 +-
>  target/s390x/translate_vx.inc.c         |  10 +--
>  target/sparc/translate.c                |  14 ++--
>  target/tilegx/translate.c               |  10 +--
>  target/tricore/translate.c              |   8 +-
>  tcg/README                              |   2 +-
>  tcg/aarch64/tcg-target.inc.c            |  26 +++----
>  tcg/arm/tcg-target.inc.c                |  26 +++----
>  tcg/i386/tcg-target.inc.c               |  24 +++---
>  tcg/mips/tcg-target.inc.c               |  16 ++--
>  tcg/optimize.c                          |   2 +-
>  tcg/ppc/tcg-target.inc.c                |  12 +--
>  tcg/riscv/tcg-target.inc.c              |  20 ++---
>  tcg/s390/tcg-target.inc.c               |  14 ++--
>  tcg/sparc/tcg-target.inc.c              |   6 +-
>  tcg/tcg-op.c                            |  38 ++++-----
>  tcg/tcg-op.h                            |  86 ++++++++++-----------
>  tcg/tcg.c                               |   2 +-
>  tcg/tcg.h                               |  99 ++----------------------
>  trace/mem-internal.h                    |   4 +-
>  trace/mem.h                             |   4 +-
>  39 files changed, 420 insertions(+), 397 deletions(-)
>  create mode 100644 include/exec/memop.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index cc9636b..3f148cd 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1890,6 +1890,7 @@ M: Paolo Bonzini <pbonzini@redhat.com>
>  S: Supported
>  F: include/exec/ioport.h
>  F: ioport.c
> +F: include/exec/memop.h
>  F: include/exec/memory.h
>  F: include/exec/ram_addr.h
>  F: memory.c
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index bb9897b..523be4c 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
>      uintptr_t index = tlb_index(env, mmu_idx, addr);
>      CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
>      target_ulong tlb_addr = tlb_addr_write(tlbe);
> -    TCGMemOp mop = get_memop(oi);
> +    MemOp mop = get_memop(oi);
>      int a_bits = get_alignment_bits(mop);
>      int s_bits = mop & MO_SIZE;
>      void *hostaddr;
> diff --git a/include/exec/memop.h b/include/exec/memop.h
> new file mode 100644
> index 0000000..ac58066
> --- /dev/null
> +++ b/include/exec/memop.h
> @@ -0,0 +1,109 @@
> +/*
> + * Constants for memory operations
> + *
> + * Authors:
> + *  Richard Henderson <rth@twiddle.net>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + *
> + */
> +
> +#ifndef MEMOP_H
> +#define MEMOP_H
> +
> +typedef enum MemOp {
> +    MO_8     = 0,
> +    MO_16    = 1,
> +    MO_32    = 2,
> +    MO_64    = 3,
> +    MO_SIZE  = 3,   /* Mask for the above.  */
> +
> +    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
> +
> +    MO_BSWAP = 8,   /* Host reverse endian.  */
> +#ifdef HOST_WORDS_BIGENDIAN
> +    MO_LE    = MO_BSWAP,
> +    MO_BE    = 0,
> +#else
> +    MO_LE    = 0,
> +    MO_BE    = MO_BSWAP,
> +#endif
> +#ifdef NEED_CPU_H
> +#ifdef TARGET_WORDS_BIGENDIAN
> +    MO_TE    = MO_BE,
> +#else
> +    MO_TE    = MO_LE,
> +#endif
> +#endif
> +
> +    /*
> +     * MO_UNALN accesses are never checked for alignment.
> +     * MO_ALIGN accesses will result in a call to the CPU's
> +     * do_unaligned_access hook if the guest address is not aligned.
> +     * The default depends on whether the target CPU defines ALIGNED_ONLY.
> +     *
> +     * Some architectures (e.g. ARMv8) require an address that is aligned
> +     * to a size larger than the size of the memory access.
> +     * Some architectures (e.g. SPARCv9) require an address that is
> +     * aligned, but less strictly than the natural alignment.
> +     *
> +     * MO_ALIGN assumes the alignment size is the size of the memory access.
> +     *
> +     * There are three options:
> +     * - unaligned access permitted (MO_UNALN);
> +     * - an alignment to the size of an access (MO_ALIGN);
> +     * - an alignment to a specified size, which may be more or less than
> +     *   the access size (MO_ALIGN_x where 'x' is a size in bytes).
> +     */
> +    MO_ASHIFT = 4,
> +    MO_AMASK = 7 << MO_ASHIFT,
> +#ifdef NEED_CPU_H
> +#ifdef ALIGNED_ONLY
> +    MO_ALIGN = 0,
> +    MO_UNALN = MO_AMASK,
> +#else
> +    MO_ALIGN = MO_AMASK,
> +    MO_UNALN = 0,
> +#endif
> +#endif
> +    MO_ALIGN_2  = 1 << MO_ASHIFT,
> +    MO_ALIGN_4  = 2 << MO_ASHIFT,
> +    MO_ALIGN_8  = 3 << MO_ASHIFT,
> +    MO_ALIGN_16 = 4 << MO_ASHIFT,
> +    MO_ALIGN_32 = 5 << MO_ASHIFT,
> +    MO_ALIGN_64 = 6 << MO_ASHIFT,
> +
> +    /* Combinations of the above, for ease of use.  */
> +    MO_UB    = MO_8,
> +    MO_UW    = MO_16,
> +    MO_UL    = MO_32,
> +    MO_SB    = MO_SIGN | MO_8,
> +    MO_SW    = MO_SIGN | MO_16,
> +    MO_SL    = MO_SIGN | MO_32,
> +    MO_Q     = MO_64,
> +
> +    MO_LEUW  = MO_LE | MO_UW,
> +    MO_LEUL  = MO_LE | MO_UL,
> +    MO_LESW  = MO_LE | MO_SW,
> +    MO_LESL  = MO_LE | MO_SL,
> +    MO_LEQ   = MO_LE | MO_Q,
> +
> +    MO_BEUW  = MO_BE | MO_UW,
> +    MO_BEUL  = MO_BE | MO_UL,
> +    MO_BESW  = MO_BE | MO_SW,
> +    MO_BESL  = MO_BE | MO_SL,
> +    MO_BEQ   = MO_BE | MO_Q,
> +
> +#ifdef NEED_CPU_H
> +    MO_TEUW  = MO_TE | MO_UW,
> +    MO_TEUL  = MO_TE | MO_UL,
> +    MO_TESW  = MO_TE | MO_SW,
> +    MO_TESL  = MO_TE | MO_SL,
> +    MO_TEQ   = MO_TE | MO_Q,
> +#endif
> +
> +    MO_SSIZE = MO_SIZE | MO_SIGN,
> +} MemOp;
> +
> +#endif
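An aside on the alignment field for anyone skimming the header above:
MO_ALIGN_2 through MO_ALIGN_64 pack log2(alignment) into the bits
covered by MO_AMASK, so recovering the requirement is a shift and a
mask. A hedged sketch of the decoding, not part of the patch; it
assumes op carries one of the explicit MO_ALIGN_x constants, while the
real helper, the existing get_alignment_bits() in tcg.h, also deals
with MO_ALIGN, MO_UNALN and the full MO_AMASK encoding:

/* Alignment in bytes requested by a MemOp's MO_ALIGN_x field. */
static unsigned memop_alignment_bytes(MemOp op)
{
    unsigned a = (op & MO_AMASK) >> MO_ASHIFT;  /* 1..6 for MO_ALIGN_2..64 */
    return 1u << a;                             /* e.g. MO_ALIGN_16 -> 16 */
}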
> diff --git a/target/alpha/translate.c b/target/alpha/translate.c
> index 2c9cccf..d5d4888 100644
> --- a/target/alpha/translate.c
> +++ b/target/alpha/translate.c
> @@ -403,7 +403,7 @@ static inline void gen_store_mem(DisasContext *ctx,
> 
>  static DisasJumpType gen_store_conditional(DisasContext *ctx, int ra, int rb,
>                                             int32_t disp16, int mem_idx,
> -                                           TCGMemOp op)
> +                                           MemOp op)
>  {
>      TCGLabel *lab_fail, *lab_done;
>      TCGv addr, val;
> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
> index d323147..b6c07d6 100644
> --- a/target/arm/translate-a64.c
> +++ b/target/arm/translate-a64.c
> @@ -85,7 +85,7 @@ typedef void NeonGenOneOpFn(TCGv_i64, TCGv_i64);
>  typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
>  typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
>  typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
> -typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
> +typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
> 
>  /* initialize TCG globals.  */
>  void a64_translate_init(void)
> @@ -455,7 +455,7 @@ TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
>   * Dn, Sn, Hn or Bn).
>   * (Note that this is not the same mapping as for A32; see cpu.h)
>   */
> -static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
> +static inline int fp_reg_offset(DisasContext *s, int regno, MemOp size)
>  {
>      return vec_reg_offset(s, regno, 0, size);
>  }
> @@ -871,7 +871,7 @@ static void do_gpr_ld_memidx(DisasContext *s,
>                               bool iss_valid, unsigned int iss_srt,
>                               bool iss_sf, bool iss_ar)
>  {
> -    TCGMemOp memop = s->be_data + size;
> +    MemOp memop = s->be_data + size;
> 
>      g_assert(size <= 3);
> 
> @@ -948,7 +948,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
>      TCGv_i64 tmphi;
> 
>      if (size < 4) {
> -        TCGMemOp memop = s->be_data + size;
> +        MemOp memop = s->be_data + size;
>          tmphi = tcg_const_i64(0);
>          tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
>      } else {
> @@ -989,7 +989,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
> 
>  /* Get value of an element within a vector register */
>  static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
> -                             int element, TCGMemOp memop)
> +                             int element, MemOp memop)
>  {
>      int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
>      switch (memop) {
> @@ -1021,7 +1021,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
>  }
> 
>  static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
> -                                 int element, TCGMemOp memop)
> +                                 int element, MemOp memop)
>  {
>      int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
>      switch (memop) {
> @@ -1048,7 +1048,7 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
> 
>  /* Set value of an element within a vector register */
>  static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
> -                              int element, TCGMemOp memop)
> +                              int element, MemOp memop)
>  {
>      int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
>      switch (memop) {
> @@ -1070,7 +1070,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
>  }
> 
>  static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
> -                                  int destidx, int element, TCGMemOp memop)
> +                                  int destidx, int element, MemOp memop)
>  {
>      int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
>      switch (memop) {
> @@ -1090,7 +1090,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
> 
>  /* Store from vector register to memory */
>  static void do_vec_st(DisasContext *s, int srcidx, int element,
> -                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
> +                      TCGv_i64 tcg_addr, int size, MemOp endian)
>  {
>      TCGv_i64 tcg_tmp = tcg_temp_new_i64();
> 
> @@ -1102,7 +1102,7 @@ static void do_vec_st(DisasContext *s, int srcidx, int element,
> 
>  /* Load from memory to vector register */
>  static void do_vec_ld(DisasContext *s, int destidx, int element,
> -                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
> +                      TCGv_i64 tcg_addr, int size, MemOp endian)
>  {
>      TCGv_i64 tcg_tmp = tcg_temp_new_i64();
> 
> @@ -2200,7 +2200,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>                                 TCGv_i64 addr, int size, bool is_pair)
>  {
>      int idx = get_mem_index(s);
> -    TCGMemOp memop = s->be_data;
> +    MemOp memop = s->be_data;
> 
>      g_assert(size <= 3);
>      if (is_pair) {
> @@ -3286,7 +3286,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
>      bool is_postidx = extract32(insn, 23, 1);
>      bool is_q = extract32(insn, 30, 1);
>      TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
> -    TCGMemOp endian = s->be_data;
> +    MemOp endian = s->be_data;
> 
>      int ebytes;   /* bytes per element */
>      int elements; /* elements per vector */
> @@ -5455,7 +5455,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
>      unsigned int mos, type, rm, cond, rn, rd;
>      TCGv_i64 t_true, t_false, t_zero;
>      DisasCompare64 c;
> -    TCGMemOp sz;
> +    MemOp sz;
> 
>      mos = extract32(insn, 29, 3);
>      type = extract32(insn, 22, 2);
> @@ -6267,7 +6267,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
>      int mos = extract32(insn, 29, 3);
>      uint64_t imm;
>      TCGv_i64 tcg_res;
> -    TCGMemOp sz;
> +    MemOp sz;
> 
>      if (mos || imm5) {
>          unallocated_encoding(s);
> @@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
>  {
>      if (esize == size) {
>          int element;
> -        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
> +        MemOp msize = esize == 16 ? MO_16 : MO_32;
>          TCGv_i32 tcg_elem;
> 
>          /* We should have one register left here */
> @@ -8022,7 +8022,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
>      int shift = (2 * esize) - immhb;
>      int elements = is_scalar ? 1 : (64 / esize);
>      bool round = extract32(opcode, 0, 1);
> -    TCGMemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
> +    MemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
>      TCGv_i64 tcg_rn, tcg_rd, tcg_round;
>      TCGv_i32 tcg_rd_narrowed;
>      TCGv_i64 tcg_final;
> @@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
>              }
>          };
>          NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
> -        TCGMemOp memop = scalar ? size : MO_32;
> +        MemOp memop = scalar ? size : MO_32;
>          int maxpass = scalar ? 1 : is_q ? 4 : 2;
> 
>          for (pass = 0; pass < maxpass; pass++) {
> @@ -8225,7 +8225,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
>      TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
>      TCGv_i32 tcg_shift = NULL;
> 
> -    TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
> +    MemOp mop = size | (is_signed ? MO_SIGN : 0);
>      int pass;
> 
>      if (fracbits || size == MO_64) {
> @@ -10004,7 +10004,7 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
>      int dsize = is_q ? 128 : 64;
>      int esize = 8 << size;
>      int elements = dsize/esize;
> -    TCGMemOp memop = size | (is_u ? 0 : MO_SIGN);
> +    MemOp memop = size | (is_u ? 0 : MO_SIGN);
>      TCGv_i64 tcg_rn = new_tmp_a64(s);
>      TCGv_i64 tcg_rd = new_tmp_a64(s);
>      TCGv_i64 tcg_round;
> @@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
>              TCGv_i64 tcg_op1 = tcg_temp_new_i64();
>              TCGv_i64 tcg_op2 = tcg_temp_new_i64();
>              TCGv_i64 tcg_passres;
> -            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
> +            MemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
> 
>              int elt = pass + is_q * 2;
> 
> @@ -11827,7 +11827,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
> 
>      if (size == 2) {
>          /* 32 + 32 -> 64 op */
> -        TCGMemOp memop = size + (u ? 0 : MO_SIGN);
> +        MemOp memop = size + (u ? 0 : MO_SIGN);
> 
>          for (pass = 0; pass < maxpass; pass++) {
>              TCGv_i64 tcg_op1 = tcg_temp_new_i64();
> @@ -12849,7 +12849,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
> 
>      switch (is_fp) {
>      case 1: /* normal fp */
> -        /* convert insn encoded size to TCGMemOp size */
> +        /* convert insn encoded size to MemOp size */
>          switch (size) {
>          case 0: /* half-precision */
>              size = MO_16;
> @@ -12897,7 +12897,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
>          return;
>      }
> 
> -    /* Given TCGMemOp size, adjust register and indexing.  */
> +    /* Given MemOp size, adjust register and indexing.  */
>      switch (size) {
>      case MO_16:
>          index = h << 2 | l << 1 | m;
> @@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
>          TCGv_i64 tcg_res[2];
>          int pass;
>          bool satop = extract32(opcode, 0, 1);
> -        TCGMemOp memop = MO_32;
> +        MemOp memop = MO_32;
> 
>          if (satop || !u) {
>              memop |= MO_SIGN;
> diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
> index 9ab4087..f1246b7 100644
> --- a/target/arm/translate-a64.h
> +++ b/target/arm/translate-a64.h
> @@ -64,7 +64,7 @@ static inline void assert_fp_access_checked(DisasContext *s)
>   * the FP/vector register Qn.
>   */
>  static inline int vec_reg_offset(DisasContext *s, int regno,
> -                                 int element, TCGMemOp size)
> +                                 int element, MemOp size)
>  {
>      int element_size = 1 << size;
>      int offs = element * element_size;
> diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
> index fa068b0..5d7edd0 100644
> --- a/target/arm/translate-sve.c
> +++ b/target/arm/translate-sve.c
> @@ -4567,7 +4567,7 @@ static bool trans_STR_pri(DisasContext *s, arg_rri *a)
>   */
> 
>  /* The memory mode of the dtype.  */
> -static const TCGMemOp dtype_mop[16] = {
> +static const MemOp dtype_mop[16] = {
>      MO_UB, MO_UB, MO_UB, MO_UB,
>      MO_SL, MO_UW, MO_UW, MO_UW,
>      MO_SW, MO_SW, MO_UL, MO_UL,
> diff --git a/target/arm/translate.c b/target/arm/translate.c
> index 7853462..d116c8c 100644
> --- a/target/arm/translate.c
> +++ b/target/arm/translate.c
> @@ -114,7 +114,7 @@ typedef enum ISSInfo {
>  } ISSInfo;
> 
>  /* Save the syndrome information for a Data Abort */
> -static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo)
> +static void disas_set_da_iss(DisasContext *s, MemOp memop, ISSInfo issinfo)
>  {
>      uint32_t syn;
>      int sas = memop & MO_SIZE;
> @@ -1079,7 +1079,7 @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
>   * that the address argument is TCGv_i32 rather than TCGv.
>   */
> 
> -static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
> +static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
>  {
>      TCGv addr = tcg_temp_new();
>      tcg_gen_extu_i32_tl(addr, a32);
> @@ -1092,7 +1092,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
>  }
> 
>  static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
> -                            int index, TCGMemOp opc)
> +                            int index, MemOp opc)
>  {
>      TCGv addr;
> 
> @@ -1107,7 +1107,7 @@ static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
>  }
> 
>  static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
> -                            int index, TCGMemOp opc)
> +                            int index, MemOp opc)
>  {
>      TCGv addr;
> 
> @@ -1160,7 +1160,7 @@ static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
>  }
> 
>  static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
> -                            int index, TCGMemOp opc)
> +                            int index, MemOp opc)
>  {
>      TCGv addr = gen_aa32_addr(s, a32, opc);
>      tcg_gen_qemu_ld_i64(val, addr, index, opc);
> @@ -1175,7 +1175,7 @@ static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
>  }
> 
>  static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
> -                            int index, TCGMemOp opc)
> +                            int index, MemOp opc)
>  {
>      TCGv addr = gen_aa32_addr(s, a32, opc);
> 
> @@ -1400,7 +1400,7 @@ neon_reg_offset (int reg, int n)
>   * where 0 is the least significant end of the register.
>   */
>  static inline long
> -neon_element_offset(int reg, int element, TCGMemOp size)
> +neon_element_offset(int reg, int element, MemOp size)
>  {
>      int element_size = 1 << size;
>      int ofs = element * element_size;
> @@ -1422,7 +1422,7 @@ static TCGv_i32 neon_load_reg(int reg, int pass)
>      return tmp;
>  }
> 
> -static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
> +static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
>  {
>      long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
> 
> @@ -1441,7 +1441,7 @@ static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
>      }
>  }
> 
> -static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
> +static void neon_load_element64(TCGv_i64 var, int reg, int ele, MemOp mop)
>  {
>      long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
> 
> @@ -1469,7 +1469,7 @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
>      tcg_temp_free_i32(var);
>  }
> 
> -static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
> +static void neon_store_element(int reg, int ele, MemOp size, TCGv_i32 var)
>  {
>      long offset = neon_element_offset(reg, ele, size);
> 
> @@ -1488,7 +1488,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
>      }
>  }
> 
> -static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
> +static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
>  {
>      long offset = neon_element_offset(reg, ele, size);
> 
> @@ -3558,7 +3558,7 @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
>      int n;
>      int vec_size;
>      int mmu_idx;
> -    TCGMemOp endian;
> +    MemOp endian;
>      TCGv_i32 addr;
>      TCGv_i32 tmp;
>      TCGv_i32 tmp2;
> @@ -6867,7 +6867,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
>              } else if ((insn & 0x380) == 0) {
>                  /* VDUP */
>                  int element;
> -                TCGMemOp size;
> +                MemOp size;
> 
>                  if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
>                      return 1;
> @@ -7435,7 +7435,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>                                 TCGv_i32 addr, int size)
>  {
>      TCGv_i32 tmp = tcg_temp_new_i32();
> -    TCGMemOp opc = size | MO_ALIGN | s->be_data;
> +    MemOp opc = size | MO_ALIGN | s->be_data;
> 
>      s->is_ldex = true;
> 
> @@ -7489,7 +7489,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
>      TCGv taddr;
>      TCGLabel *done_label;
>      TCGLabel *fail_label;
> -    TCGMemOp opc = size | MO_ALIGN | s->be_data;
> +    MemOp opc = size | MO_ALIGN | s->be_data;
> 
>      /* if (env->exclusive_addr == addr && env->exclusive_val == [addr]) {
>           [addr] = {Rt};
> @@ -8603,7 +8603,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
>                          */
> 
>                          TCGv taddr;
> -                        TCGMemOp opc = s->be_data;
> +                        MemOp opc = s->be_data;
> 
>                          rm = (insn) & 0xf;
> 
> diff --git a/target/arm/translate.h b/target/arm/translate.h
> index a20f6e2..284c510 100644
> --- a/target/arm/translate.h
> +++ b/target/arm/translate.h
> @@ -21,7 +21,7 @@ typedef struct DisasContext {
>      int condexec_cond;
>      int thumb;
>      int sctlr_b;
> -    TCGMemOp be_data;
> +    MemOp be_data;
>  #if !defined(CONFIG_USER_ONLY)
>      int user;
>  #endif
> diff --git a/target/hppa/translate.c b/target/hppa/translate.c
> index 188fe68..ff4802a 100644
> --- a/target/hppa/translate.c
> +++ b/target/hppa/translate.c
> @@ -1500,7 +1500,7 @@ static void form_gva(DisasContext *ctx, TCGv_tl *pgva, TCGv_reg *pofs,
>   */
>  static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
>                         unsigned rx, int scale, target_sreg disp,
> -                       unsigned sp, int modify, TCGMemOp mop)
> +                       unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg ofs;
>      TCGv_tl addr;
> @@ -1518,7 +1518,7 @@ static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
> 
>  static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
>                         unsigned rx, int scale, target_sreg disp,
> -                       unsigned sp, int modify, TCGMemOp mop)
> +                       unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg ofs;
>      TCGv_tl addr;
> @@ -1536,7 +1536,7 @@ static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
> 
>  static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
>                          unsigned rx, int scale, target_sreg disp,
> -                        unsigned sp, int modify, TCGMemOp mop)
> +                        unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg ofs;
>      TCGv_tl addr;
> @@ -1554,7 +1554,7 @@ static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
> 
>  static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
>                          unsigned rx, int scale, target_sreg disp,
> -                        unsigned sp, int modify, TCGMemOp mop)
> +                        unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg ofs;
>      TCGv_tl addr;
> @@ -1580,7 +1580,7 @@ static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
> 
>  static bool do_load(DisasContext *ctx, unsigned rt, unsigned rb,
>                      unsigned rx, int scale, target_sreg disp,
> -                    unsigned sp, int modify, TCGMemOp mop)
> +                    unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg dest;
> 
> @@ -1653,7 +1653,7 @@ static bool trans_fldd(DisasContext *ctx, arg_ldst *a)
> 
>  static bool do_store(DisasContext *ctx, unsigned rt, unsigned rb,
>                       target_sreg disp, unsigned sp,
> -                     int modify, TCGMemOp mop)
> +                     int modify, MemOp mop)
>  {
>      nullify_over(ctx);
>      do_store_reg(ctx, load_gpr(ctx, rt), rb, 0, 0, disp, sp, modify, mop);
> @@ -2940,7 +2940,7 @@ static bool trans_st(DisasContext *ctx, arg_ldst *a)
> 
>  static bool trans_ldc(DisasContext *ctx, arg_ldst *a)
>  {
> -    TCGMemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
> +    MemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
>      TCGv_reg zero, dest, ofs;
>      TCGv_tl addr;
> 
> diff --git a/target/i386/translate.c b/target/i386/translate.c
> index 03150a8..def9867 100644
> --- a/target/i386/translate.c
> +++ b/target/i386/translate.c
> @@ -87,8 +87,8 @@ typedef struct DisasContext {
>      /* current insn context */
>      int override; /* -1 if no override */
>      int prefix;
> -    TCGMemOp aflag;
> -    TCGMemOp dflag;
> +    MemOp aflag;
> +    MemOp dflag;
>      target_ulong pc_start;
>      target_ulong pc; /* pc = eip + cs_base */
>      /* current block context */
> @@ -149,7 +149,7 @@ static void gen_eob(DisasContext *s);
>  static void gen_jr(DisasContext *s, TCGv dest);
>  static void gen_jmp(DisasContext *s, target_ulong eip);
>  static void gen_jmp_tb(DisasContext *s, target_ulong eip, int tb_num);
> -static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d);
> +static void gen_op(DisasContext *s1, int op, MemOp ot, int d);
> 
>  /* i386 arith/logic operations */
>  enum {
> @@ -320,7 +320,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
>  }
> 
>  /* Select the size of a push/pop operation.  */
> -static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
> +static inline MemOp mo_pushpop(DisasContext *s, MemOp ot)
>  {
>      if (CODE64(s)) {
>          return ot == MO_16 ? MO_16 : MO_64;
> @@ -330,13 +330,13 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
>  }
> 
>  /* Select the size of the stack pointer.  */
> -static inline TCGMemOp mo_stacksize(DisasContext *s)
> +static inline MemOp mo_stacksize(DisasContext *s)
>  {
>      return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
>  }
> 
>  /* Select only size 64 else 32.  Used for SSE operand sizes.  */
> -static inline TCGMemOp mo_64_32(TCGMemOp ot)
> +static inline MemOp mo_64_32(MemOp ot)
>  {
>  #ifdef TARGET_X86_64
>      return ot == MO_64 ? MO_64 : MO_32;
> @@ -347,19 +347,19 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)
> 
>  /* Select size 8 if lsb of B is clear, else OT.  Used for decoding
>     byte vs word opcodes.  */
> -static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
> +static inline MemOp mo_b_d(int b, MemOp ot)
>  {
>      return b & 1 ? ot : MO_8;
>  }
> 
>  /* Select size 8 if lsb of B is clear, else OT capped at 32.
>     Used for decoding operand size of port opcodes.  */
> -static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
> +static inline MemOp mo_b_d32(int b, MemOp ot)
>  {
>      return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
>  }
> 
> -static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
> +static void gen_op_mov_reg_v(DisasContext *s, MemOp ot, int reg, TCGv t0)
>  {
>      switch(ot) {
>      case MO_8:
> @@ -388,7 +388,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
>  }
> 
>  static inline
> -void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
> +void gen_op_mov_v_reg(DisasContext *s, MemOp ot, TCGv t0, int reg)
>  {
>      if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
>          tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
> @@ -411,13 +411,13 @@ static inline void gen_op_jmp_v(TCGv dest)
>  }
> 
>  static inline
> -void gen_op_add_reg_im(DisasContext *s, TCGMemOp size, int reg, int32_t val)
> +void gen_op_add_reg_im(DisasContext *s, MemOp size, int reg, int32_t val)
>  {
>      tcg_gen_addi_tl(s->tmp0, cpu_regs[reg], val);
>      gen_op_mov_reg_v(s, size, reg, s->tmp0);
>  }
> 
> -static inline void gen_op_add_reg_T0(DisasContext *s, TCGMemOp size, int reg)
> +static inline void gen_op_add_reg_T0(DisasContext *s, MemOp size, int reg)
>  {
>      tcg_gen_add_tl(s->tmp0, cpu_regs[reg], s->T0);
>      gen_op_mov_reg_v(s, size, reg, s->tmp0);
> @@ -451,7 +451,7 @@ static inline void gen_jmp_im(DisasContext *s, target_ulong pc)
>  /* Compute SEG:REG into A0.  SEG is selected from the override segment
>     (OVR_SEG) and the default segment (DEF_SEG).  OVR_SEG may be -1 to
>     indicate no override.  */
> -static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
> +static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
>                            int def_seg, int ovr_seg)
>  {
>      switch (aflag) {
> @@ -514,13 +514,13 @@ static inline void gen_string_movl_A0_EDI(DisasContext *s)
>      gen_lea_v_seg(s, s->aflag, cpu_regs[R_EDI], R_ES, -1);
>  }
> 
> -static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
> +static inline void gen_op_movl_T0_Dshift(DisasContext *s, MemOp ot)
>  {
>      tcg_gen_ld32s_tl(s->T0, cpu_env, offsetof(CPUX86State, df));
>      tcg_gen_shli_tl(s->T0, s->T0, ot);
>  };
> 
> -static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
> +static TCGv gen_ext_tl(TCGv dst, TCGv src, MemOp size, bool sign)
>  {
>      switch (size) {
>      case MO_8:
> @@ -551,18 +551,18 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
>      }
>  }
> 
> -static void gen_extu(TCGMemOp ot, TCGv reg)
> +static void gen_extu(MemOp ot, TCGv reg)
>  {
>      gen_ext_tl(reg, reg, ot, false);
>  }
> 
> -static void gen_exts(TCGMemOp ot, TCGv reg)
> +static void gen_exts(MemOp ot, TCGv reg)
>  {
>      gen_ext_tl(reg, reg, ot, true);
>  }
> 
>  static inline
> -void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
> +void gen_op_jnz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
>  {
>      tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
>      gen_extu(size, s->tmp0);
> @@ -570,14 +570,14 @@ void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
>  }
> 
>  static inline
> -void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
> +void gen_op_jz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
>  {
>      tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
>      gen_extu(size, s->tmp0);
>      tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1);
>  }
> 
> -static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
> +static void gen_helper_in_func(MemOp ot, TCGv v, TCGv_i32 n)
>  {
>      switch (ot) {
>      case MO_8:
> @@ -594,7 +594,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
>      }
>  }
> 
> -static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
> +static void gen_helper_out_func(MemOp ot, TCGv_i32 v, TCGv_i32 n)
>  {
>      switch (ot) {
>      case MO_8:
> @@ -611,7 +611,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
>      }
>  }
> 
> -static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
> +static void gen_check_io(DisasContext *s, MemOp ot, target_ulong cur_eip,
>                           uint32_t svm_flags)
>  {
>      target_ulong next_eip;
> @@ -644,7 +644,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
>      }
>  }
> 
> -static inline void gen_movs(DisasContext *s, TCGMemOp ot)
> +static inline void gen_movs(DisasContext *s, MemOp ot)
>  {
>      gen_string_movl_A0_ESI(s);
>      gen_op_ld_v(s, ot, s->T0, s->A0);
> @@ -840,7 +840,7 @@ static CCPrepare gen_prepare_eflags_s(DisasContext *s, TCGv reg)
>          return (CCPrepare) { .cond = TCG_COND_NEVER, .mask = -1 };
>      default:
>          {
> -            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
> +            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
>              TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, true);
>              return (CCPrepare) { .cond = TCG_COND_LT, .reg = t0, .mask = -1 };
>          }
> @@ -885,7 +885,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
>                               .mask = -1 };
>      default:
>          {
> -            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
> +            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
>              TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, false);
>              return (CCPrepare) { .cond = TCG_COND_EQ, .reg = t0, .mask = -1 };
>          }
> @@ -897,7 +897,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
>  static CCPrepare gen_prepare_cc(DisasContext *s, int b, TCGv reg)
>  {
>      int inv, jcc_op, cond;
> -    TCGMemOp size;
> +    MemOp size;
>      CCPrepare cc;
>      TCGv t0;
> 
> @@ -1075,7 +1075,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)
>      return l2;
>  }
> 
> -static inline void gen_stos(DisasContext *s, TCGMemOp ot)
> +static inline void gen_stos(DisasContext *s, MemOp ot)
>  {
>      gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
>      gen_string_movl_A0_EDI(s);
> @@ -1084,7 +1084,7 @@ static inline void gen_stos(DisasContext *s, TCGMemOp ot)
>      gen_op_add_reg_T0(s, s->aflag, R_EDI);
>  }
> 
> -static inline void gen_lods(DisasContext *s, TCGMemOp ot)
> +static inline void gen_lods(DisasContext *s, MemOp ot)
>  {
>      gen_string_movl_A0_ESI(s);
>      gen_op_ld_v(s, ot, s->T0, s->A0);
> @@ -1093,7 +1093,7 @@ static inline void gen_lods(DisasContext *s, TCGMemOp ot)
>      gen_op_add_reg_T0(s, s->aflag, R_ESI);
>  }
> 
> -static inline void gen_scas(DisasContext *s, TCGMemOp ot)
> +static inline void gen_scas(DisasContext *s, MemOp ot)
>  {
>      gen_string_movl_A0_EDI(s);
>      gen_op_ld_v(s, ot, s->T1, s->A0);
> @@ -1102,7 +1102,7 @@ static inline void gen_scas(DisasContext *s, TCGMemOp ot)
>      gen_op_add_reg_T0(s, s->aflag, R_EDI);
>  }
> 
> -static inline void gen_cmps(DisasContext *s, TCGMemOp ot)
> +static inline void gen_cmps(DisasContext *s, MemOp ot)
>  {
>      gen_string_movl_A0_EDI(s);
>      gen_op_ld_v(s, ot, s->T1, s->A0);
> @@ -1126,7 +1126,7 @@ static void gen_bpt_io(DisasContext *s, TCGv_i32 t_port, int ot)
>  }
> 
> 
> -static inline void gen_ins(DisasContext *s, TCGMemOp ot)
> +static inline void gen_ins(DisasContext *s, MemOp ot)
>  {
>      if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
>          gen_io_start();
> @@ -1148,7 +1148,7 @@ static inline void gen_ins(DisasContext *s, TCGMemOp ot)
>      }
>  }
> 
> -static inline void gen_outs(DisasContext *s, TCGMemOp ot)
> +static inline void gen_outs(DisasContext *s, MemOp ot)
>  {
>      if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
>          gen_io_start();
> @@ -1171,7 +1171,7 @@ static inline void gen_outs(DisasContext *s, TCGMemOp ot)
>  /* same method as Valgrind : we generate jumps to current or next
>     instruction */
>  #define GEN_REPZ(op)                                                          \
> -static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
> +static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
>                                   target_ulong cur_eip, target_ulong next_eip) \
>  {                                                                             \
>      TCGLabel *l2;                                                             \
> @@ -1187,7 +1187,7 @@ static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
>  }
> 
>  #define GEN_REPZ2(op)                                                         \
> -static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
> +static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
>                                     target_ulong cur_eip,                      \
>                                     target_ulong next_eip,                     \
>                                     int nz)                                    \
> @@ -1284,7 +1284,7 @@ static void gen_illegal_opcode(DisasContext *s)
>  }
> 
>  /* if d == OR_TMP0, it means memory operand (address in A0) */
> -static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
> +static void gen_op(DisasContext *s1, int op, MemOp ot, int d)
>  {
>      if (d != OR_TMP0) {
>          if (s1->prefix & PREFIX_LOCK) {
> @@ -1395,7 +1395,7 @@ static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
>  }
> 
>  /* if d == OR_TMP0, it means memory operand (address in A0) */
> -static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
> +static void gen_inc(DisasContext *s1, MemOp ot, int d, int c)
>  {
>      if (s1->prefix & PREFIX_LOCK) {
>          if (d != OR_TMP0) {
> @@ -1421,7 +1421,7 @@ static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
>      set_cc_op(s1, (c > 0 ? CC_OP_INCB : CC_OP_DECB) + ot);
>  }
> 
> -static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
> +static void gen_shift_flags(DisasContext *s, MemOp ot, TCGv result,
>                              TCGv shm1, TCGv count, bool is_right)
>  {
>      TCGv_i32 z32, s32, oldop;
> @@ -1466,7 +1466,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
>      set_cc_op(s, CC_OP_DYNAMIC);
>  }
> 
> -static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
> +static void gen_shift_rm_T1(DisasContext *s, MemOp ot, int op1,
>                              int is_right, int is_arith)
>  {
>      target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
> @@ -1502,7 +1502,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
>      gen_shift_flags(s, ot, s->T0, s->tmp0, s->T1, is_right);
>  }
> 
> -static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
> +static void gen_shift_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
>                              int is_right, int is_arith)
>  {
>      int mask = (ot == MO_64 ? 0x3f : 0x1f);
> @@ -1542,7 +1542,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
>      }
>  }
> 
> -static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
> +static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
>  {
>      target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
>      TCGv_i32 t0, t1;
> @@ -1627,7 +1627,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
>      set_cc_op(s, CC_OP_DYNAMIC);
>  }
> 
> -static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
> +static void gen_rot_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
>                            int is_right)
>  {
>      int mask = (ot == MO_64 ? 0x3f : 0x1f);
> @@ -1705,7 +1705,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
>  }
> 
>  /* XXX: add faster immediate = 1 case */
> -static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
> +static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
>                             int is_right)
>  {
>      gen_compute_eflags(s);
> @@ -1761,7 +1761,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
>  }
> 
>  /* XXX: add faster immediate case */
> -static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
> +static void gen_shiftd_rm_T1(DisasContext *s, MemOp ot, int op1,
>                               bool is_right, TCGv count_in)
>  {
>      target_ulong mask = (ot == MO_64 ? 63 : 31);
> @@ -1842,7 +1842,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
>      tcg_temp_free(count);
>  }
> 
> -static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
> +static void gen_shift(DisasContext *s1, int op, MemOp ot, int d, int s)
>  {
>      if (s != OR_TMP1)
>          gen_op_mov_v_reg(s1, ot, s1->T1, s);
> @@ -1872,7 +1872,7 @@ static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
>      }
>  }
> 
> -static void gen_shifti(DisasContext *s1, int op, TCGMemOp ot, int d, int c)
> +static void gen_shifti(DisasContext *s1, int op, MemOp ot, int d, int c)
>  {
>      switch(op) {
>      case OP_ROL:
> @@ -2149,7 +2149,7 @@ static void gen_add_A0_ds_seg(DisasContext *s)
>  /* generate modrm memory load or store of 'reg'. TMP0 is used if reg ==
>     OR_TMP0 */
>  static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
> -                           TCGMemOp ot, int reg, int is_store)
> +                           MemOp ot, int reg, int is_store)
>  {
>      int mod, rm;
> 
> @@ -2179,7 +2179,7 @@ static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
>      }
>  }
> 
> -static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
> +static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot)
>  {
>      uint32_t ret;
> 
> @@ -2202,7 +2202,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
>      return ret;
>  }
> 
> -static inline int insn_const_size(TCGMemOp ot)
> +static inline int insn_const_size(MemOp ot)
>  {
>      if (ot <= MO_32) {
>          return 1 << ot;
> @@ -2266,7 +2266,7 @@ static inline void gen_jcc(DisasContext *s, int b,
>      }
>  }
> 
> -static void gen_cmovcc1(CPUX86State *env, DisasContext *s, TCGMemOp ot, int b,
> +static void gen_cmovcc1(CPUX86State *env, DisasContext *s, MemOp ot, int b,
>                          int modrm, int reg)
>  {
>      CCPrepare cc;
> @@ -2363,8 +2363,8 @@ static inline void gen_stack_update(DisasContext *s, int addend)
>  /* Generate a push. It depends on ss32, addseg and dflag.  */
>  static void gen_push_v(DisasContext *s, TCGv val)
>  {
> -    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
> -    TCGMemOp a_ot = mo_stacksize(s);
> +    MemOp d_ot = mo_pushpop(s, s->dflag);
> +    MemOp a_ot = mo_stacksize(s);
>      int size = 1 << d_ot;
>      TCGv new_esp = s->A0;
> 
> @@ -2383,9 +2383,9 @@ static void gen_push_v(DisasContext *s, TCGv val)
>  }
> 
>  /* two step pop is necessary for precise exceptions */
> -static TCGMemOp gen_pop_T0(DisasContext *s)
> +static MemOp gen_pop_T0(DisasContext *s)
>  {
> -    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
> +    MemOp d_ot = mo_pushpop(s, s->dflag);
> 
>      gen_lea_v_seg(s, mo_stacksize(s), cpu_regs[R_ESP], R_SS, -1);
>      gen_op_ld_v(s, d_ot, s->T0, s->A0);
> @@ -2393,7 +2393,7 @@ static TCGMemOp gen_pop_T0(DisasContext *s)
>      return d_ot;
>  }
> 
> -static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)
> +static inline void gen_pop_update(DisasContext *s, MemOp ot)
>  {
>      gen_stack_update(s, 1 << ot);
>  }
> @@ -2405,8 +2405,8 @@ static inline void gen_stack_A0(DisasContext *s)
> 
>  static void gen_pusha(DisasContext *s)
>  {
> -    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
> -    TCGMemOp d_ot = s->dflag;
> +    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
> +    MemOp d_ot = s->dflag;
>      int size = 1 << d_ot;
>      int i;
> 
> @@ -2421,8 +2421,8 @@ static void gen_pusha(DisasContext *s)
> 
>  static void gen_popa(DisasContext *s)
>  {
> -    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
> -    TCGMemOp d_ot = s->dflag;
> +    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
> +    MemOp d_ot = s->dflag;
>      int size = 1 << d_ot;
>      int i;
> 
> @@ -2442,8 +2442,8 @@ static void gen_popa(DisasContext *s)
> 
>  static void gen_enter(DisasContext *s, int esp_addend, int level)
>  {
> -    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
> -    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
> +    MemOp d_ot = mo_pushpop(s, s->dflag);
> +    MemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
>      int size = 1 << d_ot;
> 
>      /* Push BP; compute FrameTemp into T1.  */
> @@ -2482,8 +2482,8 @@ static void gen_enter(DisasContext *s, int esp_addend, int level)
> 
>  static void gen_leave(DisasContext *s)
>  {
> -    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
> -    TCGMemOp a_ot = mo_stacksize(s);
> +    MemOp d_ot = mo_pushpop(s, s->dflag);
> +    MemOp a_ot = mo_stacksize(s);
> 
>      gen_lea_v_seg(s, a_ot, cpu_regs[R_EBP], R_SS, -1);
>      gen_op_ld_v(s, d_ot, s->T0, s->A0);
> @@ -3045,7 +3045,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
>      SSEFunc_0_eppi sse_fn_eppi;
>      SSEFunc_0_ppi sse_fn_ppi;
>      SSEFunc_0_eppt sse_fn_eppt;
> -    TCGMemOp ot;
> +    MemOp ot;
> 
>      b &= 0xff;
>      if (s->prefix & PREFIX_DATA)
> @@ -4488,7 +4488,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>      CPUX86State *env = cpu->env_ptr;
>      int b, prefixes;
>      int shift;
> -    TCGMemOp ot, aflag, dflag;
> +    MemOp ot, aflag, dflag;
>      int modrm, reg, rm, mod, op, opreg, val;
>      target_ulong next_eip, tval;
>      int rex_w, rex_r;
> @@ -5567,8 +5567,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>      case 0x1be: /* movsbS Gv, Eb */
>      case 0x1bf: /* movswS Gv, Eb */
>          {
> -            TCGMemOp d_ot;
> -            TCGMemOp s_ot;
> +            MemOp d_ot;
> +            MemOp s_ot;
> 
>              /* d_ot is the size of destination */
>              d_ot = dflag;
> diff --git a/target/m68k/translate.c b/target/m68k/translate.c
> index 60bcfb7..24c1dd3 100644
> --- a/target/m68k/translate.c
> +++ b/target/m68k/translate.c
> @@ -2414,7 +2414,7 @@ DISAS_INSN(cas)
>      uint16_t ext;
>      TCGv load;
>      TCGv cmp;
> -    TCGMemOp opc;
> +    MemOp opc;
> 
>      switch ((insn >> 9) & 3) {
>      case 1:
> diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
> index 9ce65f3..41d1b8b 100644
> --- a/target/microblaze/translate.c
> +++ b/target/microblaze/translate.c
> @@ -919,7 +919,7 @@ static void dec_load(DisasContext *dc)
>      unsigned int size;
>      bool rev = false, ex = false, ea = false;
>      int mem_index = cpu_mmu_index(&dc->cpu->env, false);
> -    TCGMemOp mop;
> +    MemOp mop;
> 
>      mop = dc->opcode & 3;
>      size = 1 << mop;
> @@ -1035,7 +1035,7 @@ static void dec_store(DisasContext *dc)
>      unsigned int size;
>      bool rev = false, ex = false, ea = false;
>      int mem_index = cpu_mmu_index(&dc->cpu->env, false);
> -    TCGMemOp mop;
> +    MemOp mop;
> 
>      mop = dc->opcode & 3;
>      size = 1 << mop;
> diff --git a/target/mips/translate.c b/target/mips/translate.c
> index ca62800..59b5d85 100644
> --- a/target/mips/translate.c
> +++ b/target/mips/translate.c
> @@ -2526,7 +2526,7 @@ typedef struct DisasContext {
>      int32_t CP0_Config5;
>      /* Routine used to access memory */
>      int mem_idx;
> -    TCGMemOp default_tcg_memop_mask;
> +    MemOp default_tcg_memop_mask;
>      uint32_t hflags, saved_hflags;
>      target_ulong btarget;
>      bool ulri;
> @@ -3706,7 +3706,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,
> 
>  /* Store conditional */
>  static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset,
> -                        TCGMemOp tcg_mo, bool eva)
> +                        MemOp tcg_mo, bool eva)
>  {
>      TCGv addr, t0, val;
>      TCGLabel *l1 = gen_new_label();
> @@ -4546,7 +4546,7 @@ static void gen_HILO(DisasContext *ctx, uint32_t opc, int acc, int reg)
>  }
> 
>  static inline void gen_r6_ld(target_long addr, int reg, int memidx,
> -                             TCGMemOp memop)
> +                             MemOp memop)
>  {
>      TCGv t0 = tcg_const_tl(addr);
>      tcg_gen_qemu_ld_tl(t0, t0, memidx, memop);
> @@ -21828,7 +21828,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
>                               extract32(ctx->opcode, 0, 8);
>                      TCGv va = tcg_temp_new();
>                      TCGv t1 = tcg_temp_new();
> -                    TCGMemOp memop = (extract32(ctx->opcode, 8, 3)) ==
> +                    MemOp memop = (extract32(ctx->opcode, 8, 3)) ==
>                                        NM_P_LS_UAWM ? MO_UNALN : 0;
> 
>                      count = (count == 0) ? 8 : count;
> diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
> index 4360ce4..b189c50 100644
> --- a/target/openrisc/translate.c
> +++ b/target/openrisc/translate.c
> @@ -681,7 +681,7 @@ static bool trans_l_lwa(DisasContext *dc, arg_load *a)
>      return true;
>  }
> 
> -static void do_load(DisasContext *dc, arg_load *a, TCGMemOp mop)
> +static void do_load(DisasContext *dc, arg_load *a, MemOp mop)
>  {
>      TCGv ea;
> 
> @@ -763,7 +763,7 @@ static bool trans_l_swa(DisasContext *dc, arg_store *a)
>      return true;
>  }
> 
> -static void do_store(DisasContext *dc, arg_store *a, TCGMemOp mop)
> +static void do_store(DisasContext *dc, arg_store *a, MemOp mop)
>  {
>      TCGv t0 = tcg_temp_new();
>      tcg_gen_addi_tl(t0, cpu_R[a->a], a->i);
> diff --git a/target/ppc/translate.c b/target/ppc/translate.c
> index 4a5de28..31800ed 100644
> --- a/target/ppc/translate.c
> +++ b/target/ppc/translate.c
> @@ -162,7 +162,7 @@ struct DisasContext {
>      int mem_idx;
>      int access_type;
>      /* Translation flags */
> -    TCGMemOp default_tcg_memop_mask;
> +    MemOp default_tcg_memop_mask;
>  #if defined(TARGET_PPC64)
>      bool sf_mode;
>      bool has_cfar;
> @@ -3142,7 +3142,7 @@ static void gen_isync(DisasContext *ctx)
> 
>  #define MEMOP_GET_SIZE(x)  (1 << ((x) & MO_SIZE))
> 
> -static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
> +static void gen_load_locked(DisasContext *ctx, MemOp memop)
>  {
>      TCGv gpr = cpu_gpr[rD(ctx->opcode)];
>      TCGv t0 = tcg_temp_new();
> @@ -3167,7 +3167,7 @@ LARX(lbarx, DEF_MEMOP(MO_UB))
>  LARX(lharx, DEF_MEMOP(MO_UW))
>  LARX(lwarx, DEF_MEMOP(MO_UL))
> 
> -static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
> +static void gen_fetch_inc_conditional(DisasContext *ctx, MemOp memop,
>                                        TCGv EA, TCGCond cond, int addend)
>  {
>      TCGv t = tcg_temp_new();
> @@ -3193,7 +3193,7 @@ static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
>      tcg_temp_free(u);
>  }
> 
> -static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
> +static void gen_ld_atomic(DisasContext *ctx, MemOp memop)
>  {
>      uint32_t gpr_FC = FC(ctx->opcode);
>      TCGv EA = tcg_temp_new();
> @@ -3306,7 +3306,7 @@ static void gen_ldat(DisasContext *ctx)
>  }
>  #endif
> 
> -static void gen_st_atomic(DisasContext *ctx, TCGMemOp memop)
> +static void gen_st_atomic(DisasContext *ctx, MemOp memop)
>  {
>      uint32_t gpr_FC = FC(ctx->opcode);
>      TCGv EA = tcg_temp_new();
> @@ -3389,7 +3389,7 @@ static void gen_stdat(DisasContext *ctx)
>  }
>  #endif
> 
> -static void gen_conditional_store(DisasContext *ctx, TCGMemOp memop)
> +static void gen_conditional_store(DisasContext *ctx, MemOp memop)
>  {
>      TCGLabel *l1 = gen_new_label();
>      TCGLabel *l2 = gen_new_label();
> diff --git a/target/riscv/insn_trans/trans_rva.inc.c b/target/riscv/insn_trans/trans_rva.inc.c
> index fadd888..be8a9f0 100644
> --- a/target/riscv/insn_trans/trans_rva.inc.c
> +++ b/target/riscv/insn_trans/trans_rva.inc.c
> @@ -18,7 +18,7 @@
>   * this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
> 
> -static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
> +static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
>  {
>      TCGv src1 = tcg_temp_new();
>      /* Put addr in load_res, data in load_val.  */
> @@ -37,7 +37,7 @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
>      return true;
>  }
> 
> -static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
> +static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
>  {
>      TCGv src1 = tcg_temp_new();
>      TCGv src2 = tcg_temp_new();
> @@ -82,8 +82,8 @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
>  }
> 
>  static bool gen_amo(DisasContext *ctx, arg_atomic *a,
> -                    void(*func)(TCGv, TCGv, TCGv, TCGArg, TCGMemOp),
> -                    TCGMemOp mop)
> +                    void(*func)(TCGv, TCGv, TCGv, TCGArg, MemOp),
> +                    MemOp mop)
>  {
>      TCGv src1 = tcg_temp_new();
>      TCGv src2 = tcg_temp_new();
> diff --git a/target/riscv/insn_trans/trans_rvi.inc.c b/target/riscv/insn_trans/trans_rvi.inc.c
> index ea64731..cf440d1 100644
> --- a/target/riscv/insn_trans/trans_rvi.inc.c
> +++ b/target/riscv/insn_trans/trans_rvi.inc.c
> @@ -135,7 +135,7 @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
>      return gen_branch(ctx, a, TCG_COND_GEU);
>  }
> 
> -static bool gen_load(DisasContext *ctx, arg_lb *a, TCGMemOp memop)
> +static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
>  {
>      TCGv t0 = tcg_temp_new();
>      TCGv t1 = tcg_temp_new();
> @@ -174,7 +174,7 @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
>      return gen_load(ctx, a, MO_TEUW);
>  }
> 
> -static bool gen_store(DisasContext *ctx, arg_sb *a, TCGMemOp memop)
> +static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
>  {
>      TCGv t0 = tcg_temp_new();
>      TCGv dat = tcg_temp_new();
> diff --git a/target/s390x/translate.c b/target/s390x/translate.c
> index ac0d8b6..2927247 100644
> --- a/target/s390x/translate.c
> +++ b/target/s390x/translate.c
> @@ -152,7 +152,7 @@ static inline int vec_full_reg_offset(uint8_t reg)
>      return offsetof(CPUS390XState, vregs[reg][0]);
>  }
> 
> -static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
> +static inline int vec_reg_offset(uint8_t reg, uint8_t enr, MemOp es)
>  {
>      /* Convert element size (es) - e.g. MO_8 - to bytes */
>      const uint8_t bytes = 1 << es;
> @@ -2262,7 +2262,7 @@ static DisasJumpType op_csst(DisasContext *s, DisasOps *o)
>  #ifndef CONFIG_USER_ONLY
>  static DisasJumpType op_csp(DisasContext *s, DisasOps *o)
>  {
> -    TCGMemOp mop = s->insn->data;
> +    MemOp mop = s->insn->data;
>      TCGv_i64 addr, old, cc;
>      TCGLabel *lab = gen_new_label();
> 
> @@ -3228,7 +3228,7 @@ static DisasJumpType op_lm64(DisasContext *s, DisasOps *o)
>  static DisasJumpType op_lpd(DisasContext *s, DisasOps *o)
>  {
>      TCGv_i64 a1, a2;
> -    TCGMemOp mop = s->insn->data;
> +    MemOp mop = s->insn->data;
> 
>      /* In a parallel context, stop the world and single step.  */
>      if (tb_cflags(s->base.tb) & CF_PARALLEL) {
> diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
> index 41d5cf8..4c56bbb 100644
> --- a/target/s390x/translate_vx.inc.c
> +++ b/target/s390x/translate_vx.inc.c
> @@ -57,13 +57,13 @@
>  #define FPF_LONG        3
>  #define FPF_EXT         4
> 
> -static inline bool valid_vec_element(uint8_t enr, TCGMemOp es)
> +static inline bool valid_vec_element(uint8_t enr, MemOp es)
>  {
>      return !(enr & ~(NUM_VEC_ELEMENTS(es) - 1));
>  }
> 
>  static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
> -                                 TCGMemOp memop)
> +                                 MemOp memop)
>  {
>      const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);
> 
> @@ -96,7 +96,7 @@ static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
>  }
> 
>  static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
> -                                 TCGMemOp memop)
> +                                 MemOp memop)
>  {
>      const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);
> 
> @@ -123,7 +123,7 @@ static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
>  }
> 
>  static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
> -                                  TCGMemOp memop)
> +                                  MemOp memop)
>  {
>      const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);
> 
> @@ -146,7 +146,7 @@ static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
>  }
> 
>  static void write_vec_element_i32(TCGv_i32 src, int reg, uint8_t enr,
> -                                  TCGMemOp memop)
> +                                  MemOp memop)
>  {
>      const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);
> 
> diff --git a/target/sparc/translate.c b/target/sparc/translate.c
> index 091bab5..bef9ce6 100644
> --- a/target/sparc/translate.c
> +++ b/target/sparc/translate.c
> @@ -2019,7 +2019,7 @@ static inline void gen_ne_fop_QD(DisasContext *dc, int rd, int rs,
>  }
> 
>  static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
> -                     TCGv addr, int mmu_idx, TCGMemOp memop)
> +                     TCGv addr, int mmu_idx, MemOp memop)
>  {
>      gen_address_mask(dc, addr);
>      tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
> @@ -2050,10 +2050,10 @@ typedef struct {
>      ASIType type;
>      int asi;
>      int mem_idx;
> -    TCGMemOp memop;
> +    MemOp memop;
>  } DisasASI;
> 
> -static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
> +static DisasASI get_asi(DisasContext *dc, int insn, MemOp memop)
>  {
>      int asi = GET_FIELD(insn, 19, 26);
>      ASIType type = GET_ASI_HELPER;
> @@ -2267,7 +2267,7 @@ static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
>  }
> 
>  static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
> -                       int insn, TCGMemOp memop)
> +                       int insn, MemOp memop)
>  {
>      DisasASI da = get_asi(dc, insn, memop);
> 
> @@ -2305,7 +2305,7 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
>  }
> 
>  static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
> -                       int insn, TCGMemOp memop)
> +                       int insn, MemOp memop)
>  {
>      DisasASI da = get_asi(dc, insn, memop);
> 
> @@ -2511,7 +2511,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
>      case GET_ASI_BLOCK:
>          /* Valid for lddfa on aligned registers only.  */
>          if (size == 8 && (rd & 7) == 0) {
> -            TCGMemOp memop;
> +            MemOp memop;
>              TCGv eight;
>              int i;
> 
> @@ -2625,7 +2625,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
>      case GET_ASI_BLOCK:
>          /* Valid for stdfa on aligned registers only.  */
>          if (size == 8 && (rd & 7) == 0) {
> -            TCGMemOp memop;
> +            MemOp memop;
>              TCGv eight;
>              int i;
> 
> diff --git a/target/tilegx/translate.c b/target/tilegx/translate.c
> index c46a4ab..68dd4aa 100644
> --- a/target/tilegx/translate.c
> +++ b/target/tilegx/translate.c
> @@ -290,7 +290,7 @@ static void gen_cmul2(TCGv tdest, TCGv tsrca, TCGv tsrcb, int sh, int rd)
>  }
> 
>  static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
> -                              unsigned srcb, TCGMemOp memop, const char *name)
> +                              unsigned srcb, MemOp memop, const char *name)
>  {
>      if (dest) {
>          return TILEGX_EXCP_OPCODE_UNKNOWN;
> @@ -305,7 +305,7 @@ static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
>  }
> 
>  static TileExcp gen_st_add_opcode(DisasContext *dc, unsigned srca, unsigned srcb,
> -                                  int imm, TCGMemOp memop, const char *name)
> +                                  int imm, MemOp memop, const char *name)
>  {
>      TCGv tsrca = load_gr(dc, srca);
>      TCGv tsrcb = load_gr(dc, srcb);
> @@ -496,7 +496,7 @@ static TileExcp gen_rr_opcode(DisasContext *dc, unsigned opext,
>  {
>      TCGv tdest, tsrca;
>      const char *mnemonic;
> -    TCGMemOp memop;
> +    MemOp memop;
>      TileExcp ret = TILEGX_EXCP_NONE;
>      bool prefetch_nofault = false;
> 
> @@ -1478,7 +1478,7 @@ static TileExcp gen_rri_opcode(DisasContext *dc, unsigned opext,
>      TCGv tsrca = load_gr(dc, srca);
>      bool prefetch_nofault = false;
>      const char *mnemonic;
> -    TCGMemOp memop;
> +    MemOp memop;
>      int i2, i3;
>      TCGv t0;
> 
> @@ -2106,7 +2106,7 @@ static TileExcp decode_y2(DisasContext *dc, tilegx_bundle_bits bundle)
>      unsigned srca = get_SrcA_Y2(bundle);
>      unsigned srcbdest = get_SrcBDest_Y2(bundle);
>      const char *mnemonic;
> -    TCGMemOp memop;
> +    MemOp memop;
>      bool prefetch_nofault = false;
> 
>      switch (OEY2(opc, mode)) {
> diff --git a/target/tricore/translate.c b/target/tricore/translate.c
> index dc2a65f..87a5f50 100644
> --- a/target/tricore/translate.c
> +++ b/target/tricore/translate.c
> @@ -227,7 +227,7 @@ static inline void generate_trap(DisasContext *ctx, int class, int tin);
>  /* Functions for load/save to/from memory */
> 
>  static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
> -                                 int16_t con, TCGMemOp mop)
> +                                 int16_t con, MemOp mop)
>  {
>      TCGv temp = tcg_temp_new();
>      tcg_gen_addi_tl(temp, r2, con);
> @@ -236,7 +236,7 @@ static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
>  }
> 
>  static inline void gen_offset_st(DisasContext *ctx, TCGv r1, TCGv r2,
> -                                 int16_t con, TCGMemOp mop)
> +                                 int16_t con, MemOp mop)
>  {
>      TCGv temp = tcg_temp_new();
>      tcg_gen_addi_tl(temp, r2, con);
> @@ -284,7 +284,7 @@ static void gen_offset_ld_2regs(TCGv rh, TCGv rl, TCGv base, int16_t con,
>  }
> 
>  static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
> -                           TCGMemOp mop)
> +                           MemOp mop)
>  {
>      TCGv temp = tcg_temp_new();
>      tcg_gen_addi_tl(temp, r2, off);
> @@ -294,7 +294,7 @@ static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
>  }
> 
>  static void gen_ld_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
> -                           TCGMemOp mop)
> +                           MemOp mop)
>  {
>      TCGv temp = tcg_temp_new();
>      tcg_gen_addi_tl(temp, r2, off);
> diff --git a/tcg/README b/tcg/README
> index 21fcdf7..b4382fa 100644
> --- a/tcg/README
> +++ b/tcg/README
> @@ -512,7 +512,7 @@ Both t0 and t1 may be split into little-endian ordered pairs of registers
>  if dealing with 64-bit quantities on a 32-bit host.
> 
>  The memidx selects the qemu tlb index to use (e.g. user or kernel access).
> -The flags are the TCGMemOp bits, selecting the sign, width, and endianness
> +The flags are the MemOp bits, selecting the sign, width, and endianness
>  of the memory access.
> 
>  For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a
> diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
> index 0713448..3f92101 100644
> --- a/tcg/aarch64/tcg-target.inc.c
> +++ b/tcg/aarch64/tcg-target.inc.c
> @@ -1423,7 +1423,7 @@ static inline void tcg_out_rev16(TCGContext *s, TCGReg rd, TCGReg rn)
>      tcg_out_insn(s, 3507, REV16, TCG_TYPE_I32, rd, rn);
>  }
> 
> -static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
> +static inline void tcg_out_sxt(TCGContext *s, TCGType ext, MemOp s_bits,
>                                 TCGReg rd, TCGReg rn)
>  {
>      /* Using ALIASes SXTB, SXTH, SXTW, of SBFM Xd, Xn, #0, #7|15|31 */
> @@ -1431,7 +1431,7 @@ static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
>      tcg_out_sbfm(s, ext, rd, rn, 0, bits);
>  }
> 
> -static inline void tcg_out_uxt(TCGContext *s, TCGMemOp s_bits,
> +static inline void tcg_out_uxt(TCGContext *s, MemOp s_bits,
>                                 TCGReg rd, TCGReg rn)
>  {
>      /* Using ALIASes UXTB, UXTH of UBFM Wd, Wn, #0, #7|15 */
> @@ -1580,8 +1580,8 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp size = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp size = opc & MO_SIZE;
> 
>      if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
>          return false;
> @@ -1605,8 +1605,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp size = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp size = opc & MO_SIZE;
> 
>      if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
>          return false;
> @@ -1649,7 +1649,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
>     slow path for the failure case, which will be patched later when finalizing
>     the slow path. Generated code returns the host addend in X1,
>     clobbers X0,X2,X3,TMP. */
> -static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
> +static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
>                               tcg_insn_unit **label_ptr, int mem_index,
>                               bool is_read)
>  {
> @@ -1709,11 +1709,11 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
> 
>  #endif /* CONFIG_SOFTMMU */
> 
> -static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
> +static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
>                                     TCGReg data_r, TCGReg addr_r,
>                                     TCGType otype, TCGReg off_r)
>  {
> -    const TCGMemOp bswap = memop & MO_BSWAP;
> +    const MemOp bswap = memop & MO_BSWAP;
> 
>      switch (memop & MO_SSIZE) {
>      case MO_UB:
> @@ -1765,11 +1765,11 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
>      }
>  }
> 
> -static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
> +static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
>                                     TCGReg data_r, TCGReg addr_r,
>                                     TCGType otype, TCGReg off_r)
>  {
> -    const TCGMemOp bswap = memop & MO_BSWAP;
> +    const MemOp bswap = memop & MO_BSWAP;
> 
>      switch (memop & MO_SIZE) {
>      case MO_8:
> @@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
>  static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>                              TCGMemOpIdx oi, TCGType ext)
>  {
> -    TCGMemOp memop = get_memop(oi);
> +    MemOp memop = get_memop(oi);
>      const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
>  #ifdef CONFIG_SOFTMMU
>      unsigned mem_index = get_mmuidx(oi);
> @@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>  static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>                              TCGMemOpIdx oi)
>  {
> -    TCGMemOp memop = get_memop(oi);
> +    MemOp memop = get_memop(oi);
>      const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
>  #ifdef CONFIG_SOFTMMU
>      unsigned mem_index = get_mmuidx(oi);
> diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
> index ece88dc..94d80d7 100644
> --- a/tcg/arm/tcg-target.inc.c
> +++ b/tcg/arm/tcg-target.inc.c
> @@ -1233,7 +1233,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
>     containing the addend of the tlb entry.  Clobbers R0, R1, R2, TMP.  */
> 
>  static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
> -                               TCGMemOp opc, int mem_index, bool is_load)
> +                               MemOp opc, int mem_index, bool is_load)
>  {
>      int cmp_off = (is_load ? offsetof(CPUTLBEntry, addr_read)
>                     : offsetof(CPUTLBEntry, addr_write));
> @@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGReg argreg, datalo, datahi;
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      void *func;
> 
>      if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
> @@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGReg argreg, datalo, datahi;
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
> 
>      if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
>          return false;
> @@ -1453,11 +1453,11 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  }
>  #endif /* SOFTMMU */
> 
> -static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
> +static inline void tcg_out_qemu_ld_index(TCGContext *s, MemOp opc,
>                                           TCGReg datalo, TCGReg datahi,
>                                           TCGReg addrlo, TCGReg addend)
>  {
> -    TCGMemOp bswap = opc & MO_BSWAP;
> +    MemOp bswap = opc & MO_BSWAP;
> 
>      switch (opc & MO_SSIZE) {
>      case MO_UB:
> @@ -1514,11 +1514,11 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
>      }
>  }
> 
> -static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
> +static inline void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc,
>                                            TCGReg datalo, TCGReg datahi,
>                                            TCGReg addrlo)
>  {
> -    TCGMemOp bswap = opc & MO_BSWAP;
> +    MemOp bswap = opc & MO_BSWAP;
> 
>      switch (opc & MO_SSIZE) {
>      case MO_UB:
> @@ -1577,7 +1577,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
>  {
>      TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #ifdef CONFIG_SOFTMMU
>      int mem_index;
>      TCGReg addend;
> @@ -1614,11 +1614,11 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
>  #endif
>  }
> 
> -static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
> +static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, MemOp opc,
>                                           TCGReg datalo, TCGReg datahi,
>                                           TCGReg addrlo, TCGReg addend)
>  {
> -    TCGMemOp bswap = opc & MO_BSWAP;
> +    MemOp bswap = opc & MO_BSWAP;
> 
>      switch (opc & MO_SIZE) {
>      case MO_8:
> @@ -1659,11 +1659,11 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
>      }
>  }
> 
> -static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
> +static inline void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc,
>                                            TCGReg datalo, TCGReg datahi,
>                                            TCGReg addrlo)
>  {
> -    TCGMemOp bswap = opc & MO_BSWAP;
> +    MemOp bswap = opc & MO_BSWAP;
> 
>      switch (opc & MO_SIZE) {
>      case MO_8:
> @@ -1708,7 +1708,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
>  {
>      TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #ifdef CONFIG_SOFTMMU
>      int mem_index;
>      TCGReg addend;
> diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
> index 6ddeebf..9d8ed97 100644
> --- a/tcg/i386/tcg-target.inc.c
> +++ b/tcg/i386/tcg-target.inc.c
> @@ -1697,7 +1697,7 @@ static void * const qemu_st_helpers[16] = {
>     First argument register is clobbered.  */
> 
>  static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
> -                                    int mem_index, TCGMemOp opc,
> +                                    int mem_index, MemOp opc,
>                                      tcg_insn_unit **label_ptr, int which)
>  {
>      const TCGReg r0 = TCG_REG_L0;
> @@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      TCGReg data_reg;
>      tcg_insn_unit **label_ptr = &l->label_ptr[0];
>      int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
> @@ -1895,8 +1895,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp s_bits = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp s_bits = opc & MO_SIZE;
>      tcg_insn_unit **label_ptr = &l->label_ptr[0];
>      TCGReg retaddr;
> 
> @@ -1995,10 +1995,10 @@ static inline int setup_guest_base_seg(void)
> 
>  static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
>                                     TCGReg base, int index, intptr_t ofs,
> -                                   int seg, bool is64, TCGMemOp memop)
> +                                   int seg, bool is64, MemOp memop)
>  {
> -    const TCGMemOp real_bswap = memop & MO_BSWAP;
> -    TCGMemOp bswap = real_bswap;
> +    const MemOp real_bswap = memop & MO_BSWAP;
> +    MemOp bswap = real_bswap;
>      int rexw = is64 * P_REXW;
>      int movop = OPC_MOVL_GvEv;
> 
> @@ -2103,7 +2103,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
>      TCGReg datalo, datahi, addrlo;
>      TCGReg addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      int mem_index;
>      tcg_insn_unit *label_ptr[2];
> @@ -2137,15 +2137,15 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
> 
>  static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
>                                     TCGReg base, int index, intptr_t ofs,
> -                                   int seg, TCGMemOp memop)
> +                                   int seg, MemOp memop)
>  {
>      /* ??? Ideally we wouldn't need a scratch register.  For user-only,
>         we could perform the bswap twice to restore the original value
>         instead of moving to the scratch.  But as it is, the L constraint
>         means that TCG_REG_L0 is definitely free here.  */
>      const TCGReg scratch = TCG_REG_L0;
> -    const TCGMemOp real_bswap = memop & MO_BSWAP;
> -    TCGMemOp bswap = real_bswap;
> +    const MemOp real_bswap = memop & MO_BSWAP;
> +    MemOp bswap = real_bswap;
>      int movop = OPC_MOVL_EvGv;
> 
>      if (have_movbe && real_bswap) {
> @@ -2221,7 +2221,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
>      TCGReg datalo, datahi, addrlo;
>      TCGReg addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      int mem_index;
>      tcg_insn_unit *label_ptr[2];
> diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
> index 41bff32..5442167 100644
> --- a/tcg/mips/tcg-target.inc.c
> +++ b/tcg/mips/tcg-target.inc.c
> @@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
>                               TCGReg addrh, TCGMemOpIdx oi,
>                               tcg_insn_unit *label_ptr[2], bool is_load)
>  {
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      unsigned s_bits = opc & MO_SIZE;
>      unsigned a_bits = get_alignment_bits(opc);
>      int mem_index = get_mmuidx(oi);
> @@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      TCGReg v0;
>      int i;
> 
> @@ -1363,8 +1363,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp s_bits = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp s_bits = opc & MO_SIZE;
>      int i;
> 
>      /* resolve label address */
> @@ -1413,7 +1413,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  #endif
> 
>  static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
> -                                   TCGReg base, TCGMemOp opc, bool is_64)
> +                                   TCGReg base, MemOp opc, bool is_64)
>  {
>      switch (opc & (MO_SSIZE | MO_BSWAP)) {
>      case MO_UB:
> @@ -1521,7 +1521,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg addr_regl, addr_regh __attribute__((unused));
>      TCGReg data_regl, data_regh;
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      tcg_insn_unit *label_ptr[2];
>  #endif
> @@ -1558,7 +1558,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>  }
> 
>  static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
> -                                   TCGReg base, TCGMemOp opc)
> +                                   TCGReg base, MemOp opc)
>  {
>      /* Don't clutter the code below with checks to avoid bswapping ZERO.  */
>      if ((lo | hi) == 0) {
> @@ -1624,7 +1624,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg addr_regl, addr_regh __attribute__((unused));
>      TCGReg data_regl, data_regh;
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      tcg_insn_unit *label_ptr[2];
>  #endif
> diff --git a/tcg/optimize.c b/tcg/optimize.c
> index d2424de..a89ffda 100644
> --- a/tcg/optimize.c
> +++ b/tcg/optimize.c
> @@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s)
>          CASE_OP_32_64(qemu_ld):
>              {
>                  TCGMemOpIdx oi = op->args[nb_oargs + nb_iargs];
> -                TCGMemOp mop = get_memop(oi);
> +                MemOp mop = get_memop(oi);
>                  if (!(mop & MO_SIGN)) {
>                      mask = (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1;
>                  }
> diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
> index 852b894..815edac 100644
> --- a/tcg/ppc/tcg-target.inc.c
> +++ b/tcg/ppc/tcg-target.inc.c
> @@ -1506,7 +1506,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -32768);
>     in CR7, loads the addend of the TLB into R3, and returns the register
>     containing the guest address (zero-extended into R4).  Clobbers R0 and R2. */
> 
> -static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
> +static TCGReg tcg_out_tlb_read(TCGContext *s, MemOp opc,
>                                 TCGReg addrlo, TCGReg addrhi,
>                                 int mem_index, bool is_read)
>  {
> @@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      TCGReg hi, lo, arg = TCG_REG_R3;
> 
>      if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
> @@ -1680,8 +1680,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp s_bits = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp s_bits = opc & MO_SIZE;
>      TCGReg hi, lo, arg = TCG_REG_R3;
> 
>      if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
> @@ -1744,7 +1744,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg datalo, datahi, addrlo, rbase;
>      TCGReg addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc, s_bits;
> +    MemOp opc, s_bits;
>  #ifdef CONFIG_SOFTMMU
>      int mem_index;
>      tcg_insn_unit *label_ptr;
> @@ -1819,7 +1819,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg datalo, datahi, addrlo, rbase;
>      TCGReg addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc, s_bits;
> +    MemOp opc, s_bits;
>  #ifdef CONFIG_SOFTMMU
>      int mem_index;
>      tcg_insn_unit *label_ptr;
> diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
> index 3e76bf5..7018509 100644
> --- a/tcg/riscv/tcg-target.inc.c
> +++ b/tcg/riscv/tcg-target.inc.c
> @@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
>                               TCGReg addrh, TCGMemOpIdx oi,
>                               tcg_insn_unit **label_ptr, bool is_load)
>  {
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      unsigned s_bits = opc & MO_SIZE;
>      unsigned a_bits = get_alignment_bits(opc);
>      tcg_target_long compare_mask;
> @@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      TCGReg a0 = tcg_target_call_iarg_regs[0];
>      TCGReg a1 = tcg_target_call_iarg_regs[1];
>      TCGReg a2 = tcg_target_call_iarg_regs[2];
> @@ -1077,8 +1077,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp s_bits = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp s_bits = opc & MO_SIZE;
>      TCGReg a0 = tcg_target_call_iarg_regs[0];
>      TCGReg a1 = tcg_target_call_iarg_regs[1];
>      TCGReg a2 = tcg_target_call_iarg_regs[2];
> @@ -1121,9 +1121,9 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  #endif /* CONFIG_SOFTMMU */
> 
>  static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
> -                                   TCGReg base, TCGMemOp opc, bool is_64)
> +                                   TCGReg base, MemOp opc, bool is_64)
>  {
> -    const TCGMemOp bswap = opc & MO_BSWAP;
> +    const MemOp bswap = opc & MO_BSWAP;
> 
>      /* We don't yet handle byteswapping, assert */
>      g_assert(!bswap);
> @@ -1172,7 +1172,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg addr_regl, addr_regh __attribute__((unused));
>      TCGReg data_regl, data_regh;
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      tcg_insn_unit *label_ptr[1];
>  #endif
> @@ -1208,9 +1208,9 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>  }
> 
>  static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
> -                                   TCGReg base, TCGMemOp opc)
> +                                   TCGReg base, MemOp opc)
>  {
> -    const TCGMemOp bswap = opc & MO_BSWAP;
> +    const MemOp bswap = opc & MO_BSWAP;
> 
>      /* We don't yet handle byteswapping, assert */
>      g_assert(!bswap);
> @@ -1243,7 +1243,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg addr_regl, addr_regh __attribute__((unused));
>      TCGReg data_regl, data_regh;
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      tcg_insn_unit *label_ptr[1];
>  #endif
> diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
> index fe42939..8aaa4ce 100644
> --- a/tcg/s390/tcg-target.inc.c
> +++ b/tcg/s390/tcg-target.inc.c
> @@ -1430,7 +1430,7 @@ static void tcg_out_call(TCGContext *s, tcg_insn_unit *dest)
>      }
>  }
> 
> -static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
> +static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data,
>                                     TCGReg base, TCGReg index, int disp)
>  {
>      switch (opc & (MO_SSIZE | MO_BSWAP)) {
> @@ -1489,7 +1489,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
>      }
>  }
> 
> -static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
> +static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
>                                     TCGReg base, TCGReg index, int disp)
>  {
>      switch (opc & (MO_SIZE | MO_BSWAP)) {
> @@ -1544,7 +1544,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 19));
> 
>  /* Load and compare a TLB entry, leaving the flags set.  Loads the TLB
>     addend into R2.  Returns a register with the sanitized guest address.  */
> -static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, TCGMemOp opc,
> +static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
>                                 int mem_index, bool is_ld)
>  {
>      unsigned s_bits = opc & MO_SIZE;
> @@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>      TCGReg addr_reg = lb->addrlo_reg;
>      TCGReg data_reg = lb->datalo_reg;
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
> 
>      if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
>                       (intptr_t)s->code_ptr, 2)) {
> @@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>      TCGReg addr_reg = lb->addrlo_reg;
>      TCGReg data_reg = lb->datalo_reg;
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
> 
>      if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
>                       (intptr_t)s->code_ptr, 2)) {
> @@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGReg *addr_reg,
>  static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
>                              TCGMemOpIdx oi)
>  {
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>  #ifdef CONFIG_SOFTMMU
>      unsigned mem_index = get_mmuidx(oi);
>      tcg_insn_unit *label_ptr;
> @@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
>  static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
>                              TCGMemOpIdx oi)
>  {
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>  #ifdef CONFIG_SOFTMMU
>      unsigned mem_index = get_mmuidx(oi);
>      tcg_insn_unit *label_ptr;
> diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
> index 10b1cea..d7986cd 100644
> --- a/tcg/sparc/tcg-target.inc.c
> +++ b/tcg/sparc/tcg-target.inc.c
> @@ -1081,7 +1081,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12));
>     is in the returned register, maybe %o0.  The TLB addend is in %o1.  */
> 
>  static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index,
> -                               TCGMemOp opc, int which)
> +                               MemOp opc, int which)
>  {
>      int fast_off = TLB_MASK_TABLE_OFS(mem_index);
>      int mask_off = fast_off + offsetof(CPUTLBDescFast, mask);
> @@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] = {
>  static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
>                              TCGMemOpIdx oi, bool is_64)
>  {
> -    TCGMemOp memop = get_memop(oi);
> +    MemOp memop = get_memop(oi);
>  #ifdef CONFIG_SOFTMMU
>      unsigned memi = get_mmuidx(oi);
>      TCGReg addrz, param;
> @@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
>  static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
>                              TCGMemOpIdx oi)
>  {
> -    TCGMemOp memop = get_memop(oi);
> +    MemOp memop = get_memop(oi);
>  #ifdef CONFIG_SOFTMMU
>      unsigned memi = get_mmuidx(oi);
>      TCGReg addrz, param;
> diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
> index 587d092..e87c327 100644
> --- a/tcg/tcg-op.c
> +++ b/tcg/tcg-op.c
> @@ -2714,7 +2714,7 @@ void tcg_gen_lookup_and_goto_ptr(void)
>      }
>  }
> 
> -static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
> +static inline MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
>  {
>      /* Trigger the asserts within as early as possible.  */
>      (void)get_alignment_bits(op);
> @@ -2743,7 +2743,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
>  }
> 
>  static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
> -                         TCGMemOp memop, TCGArg idx)
> +                         MemOp memop, TCGArg idx)
>  {
>      TCGMemOpIdx oi = make_memop_idx(memop, idx);
>  #if TARGET_LONG_BITS == 32
> @@ -2758,7 +2758,7 @@ static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
>  }
> 
>  static void gen_ldst_i64(TCGOpcode opc, TCGv_i64 val, TCGv addr,
> -                         TCGMemOp memop, TCGArg idx)
> +                         MemOp memop, TCGArg idx)
>  {
>      TCGMemOpIdx oi = make_memop_idx(memop, idx);
>  #if TARGET_LONG_BITS == 32
> @@ -2788,9 +2788,9 @@ static void tcg_gen_req_mo(TCGBar type)
>      }
>  }
> 
> -void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
> +void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
>  {
> -    TCGMemOp orig_memop;
> +    MemOp orig_memop;
> 
>      tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
>      memop = tcg_canonicalize_memop(memop, 0, 0);
> @@ -2825,7 +2825,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
>      }
>  }
> 
> -void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
> +void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
>  {
>      TCGv_i32 swap = NULL;
> 
> @@ -2858,9 +2858,9 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
>      }
>  }
> 
> -void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
> +void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
>  {
> -    TCGMemOp orig_memop;
> +    MemOp orig_memop;
> 
>      if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
>          tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
> @@ -2911,7 +2911,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
>      }
>  }
> 
> -void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
> +void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
>  {
>      TCGv_i64 swap = NULL;
> 
> @@ -2953,7 +2953,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
>      }
>  }
> 
> -static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
> +static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc)
>  {
>      switch (opc & MO_SSIZE) {
>      case MO_SB:
> @@ -2974,7 +2974,7 @@ static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
>      }
>  }
> 
> -static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, TCGMemOp opc)
> +static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, MemOp opc)
>  {
>      switch (opc & MO_SSIZE) {
>      case MO_SB:
> @@ -3034,7 +3034,7 @@ static void * const table_cmpxchg[16] = {
>  };
> 
>  void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
> -                                TCGv_i32 newv, TCGArg idx, TCGMemOp memop)
> +                                TCGv_i32 newv, TCGArg idx, MemOp memop)
>  {
>      memop = tcg_canonicalize_memop(memop, 0, 0);
> 
> @@ -3078,7 +3078,7 @@ void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
>  }
> 
>  void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
> -                                TCGv_i64 newv, TCGArg idx, TCGMemOp memop)
> +                                TCGv_i64 newv, TCGArg idx, MemOp memop)
>  {
>      memop = tcg_canonicalize_memop(memop, 1, 0);
> 
> @@ -3142,7 +3142,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
>  }
> 
>  static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
> -                                TCGArg idx, TCGMemOp memop, bool new_val,
> +                                TCGArg idx, MemOp memop, bool new_val,
>                                  void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
>  {
>      TCGv_i32 t1 = tcg_temp_new_i32();
> @@ -3160,7 +3160,7 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
>  }
> 
>  static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
> -                             TCGArg idx, TCGMemOp memop, void * const table[])
> +                             TCGArg idx, MemOp memop, void * const table[])
>  {
>      gen_atomic_op_i32 gen;
> 
> @@ -3185,7 +3185,7 @@ static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
>  }
> 
>  static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
> -                                TCGArg idx, TCGMemOp memop, bool new_val,
> +                                TCGArg idx, MemOp memop, bool new_val,
>                                  void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64))
>  {
>      TCGv_i64 t1 = tcg_temp_new_i64();
> @@ -3203,7 +3203,7 @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
>  }
> 
>  static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
> -                             TCGArg idx, TCGMemOp memop, void * const table[])
> +                             TCGArg idx, MemOp memop, void * const table[])
>  {
>      memop = tcg_canonicalize_memop(memop, 1, 0);
> 
> @@ -3257,7 +3257,7 @@ static void * const table_##NAME[16] = {                                \
>      WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
>  };                                                                      \
>  void tcg_gen_atomic_##NAME##_i32                                        \
> -    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
> +    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, MemOp memop)    \
>  {                                                                       \
>      if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
>          do_atomic_op_i32(ret, addr, val, idx, memop, table_##NAME);     \
> @@ -3267,7 +3267,7 @@ void tcg_gen_atomic_##NAME##_i32                                        \
>      }                                                                   \
>  }                                                                       \
>  void tcg_gen_atomic_##NAME##_i64                                        \
> -    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, TCGMemOp memop) \
> +    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, MemOp memop)    \
>  {                                                                       \
>      if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
>          do_atomic_op_i64(ret, addr, val, idx, memop, table_##NAME);     \
> diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
> index 2d4dd5c..e9cf172 100644
> --- a/tcg/tcg-op.h
> +++ b/tcg/tcg-op.h
> @@ -851,10 +851,10 @@ void tcg_gen_lookup_and_goto_ptr(void);
>  #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i64
>  #endif
> 
> -void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
> -void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
> -void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
> -void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
> +void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, MemOp);
> +void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, MemOp);
> +void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, MemOp);
> +void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, MemOp);
> 
>  static inline void tcg_gen_qemu_ld8u(TCGv ret, TCGv addr, int mem_index)
>  {
> @@ -912,46 +912,46 @@ static inline void tcg_gen_qemu_st64(TCGv_i64 arg, TCGv addr, int mem_index)
>  }
> 
>  void tcg_gen_atomic_cmpxchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGv_i32,
> -                                TCGArg, TCGMemOp);
> +                                TCGArg, MemOp);
>  void tcg_gen_atomic_cmpxchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGv_i64,
> -                                TCGArg, TCGMemOp);
> -
> -void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -
> -void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -
> -void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> +                                TCGArg, MemOp);
> +
> +void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +
> +void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +
> +void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> 
>  void tcg_gen_mov_vec(TCGv_vec, TCGv_vec);
>  void tcg_gen_dup_i32_vec(unsigned vece, TCGv_vec, TCGv_i32);
> diff --git a/tcg/tcg.c b/tcg/tcg.c
> index be2c33c..aa9931f 100644
> --- a/tcg/tcg.c
> +++ b/tcg/tcg.c
> @@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
>              case INDEX_op_qemu_st_i64:
>                  {
>                      TCGMemOpIdx oi = op->args[k++];
> -                    TCGMemOp op = get_memop(oi);
> +                    MemOp op = get_memop(oi);
>                      unsigned ix = get_mmuidx(oi);
> 
>                      if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) {
> diff --git a/tcg/tcg.h b/tcg/tcg.h
> index b411e17..a37181c 100644
> --- a/tcg/tcg.h
> +++ b/tcg/tcg.h
> @@ -26,6 +26,7 @@
>  #define TCG_H
> 
>  #include "cpu.h"
> +#include "exec/memop.h"
>  #include "exec/tb-context.h"
>  #include "qemu/bitops.h"
>  #include "qemu/queue.h"
> @@ -309,101 +310,13 @@ typedef enum TCGType {
>  #endif
>  } TCGType;
> 
> -/* Constants for qemu_ld and qemu_st for the Memory Operation field.  */
> -typedef enum TCGMemOp {
> -    MO_8     = 0,
> -    MO_16    = 1,
> -    MO_32    = 2,
> -    MO_64    = 3,
> -    MO_SIZE  = 3,   /* Mask for the above.  */
> -
> -    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
> -
> -    MO_BSWAP = 8,   /* Host reverse endian.  */
> -#ifdef HOST_WORDS_BIGENDIAN
> -    MO_LE    = MO_BSWAP,
> -    MO_BE    = 0,
> -#else
> -    MO_LE    = 0,
> -    MO_BE    = MO_BSWAP,
> -#endif
> -#ifdef TARGET_WORDS_BIGENDIAN
> -    MO_TE    = MO_BE,
> -#else
> -    MO_TE    = MO_LE,
> -#endif
> -
> -    /* MO_UNALN accesses are never checked for alignment.
> -     * MO_ALIGN accesses will result in a call to the CPU's
> -     * do_unaligned_access hook if the guest address is not aligned.
> -     * The default depends on whether the target CPU defines ALIGNED_ONLY.
> -     *
> -     * Some architectures (e.g. ARMv8) need the address which is aligned
> -     * to a size more than the size of the memory access.
> -     * Some architectures (e.g. SPARCv9) need an address which is aligned,
> -     * but less strictly than the natural alignment.
> -     *
> -     * MO_ALIGN supposes the alignment size is the size of a memory access.
> -     *
> -     * There are three options:
> -     * - unaligned access permitted (MO_UNALN).
> -     * - an alignment to the size of an access (MO_ALIGN);
> -     * - an alignment to a specified size, which may be more or less than
> -     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
> -     */
> -    MO_ASHIFT = 4,
> -    MO_AMASK = 7 << MO_ASHIFT,
> -#ifdef ALIGNED_ONLY
> -    MO_ALIGN = 0,
> -    MO_UNALN = MO_AMASK,
> -#else
> -    MO_ALIGN = MO_AMASK,
> -    MO_UNALN = 0,
> -#endif
> -    MO_ALIGN_2  = 1 << MO_ASHIFT,
> -    MO_ALIGN_4  = 2 << MO_ASHIFT,
> -    MO_ALIGN_8  = 3 << MO_ASHIFT,
> -    MO_ALIGN_16 = 4 << MO_ASHIFT,
> -    MO_ALIGN_32 = 5 << MO_ASHIFT,
> -    MO_ALIGN_64 = 6 << MO_ASHIFT,
> -
> -    /* Combinations of the above, for ease of use.  */
> -    MO_UB    = MO_8,
> -    MO_UW    = MO_16,
> -    MO_UL    = MO_32,
> -    MO_SB    = MO_SIGN | MO_8,
> -    MO_SW    = MO_SIGN | MO_16,
> -    MO_SL    = MO_SIGN | MO_32,
> -    MO_Q     = MO_64,
> -
> -    MO_LEUW  = MO_LE | MO_UW,
> -    MO_LEUL  = MO_LE | MO_UL,
> -    MO_LESW  = MO_LE | MO_SW,
> -    MO_LESL  = MO_LE | MO_SL,
> -    MO_LEQ   = MO_LE | MO_Q,
> -
> -    MO_BEUW  = MO_BE | MO_UW,
> -    MO_BEUL  = MO_BE | MO_UL,
> -    MO_BESW  = MO_BE | MO_SW,
> -    MO_BESL  = MO_BE | MO_SL,
> -    MO_BEQ   = MO_BE | MO_Q,
> -
> -    MO_TEUW  = MO_TE | MO_UW,
> -    MO_TEUL  = MO_TE | MO_UL,
> -    MO_TESW  = MO_TE | MO_SW,
> -    MO_TESL  = MO_TE | MO_SL,
> -    MO_TEQ   = MO_TE | MO_Q,
> -
> -    MO_SSIZE = MO_SIZE | MO_SIGN,
> -} TCGMemOp;
> -
>  /**
>   * get_alignment_bits
> - * @memop: TCGMemOp value
> + * @memop: MemOp value
>   *
>   * Extract the alignment size from the memop.
>   */
> -static inline unsigned get_alignment_bits(TCGMemOp memop)
> +static inline unsigned get_alignment_bits(MemOp memop)
>  {
>      unsigned a = memop & MO_AMASK;
> 
> @@ -1184,7 +1097,7 @@ static inline size_t tcg_current_code_size(TCGContext *s)
>      return tcg_ptr_byte_diff(s->code_ptr, s->code_buf);
>  }
> 
> -/* Combine the TCGMemOp and mmu_idx parameters into a single value.  */
> +/* Combine the MemOp and mmu_idx parameters into a single value.  */
>  typedef uint32_t TCGMemOpIdx;
> 
>  /**
> @@ -1194,7 +1107,7 @@ typedef uint32_t TCGMemOpIdx;
>   *
>   * Encode these values into a single parameter.
>   */
> -static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
> +static inline TCGMemOpIdx make_memop_idx(MemOp op, unsigned idx)
>  {
>      tcg_debug_assert(idx <= 15);
>      return (op << 4) | idx;
> @@ -1206,7 +1119,7 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
>   *
>   * Extract the memory operation from the combined value.
>   */
> -static inline TCGMemOp get_memop(TCGMemOpIdx oi)
> +static inline MemOp get_memop(TCGMemOpIdx oi)
>  {
>      return oi >> 4;
>  }
> diff --git a/trace/mem-internal.h b/trace/mem-internal.h
> index f6efaf6..3444fbc 100644
> --- a/trace/mem-internal.h
> +++ b/trace/mem-internal.h
> @@ -16,7 +16,7 @@
>  #define TRACE_MEM_ST (1ULL << 5)    /* store (y/n) */
> 
>  static inline uint8_t trace_mem_build_info(
> -    int size_shift, bool sign_extend, TCGMemOp endianness, bool store)
> +    int size_shift, bool sign_extend, MemOp endianness, bool store)
>  {
>      uint8_t res;
> 
> @@ -33,7 +33,7 @@ static inline uint8_t trace_mem_build_info(
>      return res;
>  }
> 
> -static inline uint8_t trace_mem_get_info(TCGMemOp op, bool store)
> +static inline uint8_t trace_mem_get_info(MemOp op, bool store)
>  {
>      return trace_mem_build_info(op & MO_SIZE, !!(op & MO_SIGN),
>                                  op & MO_BSWAP, store);
> diff --git a/trace/mem.h b/trace/mem.h
> index 2b58196..8cf213d 100644
> --- a/trace/mem.h
> +++ b/trace/mem.h
> @@ -18,7 +18,7 @@
>   *
>   * Return a value for the 'info' argument in guest memory access traces.
>   */
> -static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
> +static uint8_t trace_mem_get_info(MemOp op, bool store);
> 
>  /**
>   * trace_mem_build_info:
> @@ -26,7 +26,7 @@ static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
>   * Return a value for the 'info' argument in guest memory access traces.
>   */
>  static uint8_t trace_mem_build_info(int size_shift, bool sign_extend,
> -                                    TCGMemOp endianness, bool store);
> +                                    MemOp endianness, bool store);
> 
> 
>  #include "trace/mem-internal.h"

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 01/15] tcg: TCGMemOp is now accelerator independent MemOp
@ 2019-07-26  7:43     ` David Gibson
  0 siblings, 0 replies; 78+ messages in thread
From: David Gibson @ 2019-07-26  7:43 UTC (permalink / raw)
  To: tony.nguyen
  Cc: qemu-devel, peter.maydell, walling, sagark, david, palmer,
	mark.cave-ayland, Alistair.Francis, edgar.iglesias, arikalo, mst,
	pasic, borntraeger, rth, atar4qemu, ehabkost, alex.williamson,
	qemu-arm, stefanha, shorne, qemu-riscv, qemu-s390x, kbastian,
	cohuck, laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On Fri, Jul 26, 2019 at 06:43:27AM +0000, tony.nguyen@bt.com wrote:
> Preparation for collapsing the two byte swaps, adjust_endianness and
> handle_bswap, along the I/O path.
> 
> Target-dependent attributes are conditionalized upon NEED_CPU_H.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>

ppc parts
Acked-by: David Gibson <david@gibson.dropbear.id.au>
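
To make the new packing concrete, here is a minimal standalone sketch
(illustrative only, not part of the patch) of how a MemOp value packs
and unpacks size, sign, and endianness.  The reduced enum below mirrors
the values in include/exec/memop.h and assumes a little-endian host, so
MO_BE implies a byte swap:

    #include <stdio.h>

    typedef enum MemOp {
        MO_8     = 0,
        MO_16    = 1,
        MO_32    = 2,
        MO_64    = 3,
        MO_SIZE  = 3,   /* mask for the size field */
        MO_SIGN  = 4,   /* sign-extended, otherwise zero-extended */
        MO_BSWAP = 8,   /* host-reverse endian */
        MO_LE    = 0,   /* little-endian host assumed */
        MO_BE    = MO_BSWAP,
    } MemOp;

    int main(void)
    {
        MemOp op = MO_BE | MO_SIGN | MO_32;  /* big-endian signed 32-bit */

        printf("size:  %d bytes\n", 1 << (op & MO_SIZE));  /* 4 bytes */
        printf("sign:  %s\n", (op & MO_SIGN) ? "signed" : "unsigned");
        printf("bswap: %s\n", (op & MO_BSWAP) ? "yes" : "no");
        return 0;
    }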

> ---
>  MAINTAINERS                             |   1 +
>  accel/tcg/cputlb.c                      |   2 +-
>  include/exec/memop.h                    | 109 ++++++++++++++++++++++++++
>  target/alpha/translate.c                |   2 +-
>  target/arm/translate-a64.c              |  48 ++++++------
>  target/arm/translate-a64.h              |   2 +-
>  target/arm/translate-sve.c              |   2 +-
>  target/arm/translate.c                  |  32 ++++----
>  target/arm/translate.h                  |   2 +-
>  target/hppa/translate.c                 |  14 ++--
>  target/i386/translate.c                 | 132 ++++++++++++++++----------------
>  target/m68k/translate.c                 |   2 +-
>  target/microblaze/translate.c           |   4 +-
>  target/mips/translate.c                 |   8 +-
>  target/openrisc/translate.c             |   4 +-
>  target/ppc/translate.c                  |  12 +--
>  target/riscv/insn_trans/trans_rva.inc.c |   8 +-
>  target/riscv/insn_trans/trans_rvi.inc.c |   4 +-
>  target/s390x/translate.c                |   6 +-
>  target/s390x/translate_vx.inc.c         |  10 +--
>  target/sparc/translate.c                |  14 ++--
>  target/tilegx/translate.c               |  10 +--
>  target/tricore/translate.c              |   8 +-
>  tcg/README                              |   2 +-
>  tcg/aarch64/tcg-target.inc.c            |  26 +++----
>  tcg/arm/tcg-target.inc.c                |  26 +++----
>  tcg/i386/tcg-target.inc.c               |  24 +++---
>  tcg/mips/tcg-target.inc.c               |  16 ++--
>  tcg/optimize.c                          |   2 +-
>  tcg/ppc/tcg-target.inc.c                |  12 +--
>  tcg/riscv/tcg-target.inc.c              |  20 ++---
>  tcg/s390/tcg-target.inc.c               |  14 ++--
>  tcg/sparc/tcg-target.inc.c              |   6 +-
>  tcg/tcg-op.c                            |  38 ++++-----
>  tcg/tcg-op.h                            |  86 ++++++++++-----------
>  tcg/tcg.c                               |   2 +-
>  tcg/tcg.h                               |  99 ++----------------------
>  trace/mem-internal.h                    |   4 +-
>  trace/mem.h                             |   4 +-
>  39 files changed, 420 insertions(+), 397 deletions(-)
>  create mode 100644 include/exec/memop.h
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index cc9636b..3f148cd 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1890,6 +1890,7 @@ M: Paolo Bonzini <pbonzini@redhat.com>
>  S: Supported
>  F: include/exec/ioport.h
>  F: ioport.c
> +F: include/exec/memop.h
>  F: include/exec/memory.h
>  F: include/exec/ram_addr.h
>  F: memory.c
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index bb9897b..523be4c 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -1133,7 +1133,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
>      uintptr_t index = tlb_index(env, mmu_idx, addr);
>      CPUTLBEntry *tlbe = tlb_entry(env, mmu_idx, addr);
>      target_ulong tlb_addr = tlb_addr_write(tlbe);
> -    TCGMemOp mop = get_memop(oi);
> +    MemOp mop = get_memop(oi);
>      int a_bits = get_alignment_bits(mop);
>      int s_bits = mop & MO_SIZE;
>      void *hostaddr;
> diff --git a/include/exec/memop.h b/include/exec/memop.h
> new file mode 100644
> index 0000000..ac58066
> --- /dev/null
> +++ b/include/exec/memop.h
> @@ -0,0 +1,109 @@
> +/*
> + * Constants for memory operations
> + *
> + * Authors:
> + *  Richard Henderson <rth@twiddle.net>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + *
> + */
> +
> +#ifndef MEMOP_H
> +#define MEMOP_H
> +
> +typedef enum MemOp {
> +    MO_8     = 0,
> +    MO_16    = 1,
> +    MO_32    = 2,
> +    MO_64    = 3,
> +    MO_SIZE  = 3,   /* Mask for the above.  */
> +
> +    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
> +
> +    MO_BSWAP = 8,   /* Host reverse endian.  */
> +#ifdef HOST_WORDS_BIGENDIAN
> +    MO_LE    = MO_BSWAP,
> +    MO_BE    = 0,
> +#else
> +    MO_LE    = 0,
> +    MO_BE    = MO_BSWAP,
> +#endif
> +#ifdef NEED_CPU_H
> +#ifdef TARGET_WORDS_BIGENDIAN
> +    MO_TE    = MO_BE,
> +#else
> +    MO_TE    = MO_LE,
> +#endif
> +#endif
> +
> +    /*
> +     * MO_UNALN accesses are never checked for alignment.
> +     * MO_ALIGN accesses will result in a call to the CPU's
> +     * do_unaligned_access hook if the guest address is not aligned.
> +     * The default depends on whether the target CPU defines ALIGNED_ONLY.
> +     *
> +     * Some architectures (e.g. ARMv8) require addresses aligned to a
> +     * size larger than that of the memory access itself.
> +     * Some architectures (e.g. SPARCv9) require an aligned address, but
> +     * less strictly than the natural alignment.
> +     *
> +     * MO_ALIGN assumes the alignment size equals the size of the access.
> +     *
> +     * There are three options:
> +     * - unaligned access permitted (MO_UNALN);
> +     * - an alignment to the size of an access (MO_ALIGN);
> +     * - an alignment to a specified size, which may be more or less than
> +     *   the access size (MO_ALIGN_x, where 'x' is a size in bytes).
> +     */
> +    MO_ASHIFT = 4,
> +    MO_AMASK = 7 << MO_ASHIFT,
> +#ifdef NEED_CPU_H
> +#ifdef ALIGNED_ONLY
> +    MO_ALIGN = 0,
> +    MO_UNALN = MO_AMASK,
> +#else
> +    MO_ALIGN = MO_AMASK,
> +    MO_UNALN = 0,
> +#endif
> +#endif
> +    MO_ALIGN_2  = 1 << MO_ASHIFT,
> +    MO_ALIGN_4  = 2 << MO_ASHIFT,
> +    MO_ALIGN_8  = 3 << MO_ASHIFT,
> +    MO_ALIGN_16 = 4 << MO_ASHIFT,
> +    MO_ALIGN_32 = 5 << MO_ASHIFT,
> +    MO_ALIGN_64 = 6 << MO_ASHIFT,
> +
> +    /* Combinations of the above, for ease of use.  */
> +    MO_UB    = MO_8,
> +    MO_UW    = MO_16,
> +    MO_UL    = MO_32,
> +    MO_SB    = MO_SIGN | MO_8,
> +    MO_SW    = MO_SIGN | MO_16,
> +    MO_SL    = MO_SIGN | MO_32,
> +    MO_Q     = MO_64,
> +
> +    MO_LEUW  = MO_LE | MO_UW,
> +    MO_LEUL  = MO_LE | MO_UL,
> +    MO_LESW  = MO_LE | MO_SW,
> +    MO_LESL  = MO_LE | MO_SL,
> +    MO_LEQ   = MO_LE | MO_Q,
> +
> +    MO_BEUW  = MO_BE | MO_UW,
> +    MO_BEUL  = MO_BE | MO_UL,
> +    MO_BESW  = MO_BE | MO_SW,
> +    MO_BESL  = MO_BE | MO_SL,
> +    MO_BEQ   = MO_BE | MO_Q,
> +
> +#ifdef NEED_CPU_H
> +    MO_TEUW  = MO_TE | MO_UW,
> +    MO_TEUL  = MO_TE | MO_UL,
> +    MO_TESW  = MO_TE | MO_SW,
> +    MO_TESL  = MO_TE | MO_SL,
> +    MO_TEQ   = MO_TE | MO_Q,
> +#endif
> +
> +    MO_SSIZE = MO_SIZE | MO_SIGN,
> +} MemOp;
> +
> +#endif
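
To make the flag layout concrete, here is a minimal sketch of composing
and decoding a MemOp using only the definitions above (illustrative,
not part of the patch):

    MemOp op = MO_BESL;    /* == MO_BE | MO_SIGN | MO_32 */

    unsigned bytes = 1u << (op & MO_SIZE);    /* 4-byte access */
    bool sign      = (op & MO_SIGN) != 0;     /* sign-extended result */
    bool bswap     = (op & MO_BSWAP) != 0;    /* true on a little-endian host */
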
> diff --git a/target/alpha/translate.c b/target/alpha/translate.c
> index 2c9cccf..d5d4888 100644
> --- a/target/alpha/translate.c
> +++ b/target/alpha/translate.c
> @@ -403,7 +403,7 @@ static inline void gen_store_mem(DisasContext *ctx,
> 
>  static DisasJumpType gen_store_conditional(DisasContext *ctx, int ra, int rb,
>                                             int32_t disp16, int mem_idx,
> -                                           TCGMemOp op)
> +                                           MemOp op)
>  {
>      TCGLabel *lab_fail, *lab_done;
>      TCGv addr, val;
> diff --git a/target/arm/translate-a64.c b/target/arm/translate-a64.c
> index d323147..b6c07d6 100644
> --- a/target/arm/translate-a64.c
> +++ b/target/arm/translate-a64.c
> @@ -85,7 +85,7 @@ typedef void NeonGenOneOpFn(TCGv_i64, TCGv_i64);
>  typedef void CryptoTwoOpFn(TCGv_ptr, TCGv_ptr);
>  typedef void CryptoThreeOpIntFn(TCGv_ptr, TCGv_ptr, TCGv_i32);
>  typedef void CryptoThreeOpFn(TCGv_ptr, TCGv_ptr, TCGv_ptr);
> -typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, TCGMemOp);
> +typedef void AtomicThreeOpFn(TCGv_i64, TCGv_i64, TCGv_i64, TCGArg, MemOp);
> 
>  /* initialize TCG globals.  */
>  void a64_translate_init(void)
> @@ -455,7 +455,7 @@ TCGv_i64 read_cpu_reg_sp(DisasContext *s, int reg, int sf)
>   * Dn, Sn, Hn or Bn).
>   * (Note that this is not the same mapping as for A32; see cpu.h)
>   */
> -static inline int fp_reg_offset(DisasContext *s, int regno, TCGMemOp size)
> +static inline int fp_reg_offset(DisasContext *s, int regno, MemOp size)
>  {
>      return vec_reg_offset(s, regno, 0, size);
>  }
> @@ -871,7 +871,7 @@ static void do_gpr_ld_memidx(DisasContext *s,
>                               bool iss_valid, unsigned int iss_srt,
>                               bool iss_sf, bool iss_ar)
>  {
> -    TCGMemOp memop = s->be_data + size;
> +    MemOp memop = s->be_data + size;
> 
>      g_assert(size <= 3);
> 
> @@ -948,7 +948,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
>      TCGv_i64 tmphi;
> 
>      if (size < 4) {
> -        TCGMemOp memop = s->be_data + size;
> +        MemOp memop = s->be_data + size;
>          tmphi = tcg_const_i64(0);
>          tcg_gen_qemu_ld_i64(tmplo, tcg_addr, get_mem_index(s), memop);
>      } else {
> @@ -989,7 +989,7 @@ static void do_fp_ld(DisasContext *s, int destidx, TCGv_i64 tcg_addr, int size)
> 
>  /* Get value of an element within a vector register */
>  static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
> -                             int element, TCGMemOp memop)
> +                             int element, MemOp memop)
>  {
>      int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
>      switch (memop) {
> @@ -1021,7 +1021,7 @@ static void read_vec_element(DisasContext *s, TCGv_i64 tcg_dest, int srcidx,
>  }
> 
>  static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
> -                                 int element, TCGMemOp memop)
> +                                 int element, MemOp memop)
>  {
>      int vect_off = vec_reg_offset(s, srcidx, element, memop & MO_SIZE);
>      switch (memop) {
> @@ -1048,7 +1048,7 @@ static void read_vec_element_i32(DisasContext *s, TCGv_i32 tcg_dest, int srcidx,
> 
>  /* Set value of an element within a vector register */
>  static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
> -                              int element, TCGMemOp memop)
> +                              int element, MemOp memop)
>  {
>      int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
>      switch (memop) {
> @@ -1070,7 +1070,7 @@ static void write_vec_element(DisasContext *s, TCGv_i64 tcg_src, int destidx,
>  }
> 
>  static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
> -                                  int destidx, int element, TCGMemOp memop)
> +                                  int destidx, int element, MemOp memop)
>  {
>      int vect_off = vec_reg_offset(s, destidx, element, memop & MO_SIZE);
>      switch (memop) {
> @@ -1090,7 +1090,7 @@ static void write_vec_element_i32(DisasContext *s, TCGv_i32 tcg_src,
> 
>  /* Store from vector register to memory */
>  static void do_vec_st(DisasContext *s, int srcidx, int element,
> -                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
> +                      TCGv_i64 tcg_addr, int size, MemOp endian)
>  {
>      TCGv_i64 tcg_tmp = tcg_temp_new_i64();
> 
> @@ -1102,7 +1102,7 @@ static void do_vec_st(DisasContext *s, int srcidx, int element,
> 
>  /* Load from memory to vector register */
>  static void do_vec_ld(DisasContext *s, int destidx, int element,
> -                      TCGv_i64 tcg_addr, int size, TCGMemOp endian)
> +                      TCGv_i64 tcg_addr, int size, MemOp endian)
>  {
>      TCGv_i64 tcg_tmp = tcg_temp_new_i64();
> 
> @@ -2200,7 +2200,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>                                 TCGv_i64 addr, int size, bool is_pair)
>  {
>      int idx = get_mem_index(s);
> -    TCGMemOp memop = s->be_data;
> +    MemOp memop = s->be_data;
> 
>      g_assert(size <= 3);
>      if (is_pair) {
> @@ -3286,7 +3286,7 @@ static void disas_ldst_multiple_struct(DisasContext *s, uint32_t insn)
>      bool is_postidx = extract32(insn, 23, 1);
>      bool is_q = extract32(insn, 30, 1);
>      TCGv_i64 clean_addr, tcg_rn, tcg_ebytes;
> -    TCGMemOp endian = s->be_data;
> +    MemOp endian = s->be_data;
> 
>      int ebytes;   /* bytes per element */
>      int elements; /* elements per vector */
> @@ -5455,7 +5455,7 @@ static void disas_fp_csel(DisasContext *s, uint32_t insn)
>      unsigned int mos, type, rm, cond, rn, rd;
>      TCGv_i64 t_true, t_false, t_zero;
>      DisasCompare64 c;
> -    TCGMemOp sz;
> +    MemOp sz;
> 
>      mos = extract32(insn, 29, 3);
>      type = extract32(insn, 22, 2);
> @@ -6267,7 +6267,7 @@ static void disas_fp_imm(DisasContext *s, uint32_t insn)
>      int mos = extract32(insn, 29, 3);
>      uint64_t imm;
>      TCGv_i64 tcg_res;
> -    TCGMemOp sz;
> +    MemOp sz;
> 
>      if (mos || imm5) {
>          unallocated_encoding(s);
> @@ -7030,7 +7030,7 @@ static TCGv_i32 do_reduction_op(DisasContext *s, int fpopcode, int rn,
>  {
>      if (esize == size) {
>          int element;
> -        TCGMemOp msize = esize == 16 ? MO_16 : MO_32;
> +        MemOp msize = esize == 16 ? MO_16 : MO_32;
>          TCGv_i32 tcg_elem;
> 
>          /* We should have one register left here */
> @@ -8022,7 +8022,7 @@ static void handle_vec_simd_sqshrn(DisasContext *s, bool is_scalar, bool is_q,
>      int shift = (2 * esize) - immhb;
>      int elements = is_scalar ? 1 : (64 / esize);
>      bool round = extract32(opcode, 0, 1);
> -    TCGMemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
> +    MemOp ldop = (size + 1) | (is_u_shift ? 0 : MO_SIGN);
>      TCGv_i64 tcg_rn, tcg_rd, tcg_round;
>      TCGv_i32 tcg_rd_narrowed;
>      TCGv_i64 tcg_final;
> @@ -8181,7 +8181,7 @@ static void handle_simd_qshl(DisasContext *s, bool scalar, bool is_q,
>              }
>          };
>          NeonGenTwoOpEnvFn *genfn = fns[src_unsigned][dst_unsigned][size];
> -        TCGMemOp memop = scalar ? size : MO_32;
> +        MemOp memop = scalar ? size : MO_32;
>          int maxpass = scalar ? 1 : is_q ? 4 : 2;
> 
>          for (pass = 0; pass < maxpass; pass++) {
> @@ -8225,7 +8225,7 @@ static void handle_simd_intfp_conv(DisasContext *s, int rd, int rn,
>      TCGv_ptr tcg_fpst = get_fpstatus_ptr(size == MO_16);
>      TCGv_i32 tcg_shift = NULL;
> 
> -    TCGMemOp mop = size | (is_signed ? MO_SIGN : 0);
> +    MemOp mop = size | (is_signed ? MO_SIGN : 0);
>      int pass;
> 
>      if (fracbits || size == MO_64) {
> @@ -10004,7 +10004,7 @@ static void handle_vec_simd_shri(DisasContext *s, bool is_q, bool is_u,
>      int dsize = is_q ? 128 : 64;
>      int esize = 8 << size;
>      int elements = dsize/esize;
> -    TCGMemOp memop = size | (is_u ? 0 : MO_SIGN);
> +    MemOp memop = size | (is_u ? 0 : MO_SIGN);
>      TCGv_i64 tcg_rn = new_tmp_a64(s);
>      TCGv_i64 tcg_rd = new_tmp_a64(s);
>      TCGv_i64 tcg_round;
> @@ -10347,7 +10347,7 @@ static void handle_3rd_widening(DisasContext *s, int is_q, int is_u, int size,
>              TCGv_i64 tcg_op1 = tcg_temp_new_i64();
>              TCGv_i64 tcg_op2 = tcg_temp_new_i64();
>              TCGv_i64 tcg_passres;
> -            TCGMemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
> +            MemOp memop = MO_32 | (is_u ? 0 : MO_SIGN);
> 
>              int elt = pass + is_q * 2;
> 
> @@ -11827,7 +11827,7 @@ static void handle_2misc_pairwise(DisasContext *s, int opcode, bool u,
> 
>      if (size == 2) {
>          /* 32 + 32 -> 64 op */
> -        TCGMemOp memop = size + (u ? 0 : MO_SIGN);
> +        MemOp memop = size + (u ? 0 : MO_SIGN);
> 
>          for (pass = 0; pass < maxpass; pass++) {
>              TCGv_i64 tcg_op1 = tcg_temp_new_i64();
> @@ -12849,7 +12849,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
> 
>      switch (is_fp) {
>      case 1: /* normal fp */
> -        /* convert insn encoded size to TCGMemOp size */
> +        /* convert insn encoded size to MemOp size */
>          switch (size) {
>          case 0: /* half-precision */
>              size = MO_16;
> @@ -12897,7 +12897,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
>          return;
>      }
> 
> -    /* Given TCGMemOp size, adjust register and indexing.  */
> +    /* Given MemOp size, adjust register and indexing.  */
>      switch (size) {
>      case MO_16:
>          index = h << 2 | l << 1 | m;
> @@ -13194,7 +13194,7 @@ static void disas_simd_indexed(DisasContext *s, uint32_t insn)
>          TCGv_i64 tcg_res[2];
>          int pass;
>          bool satop = extract32(opcode, 0, 1);
> -        TCGMemOp memop = MO_32;
> +        MemOp memop = MO_32;
> 
>          if (satop || !u) {
>              memop |= MO_SIGN;
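
One idiom worth noting in the hunks above: the a64 translator repeatedly
builds a MemOp as "size | (is_u ? 0 : MO_SIGN)". Condensed into a
hypothetical helper purely for illustration:

    /* widen an element size into a signed or unsigned MemOp */
    static inline MemOp a64_elt_memop(MemOp size, bool is_unsigned)
    {
        return size | (is_unsigned ? 0 : MO_SIGN);
    }
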
> diff --git a/target/arm/translate-a64.h b/target/arm/translate-a64.h
> index 9ab4087..f1246b7 100644
> --- a/target/arm/translate-a64.h
> +++ b/target/arm/translate-a64.h
> @@ -64,7 +64,7 @@ static inline void assert_fp_access_checked(DisasContext *s)
>   * the FP/vector register Qn.
>   */
>  static inline int vec_reg_offset(DisasContext *s, int regno,
> -                                 int element, TCGMemOp size)
> +                                 int element, MemOp size)
>  {
>      int element_size = 1 << size;
>      int offs = element * element_size;
> diff --git a/target/arm/translate-sve.c b/target/arm/translate-sve.c
> index fa068b0..5d7edd0 100644
> --- a/target/arm/translate-sve.c
> +++ b/target/arm/translate-sve.c
> @@ -4567,7 +4567,7 @@ static bool trans_STR_pri(DisasContext *s, arg_rri *a)
>   */
> 
>  /* The memory mode of the dtype.  */
> -static const TCGMemOp dtype_mop[16] = {
> +static const MemOp dtype_mop[16] = {
>      MO_UB, MO_UB, MO_UB, MO_UB,
>      MO_SL, MO_UW, MO_UW, MO_UW,
>      MO_SW, MO_SW, MO_UL, MO_UL,
> diff --git a/target/arm/translate.c b/target/arm/translate.c
> index 7853462..d116c8c 100644
> --- a/target/arm/translate.c
> +++ b/target/arm/translate.c
> @@ -114,7 +114,7 @@ typedef enum ISSInfo {
>  } ISSInfo;
> 
>  /* Save the syndrome information for a Data Abort */
> -static void disas_set_da_iss(DisasContext *s, TCGMemOp memop, ISSInfo issinfo)
> +static void disas_set_da_iss(DisasContext *s, MemOp memop, ISSInfo issinfo)
>  {
>      uint32_t syn;
>      int sas = memop & MO_SIZE;
> @@ -1079,7 +1079,7 @@ static inline void store_reg_from_load(DisasContext *s, int reg, TCGv_i32 var)
>   * that the address argument is TCGv_i32 rather than TCGv.
>   */
> 
> -static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
> +static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, MemOp op)
>  {
>      TCGv addr = tcg_temp_new();
>      tcg_gen_extu_i32_tl(addr, a32);
> @@ -1092,7 +1092,7 @@ static inline TCGv gen_aa32_addr(DisasContext *s, TCGv_i32 a32, TCGMemOp op)
>  }
> 
>  static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
> -                            int index, TCGMemOp opc)
> +                            int index, MemOp opc)
>  {
>      TCGv addr;
> 
> @@ -1107,7 +1107,7 @@ static void gen_aa32_ld_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
>  }
> 
>  static void gen_aa32_st_i32(DisasContext *s, TCGv_i32 val, TCGv_i32 a32,
> -                            int index, TCGMemOp opc)
> +                            int index, MemOp opc)
>  {
>      TCGv addr;
> 
> @@ -1160,7 +1160,7 @@ static inline void gen_aa32_frob64(DisasContext *s, TCGv_i64 val)
>  }
> 
>  static void gen_aa32_ld_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
> -                            int index, TCGMemOp opc)
> +                            int index, MemOp opc)
>  {
>      TCGv addr = gen_aa32_addr(s, a32, opc);
>      tcg_gen_qemu_ld_i64(val, addr, index, opc);
> @@ -1175,7 +1175,7 @@ static inline void gen_aa32_ld64(DisasContext *s, TCGv_i64 val,
>  }
> 
>  static void gen_aa32_st_i64(DisasContext *s, TCGv_i64 val, TCGv_i32 a32,
> -                            int index, TCGMemOp opc)
> +                            int index, MemOp opc)
>  {
>      TCGv addr = gen_aa32_addr(s, a32, opc);
> 
> @@ -1400,7 +1400,7 @@ neon_reg_offset (int reg, int n)
>   * where 0 is the least significant end of the register.
>   */
>  static inline long
> -neon_element_offset(int reg, int element, TCGMemOp size)
> +neon_element_offset(int reg, int element, MemOp size)
>  {
>      int element_size = 1 << size;
>      int ofs = element * element_size;
> @@ -1422,7 +1422,7 @@ static TCGv_i32 neon_load_reg(int reg, int pass)
>      return tmp;
>  }
> 
> -static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
> +static void neon_load_element(TCGv_i32 var, int reg, int ele, MemOp mop)
>  {
>      long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
> 
> @@ -1441,7 +1441,7 @@ static void neon_load_element(TCGv_i32 var, int reg, int ele, TCGMemOp mop)
>      }
>  }
> 
> -static void neon_load_element64(TCGv_i64 var, int reg, int ele, TCGMemOp mop)
> +static void neon_load_element64(TCGv_i64 var, int reg, int ele, MemOp mop)
>  {
>      long offset = neon_element_offset(reg, ele, mop & MO_SIZE);
> 
> @@ -1469,7 +1469,7 @@ static void neon_store_reg(int reg, int pass, TCGv_i32 var)
>      tcg_temp_free_i32(var);
>  }
> 
> -static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
> +static void neon_store_element(int reg, int ele, MemOp size, TCGv_i32 var)
>  {
>      long offset = neon_element_offset(reg, ele, size);
> 
> @@ -1488,7 +1488,7 @@ static void neon_store_element(int reg, int ele, TCGMemOp size, TCGv_i32 var)
>      }
>  }
> 
> -static void neon_store_element64(int reg, int ele, TCGMemOp size, TCGv_i64 var)
> +static void neon_store_element64(int reg, int ele, MemOp size, TCGv_i64 var)
>  {
>      long offset = neon_element_offset(reg, ele, size);
> 
> @@ -3558,7 +3558,7 @@ static int disas_neon_ls_insn(DisasContext *s, uint32_t insn)
>      int n;
>      int vec_size;
>      int mmu_idx;
> -    TCGMemOp endian;
> +    MemOp endian;
>      TCGv_i32 addr;
>      TCGv_i32 tmp;
>      TCGv_i32 tmp2;
> @@ -6867,7 +6867,7 @@ static int disas_neon_data_insn(DisasContext *s, uint32_t insn)
>              } else if ((insn & 0x380) == 0) {
>                  /* VDUP */
>                  int element;
> -                TCGMemOp size;
> +                MemOp size;
> 
>                  if ((insn & (7 << 16)) == 0 || (q && (rd & 1))) {
>                      return 1;
> @@ -7435,7 +7435,7 @@ static void gen_load_exclusive(DisasContext *s, int rt, int rt2,
>                                 TCGv_i32 addr, int size)
>  {
>      TCGv_i32 tmp = tcg_temp_new_i32();
> -    TCGMemOp opc = size | MO_ALIGN | s->be_data;
> +    MemOp opc = size | MO_ALIGN | s->be_data;
> 
>      s->is_ldex = true;
> 
> @@ -7489,7 +7489,7 @@ static void gen_store_exclusive(DisasContext *s, int rd, int rt, int rt2,
>      TCGv taddr;
>      TCGLabel *done_label;
>      TCGLabel *fail_label;
> -    TCGMemOp opc = size | MO_ALIGN | s->be_data;
> +    MemOp opc = size | MO_ALIGN | s->be_data;
> 
>      /* if (env->exclusive_addr == addr && env->exclusive_val == [addr]) {
>           [addr] = {Rt};
> @@ -8603,7 +8603,7 @@ static void disas_arm_insn(DisasContext *s, unsigned int insn)
>                          */
> 
>                          TCGv taddr;
> -                        TCGMemOp opc = s->be_data;
> +                        MemOp opc = s->be_data;
> 
>                          rm = (insn) & 0xf;
> 
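
Throughout this file the access width, alignment requirement and per-CPU
endianness are OR'd into a single MemOp, as in "size | MO_ALIGN |
s->be_data" in the exclusive-access helpers above. A sketch of what that
evaluates to for a 4-byte access on a big-endian-configured guest:

    MemOp opc = MO_32 | MO_ALIGN | MO_BE;    /* == MO_BEUL | MO_ALIGN */
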
> diff --git a/target/arm/translate.h b/target/arm/translate.h
> index a20f6e2..284c510 100644
> --- a/target/arm/translate.h
> +++ b/target/arm/translate.h
> @@ -21,7 +21,7 @@ typedef struct DisasContext {
>      int condexec_cond;
>      int thumb;
>      int sctlr_b;
> -    TCGMemOp be_data;
> +    MemOp be_data;
>  #if !defined(CONFIG_USER_ONLY)
>      int user;
>  #endif
> diff --git a/target/hppa/translate.c b/target/hppa/translate.c
> index 188fe68..ff4802a 100644
> --- a/target/hppa/translate.c
> +++ b/target/hppa/translate.c
> @@ -1500,7 +1500,7 @@ static void form_gva(DisasContext *ctx, TCGv_tl *pgva, TCGv_reg *pofs,
>   */
>  static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
>                         unsigned rx, int scale, target_sreg disp,
> -                       unsigned sp, int modify, TCGMemOp mop)
> +                       unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg ofs;
>      TCGv_tl addr;
> @@ -1518,7 +1518,7 @@ static void do_load_32(DisasContext *ctx, TCGv_i32 dest, unsigned rb,
> 
>  static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
>                         unsigned rx, int scale, target_sreg disp,
> -                       unsigned sp, int modify, TCGMemOp mop)
> +                       unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg ofs;
>      TCGv_tl addr;
> @@ -1536,7 +1536,7 @@ static void do_load_64(DisasContext *ctx, TCGv_i64 dest, unsigned rb,
> 
>  static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
>                          unsigned rx, int scale, target_sreg disp,
> -                        unsigned sp, int modify, TCGMemOp mop)
> +                        unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg ofs;
>      TCGv_tl addr;
> @@ -1554,7 +1554,7 @@ static void do_store_32(DisasContext *ctx, TCGv_i32 src, unsigned rb,
> 
>  static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
>                          unsigned rx, int scale, target_sreg disp,
> -                        unsigned sp, int modify, TCGMemOp mop)
> +                        unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg ofs;
>      TCGv_tl addr;
> @@ -1580,7 +1580,7 @@ static void do_store_64(DisasContext *ctx, TCGv_i64 src, unsigned rb,
> 
>  static bool do_load(DisasContext *ctx, unsigned rt, unsigned rb,
>                      unsigned rx, int scale, target_sreg disp,
> -                    unsigned sp, int modify, TCGMemOp mop)
> +                    unsigned sp, int modify, MemOp mop)
>  {
>      TCGv_reg dest;
> 
> @@ -1653,7 +1653,7 @@ static bool trans_fldd(DisasContext *ctx, arg_ldst *a)
> 
>  static bool do_store(DisasContext *ctx, unsigned rt, unsigned rb,
>                       target_sreg disp, unsigned sp,
> -                     int modify, TCGMemOp mop)
> +                     int modify, MemOp mop)
>  {
>      nullify_over(ctx);
>      do_store_reg(ctx, load_gpr(ctx, rt), rb, 0, 0, disp, sp, modify, mop);
> @@ -2940,7 +2940,7 @@ static bool trans_st(DisasContext *ctx, arg_ldst *a)
> 
>  static bool trans_ldc(DisasContext *ctx, arg_ldst *a)
>  {
> -    TCGMemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
> +    MemOp mop = MO_TEUL | MO_ALIGN_16 | a->size;
>      TCGv_reg zero, dest, ofs;
>      TCGv_tl addr;
> 
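
trans_ldc above is a concrete example of the "alignment larger than the
access" case documented in memop.h: MO_TEUL is a 4-byte access, yet
MO_ALIGN_16 demands 16-byte alignment. Recovering the requirement is a
mask and shift (a sketch using only the enum values defined earlier):

    unsigned abits = (MO_ALIGN_16 & MO_AMASK) >> MO_ASHIFT;    /* == 4 */
    unsigned align = 1u << abits;                              /* == 16 bytes */
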
> diff --git a/target/i386/translate.c b/target/i386/translate.c
> index 03150a8..def9867 100644
> --- a/target/i386/translate.c
> +++ b/target/i386/translate.c
> @@ -87,8 +87,8 @@ typedef struct DisasContext {
>      /* current insn context */
>      int override; /* -1 if no override */
>      int prefix;
> -    TCGMemOp aflag;
> -    TCGMemOp dflag;
> +    MemOp aflag;
> +    MemOp dflag;
>      target_ulong pc_start;
>      target_ulong pc; /* pc = eip + cs_base */
>      /* current block context */
> @@ -149,7 +149,7 @@ static void gen_eob(DisasContext *s);
>  static void gen_jr(DisasContext *s, TCGv dest);
>  static void gen_jmp(DisasContext *s, target_ulong eip);
>  static void gen_jmp_tb(DisasContext *s, target_ulong eip, int tb_num);
> -static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d);
> +static void gen_op(DisasContext *s1, int op, MemOp ot, int d);
> 
>  /* i386 arith/logic operations */
>  enum {
> @@ -320,7 +320,7 @@ static inline bool byte_reg_is_xH(DisasContext *s, int reg)
>  }
> 
>  /* Select the size of a push/pop operation.  */
> -static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
> +static inline MemOp mo_pushpop(DisasContext *s, MemOp ot)
>  {
>      if (CODE64(s)) {
>          return ot == MO_16 ? MO_16 : MO_64;
> @@ -330,13 +330,13 @@ static inline TCGMemOp mo_pushpop(DisasContext *s, TCGMemOp ot)
>  }
> 
>  /* Select the size of the stack pointer.  */
> -static inline TCGMemOp mo_stacksize(DisasContext *s)
> +static inline MemOp mo_stacksize(DisasContext *s)
>  {
>      return CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
>  }
> 
>  /* Select only size 64 else 32.  Used for SSE operand sizes.  */
> -static inline TCGMemOp mo_64_32(TCGMemOp ot)
> +static inline MemOp mo_64_32(MemOp ot)
>  {
>  #ifdef TARGET_X86_64
>      return ot == MO_64 ? MO_64 : MO_32;
> @@ -347,19 +347,19 @@ static inline TCGMemOp mo_64_32(TCGMemOp ot)
> 
>  /* Select size 8 if lsb of B is clear, else OT.  Used for decoding
>     byte vs word opcodes.  */
> -static inline TCGMemOp mo_b_d(int b, TCGMemOp ot)
> +static inline MemOp mo_b_d(int b, MemOp ot)
>  {
>      return b & 1 ? ot : MO_8;
>  }
> 
>  /* Select size 8 if lsb of B is clear, else OT capped at 32.
>     Used for decoding operand size of port opcodes.  */
> -static inline TCGMemOp mo_b_d32(int b, TCGMemOp ot)
> +static inline MemOp mo_b_d32(int b, MemOp ot)
>  {
>      return b & 1 ? (ot == MO_16 ? MO_16 : MO_32) : MO_8;
>  }
> 
> -static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
> +static void gen_op_mov_reg_v(DisasContext *s, MemOp ot, int reg, TCGv t0)
>  {
>      switch(ot) {
>      case MO_8:
> @@ -388,7 +388,7 @@ static void gen_op_mov_reg_v(DisasContext *s, TCGMemOp ot, int reg, TCGv t0)
>  }
> 
>  static inline
> -void gen_op_mov_v_reg(DisasContext *s, TCGMemOp ot, TCGv t0, int reg)
> +void gen_op_mov_v_reg(DisasContext *s, MemOp ot, TCGv t0, int reg)
>  {
>      if (ot == MO_8 && byte_reg_is_xH(s, reg)) {
>          tcg_gen_extract_tl(t0, cpu_regs[reg - 4], 8, 8);
> @@ -411,13 +411,13 @@ static inline void gen_op_jmp_v(TCGv dest)
>  }
> 
>  static inline
> -void gen_op_add_reg_im(DisasContext *s, TCGMemOp size, int reg, int32_t val)
> +void gen_op_add_reg_im(DisasContext *s, MemOp size, int reg, int32_t val)
>  {
>      tcg_gen_addi_tl(s->tmp0, cpu_regs[reg], val);
>      gen_op_mov_reg_v(s, size, reg, s->tmp0);
>  }
> 
> -static inline void gen_op_add_reg_T0(DisasContext *s, TCGMemOp size, int reg)
> +static inline void gen_op_add_reg_T0(DisasContext *s, MemOp size, int reg)
>  {
>      tcg_gen_add_tl(s->tmp0, cpu_regs[reg], s->T0);
>      gen_op_mov_reg_v(s, size, reg, s->tmp0);
> @@ -451,7 +451,7 @@ static inline void gen_jmp_im(DisasContext *s, target_ulong pc)
>  /* Compute SEG:REG into A0.  SEG is selected from the override segment
>     (OVR_SEG) and the default segment (DEF_SEG).  OVR_SEG may be -1 to
>     indicate no override.  */
> -static void gen_lea_v_seg(DisasContext *s, TCGMemOp aflag, TCGv a0,
> +static void gen_lea_v_seg(DisasContext *s, MemOp aflag, TCGv a0,
>                            int def_seg, int ovr_seg)
>  {
>      switch (aflag) {
> @@ -514,13 +514,13 @@ static inline void gen_string_movl_A0_EDI(DisasContext *s)
>      gen_lea_v_seg(s, s->aflag, cpu_regs[R_EDI], R_ES, -1);
>  }
> 
> -static inline void gen_op_movl_T0_Dshift(DisasContext *s, TCGMemOp ot)
> +static inline void gen_op_movl_T0_Dshift(DisasContext *s, MemOp ot)
>  {
>      tcg_gen_ld32s_tl(s->T0, cpu_env, offsetof(CPUX86State, df));
>      tcg_gen_shli_tl(s->T0, s->T0, ot);
>  };
> 
> -static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
> +static TCGv gen_ext_tl(TCGv dst, TCGv src, MemOp size, bool sign)
>  {
>      switch (size) {
>      case MO_8:
> @@ -551,18 +551,18 @@ static TCGv gen_ext_tl(TCGv dst, TCGv src, TCGMemOp size, bool sign)
>      }
>  }
> 
> -static void gen_extu(TCGMemOp ot, TCGv reg)
> +static void gen_extu(MemOp ot, TCGv reg)
>  {
>      gen_ext_tl(reg, reg, ot, false);
>  }
> 
> -static void gen_exts(TCGMemOp ot, TCGv reg)
> +static void gen_exts(MemOp ot, TCGv reg)
>  {
>      gen_ext_tl(reg, reg, ot, true);
>  }
> 
>  static inline
> -void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
> +void gen_op_jnz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
>  {
>      tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
>      gen_extu(size, s->tmp0);
> @@ -570,14 +570,14 @@ void gen_op_jnz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
>  }
> 
>  static inline
> -void gen_op_jz_ecx(DisasContext *s, TCGMemOp size, TCGLabel *label1)
> +void gen_op_jz_ecx(DisasContext *s, MemOp size, TCGLabel *label1)
>  {
>      tcg_gen_mov_tl(s->tmp0, cpu_regs[R_ECX]);
>      gen_extu(size, s->tmp0);
>      tcg_gen_brcondi_tl(TCG_COND_EQ, s->tmp0, 0, label1);
>  }
> 
> -static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
> +static void gen_helper_in_func(MemOp ot, TCGv v, TCGv_i32 n)
>  {
>      switch (ot) {
>      case MO_8:
> @@ -594,7 +594,7 @@ static void gen_helper_in_func(TCGMemOp ot, TCGv v, TCGv_i32 n)
>      }
>  }
> 
> -static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
> +static void gen_helper_out_func(MemOp ot, TCGv_i32 v, TCGv_i32 n)
>  {
>      switch (ot) {
>      case MO_8:
> @@ -611,7 +611,7 @@ static void gen_helper_out_func(TCGMemOp ot, TCGv_i32 v, TCGv_i32 n)
>      }
>  }
> 
> -static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
> +static void gen_check_io(DisasContext *s, MemOp ot, target_ulong cur_eip,
>                           uint32_t svm_flags)
>  {
>      target_ulong next_eip;
> @@ -644,7 +644,7 @@ static void gen_check_io(DisasContext *s, TCGMemOp ot, target_ulong cur_eip,
>      }
>  }
> 
> -static inline void gen_movs(DisasContext *s, TCGMemOp ot)
> +static inline void gen_movs(DisasContext *s, MemOp ot)
>  {
>      gen_string_movl_A0_ESI(s);
>      gen_op_ld_v(s, ot, s->T0, s->A0);
> @@ -840,7 +840,7 @@ static CCPrepare gen_prepare_eflags_s(DisasContext *s, TCGv reg)
>          return (CCPrepare) { .cond = TCG_COND_NEVER, .mask = -1 };
>      default:
>          {
> -            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
> +            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
>              TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, true);
>              return (CCPrepare) { .cond = TCG_COND_LT, .reg = t0, .mask = -1 };
>          }
> @@ -885,7 +885,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
>                               .mask = -1 };
>      default:
>          {
> -            TCGMemOp size = (s->cc_op - CC_OP_ADDB) & 3;
> +            MemOp size = (s->cc_op - CC_OP_ADDB) & 3;
>              TCGv t0 = gen_ext_tl(reg, cpu_cc_dst, size, false);
>              return (CCPrepare) { .cond = TCG_COND_EQ, .reg = t0, .mask = -1 };
>          }
> @@ -897,7 +897,7 @@ static CCPrepare gen_prepare_eflags_z(DisasContext *s, TCGv reg)
>  static CCPrepare gen_prepare_cc(DisasContext *s, int b, TCGv reg)
>  {
>      int inv, jcc_op, cond;
> -    TCGMemOp size;
> +    MemOp size;
>      CCPrepare cc;
>      TCGv t0;
> 
> @@ -1075,7 +1075,7 @@ static TCGLabel *gen_jz_ecx_string(DisasContext *s, target_ulong next_eip)
>      return l2;
>  }
> 
> -static inline void gen_stos(DisasContext *s, TCGMemOp ot)
> +static inline void gen_stos(DisasContext *s, MemOp ot)
>  {
>      gen_op_mov_v_reg(s, MO_32, s->T0, R_EAX);
>      gen_string_movl_A0_EDI(s);
> @@ -1084,7 +1084,7 @@ static inline void gen_stos(DisasContext *s, TCGMemOp ot)
>      gen_op_add_reg_T0(s, s->aflag, R_EDI);
>  }
> 
> -static inline void gen_lods(DisasContext *s, TCGMemOp ot)
> +static inline void gen_lods(DisasContext *s, MemOp ot)
>  {
>      gen_string_movl_A0_ESI(s);
>      gen_op_ld_v(s, ot, s->T0, s->A0);
> @@ -1093,7 +1093,7 @@ static inline void gen_lods(DisasContext *s, TCGMemOp ot)
>      gen_op_add_reg_T0(s, s->aflag, R_ESI);
>  }
> 
> -static inline void gen_scas(DisasContext *s, TCGMemOp ot)
> +static inline void gen_scas(DisasContext *s, MemOp ot)
>  {
>      gen_string_movl_A0_EDI(s);
>      gen_op_ld_v(s, ot, s->T1, s->A0);
> @@ -1102,7 +1102,7 @@ static inline void gen_scas(DisasContext *s, TCGMemOp ot)
>      gen_op_add_reg_T0(s, s->aflag, R_EDI);
>  }
> 
> -static inline void gen_cmps(DisasContext *s, TCGMemOp ot)
> +static inline void gen_cmps(DisasContext *s, MemOp ot)
>  {
>      gen_string_movl_A0_EDI(s);
>      gen_op_ld_v(s, ot, s->T1, s->A0);
> @@ -1126,7 +1126,7 @@ static void gen_bpt_io(DisasContext *s, TCGv_i32 t_port, int ot)
>  }
> 
> 
> -static inline void gen_ins(DisasContext *s, TCGMemOp ot)
> +static inline void gen_ins(DisasContext *s, MemOp ot)
>  {
>      if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
>          gen_io_start();
> @@ -1148,7 +1148,7 @@ static inline void gen_ins(DisasContext *s, TCGMemOp ot)
>      }
>  }
> 
> -static inline void gen_outs(DisasContext *s, TCGMemOp ot)
> +static inline void gen_outs(DisasContext *s, MemOp ot)
>  {
>      if (tb_cflags(s->base.tb) & CF_USE_ICOUNT) {
>          gen_io_start();
> @@ -1171,7 +1171,7 @@ static inline void gen_outs(DisasContext *s, TCGMemOp ot)
>  /* same method as Valgrind : we generate jumps to current or next
>     instruction */
>  #define GEN_REPZ(op)                                                          \
> -static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
> +static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
>                                   target_ulong cur_eip, target_ulong next_eip) \
>  {                                                                             \
>      TCGLabel *l2;                                                             \
> @@ -1187,7 +1187,7 @@ static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
>  }
> 
>  #define GEN_REPZ2(op)                                                         \
> -static inline void gen_repz_ ## op(DisasContext *s, TCGMemOp ot,              \
> +static inline void gen_repz_ ## op(DisasContext *s, MemOp ot,                 \
>                                     target_ulong cur_eip,                      \
>                                     target_ulong next_eip,                     \
>                                     int nz)                                    \
> @@ -1284,7 +1284,7 @@ static void gen_illegal_opcode(DisasContext *s)
>  }
> 
>  /* if d == OR_TMP0, it means memory operand (address in A0) */
> -static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
> +static void gen_op(DisasContext *s1, int op, MemOp ot, int d)
>  {
>      if (d != OR_TMP0) {
>          if (s1->prefix & PREFIX_LOCK) {
> @@ -1395,7 +1395,7 @@ static void gen_op(DisasContext *s1, int op, TCGMemOp ot, int d)
>  }
> 
>  /* if d == OR_TMP0, it means memory operand (address in A0) */
> -static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
> +static void gen_inc(DisasContext *s1, MemOp ot, int d, int c)
>  {
>      if (s1->prefix & PREFIX_LOCK) {
>          if (d != OR_TMP0) {
> @@ -1421,7 +1421,7 @@ static void gen_inc(DisasContext *s1, TCGMemOp ot, int d, int c)
>      set_cc_op(s1, (c > 0 ? CC_OP_INCB : CC_OP_DECB) + ot);
>  }
> 
> -static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
> +static void gen_shift_flags(DisasContext *s, MemOp ot, TCGv result,
>                              TCGv shm1, TCGv count, bool is_right)
>  {
>      TCGv_i32 z32, s32, oldop;
> @@ -1466,7 +1466,7 @@ static void gen_shift_flags(DisasContext *s, TCGMemOp ot, TCGv result,
>      set_cc_op(s, CC_OP_DYNAMIC);
>  }
> 
> -static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
> +static void gen_shift_rm_T1(DisasContext *s, MemOp ot, int op1,
>                              int is_right, int is_arith)
>  {
>      target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
> @@ -1502,7 +1502,7 @@ static void gen_shift_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
>      gen_shift_flags(s, ot, s->T0, s->tmp0, s->T1, is_right);
>  }
> 
> -static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
> +static void gen_shift_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
>                              int is_right, int is_arith)
>  {
>      int mask = (ot == MO_64 ? 0x3f : 0x1f);
> @@ -1542,7 +1542,7 @@ static void gen_shift_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
>      }
>  }
> 
> -static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
> +static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
>  {
>      target_ulong mask = (ot == MO_64 ? 0x3f : 0x1f);
>      TCGv_i32 t0, t1;
> @@ -1627,7 +1627,7 @@ static void gen_rot_rm_T1(DisasContext *s, TCGMemOp ot, int op1, int is_right)
>      set_cc_op(s, CC_OP_DYNAMIC);
>  }
> 
> -static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
> +static void gen_rot_rm_im(DisasContext *s, MemOp ot, int op1, int op2,
>                            int is_right)
>  {
>      int mask = (ot == MO_64 ? 0x3f : 0x1f);
> @@ -1705,7 +1705,7 @@ static void gen_rot_rm_im(DisasContext *s, TCGMemOp ot, int op1, int op2,
>  }
> 
>  /* XXX: add faster immediate = 1 case */
> -static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
> +static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
>                             int is_right)
>  {
>      gen_compute_eflags(s);
> @@ -1761,7 +1761,7 @@ static void gen_rotc_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
>  }
> 
>  /* XXX: add faster immediate case */
> -static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
> +static void gen_shiftd_rm_T1(DisasContext *s, MemOp ot, int op1,
>                               bool is_right, TCGv count_in)
>  {
>      target_ulong mask = (ot == MO_64 ? 63 : 31);
> @@ -1842,7 +1842,7 @@ static void gen_shiftd_rm_T1(DisasContext *s, TCGMemOp ot, int op1,
>      tcg_temp_free(count);
>  }
> 
> -static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
> +static void gen_shift(DisasContext *s1, int op, MemOp ot, int d, int s)
>  {
>      if (s != OR_TMP1)
>          gen_op_mov_v_reg(s1, ot, s1->T1, s);
> @@ -1872,7 +1872,7 @@ static void gen_shift(DisasContext *s1, int op, TCGMemOp ot, int d, int s)
>      }
>  }
> 
> -static void gen_shifti(DisasContext *s1, int op, TCGMemOp ot, int d, int c)
> +static void gen_shifti(DisasContext *s1, int op, MemOp ot, int d, int c)
>  {
>      switch(op) {
>      case OP_ROL:
> @@ -2149,7 +2149,7 @@ static void gen_add_A0_ds_seg(DisasContext *s)
>  /* generate modrm memory load or store of 'reg'. TMP0 is used if reg ==
>     OR_TMP0 */
>  static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
> -                           TCGMemOp ot, int reg, int is_store)
> +                           MemOp ot, int reg, int is_store)
>  {
>      int mod, rm;
> 
> @@ -2179,7 +2179,7 @@ static void gen_ldst_modrm(CPUX86State *env, DisasContext *s, int modrm,
>      }
>  }
> 
> -static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
> +static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, MemOp ot)
>  {
>      uint32_t ret;
> 
> @@ -2202,7 +2202,7 @@ static inline uint32_t insn_get(CPUX86State *env, DisasContext *s, TCGMemOp ot)
>      return ret;
>  }
> 
> -static inline int insn_const_size(TCGMemOp ot)
> +static inline int insn_const_size(MemOp ot)
>  {
>      if (ot <= MO_32) {
>          return 1 << ot;
> @@ -2266,7 +2266,7 @@ static inline void gen_jcc(DisasContext *s, int b,
>      }
>  }
> 
> -static void gen_cmovcc1(CPUX86State *env, DisasContext *s, TCGMemOp ot, int b,
> +static void gen_cmovcc1(CPUX86State *env, DisasContext *s, MemOp ot, int b,
>                          int modrm, int reg)
>  {
>      CCPrepare cc;
> @@ -2363,8 +2363,8 @@ static inline void gen_stack_update(DisasContext *s, int addend)
>  /* Generate a push. It depends on ss32, addseg and dflag.  */
>  static void gen_push_v(DisasContext *s, TCGv val)
>  {
> -    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
> -    TCGMemOp a_ot = mo_stacksize(s);
> +    MemOp d_ot = mo_pushpop(s, s->dflag);
> +    MemOp a_ot = mo_stacksize(s);
>      int size = 1 << d_ot;
>      TCGv new_esp = s->A0;
> 
> @@ -2383,9 +2383,9 @@ static void gen_push_v(DisasContext *s, TCGv val)
>  }
> 
>  /* two step pop is necessary for precise exceptions */
> -static TCGMemOp gen_pop_T0(DisasContext *s)
> +static MemOp gen_pop_T0(DisasContext *s)
>  {
> -    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
> +    MemOp d_ot = mo_pushpop(s, s->dflag);
> 
>      gen_lea_v_seg(s, mo_stacksize(s), cpu_regs[R_ESP], R_SS, -1);
>      gen_op_ld_v(s, d_ot, s->T0, s->A0);
> @@ -2393,7 +2393,7 @@ static TCGMemOp gen_pop_T0(DisasContext *s)
>      return d_ot;
>  }
> 
> -static inline void gen_pop_update(DisasContext *s, TCGMemOp ot)
> +static inline void gen_pop_update(DisasContext *s, MemOp ot)
>  {
>      gen_stack_update(s, 1 << ot);
>  }
> @@ -2405,8 +2405,8 @@ static inline void gen_stack_A0(DisasContext *s)
> 
>  static void gen_pusha(DisasContext *s)
>  {
> -    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
> -    TCGMemOp d_ot = s->dflag;
> +    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
> +    MemOp d_ot = s->dflag;
>      int size = 1 << d_ot;
>      int i;
> 
> @@ -2421,8 +2421,8 @@ static void gen_pusha(DisasContext *s)
> 
>  static void gen_popa(DisasContext *s)
>  {
> -    TCGMemOp s_ot = s->ss32 ? MO_32 : MO_16;
> -    TCGMemOp d_ot = s->dflag;
> +    MemOp s_ot = s->ss32 ? MO_32 : MO_16;
> +    MemOp d_ot = s->dflag;
>      int size = 1 << d_ot;
>      int i;
> 
> @@ -2442,8 +2442,8 @@ static void gen_popa(DisasContext *s)
> 
>  static void gen_enter(DisasContext *s, int esp_addend, int level)
>  {
> -    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
> -    TCGMemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
> +    MemOp d_ot = mo_pushpop(s, s->dflag);
> +    MemOp a_ot = CODE64(s) ? MO_64 : s->ss32 ? MO_32 : MO_16;
>      int size = 1 << d_ot;
> 
>      /* Push BP; compute FrameTemp into T1.  */
> @@ -2482,8 +2482,8 @@ static void gen_enter(DisasContext *s, int esp_addend, int level)
> 
>  static void gen_leave(DisasContext *s)
>  {
> -    TCGMemOp d_ot = mo_pushpop(s, s->dflag);
> -    TCGMemOp a_ot = mo_stacksize(s);
> +    MemOp d_ot = mo_pushpop(s, s->dflag);
> +    MemOp a_ot = mo_stacksize(s);
> 
>      gen_lea_v_seg(s, a_ot, cpu_regs[R_EBP], R_SS, -1);
>      gen_op_ld_v(s, d_ot, s->T0, s->A0);
> @@ -3045,7 +3045,7 @@ static void gen_sse(CPUX86State *env, DisasContext *s, int b,
>      SSEFunc_0_eppi sse_fn_eppi;
>      SSEFunc_0_ppi sse_fn_ppi;
>      SSEFunc_0_eppt sse_fn_eppt;
> -    TCGMemOp ot;
> +    MemOp ot;
> 
>      b &= 0xff;
>      if (s->prefix & PREFIX_DATA)
> @@ -4488,7 +4488,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>      CPUX86State *env = cpu->env_ptr;
>      int b, prefixes;
>      int shift;
> -    TCGMemOp ot, aflag, dflag;
> +    MemOp ot, aflag, dflag;
>      int modrm, reg, rm, mod, op, opreg, val;
>      target_ulong next_eip, tval;
>      int rex_w, rex_r;
> @@ -5567,8 +5567,8 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>      case 0x1be: /* movsbS Gv, Eb */
>      case 0x1bf: /* movswS Gv, Eb */
>          {
> -            TCGMemOp d_ot;
> -            TCGMemOp s_ot;
> +            MemOp d_ot;
> +            MemOp s_ot;
> 
>              /* d_ot is the size of destination */
>              d_ot = dflag;
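
The i386 code also relies on MO_8..MO_64 being the consecutive values
0..3: "1 << ot" yields the operand size in bytes, and the CC_OP_* groups
are laid out in that same order, which is why "(s->cc_op - CC_OP_ADDB) & 3"
in the gen_prepare_eflags_* hunks recovers the width of the operation
that last set the flags. For example (sketch):

    MemOp ot = MO_32;
    int bytes = 1 << ot;    /* == 4 */
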
> diff --git a/target/m68k/translate.c b/target/m68k/translate.c
> index 60bcfb7..24c1dd3 100644
> --- a/target/m68k/translate.c
> +++ b/target/m68k/translate.c
> @@ -2414,7 +2414,7 @@ DISAS_INSN(cas)
>      uint16_t ext;
>      TCGv load;
>      TCGv cmp;
> -    TCGMemOp opc;
> +    MemOp opc;
> 
>      switch ((insn >> 9) & 3) {
>      case 1:
> diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
> index 9ce65f3..41d1b8b 100644
> --- a/target/microblaze/translate.c
> +++ b/target/microblaze/translate.c
> @@ -919,7 +919,7 @@ static void dec_load(DisasContext *dc)
>      unsigned int size;
>      bool rev = false, ex = false, ea = false;
>      int mem_index = cpu_mmu_index(&dc->cpu->env, false);
> -    TCGMemOp mop;
> +    MemOp mop;
> 
>      mop = dc->opcode & 3;
>      size = 1 << mop;
> @@ -1035,7 +1035,7 @@ static void dec_store(DisasContext *dc)
>      unsigned int size;
>      bool rev = false, ex = false, ea = false;
>      int mem_index = cpu_mmu_index(&dc->cpu->env, false);
> -    TCGMemOp mop;
> +    MemOp mop;
> 
>      mop = dc->opcode & 3;
>      size = 1 << mop;
> diff --git a/target/mips/translate.c b/target/mips/translate.c
> index ca62800..59b5d85 100644
> --- a/target/mips/translate.c
> +++ b/target/mips/translate.c
> @@ -2526,7 +2526,7 @@ typedef struct DisasContext {
>      int32_t CP0_Config5;
>      /* Routine used to access memory */
>      int mem_idx;
> -    TCGMemOp default_tcg_memop_mask;
> +    MemOp default_tcg_memop_mask;
>      uint32_t hflags, saved_hflags;
>      target_ulong btarget;
>      bool ulri;
> @@ -3706,7 +3706,7 @@ static void gen_st(DisasContext *ctx, uint32_t opc, int rt,
> 
>  /* Store conditional */
>  static void gen_st_cond(DisasContext *ctx, int rt, int base, int offset,
> -                        TCGMemOp tcg_mo, bool eva)
> +                        MemOp tcg_mo, bool eva)
>  {
>      TCGv addr, t0, val;
>      TCGLabel *l1 = gen_new_label();
> @@ -4546,7 +4546,7 @@ static void gen_HILO(DisasContext *ctx, uint32_t opc, int acc, int reg)
>  }
> 
>  static inline void gen_r6_ld(target_long addr, int reg, int memidx,
> -                             TCGMemOp memop)
> +                             MemOp memop)
>  {
>      TCGv t0 = tcg_const_tl(addr);
>      tcg_gen_qemu_ld_tl(t0, t0, memidx, memop);
> @@ -21828,7 +21828,7 @@ static int decode_nanomips_32_48_opc(CPUMIPSState *env, DisasContext *ctx)
>                               extract32(ctx->opcode, 0, 8);
>                      TCGv va = tcg_temp_new();
>                      TCGv t1 = tcg_temp_new();
> -                    TCGMemOp memop = (extract32(ctx->opcode, 8, 3)) ==
> +                    MemOp memop = (extract32(ctx->opcode, 8, 3)) ==
>                                        NM_P_LS_UAWM ? MO_UNALN : 0;
> 
>                      count = (count == 0) ? 8 : count;
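
Here MO_UNALN suppresses alignment checking for the nanoMIPS unaligned
word-multiple accesses. Spelled out, such a load would look like this
(a sketch reusing the temporaries declared in the hunk above):

    /* explicitly unaligned 4-byte target-endian load */
    tcg_gen_qemu_ld_tl(t1, va, ctx->mem_idx, MO_TEUL | MO_UNALN);
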
> diff --git a/target/openrisc/translate.c b/target/openrisc/translate.c
> index 4360ce4..b189c50 100644
> --- a/target/openrisc/translate.c
> +++ b/target/openrisc/translate.c
> @@ -681,7 +681,7 @@ static bool trans_l_lwa(DisasContext *dc, arg_load *a)
>      return true;
>  }
> 
> -static void do_load(DisasContext *dc, arg_load *a, TCGMemOp mop)
> +static void do_load(DisasContext *dc, arg_load *a, MemOp mop)
>  {
>      TCGv ea;
> 
> @@ -763,7 +763,7 @@ static bool trans_l_swa(DisasContext *dc, arg_store *a)
>      return true;
>  }
> 
> -static void do_store(DisasContext *dc, arg_store *a, TCGMemOp mop)
> +static void do_store(DisasContext *dc, arg_store *a, MemOp mop)
>  {
>      TCGv t0 = tcg_temp_new();
>      tcg_gen_addi_tl(t0, cpu_R[a->a], a->i);
> diff --git a/target/ppc/translate.c b/target/ppc/translate.c
> index 4a5de28..31800ed 100644
> --- a/target/ppc/translate.c
> +++ b/target/ppc/translate.c
> @@ -162,7 +162,7 @@ struct DisasContext {
>      int mem_idx;
>      int access_type;
>      /* Translation flags */
> -    TCGMemOp default_tcg_memop_mask;
> +    MemOp default_tcg_memop_mask;
>  #if defined(TARGET_PPC64)
>      bool sf_mode;
>      bool has_cfar;
> @@ -3142,7 +3142,7 @@ static void gen_isync(DisasContext *ctx)
> 
>  #define MEMOP_GET_SIZE(x)  (1 << ((x) & MO_SIZE))
> 
> -static void gen_load_locked(DisasContext *ctx, TCGMemOp memop)
> +static void gen_load_locked(DisasContext *ctx, MemOp memop)
>  {
>      TCGv gpr = cpu_gpr[rD(ctx->opcode)];
>      TCGv t0 = tcg_temp_new();
> @@ -3167,7 +3167,7 @@ LARX(lbarx, DEF_MEMOP(MO_UB))
>  LARX(lharx, DEF_MEMOP(MO_UW))
>  LARX(lwarx, DEF_MEMOP(MO_UL))
> 
> -static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
> +static void gen_fetch_inc_conditional(DisasContext *ctx, MemOp memop,
>                                        TCGv EA, TCGCond cond, int addend)
>  {
>      TCGv t = tcg_temp_new();
> @@ -3193,7 +3193,7 @@ static void gen_fetch_inc_conditional(DisasContext *ctx, TCGMemOp memop,
>      tcg_temp_free(u);
>  }
> 
> -static void gen_ld_atomic(DisasContext *ctx, TCGMemOp memop)
> +static void gen_ld_atomic(DisasContext *ctx, MemOp memop)
>  {
>      uint32_t gpr_FC = FC(ctx->opcode);
>      TCGv EA = tcg_temp_new();
> @@ -3306,7 +3306,7 @@ static void gen_ldat(DisasContext *ctx)
>  }
>  #endif
> 
> -static void gen_st_atomic(DisasContext *ctx, TCGMemOp memop)
> +static void gen_st_atomic(DisasContext *ctx, MemOp memop)
>  {
>      uint32_t gpr_FC = FC(ctx->opcode);
>      TCGv EA = tcg_temp_new();
> @@ -3389,7 +3389,7 @@ static void gen_stdat(DisasContext *ctx)
>  }
>  #endif
> 
> -static void gen_conditional_store(DisasContext *ctx, TCGMemOp memop)
> +static void gen_conditional_store(DisasContext *ctx, MemOp memop)
>  {
>      TCGLabel *l1 = gen_new_label();
>      TCGLabel *l2 = gen_new_label();
> diff --git a/target/riscv/insn_trans/trans_rva.inc.c b/target/riscv/insn_trans/trans_rva.inc.c
> index fadd888..be8a9f0 100644
> --- a/target/riscv/insn_trans/trans_rva.inc.c
> +++ b/target/riscv/insn_trans/trans_rva.inc.c
> @@ -18,7 +18,7 @@
>   * this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
> 
> -static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
> +static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, MemOp mop)
>  {
>      TCGv src1 = tcg_temp_new();
>      /* Put addr in load_res, data in load_val.  */
> @@ -37,7 +37,7 @@ static inline bool gen_lr(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
>      return true;
>  }
> 
> -static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
> +static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, MemOp mop)
>  {
>      TCGv src1 = tcg_temp_new();
>      TCGv src2 = tcg_temp_new();
> @@ -82,8 +82,8 @@ static inline bool gen_sc(DisasContext *ctx, arg_atomic *a, TCGMemOp mop)
>  }
> 
>  static bool gen_amo(DisasContext *ctx, arg_atomic *a,
> -                    void(*func)(TCGv, TCGv, TCGv, TCGArg, TCGMemOp),
> -                    TCGMemOp mop)
> +                    void(*func)(TCGv, TCGv, TCGv, TCGArg, MemOp),
> +                    MemOp mop)
>  {
>      TCGv src1 = tcg_temp_new();
>      TCGv src2 = tcg_temp_new();
> diff --git a/target/riscv/insn_trans/trans_rvi.inc.c b/target/riscv/insn_trans/trans_rvi.inc.c
> index ea64731..cf440d1 100644
> --- a/target/riscv/insn_trans/trans_rvi.inc.c
> +++ b/target/riscv/insn_trans/trans_rvi.inc.c
> @@ -135,7 +135,7 @@ static bool trans_bgeu(DisasContext *ctx, arg_bgeu *a)
>      return gen_branch(ctx, a, TCG_COND_GEU);
>  }
> 
> -static bool gen_load(DisasContext *ctx, arg_lb *a, TCGMemOp memop)
> +static bool gen_load(DisasContext *ctx, arg_lb *a, MemOp memop)
>  {
>      TCGv t0 = tcg_temp_new();
>      TCGv t1 = tcg_temp_new();
> @@ -174,7 +174,7 @@ static bool trans_lhu(DisasContext *ctx, arg_lhu *a)
>      return gen_load(ctx, a, MO_TEUW);
>  }
> 
> -static bool gen_store(DisasContext *ctx, arg_sb *a, TCGMemOp memop)
> +static bool gen_store(DisasContext *ctx, arg_sb *a, MemOp memop)
>  {
>      TCGv t0 = tcg_temp_new();
>      TCGv dat = tcg_temp_new();
> diff --git a/target/s390x/translate.c b/target/s390x/translate.c
> index ac0d8b6..2927247 100644
> --- a/target/s390x/translate.c
> +++ b/target/s390x/translate.c
> @@ -152,7 +152,7 @@ static inline int vec_full_reg_offset(uint8_t reg)
>      return offsetof(CPUS390XState, vregs[reg][0]);
>  }
> 
> -static inline int vec_reg_offset(uint8_t reg, uint8_t enr, TCGMemOp es)
> +static inline int vec_reg_offset(uint8_t reg, uint8_t enr, MemOp es)
>  {
>      /* Convert element size (es) - e.g. MO_8 - to bytes */
>      const uint8_t bytes = 1 << es;
> @@ -2262,7 +2262,7 @@ static DisasJumpType op_csst(DisasContext *s, DisasOps *o)
>  #ifndef CONFIG_USER_ONLY
>  static DisasJumpType op_csp(DisasContext *s, DisasOps *o)
>  {
> -    TCGMemOp mop = s->insn->data;
> +    MemOp mop = s->insn->data;
>      TCGv_i64 addr, old, cc;
>      TCGLabel *lab = gen_new_label();
> 
> @@ -3228,7 +3228,7 @@ static DisasJumpType op_lm64(DisasContext *s, DisasOps *o)
>  static DisasJumpType op_lpd(DisasContext *s, DisasOps *o)
>  {
>      TCGv_i64 a1, a2;
> -    TCGMemOp mop = s->insn->data;
> +    MemOp mop = s->insn->data;
> 
>      /* In a parallel context, stop the world and single step.  */
>      if (tb_cflags(s->base.tb) & CF_PARALLEL) {
> diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
> index 41d5cf8..4c56bbb 100644
> --- a/target/s390x/translate_vx.inc.c
> +++ b/target/s390x/translate_vx.inc.c
> @@ -57,13 +57,13 @@
>  #define FPF_LONG        3
>  #define FPF_EXT         4
> 
> -static inline bool valid_vec_element(uint8_t enr, TCGMemOp es)
> +static inline bool valid_vec_element(uint8_t enr, MemOp es)
>  {
>      return !(enr & ~(NUM_VEC_ELEMENTS(es) - 1));
>  }
> 
>  static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
> -                                 TCGMemOp memop)
> +                                 MemOp memop)
>  {
>      const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);
> 
> @@ -96,7 +96,7 @@ static void read_vec_element_i64(TCGv_i64 dst, uint8_t reg, uint8_t enr,
>  }
> 
>  static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
> -                                 TCGMemOp memop)
> +                                 MemOp memop)
>  {
>      const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);
> 
> @@ -123,7 +123,7 @@ static void read_vec_element_i32(TCGv_i32 dst, uint8_t reg, uint8_t enr,
>  }
> 
>  static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
> -                                  TCGMemOp memop)
> +                                  MemOp memop)
>  {
>      const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);
> 
> @@ -146,7 +146,7 @@ static void write_vec_element_i64(TCGv_i64 src, int reg, uint8_t enr,
>  }
> 
>  static void write_vec_element_i32(TCGv_i32 src, int reg, uint8_t enr,
> -                                  TCGMemOp memop)
> +                                  MemOp memop)
>  {
>      const int offs = vec_reg_offset(reg, enr, memop & MO_SIZE);
> 
> diff --git a/target/sparc/translate.c b/target/sparc/translate.c
> index 091bab5..bef9ce6 100644
> --- a/target/sparc/translate.c
> +++ b/target/sparc/translate.c
> @@ -2019,7 +2019,7 @@ static inline void gen_ne_fop_QD(DisasContext *dc, int rd, int rs,
>  }
> 
>  static void gen_swap(DisasContext *dc, TCGv dst, TCGv src,
> -                     TCGv addr, int mmu_idx, TCGMemOp memop)
> +                     TCGv addr, int mmu_idx, MemOp memop)
>  {
>      gen_address_mask(dc, addr);
>      tcg_gen_atomic_xchg_tl(dst, addr, src, mmu_idx, memop);
> @@ -2050,10 +2050,10 @@ typedef struct {
>      ASIType type;
>      int asi;
>      int mem_idx;
> -    TCGMemOp memop;
> +    MemOp memop;
>  } DisasASI;
> 
> -static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
> +static DisasASI get_asi(DisasContext *dc, int insn, MemOp memop)
>  {
>      int asi = GET_FIELD(insn, 19, 26);
>      ASIType type = GET_ASI_HELPER;
> @@ -2267,7 +2267,7 @@ static DisasASI get_asi(DisasContext *dc, int insn, TCGMemOp memop)
>  }
> 
>  static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
> -                       int insn, TCGMemOp memop)
> +                       int insn, MemOp memop)
>  {
>      DisasASI da = get_asi(dc, insn, memop);
> 
> @@ -2305,7 +2305,7 @@ static void gen_ld_asi(DisasContext *dc, TCGv dst, TCGv addr,
>  }
> 
>  static void gen_st_asi(DisasContext *dc, TCGv src, TCGv addr,
> -                       int insn, TCGMemOp memop)
> +                       int insn, MemOp memop)
>  {
>      DisasASI da = get_asi(dc, insn, memop);
> 
> @@ -2511,7 +2511,7 @@ static void gen_ldf_asi(DisasContext *dc, TCGv addr,
>      case GET_ASI_BLOCK:
>          /* Valid for lddfa on aligned registers only.  */
>          if (size == 8 && (rd & 7) == 0) {
> -            TCGMemOp memop;
> +            MemOp memop;
>              TCGv eight;
>              int i;
> 
> @@ -2625,7 +2625,7 @@ static void gen_stf_asi(DisasContext *dc, TCGv addr,
>      case GET_ASI_BLOCK:
>          /* Valid for stdfa on aligned registers only.  */
>          if (size == 8 && (rd & 7) == 0) {
> -            TCGMemOp memop;
> +            MemOp memop;
>              TCGv eight;
>              int i;
> 
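
Since MemOp carries endianness as an explicit flag, and MO_LE/MO_BE
differ only in MO_BSWAP, inverting the endianness of an access reduces
to toggling one bit. A sketch, not part of this patch:

    /* flip between big- and little-endian interpretations */
    static inline MemOp memop_invert_endian(MemOp op)
    {
        return op ^ MO_BSWAP;
    }
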
> diff --git a/target/tilegx/translate.c b/target/tilegx/translate.c
> index c46a4ab..68dd4aa 100644
> --- a/target/tilegx/translate.c
> +++ b/target/tilegx/translate.c
> @@ -290,7 +290,7 @@ static void gen_cmul2(TCGv tdest, TCGv tsrca, TCGv tsrcb, int sh, int rd)
>  }
> 
>  static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
> -                              unsigned srcb, TCGMemOp memop, const char *name)
> +                              unsigned srcb, MemOp memop, const char *name)
>  {
>      if (dest) {
>          return TILEGX_EXCP_OPCODE_UNKNOWN;
> @@ -305,7 +305,7 @@ static TileExcp gen_st_opcode(DisasContext *dc, unsigned dest, unsigned srca,
>  }
> 
>  static TileExcp gen_st_add_opcode(DisasContext *dc, unsigned srca, unsigned srcb,
> -                                  int imm, TCGMemOp memop, const char *name)
> +                                  int imm, MemOp memop, const char *name)
>  {
>      TCGv tsrca = load_gr(dc, srca);
>      TCGv tsrcb = load_gr(dc, srcb);
> @@ -496,7 +496,7 @@ static TileExcp gen_rr_opcode(DisasContext *dc, unsigned opext,
>  {
>      TCGv tdest, tsrca;
>      const char *mnemonic;
> -    TCGMemOp memop;
> +    MemOp memop;
>      TileExcp ret = TILEGX_EXCP_NONE;
>      bool prefetch_nofault = false;
> 
> @@ -1478,7 +1478,7 @@ static TileExcp gen_rri_opcode(DisasContext *dc, unsigned opext,
>      TCGv tsrca = load_gr(dc, srca);
>      bool prefetch_nofault = false;
>      const char *mnemonic;
> -    TCGMemOp memop;
> +    MemOp memop;
>      int i2, i3;
>      TCGv t0;
> 
> @@ -2106,7 +2106,7 @@ static TileExcp decode_y2(DisasContext *dc, tilegx_bundle_bits bundle)
>      unsigned srca = get_SrcA_Y2(bundle);
>      unsigned srcbdest = get_SrcBDest_Y2(bundle);
>      const char *mnemonic;
> -    TCGMemOp memop;
> +    MemOp memop;
>      bool prefetch_nofault = false;
> 
>      switch (OEY2(opc, mode)) {
> diff --git a/target/tricore/translate.c b/target/tricore/translate.c
> index dc2a65f..87a5f50 100644
> --- a/target/tricore/translate.c
> +++ b/target/tricore/translate.c
> @@ -227,7 +227,7 @@ static inline void generate_trap(DisasContext *ctx, int class, int tin);
>  /* Functions for load/save to/from memory */
> 
>  static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
> -                                 int16_t con, TCGMemOp mop)
> +                                 int16_t con, MemOp mop)
>  {
>      TCGv temp = tcg_temp_new();
>      tcg_gen_addi_tl(temp, r2, con);
> @@ -236,7 +236,7 @@ static inline void gen_offset_ld(DisasContext *ctx, TCGv r1, TCGv r2,
>  }
> 
>  static inline void gen_offset_st(DisasContext *ctx, TCGv r1, TCGv r2,
> -                                 int16_t con, TCGMemOp mop)
> +                                 int16_t con, MemOp mop)
>  {
>      TCGv temp = tcg_temp_new();
>      tcg_gen_addi_tl(temp, r2, con);
> @@ -284,7 +284,7 @@ static void gen_offset_ld_2regs(TCGv rh, TCGv rl, TCGv base, int16_t con,
>  }
> 
>  static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
> -                           TCGMemOp mop)
> +                           MemOp mop)
>  {
>      TCGv temp = tcg_temp_new();
>      tcg_gen_addi_tl(temp, r2, off);
> @@ -294,7 +294,7 @@ static void gen_st_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
>  }
> 
>  static void gen_ld_preincr(DisasContext *ctx, TCGv r1, TCGv r2, int16_t off,
> -                           TCGMemOp mop)
> +                           MemOp mop)
>  {
>      TCGv temp = tcg_temp_new();
>      tcg_gen_addi_tl(temp, r2, off);
> diff --git a/tcg/README b/tcg/README
> index 21fcdf7..b4382fa 100644
> --- a/tcg/README
> +++ b/tcg/README
> @@ -512,7 +512,7 @@ Both t0 and t1 may be split into little-endian ordered pairs of registers
>  if dealing with 64-bit quantities on a 32-bit host.
> 
>  The memidx selects the qemu tlb index to use (e.g. user or kernel access).
> -The flags are the TCGMemOp bits, selecting the sign, width, and endianness
> +The flags are the MemOp bits, selecting the sign, width, and endianness
>  of the memory access.
> 
>  For a 32-bit host, qemu_ld/st_i64 is guaranteed to only be used with a
> diff --git a/tcg/aarch64/tcg-target.inc.c b/tcg/aarch64/tcg-target.inc.c
> index 0713448..3f92101 100644
> --- a/tcg/aarch64/tcg-target.inc.c
> +++ b/tcg/aarch64/tcg-target.inc.c
> @@ -1423,7 +1423,7 @@ static inline void tcg_out_rev16(TCGContext *s, TCGReg rd, TCGReg rn)
>      tcg_out_insn(s, 3507, REV16, TCG_TYPE_I32, rd, rn);
>  }
> 
> -static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
> +static inline void tcg_out_sxt(TCGContext *s, TCGType ext, MemOp s_bits,
>                                 TCGReg rd, TCGReg rn)
>  {
>      /* Using ALIASes SXTB, SXTH, SXTW, of SBFM Xd, Xn, #0, #7|15|31 */
> @@ -1431,7 +1431,7 @@ static inline void tcg_out_sxt(TCGContext *s, TCGType ext, TCGMemOp s_bits,
>      tcg_out_sbfm(s, ext, rd, rn, 0, bits);
>  }
> 
> -static inline void tcg_out_uxt(TCGContext *s, TCGMemOp s_bits,
> +static inline void tcg_out_uxt(TCGContext *s, MemOp s_bits,
>                                 TCGReg rd, TCGReg rn)
>  {
>      /* Using ALIASes UXTB, UXTH of UBFM Wd, Wn, #0, #7|15 */
> @@ -1580,8 +1580,8 @@ static inline void tcg_out_adr(TCGContext *s, TCGReg rd, void *target)
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp size = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp size = opc & MO_SIZE;
> 
>      if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
>          return false;
> @@ -1605,8 +1605,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp size = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp size = opc & MO_SIZE;
> 
>      if (!reloc_pc19(lb->label_ptr[0], s->code_ptr)) {
>          return false;
> @@ -1649,7 +1649,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 8);
>     slow path for the failure case, which will be patched later when finalizing
>     the slow path. Generated code returns the host addend in X1,
>     clobbers X0,X2,X3,TMP. */
> -static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
> +static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
>                               tcg_insn_unit **label_ptr, int mem_index,
>                               bool is_read)
>  {
> @@ -1709,11 +1709,11 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, TCGMemOp opc,
> 
>  #endif /* CONFIG_SOFTMMU */
> 
> -static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
> +static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp memop, TCGType ext,
>                                     TCGReg data_r, TCGReg addr_r,
>                                     TCGType otype, TCGReg off_r)
>  {
> -    const TCGMemOp bswap = memop & MO_BSWAP;
> +    const MemOp bswap = memop & MO_BSWAP;
> 
>      switch (memop & MO_SSIZE) {
>      case MO_UB:
> @@ -1765,11 +1765,11 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp memop, TCGType ext,
>      }
>  }
> 
> -static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
> +static void tcg_out_qemu_st_direct(TCGContext *s, MemOp memop,
>                                     TCGReg data_r, TCGReg addr_r,
>                                     TCGType otype, TCGReg off_r)
>  {
> -    const TCGMemOp bswap = memop & MO_BSWAP;
> +    const MemOp bswap = memop & MO_BSWAP;
> 
>      switch (memop & MO_SIZE) {
>      case MO_8:
> @@ -1804,7 +1804,7 @@ static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp memop,
>  static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>                              TCGMemOpIdx oi, TCGType ext)
>  {
> -    TCGMemOp memop = get_memop(oi);
> +    MemOp memop = get_memop(oi);
>      const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
>  #ifdef CONFIG_SOFTMMU
>      unsigned mem_index = get_mmuidx(oi);
> @@ -1829,7 +1829,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>  static void tcg_out_qemu_st(TCGContext *s, TCGReg data_reg, TCGReg addr_reg,
>                              TCGMemOpIdx oi)
>  {
> -    TCGMemOp memop = get_memop(oi);
> +    MemOp memop = get_memop(oi);
>      const TCGType otype = TARGET_LONG_BITS == 64 ? TCG_TYPE_I64 : TCG_TYPE_I32;
>  #ifdef CONFIG_SOFTMMU
>      unsigned mem_index = get_mmuidx(oi);
> diff --git a/tcg/arm/tcg-target.inc.c b/tcg/arm/tcg-target.inc.c
> index ece88dc..94d80d7 100644
> --- a/tcg/arm/tcg-target.inc.c
> +++ b/tcg/arm/tcg-target.inc.c
> @@ -1233,7 +1233,7 @@ QEMU_BUILD_BUG_ON(offsetof(CPUTLBDescFast, table) != 4);
>     containing the addend of the tlb entry.  Clobbers R0, R1, R2, TMP.  */
> 
>  static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
> -                               TCGMemOp opc, int mem_index, bool is_load)
> +                               MemOp opc, int mem_index, bool is_load)
>  {
>      int cmp_off = (is_load ? offsetof(CPUTLBEntry, addr_read)
>                     : offsetof(CPUTLBEntry, addr_write));
> @@ -1348,7 +1348,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGReg argreg, datalo, datahi;
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      void *func;
> 
>      if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
> @@ -1412,7 +1412,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGReg argreg, datalo, datahi;
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
> 
>      if (!reloc_pc24(lb->label_ptr[0], s->code_ptr)) {
>          return false;
> @@ -1453,11 +1453,11 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  }
>  #endif /* SOFTMMU */
> 
> -static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
> +static inline void tcg_out_qemu_ld_index(TCGContext *s, MemOp opc,
>                                           TCGReg datalo, TCGReg datahi,
>                                           TCGReg addrlo, TCGReg addend)
>  {
> -    TCGMemOp bswap = opc & MO_BSWAP;
> +    MemOp bswap = opc & MO_BSWAP;
> 
>      switch (opc & MO_SSIZE) {
>      case MO_UB:
> @@ -1514,11 +1514,11 @@ static inline void tcg_out_qemu_ld_index(TCGContext *s, TCGMemOp opc,
>      }
>  }
> 
> -static inline void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc,
> +static inline void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc,
>                                            TCGReg datalo, TCGReg datahi,
>                                            TCGReg addrlo)
>  {
> -    TCGMemOp bswap = opc & MO_BSWAP;
> +    MemOp bswap = opc & MO_BSWAP;
> 
>      switch (opc & MO_SSIZE) {
>      case MO_UB:
> @@ -1577,7 +1577,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
>  {
>      TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #ifdef CONFIG_SOFTMMU
>      int mem_index;
>      TCGReg addend;
> @@ -1614,11 +1614,11 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
>  #endif
>  }
> 
> -static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
> +static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, MemOp opc,
>                                           TCGReg datalo, TCGReg datahi,
>                                           TCGReg addrlo, TCGReg addend)
>  {
> -    TCGMemOp bswap = opc & MO_BSWAP;
> +    MemOp bswap = opc & MO_BSWAP;
> 
>      switch (opc & MO_SIZE) {
>      case MO_8:
> @@ -1659,11 +1659,11 @@ static inline void tcg_out_qemu_st_index(TCGContext *s, int cond, TCGMemOp opc,
>      }
>  }
> 
> -static inline void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc,
> +static inline void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc,
>                                            TCGReg datalo, TCGReg datahi,
>                                            TCGReg addrlo)
>  {
> -    TCGMemOp bswap = opc & MO_BSWAP;
> +    MemOp bswap = opc & MO_BSWAP;
> 
>      switch (opc & MO_SIZE) {
>      case MO_8:
> @@ -1708,7 +1708,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
>  {
>      TCGReg addrlo, datalo, datahi, addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #ifdef CONFIG_SOFTMMU
>      int mem_index;
>      TCGReg addend;
> diff --git a/tcg/i386/tcg-target.inc.c b/tcg/i386/tcg-target.inc.c
> index 6ddeebf..9d8ed97 100644
> --- a/tcg/i386/tcg-target.inc.c
> +++ b/tcg/i386/tcg-target.inc.c
> @@ -1697,7 +1697,7 @@ static void * const qemu_st_helpers[16] = {
>     First argument register is clobbered.  */
> 
>  static inline void tcg_out_tlb_load(TCGContext *s, TCGReg addrlo, TCGReg addrhi,
> -                                    int mem_index, TCGMemOp opc,
> +                                    int mem_index, MemOp opc,
>                                      tcg_insn_unit **label_ptr, int which)
>  {
>      const TCGReg r0 = TCG_REG_L0;
> @@ -1810,7 +1810,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, bool is_64,
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      TCGReg data_reg;
>      tcg_insn_unit **label_ptr = &l->label_ptr[0];
>      int rexw = (l->type == TCG_TYPE_I64 ? P_REXW : 0);
> @@ -1895,8 +1895,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp s_bits = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp s_bits = opc & MO_SIZE;
>      tcg_insn_unit **label_ptr = &l->label_ptr[0];
>      TCGReg retaddr;
> 
> @@ -1995,10 +1995,10 @@ static inline int setup_guest_base_seg(void)
> 
>  static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
>                                     TCGReg base, int index, intptr_t ofs,
> -                                   int seg, bool is64, TCGMemOp memop)
> +                                   int seg, bool is64, MemOp memop)
>  {
> -    const TCGMemOp real_bswap = memop & MO_BSWAP;
> -    TCGMemOp bswap = real_bswap;
> +    const MemOp real_bswap = memop & MO_BSWAP;
> +    MemOp bswap = real_bswap;
>      int rexw = is64 * P_REXW;
>      int movop = OPC_MOVL_GvEv;
> 
> @@ -2103,7 +2103,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
>      TCGReg datalo, datahi, addrlo;
>      TCGReg addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      int mem_index;
>      tcg_insn_unit *label_ptr[2];
> @@ -2137,15 +2137,15 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is64)
> 
>  static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg datalo, TCGReg datahi,
>                                     TCGReg base, int index, intptr_t ofs,
> -                                   int seg, TCGMemOp memop)
> +                                   int seg, MemOp memop)
>  {
>      /* ??? Ideally we wouldn't need a scratch register.  For user-only,
>         we could perform the bswap twice to restore the original value
>         instead of moving to the scratch.  But as it is, the L constraint
>         means that TCG_REG_L0 is definitely free here.  */
>      const TCGReg scratch = TCG_REG_L0;
> -    const TCGMemOp real_bswap = memop & MO_BSWAP;
> -    TCGMemOp bswap = real_bswap;
> +    const MemOp real_bswap = memop & MO_BSWAP;
> +    MemOp bswap = real_bswap;
>      int movop = OPC_MOVL_EvGv;
> 
>      if (have_movbe && real_bswap) {
> @@ -2221,7 +2221,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is64)
>      TCGReg datalo, datahi, addrlo;
>      TCGReg addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      int mem_index;
>      tcg_insn_unit *label_ptr[2];
> diff --git a/tcg/mips/tcg-target.inc.c b/tcg/mips/tcg-target.inc.c
> index 41bff32..5442167 100644
> --- a/tcg/mips/tcg-target.inc.c
> +++ b/tcg/mips/tcg-target.inc.c
> @@ -1215,7 +1215,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg base, TCGReg addrl,
>                               TCGReg addrh, TCGMemOpIdx oi,
>                               tcg_insn_unit *label_ptr[2], bool is_load)
>  {
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      unsigned s_bits = opc & MO_SIZE;
>      unsigned a_bits = get_alignment_bits(opc);
>      int mem_index = get_mmuidx(oi);
> @@ -1313,7 +1313,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      TCGReg v0;
>      int i;
> 
> @@ -1363,8 +1363,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp s_bits = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp s_bits = opc & MO_SIZE;
>      int i;
> 
>      /* resolve label address */
> @@ -1413,7 +1413,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  #endif
> 
>  static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
> -                                   TCGReg base, TCGMemOp opc, bool is_64)
> +                                   TCGReg base, MemOp opc, bool is_64)
>  {
>      switch (opc & (MO_SSIZE | MO_BSWAP)) {
>      case MO_UB:
> @@ -1521,7 +1521,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg addr_regl, addr_regh __attribute__((unused));
>      TCGReg data_regl, data_regh;
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      tcg_insn_unit *label_ptr[2];
>  #endif
> @@ -1558,7 +1558,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>  }
> 
>  static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
> -                                   TCGReg base, TCGMemOp opc)
> +                                   TCGReg base, MemOp opc)
>  {
>      /* Don't clutter the code below with checks to avoid bswapping ZERO.  */
>      if ((lo | hi) == 0) {
> @@ -1624,7 +1624,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg addr_regl, addr_regh __attribute__((unused));
>      TCGReg data_regl, data_regh;
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      tcg_insn_unit *label_ptr[2];
>  #endif
> diff --git a/tcg/optimize.c b/tcg/optimize.c
> index d2424de..a89ffda 100644
> --- a/tcg/optimize.c
> +++ b/tcg/optimize.c
> @@ -1014,7 +1014,7 @@ void tcg_optimize(TCGContext *s)
>          CASE_OP_32_64(qemu_ld):
>              {
>                  TCGMemOpIdx oi = op->args[nb_oargs + nb_iargs];
> -                TCGMemOp mop = get_memop(oi);
> +                MemOp mop = get_memop(oi);
>                  if (!(mop & MO_SIGN)) {
>                      mask = (2ULL << ((8 << (mop & MO_SIZE)) - 1)) - 1;
>                  }
> diff --git a/tcg/ppc/tcg-target.inc.c b/tcg/ppc/tcg-target.inc.c
> index 852b894..815edac 100644
> --- a/tcg/ppc/tcg-target.inc.c
> +++ b/tcg/ppc/tcg-target.inc.c
> @@ -1506,7 +1506,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -32768);
>     in CR7, loads the addend of the TLB into R3, and returns the register
>     containing the guest address (zero-extended into R4).  Clobbers R0 and R2. */
> 
> -static TCGReg tcg_out_tlb_read(TCGContext *s, TCGMemOp opc,
> +static TCGReg tcg_out_tlb_read(TCGContext *s, MemOp opc,
>                                 TCGReg addrlo, TCGReg addrhi,
>                                 int mem_index, bool is_read)
>  {
> @@ -1633,7 +1633,7 @@ static void add_qemu_ldst_label(TCGContext *s, bool is_ld, TCGMemOpIdx oi,
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      TCGReg hi, lo, arg = TCG_REG_R3;
> 
>      if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
> @@ -1680,8 +1680,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>  {
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp s_bits = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp s_bits = opc & MO_SIZE;
>      TCGReg hi, lo, arg = TCG_REG_R3;
> 
>      if (!reloc_pc14(lb->label_ptr[0], s->code_ptr)) {
> @@ -1744,7 +1744,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg datalo, datahi, addrlo, rbase;
>      TCGReg addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc, s_bits;
> +    MemOp opc, s_bits;
>  #ifdef CONFIG_SOFTMMU
>      int mem_index;
>      tcg_insn_unit *label_ptr;
> @@ -1819,7 +1819,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg datalo, datahi, addrlo, rbase;
>      TCGReg addrhi __attribute__((unused));
>      TCGMemOpIdx oi;
> -    TCGMemOp opc, s_bits;
> +    MemOp opc, s_bits;
>  #ifdef CONFIG_SOFTMMU
>      int mem_index;
>      tcg_insn_unit *label_ptr;
> diff --git a/tcg/riscv/tcg-target.inc.c b/tcg/riscv/tcg-target.inc.c
> index 3e76bf5..7018509 100644
> --- a/tcg/riscv/tcg-target.inc.c
> +++ b/tcg/riscv/tcg-target.inc.c
> @@ -970,7 +970,7 @@ static void tcg_out_tlb_load(TCGContext *s, TCGReg addrl,
>                               TCGReg addrh, TCGMemOpIdx oi,
>                               tcg_insn_unit **label_ptr, bool is_load)
>  {
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      unsigned s_bits = opc & MO_SIZE;
>      unsigned a_bits = get_alignment_bits(opc);
>      tcg_target_long compare_mask;
> @@ -1044,7 +1044,7 @@ static void add_qemu_ldst_label(TCGContext *s, int is_ld, TCGMemOpIdx oi,
>  static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>      TCGReg a0 = tcg_target_call_iarg_regs[0];
>      TCGReg a1 = tcg_target_call_iarg_regs[1];
>      TCGReg a2 = tcg_target_call_iarg_regs[2];
> @@ -1077,8 +1077,8 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  {
>      TCGMemOpIdx oi = l->oi;
> -    TCGMemOp opc = get_memop(oi);
> -    TCGMemOp s_bits = opc & MO_SIZE;
> +    MemOp opc = get_memop(oi);
> +    MemOp s_bits = opc & MO_SIZE;
>      TCGReg a0 = tcg_target_call_iarg_regs[0];
>      TCGReg a1 = tcg_target_call_iarg_regs[1];
>      TCGReg a2 = tcg_target_call_iarg_regs[2];
> @@ -1121,9 +1121,9 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l)
>  #endif /* CONFIG_SOFTMMU */
> 
>  static void tcg_out_qemu_ld_direct(TCGContext *s, TCGReg lo, TCGReg hi,
> -                                   TCGReg base, TCGMemOp opc, bool is_64)
> +                                   TCGReg base, MemOp opc, bool is_64)
>  {
> -    const TCGMemOp bswap = opc & MO_BSWAP;
> +    const MemOp bswap = opc & MO_BSWAP;
> 
>      /* We don't yet handle byteswapping, assert */
>      g_assert(!bswap);
> @@ -1172,7 +1172,7 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg addr_regl, addr_regh __attribute__((unused));
>      TCGReg data_regl, data_regh;
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      tcg_insn_unit *label_ptr[1];
>  #endif
> @@ -1208,9 +1208,9 @@ static void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, bool is_64)
>  }
> 
>  static void tcg_out_qemu_st_direct(TCGContext *s, TCGReg lo, TCGReg hi,
> -                                   TCGReg base, TCGMemOp opc)
> +                                   TCGReg base, MemOp opc)
>  {
> -    const TCGMemOp bswap = opc & MO_BSWAP;
> +    const MemOp bswap = opc & MO_BSWAP;
> 
>      /* We don't yet handle byteswapping, assert */
>      g_assert(!bswap);
> @@ -1243,7 +1243,7 @@ static void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, bool is_64)
>      TCGReg addr_regl, addr_regh __attribute__((unused));
>      TCGReg data_regl, data_regh;
>      TCGMemOpIdx oi;
> -    TCGMemOp opc;
> +    MemOp opc;
>  #if defined(CONFIG_SOFTMMU)
>      tcg_insn_unit *label_ptr[1];
>  #endif
> diff --git a/tcg/s390/tcg-target.inc.c b/tcg/s390/tcg-target.inc.c
> index fe42939..8aaa4ce 100644
> --- a/tcg/s390/tcg-target.inc.c
> +++ b/tcg/s390/tcg-target.inc.c
> @@ -1430,7 +1430,7 @@ static void tcg_out_call(TCGContext *s, tcg_insn_unit *dest)
>      }
>  }
> 
> -static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
> +static void tcg_out_qemu_ld_direct(TCGContext *s, MemOp opc, TCGReg data,
>                                     TCGReg base, TCGReg index, int disp)
>  {
>      switch (opc & (MO_SSIZE | MO_BSWAP)) {
> @@ -1489,7 +1489,7 @@ static void tcg_out_qemu_ld_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
>      }
>  }
> 
> -static void tcg_out_qemu_st_direct(TCGContext *s, TCGMemOp opc, TCGReg data,
> +static void tcg_out_qemu_st_direct(TCGContext *s, MemOp opc, TCGReg data,
>                                     TCGReg base, TCGReg index, int disp)
>  {
>      switch (opc & (MO_SIZE | MO_BSWAP)) {
> @@ -1544,7 +1544,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 19));
> 
>  /* Load and compare a TLB entry, leaving the flags set.  Loads the TLB
>     addend into R2.  Returns a register with the sanitized guest address.  */
> -static TCGReg tcg_out_tlb_read(TCGContext* s, TCGReg addr_reg, TCGMemOp opc,
> +static TCGReg tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg, MemOp opc,
>                                 int mem_index, bool is_ld)
>  {
>      unsigned s_bits = opc & MO_SIZE;
> @@ -1614,7 +1614,7 @@ static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>      TCGReg addr_reg = lb->addrlo_reg;
>      TCGReg data_reg = lb->datalo_reg;
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
> 
>      if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
>                       (intptr_t)s->code_ptr, 2)) {
> @@ -1639,7 +1639,7 @@ static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *lb)
>      TCGReg addr_reg = lb->addrlo_reg;
>      TCGReg data_reg = lb->datalo_reg;
>      TCGMemOpIdx oi = lb->oi;
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
> 
>      if (!patch_reloc(lb->label_ptr[0], R_390_PC16DBL,
>                       (intptr_t)s->code_ptr, 2)) {
> @@ -1694,7 +1694,7 @@ static void tcg_prepare_user_ldst(TCGContext *s, TCGReg *addr_reg,
>  static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
>                              TCGMemOpIdx oi)
>  {
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>  #ifdef CONFIG_SOFTMMU
>      unsigned mem_index = get_mmuidx(oi);
>      tcg_insn_unit *label_ptr;
> @@ -1721,7 +1721,7 @@ static void tcg_out_qemu_ld(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
>  static void tcg_out_qemu_st(TCGContext* s, TCGReg data_reg, TCGReg addr_reg,
>                              TCGMemOpIdx oi)
>  {
> -    TCGMemOp opc = get_memop(oi);
> +    MemOp opc = get_memop(oi);
>  #ifdef CONFIG_SOFTMMU
>      unsigned mem_index = get_mmuidx(oi);
>      tcg_insn_unit *label_ptr;
> diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
> index 10b1cea..d7986cd 100644
> --- a/tcg/sparc/tcg-target.inc.c
> +++ b/tcg/sparc/tcg-target.inc.c
> @@ -1081,7 +1081,7 @@ QEMU_BUILD_BUG_ON(TLB_MASK_TABLE_OFS(0) < -(1 << 12));
>     is in the returned register, maybe %o0.  The TLB addend is in %o1.  */
> 
>  static TCGReg tcg_out_tlb_load(TCGContext *s, TCGReg addr, int mem_index,
> -                               TCGMemOp opc, int which)
> +                               MemOp opc, int which)
>  {
>      int fast_off = TLB_MASK_TABLE_OFS(mem_index);
>      int mask_off = fast_off + offsetof(CPUTLBDescFast, mask);
> @@ -1164,7 +1164,7 @@ static const int qemu_st_opc[16] = {
>  static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
>                              TCGMemOpIdx oi, bool is_64)
>  {
> -    TCGMemOp memop = get_memop(oi);
> +    MemOp memop = get_memop(oi);
>  #ifdef CONFIG_SOFTMMU
>      unsigned memi = get_mmuidx(oi);
>      TCGReg addrz, param;
> @@ -1246,7 +1246,7 @@ static void tcg_out_qemu_ld(TCGContext *s, TCGReg data, TCGReg addr,
>  static void tcg_out_qemu_st(TCGContext *s, TCGReg data, TCGReg addr,
>                              TCGMemOpIdx oi)
>  {
> -    TCGMemOp memop = get_memop(oi);
> +    MemOp memop = get_memop(oi);
>  #ifdef CONFIG_SOFTMMU
>      unsigned memi = get_mmuidx(oi);
>      TCGReg addrz, param;
> diff --git a/tcg/tcg-op.c b/tcg/tcg-op.c
> index 587d092..e87c327 100644
> --- a/tcg/tcg-op.c
> +++ b/tcg/tcg-op.c
> @@ -2714,7 +2714,7 @@ void tcg_gen_lookup_and_goto_ptr(void)
>      }
>  }
> 
> -static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
> +static inline MemOp tcg_canonicalize_memop(MemOp op, bool is64, bool st)
>  {
>      /* Trigger the asserts within as early as possible.  */
>      (void)get_alignment_bits(op);
> @@ -2743,7 +2743,7 @@ static inline TCGMemOp tcg_canonicalize_memop(TCGMemOp op, bool is64, bool st)
>  }
> 
>  static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
> -                         TCGMemOp memop, TCGArg idx)
> +                         MemOp memop, TCGArg idx)
>  {
>      TCGMemOpIdx oi = make_memop_idx(memop, idx);
>  #if TARGET_LONG_BITS == 32
> @@ -2758,7 +2758,7 @@ static void gen_ldst_i32(TCGOpcode opc, TCGv_i32 val, TCGv addr,
>  }
> 
>  static void gen_ldst_i64(TCGOpcode opc, TCGv_i64 val, TCGv addr,
> -                         TCGMemOp memop, TCGArg idx)
> +                         MemOp memop, TCGArg idx)
>  {
>      TCGMemOpIdx oi = make_memop_idx(memop, idx);
>  #if TARGET_LONG_BITS == 32
> @@ -2788,9 +2788,9 @@ static void tcg_gen_req_mo(TCGBar type)
>      }
>  }
> 
> -void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
> +void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
>  {
> -    TCGMemOp orig_memop;
> +    MemOp orig_memop;
> 
>      tcg_gen_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
>      memop = tcg_canonicalize_memop(memop, 0, 0);
> @@ -2825,7 +2825,7 @@ void tcg_gen_qemu_ld_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
>      }
>  }
> 
> -void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
> +void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, MemOp memop)
>  {
>      TCGv_i32 swap = NULL;
> 
> @@ -2858,9 +2858,9 @@ void tcg_gen_qemu_st_i32(TCGv_i32 val, TCGv addr, TCGArg idx, TCGMemOp memop)
>      }
>  }
> 
> -void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
> +void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
>  {
> -    TCGMemOp orig_memop;
> +    MemOp orig_memop;
> 
>      if (TCG_TARGET_REG_BITS == 32 && (memop & MO_SIZE) < MO_64) {
>          tcg_gen_qemu_ld_i32(TCGV_LOW(val), addr, idx, memop);
> @@ -2911,7 +2911,7 @@ void tcg_gen_qemu_ld_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
>      }
>  }
> 
> -void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
> +void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, MemOp memop)
>  {
>      TCGv_i64 swap = NULL;
> 
> @@ -2953,7 +2953,7 @@ void tcg_gen_qemu_st_i64(TCGv_i64 val, TCGv addr, TCGArg idx, TCGMemOp memop)
>      }
>  }
> 
> -static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
> +static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, MemOp opc)
>  {
>      switch (opc & MO_SSIZE) {
>      case MO_SB:
> @@ -2974,7 +2974,7 @@ static void tcg_gen_ext_i32(TCGv_i32 ret, TCGv_i32 val, TCGMemOp opc)
>      }
>  }
> 
> -static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, TCGMemOp opc)
> +static void tcg_gen_ext_i64(TCGv_i64 ret, TCGv_i64 val, MemOp opc)
>  {
>      switch (opc & MO_SSIZE) {
>      case MO_SB:
> @@ -3034,7 +3034,7 @@ static void * const table_cmpxchg[16] = {
>  };
> 
>  void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
> -                                TCGv_i32 newv, TCGArg idx, TCGMemOp memop)
> +                                TCGv_i32 newv, TCGArg idx, MemOp memop)
>  {
>      memop = tcg_canonicalize_memop(memop, 0, 0);
> 
> @@ -3078,7 +3078,7 @@ void tcg_gen_atomic_cmpxchg_i32(TCGv_i32 retv, TCGv addr, TCGv_i32 cmpv,
>  }
> 
>  void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
> -                                TCGv_i64 newv, TCGArg idx, TCGMemOp memop)
> +                                TCGv_i64 newv, TCGArg idx, MemOp memop)
>  {
>      memop = tcg_canonicalize_memop(memop, 1, 0);
> 
> @@ -3142,7 +3142,7 @@ void tcg_gen_atomic_cmpxchg_i64(TCGv_i64 retv, TCGv addr, TCGv_i64 cmpv,
>  }
> 
>  static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
> -                                TCGArg idx, TCGMemOp memop, bool new_val,
> +                                TCGArg idx, MemOp memop, bool new_val,
>                                  void (*gen)(TCGv_i32, TCGv_i32, TCGv_i32))
>  {
>      TCGv_i32 t1 = tcg_temp_new_i32();
> @@ -3160,7 +3160,7 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
>  }
> 
>  static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
> -                             TCGArg idx, TCGMemOp memop, void * const table[])
> +                             TCGArg idx, MemOp memop, void * const table[])
>  {
>      gen_atomic_op_i32 gen;
> 
> @@ -3185,7 +3185,7 @@ static void do_atomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
>  }
> 
>  static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
> -                                TCGArg idx, TCGMemOp memop, bool new_val,
> +                                TCGArg idx, MemOp memop, bool new_val,
>                                  void (*gen)(TCGv_i64, TCGv_i64, TCGv_i64))
>  {
>      TCGv_i64 t1 = tcg_temp_new_i64();
> @@ -3203,7 +3203,7 @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
>  }
> 
>  static void do_atomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
> -                             TCGArg idx, TCGMemOp memop, void * const table[])
> +                             TCGArg idx, MemOp memop, void * const table[])
>  {
>      memop = tcg_canonicalize_memop(memop, 1, 0);
> 
> @@ -3257,7 +3257,7 @@ static void * const table_##NAME[16] = {                                \
>      WITH_ATOMIC64([MO_64 | MO_BE] = gen_helper_atomic_##NAME##q_be)     \
>  };                                                                      \
>  void tcg_gen_atomic_##NAME##_i32                                        \
> -    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, TCGMemOp memop) \
> +    (TCGv_i32 ret, TCGv addr, TCGv_i32 val, TCGArg idx, MemOp memop)    \
>  {                                                                       \
>      if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
>          do_atomic_op_i32(ret, addr, val, idx, memop, table_##NAME);     \
> @@ -3267,7 +3267,7 @@ void tcg_gen_atomic_##NAME##_i32                                        \
>      }                                                                   \
>  }                                                                       \
>  void tcg_gen_atomic_##NAME##_i64                                        \
> -    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, TCGMemOp memop) \
> +    (TCGv_i64 ret, TCGv addr, TCGv_i64 val, TCGArg idx, MemOp memop)    \
>  {                                                                       \
>      if (tcg_ctx->tb_cflags & CF_PARALLEL) {                             \
>          do_atomic_op_i64(ret, addr, val, idx, memop, table_##NAME);     \
> diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h
> index 2d4dd5c..e9cf172 100644
> --- a/tcg/tcg-op.h
> +++ b/tcg/tcg-op.h
> @@ -851,10 +851,10 @@ void tcg_gen_lookup_and_goto_ptr(void);
>  #define tcg_gen_qemu_st_tl tcg_gen_qemu_st_i64
>  #endif
> 
> -void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
> -void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, TCGMemOp);
> -void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
> -void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, TCGMemOp);
> +void tcg_gen_qemu_ld_i32(TCGv_i32, TCGv, TCGArg, MemOp);
> +void tcg_gen_qemu_st_i32(TCGv_i32, TCGv, TCGArg, MemOp);
> +void tcg_gen_qemu_ld_i64(TCGv_i64, TCGv, TCGArg, MemOp);
> +void tcg_gen_qemu_st_i64(TCGv_i64, TCGv, TCGArg, MemOp);
> 
>  static inline void tcg_gen_qemu_ld8u(TCGv ret, TCGv addr, int mem_index)
>  {
> @@ -912,46 +912,46 @@ static inline void tcg_gen_qemu_st64(TCGv_i64 arg, TCGv addr, int mem_index)
>  }
> 
>  void tcg_gen_atomic_cmpxchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGv_i32,
> -                                TCGArg, TCGMemOp);
> +                                TCGArg, MemOp);
>  void tcg_gen_atomic_cmpxchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGv_i64,
> -                                TCGArg, TCGMemOp);
> -
> -void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -
> -void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -
> -void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, TCGMemOp);
> -void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, TCGMemOp);
> +                                TCGArg, MemOp);
> +
> +void tcg_gen_atomic_xchg_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_xchg_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +
> +void tcg_gen_atomic_fetch_add_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_add_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_and_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_and_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_or_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_or_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_xor_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_xor_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_smin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_smin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_umin_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_umin_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_smax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_smax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_umax_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_fetch_umax_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +
> +void tcg_gen_atomic_add_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_add_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_and_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_and_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_or_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_or_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_xor_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_xor_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_smin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_smin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_umin_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_umin_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_smax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_smax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> +void tcg_gen_atomic_umax_fetch_i32(TCGv_i32, TCGv, TCGv_i32, TCGArg, MemOp);
> +void tcg_gen_atomic_umax_fetch_i64(TCGv_i64, TCGv, TCGv_i64, TCGArg, MemOp);
> 
>  void tcg_gen_mov_vec(TCGv_vec, TCGv_vec);
>  void tcg_gen_dup_i32_vec(unsigned vece, TCGv_vec, TCGv_i32);
> diff --git a/tcg/tcg.c b/tcg/tcg.c
> index be2c33c..aa9931f 100644
> --- a/tcg/tcg.c
> +++ b/tcg/tcg.c
> @@ -2056,7 +2056,7 @@ static void tcg_dump_ops(TCGContext *s, bool have_prefs)
>              case INDEX_op_qemu_st_i64:
>                  {
>                      TCGMemOpIdx oi = op->args[k++];
> -                    TCGMemOp op = get_memop(oi);
> +                    MemOp op = get_memop(oi);
>                      unsigned ix = get_mmuidx(oi);
> 
>                      if (op & ~(MO_AMASK | MO_BSWAP | MO_SSIZE)) {
> diff --git a/tcg/tcg.h b/tcg/tcg.h
> index b411e17..a37181c 100644
> --- a/tcg/tcg.h
> +++ b/tcg/tcg.h
> @@ -26,6 +26,7 @@
>  #define TCG_H
> 
>  #include "cpu.h"
> +#include "exec/memop.h"
>  #include "exec/tb-context.h"
>  #include "qemu/bitops.h"
>  #include "qemu/queue.h"
> @@ -309,101 +310,13 @@ typedef enum TCGType {
>  #endif
>  } TCGType;
> 
> -/* Constants for qemu_ld and qemu_st for the Memory Operation field.  */
> -typedef enum TCGMemOp {
> -    MO_8     = 0,
> -    MO_16    = 1,
> -    MO_32    = 2,
> -    MO_64    = 3,
> -    MO_SIZE  = 3,   /* Mask for the above.  */
> -
> -    MO_SIGN  = 4,   /* Sign-extended, otherwise zero-extended.  */
> -
> -    MO_BSWAP = 8,   /* Host reverse endian.  */
> -#ifdef HOST_WORDS_BIGENDIAN
> -    MO_LE    = MO_BSWAP,
> -    MO_BE    = 0,
> -#else
> -    MO_LE    = 0,
> -    MO_BE    = MO_BSWAP,
> -#endif
> -#ifdef TARGET_WORDS_BIGENDIAN
> -    MO_TE    = MO_BE,
> -#else
> -    MO_TE    = MO_LE,
> -#endif
> -
> -    /* MO_UNALN accesses are never checked for alignment.
> -     * MO_ALIGN accesses will result in a call to the CPU's
> -     * do_unaligned_access hook if the guest address is not aligned.
> -     * The default depends on whether the target CPU defines ALIGNED_ONLY.
> -     *
> -     * Some architectures (e.g. ARMv8) need the address which is aligned
> -     * to a size more than the size of the memory access.
> -     * Some architectures (e.g. SPARCv9) need an address which is aligned,
> -     * but less strictly than the natural alignment.
> -     *
> -     * MO_ALIGN supposes the alignment size is the size of a memory access.
> -     *
> -     * There are three options:
> -     * - unaligned access permitted (MO_UNALN).
> -     * - an alignment to the size of an access (MO_ALIGN);
> -     * - an alignment to a specified size, which may be more or less than
> -     *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
> -     */
> -    MO_ASHIFT = 4,
> -    MO_AMASK = 7 << MO_ASHIFT,
> -#ifdef ALIGNED_ONLY
> -    MO_ALIGN = 0,
> -    MO_UNALN = MO_AMASK,
> -#else
> -    MO_ALIGN = MO_AMASK,
> -    MO_UNALN = 0,
> -#endif
> -    MO_ALIGN_2  = 1 << MO_ASHIFT,
> -    MO_ALIGN_4  = 2 << MO_ASHIFT,
> -    MO_ALIGN_8  = 3 << MO_ASHIFT,
> -    MO_ALIGN_16 = 4 << MO_ASHIFT,
> -    MO_ALIGN_32 = 5 << MO_ASHIFT,
> -    MO_ALIGN_64 = 6 << MO_ASHIFT,
> -
> -    /* Combinations of the above, for ease of use.  */
> -    MO_UB    = MO_8,
> -    MO_UW    = MO_16,
> -    MO_UL    = MO_32,
> -    MO_SB    = MO_SIGN | MO_8,
> -    MO_SW    = MO_SIGN | MO_16,
> -    MO_SL    = MO_SIGN | MO_32,
> -    MO_Q     = MO_64,
> -
> -    MO_LEUW  = MO_LE | MO_UW,
> -    MO_LEUL  = MO_LE | MO_UL,
> -    MO_LESW  = MO_LE | MO_SW,
> -    MO_LESL  = MO_LE | MO_SL,
> -    MO_LEQ   = MO_LE | MO_Q,
> -
> -    MO_BEUW  = MO_BE | MO_UW,
> -    MO_BEUL  = MO_BE | MO_UL,
> -    MO_BESW  = MO_BE | MO_SW,
> -    MO_BESL  = MO_BE | MO_SL,
> -    MO_BEQ   = MO_BE | MO_Q,
> -
> -    MO_TEUW  = MO_TE | MO_UW,
> -    MO_TEUL  = MO_TE | MO_UL,
> -    MO_TESW  = MO_TE | MO_SW,
> -    MO_TESL  = MO_TE | MO_SL,
> -    MO_TEQ   = MO_TE | MO_Q,
> -
> -    MO_SSIZE = MO_SIZE | MO_SIGN,
> -} TCGMemOp;
> -
>  /**
>   * get_alignment_bits
> - * @memop: TCGMemOp value
> + * @memop: MemOp value
>   *
>   * Extract the alignment size from the memop.
>   */
> -static inline unsigned get_alignment_bits(TCGMemOp memop)
> +static inline unsigned get_alignment_bits(MemOp memop)
>  {
>      unsigned a = memop & MO_AMASK;
> 
> @@ -1184,7 +1097,7 @@ static inline size_t tcg_current_code_size(TCGContext *s)
>      return tcg_ptr_byte_diff(s->code_ptr, s->code_buf);
>  }
> 
> -/* Combine the TCGMemOp and mmu_idx parameters into a single value.  */
> +/* Combine the MemOp and mmu_idx parameters into a single value.  */
>  typedef uint32_t TCGMemOpIdx;
> 
>  /**
> @@ -1194,7 +1107,7 @@ typedef uint32_t TCGMemOpIdx;
>   *
>   * Encode these values into a single parameter.
>   */
> -static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
> +static inline TCGMemOpIdx make_memop_idx(MemOp op, unsigned idx)
>  {
>      tcg_debug_assert(idx <= 15);
>      return (op << 4) | idx;
> @@ -1206,7 +1119,7 @@ static inline TCGMemOpIdx make_memop_idx(TCGMemOp op, unsigned idx)
>   *
>   * Extract the memory operation from the combined value.
>   */
> -static inline TCGMemOp get_memop(TCGMemOpIdx oi)
> +static inline MemOp get_memop(TCGMemOpIdx oi)
>  {
>      return oi >> 4;
>  }
> diff --git a/trace/mem-internal.h b/trace/mem-internal.h
> index f6efaf6..3444fbc 100644
> --- a/trace/mem-internal.h
> +++ b/trace/mem-internal.h
> @@ -16,7 +16,7 @@
>  #define TRACE_MEM_ST (1ULL << 5)    /* store (y/n) */
> 
>  static inline uint8_t trace_mem_build_info(
> -    int size_shift, bool sign_extend, TCGMemOp endianness, bool store)
> +    int size_shift, bool sign_extend, MemOp endianness, bool store)
>  {
>      uint8_t res;
> 
> @@ -33,7 +33,7 @@ static inline uint8_t trace_mem_build_info(
>      return res;
>  }
> 
> -static inline uint8_t trace_mem_get_info(TCGMemOp op, bool store)
> +static inline uint8_t trace_mem_get_info(MemOp op, bool store)
>  {
>      return trace_mem_build_info(op & MO_SIZE, !!(op & MO_SIGN),
>                                  op & MO_BSWAP, store);
> diff --git a/trace/mem.h b/trace/mem.h
> index 2b58196..8cf213d 100644
> --- a/trace/mem.h
> +++ b/trace/mem.h
> @@ -18,7 +18,7 @@
>   *
>   * Return a value for the 'info' argument in guest memory access traces.
>   */
> -static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
> +static uint8_t trace_mem_get_info(MemOp op, bool store);
> 
>  /**
>   * trace_mem_build_info:
> @@ -26,7 +26,7 @@ static uint8_t trace_mem_get_info(TCGMemOp op, bool store);
>   * Return a value for the 'info' argument in guest memory access traces.
>   */
>  static uint8_t trace_mem_build_info(int size_shift, bool sign_extend,
> -                                    TCGMemOp endianness, bool store);
> +                                    MemOp endianness, bool store);
> 
> 
>  #include "trace/mem-internal.h"

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 78+ messages in thread
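
To make the rename concrete: as the tcg/README hunk above says, a MemOp
operand carries the sign, width, and endianness of an access in a single
value. A minimal sketch using only the MO_* constants and helpers shown in
the tcg.h hunk (a fragment, not a standalone program; dest, addr and
mem_idx are assumed to come from the surrounding translator code):

    /* Aligned, sign-extended, target-endian 16-bit load operand. */
    MemOp op = MO_TESW | MO_ALIGN;

    unsigned size = 1 << (op & MO_SIZE);    /* access width in bytes: here 2 */
    bool sign     = (op & MO_SIGN) != 0;    /* result is sign-extended */
    bool bswap    = (op & MO_BSWAP) != 0;   /* host byte order must be reversed */

    /* Round-trip through the combined MemOp+mmu_idx value. */
    TCGMemOpIdx oi  = make_memop_idx(op, mem_idx);  /* (op << 4) | idx */
    MemOp      back = get_memop(oi);                /* oi >> 4 recovers op */

    tcg_gen_qemu_ld_i32(dest, addr, mem_idx, op);   /* front ends now pass MemOp */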

* Re: [Qemu-devel] [PATCH v5 11/15] memory: Single byte swap along the I/O path
  2019-07-26  6:47   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  9:26     ` Paolo Bonzini
  -1 siblings, 0 replies; 78+ messages in thread
From: Paolo Bonzini @ 2019-07-26  9:26 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, aurelien

On 26/07/19 08:47, tony.nguyen@bt.com wrote:
> +        op = SIZE_MEMOP(size);
> +        if (need_bswap(big_endian)) {
> +            op ^= MO_BSWAP;
> +        }

And this has the same issue as the first version.  It should be

	op = SIZE_MEMOP(size) | (big_endian ? MO_BE : MO_LE);

and everything should work.  If it doesn't (and indeed it doesn't :)) it
means you have bugs somewhere else.
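
As a concrete sketch of that composition at the load_helper call site
(SIZE_MEMOP, big_endian and io_readx are the names used elsewhere in this
series; illustrative only, not the final patch):

	/* Encode size and explicit endianness once, up front. */
	op = SIZE_MEMOP(size) | (big_endian ? MO_BE : MO_LE);
	res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
	               mmu_idx, addr, retaddr, access_type, op);

The endianness is then encoded exactly once, in op, and no separate
byte swap of the returned value is needed.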

Paolo


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 11/15] memory: Single byte swap along the I/O path
  2019-07-26  6:47   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26  9:39     ` Paolo Bonzini
  -1 siblings, 0 replies; 78+ messages in thread
From: Paolo Bonzini @ 2019-07-26  9:39 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, aurelien

On 26/07/19 08:47, tony.nguyen@bt.com wrote:
> +static bool memory_region_endianness_inverted(MemoryRegion *mr)
>  {
>  #ifdef TARGET_WORDS_BIGENDIAN
>      return mr->ops->endianness == DEVICE_LITTLE_ENDIAN;
> @@ -361,23 +361,27 @@ static bool
> memory_region_wrong_endianness(MemoryRegion *mr)
>  #endif
>  }
>  
> -static void adjust_endianness(MemoryRegion *mr, uint64_t *data,
> unsigned size)
> +static void adjust_endianness(MemoryRegion *mr, uint64_t *data, MemOp op)
>  {
> -    if (memory_region_wrong_endianness(mr)) {
> -        switch (size) {
> -        case 1:
> +    if (memory_region_endianness_inverted(mr)) {
> +        op ^= MO_BSWAP;
> +    }

Here it should not matter: the caller of memory_region_dispatch_read
should include one of MO_TE/MO_LE/MO_BE in the op (or nothing for host
endianness).  Then memory_region_endianness_inverted can be:

  if (mr->ops->endianness == DEVICE_NATIVE_ENDIAN)
    return (op & MO_BSWAP) != MO_TE;
  else if (mr->ops->endianness == DEVICE_BIG_ENDIAN)
    return (op & MO_BSWAP) != MO_BE;
  else if (mr->ops->endianness == DEVICE_LITTLE_ENDIAN)
    return (op & MO_BSWAP) != MO_LE;

and adjust_endianness does

  if (memory_region_endianness_inverted(mr, op)) {
    switch (op & MO_SIZE) {
      ...
    }
  }
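
Pulled together, the predicate might look like the following sketch (the
extra MemOp argument follows the fragments above; the default case is an
assumption, not something spelled out in this thread):

  static bool memory_region_endianness_inverted(MemoryRegion *mr, MemOp op)
  {
      switch (mr->ops->endianness) {
      case DEVICE_NATIVE_ENDIAN:
          return (op & MO_BSWAP) != MO_TE;
      case DEVICE_BIG_ENDIAN:
          return (op & MO_BSWAP) != MO_BE;
      case DEVICE_LITTLE_ENDIAN:
          return (op & MO_BSWAP) != MO_LE;
      default:
          g_assert_not_reached();
      }
  }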

I think the changes should be split in two parts.  Before this patch,
you modify all callers of memory_region_dispatch_* so that they already
pass the right endianness op; however, you leave the existing swap in
place.  So for example in load_helper you'd have in a previous patch

+        /* FIXME: io_readx ignores MO_BSWAP.  */
+        op = SIZE_MEMOP(size) | (big_endian ? MO_BE : MO_LE);
         res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
-                       mmu_idx, addr, retaddr, access_type,
SIZE_MEMOP(size));
+                       mmu_idx, addr, retaddr, access_type, op);
         return handle_bswap(res, size, big_endian);

Then, in this patch, you remove the handle_bswap call as well as the
FIXME comment.

Paolo


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp
  2019-07-26  6:46   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 11:03     ` Philippe Mathieu-Daudé
  -1 siblings, 0 replies; 78+ messages in thread
From: Philippe Mathieu-Daudé @ 2019-07-26 11:03 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/26/19 8:46 AM, tony.nguyen@bt.com wrote:
> No-op MEMOP_SIZE and SIZE_MEMOP macros allows us to later easily
> convert memory_region_dispatch_{read|write} paramter "unsigned size"
> into a size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  accel/tcg/cputlb.c | 21 ++++++++++-----------
>  1 file changed, 10 insertions(+), 11 deletions(-)
> 
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 523be4c..5d88cec 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,
> 
>  static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                           int mmu_idx, target_ulong addr, uintptr_t retaddr,
> -                         MMUAccessType access_type, int size)
> +                         MMUAccessType access_type, MemOp op)
>  {
>      CPUState *cpu = env_cpu(env);
>      hwaddr mr_offset;
> @@ -906,14 +906,13 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>          qemu_mutex_lock_iothread();
>          locked = true;
>      }
> -    r = memory_region_dispatch_read(mr, mr_offset,
> -                                    &val, size, iotlbentry->attrs);
> +    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
>      if (r != MEMTX_OK) {
>          hwaddr physaddr = mr_offset +
>              section->offset_within_address_space -
>              section->offset_within_region;
> 
> -        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
> +        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
>                                 mmu_idx, iotlbentry->attrs, r, retaddr);
>      }
>      if (locked) {
> @@ -925,7 +924,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
> 
>  static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                        int mmu_idx, uint64_t val, target_ulong addr,
> -                      uintptr_t retaddr, int size)
> +                      uintptr_t retaddr, MemOp op)
>  {
>      CPUState *cpu = env_cpu(env);
>      hwaddr mr_offset;
> @@ -947,15 +946,15 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>          qemu_mutex_lock_iothread();
>          locked = true;
>      }
> -    r = memory_region_dispatch_write(mr, mr_offset,
> -                                     val, size, iotlbentry->attrs);
> +    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
>      if (r != MEMTX_OK) {
>          hwaddr physaddr = mr_offset +
>              section->offset_within_address_space -
>              section->offset_within_region;
> 
> -        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
> -                               mmu_idx, iotlbentry->attrs, r, retaddr);
> +        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op),
> +                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
> +                               retaddr);
>      }
>      if (locked) {
>          qemu_mutex_unlock_iothread();
> @@ -1306,7 +1305,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>          }
> 
>          res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
> -                       mmu_idx, addr, retaddr, access_type, size);
> +                       mmu_idx, addr, retaddr, access_type, SIZE_MEMOP(size));
>          return handle_bswap(res, size, big_endian);
>      }
> 
> @@ -1555,7 +1554,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
> 
>          io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
>                    handle_bswap(val, size, big_endian),
> -                  addr, retaddr, size);
> +                  addr, retaddr, SIZE_MEMOP(size));
>          return;
>      }
> 
> --
> 1.8.3.1
> 
> 
> 

Cleaner :)

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp
@ 2019-07-26 11:03     ` Philippe Mathieu-Daudé
  0 siblings, 0 replies; 78+ messages in thread
From: Philippe Mathieu-Daudé @ 2019-07-26 11:03 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/26/19 8:46 AM, tony.nguyen@bt.com wrote:
> No-op MEMOP_SIZE and SIZE_MEMOP macros allows us to later easily
> convert memory_region_dispatch_{read|write} paramter "unsigned size"
> into a size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  accel/tcg/cputlb.c | 21 ++++++++++-----------
>  1 file changed, 10 insertions(+), 11 deletions(-)
> 
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 523be4c..5d88cec 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,
> 
>  static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                           int mmu_idx, target_ulong addr, uintptr_t retaddr,
> -                         MMUAccessType access_type, int size)
> +                         MMUAccessType access_type, MemOp op)
>  {
>      CPUState *cpu = env_cpu(env);
>      hwaddr mr_offset;
> @@ -906,14 +906,13 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>          qemu_mutex_lock_iothread();
>          locked = true;
>      }
> -    r = memory_region_dispatch_read(mr, mr_offset,
> -                                    &val, size, iotlbentry->attrs);
> +    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);
>      if (r != MEMTX_OK) {
>          hwaddr physaddr = mr_offset +
>              section->offset_within_address_space -
>              section->offset_within_region;
> 
> -        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
> +        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
>                                 mmu_idx, iotlbentry->attrs, r, retaddr);
>      }
>      if (locked) {
> @@ -925,7 +924,7 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
> 
>  static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                        int mmu_idx, uint64_t val, target_ulong addr,
> -                      uintptr_t retaddr, int size)
> +                      uintptr_t retaddr, MemOp op)
>  {
>      CPUState *cpu = env_cpu(env);
>      hwaddr mr_offset;
> @@ -947,15 +946,15 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>          qemu_mutex_lock_iothread();
>          locked = true;
>      }
> -    r = memory_region_dispatch_write(mr, mr_offset,
> -                                     val, size, iotlbentry->attrs);
> +    r = memory_region_dispatch_write(mr, mr_offset, val, op, iotlbentry->attrs);
>      if (r != MEMTX_OK) {
>          hwaddr physaddr = mr_offset +
>              section->offset_within_address_space -
>              section->offset_within_region;
> 
> -        cpu_transaction_failed(cpu, physaddr, addr, size, MMU_DATA_STORE,
> -                               mmu_idx, iotlbentry->attrs, r, retaddr);
> +        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op),
> +                               MMU_DATA_STORE, mmu_idx, iotlbentry->attrs, r,
> +                               retaddr);
>      }
>      if (locked) {
>          qemu_mutex_unlock_iothread();
> @@ -1306,7 +1305,7 @@ load_helper(CPUArchState *env, target_ulong addr, TCGMemOpIdx oi,
>          }
> 
>          res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
> -                       mmu_idx, addr, retaddr, access_type, size);
> +                       mmu_idx, addr, retaddr, access_type, SIZE_MEMOP(size));
>          return handle_bswap(res, size, big_endian);
>      }
> 
> @@ -1555,7 +1554,7 @@ store_helper(CPUArchState *env, target_ulong addr, uint64_t val,
> 
>          io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
>                    handle_bswap(val, size, big_endian),
> -                  addr, retaddr, size);
> +                  addr, retaddr, SIZE_MEMOP(size));
>          return;
>      }
> 
> --
> 1.8.3.1
> 
> 
> 

Cleaner :)

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [EXTERNAL]Re: [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp
  2019-07-26 11:03     ` [Qemu-riscv] " Philippe Mathieu-Daudé
@ 2019-07-26 11:16       ` Aleksandar Markovic
  -1 siblings, 0 replies; 78+ messages in thread
From: Aleksandar Markovic @ 2019-07-26 11:16 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson,
	Aleksandar Rikalo, david, pasic, borntraeger, rth, atar4qemu,
	ehabkost, qemu-s390x, qemu-arm, stefanha, shorne, david,
	qemu-riscv, kbastian, cohuck, laurent, qemu-ppc, pbonzini,
	aurelien

On 7/26/19 8:46 AM, tony.nguyen@bt.com wrote:
> No-op MEMOP_SIZE and SIZE_MEMOP macros allows us to later easily
> convert memory_region_dispatch_{read|write} paramter "unsigned size"
> into a size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 

"allows" should read "allow".

"paramter" should read "parameter".

"logical change" should read "change in behavior".

Thanks,
Aleksandar

> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [EXTERNAL]Re: [Qemu-devel] [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp
@ 2019-07-26 11:16       ` Aleksandar Markovic
  0 siblings, 0 replies; 78+ messages in thread
From: Aleksandar Markovic @ 2019-07-26 11:16 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, Aleksandar Rikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, pbonzini, aurelien

On 7/26/19 8:46 AM, tony.nguyen@bt.com wrote:
> No-op MEMOP_SIZE and SIZE_MEMOP macros allows us to later easily
> convert memory_region_dispatch_{read|write} paramter "unsigned size"
> into a size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 

"allows" should read "allow".

"paramter" should read "parameter".

"logical change" should read "change in behavior".

Thanks,
Aleksandar

> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---

^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [EXTERNAL]Re: [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp
  2019-07-26 11:03     ` [Qemu-riscv] " Philippe Mathieu-Daudé
@ 2019-07-26 11:23       ` Aleksandar Markovic
  -1 siblings, 0 replies; 78+ messages in thread
From: Aleksandar Markovic @ 2019-07-26 11:23 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson,
	Aleksandar Rikalo, david, pasic, borntraeger, rth, atar4qemu,
	ehabkost, qemu-s390x, qemu-arm, stefanha, shorne, david,
	qemu-riscv, kbastian, cohuck, laurent, qemu-ppc, pbonzini,
	aurelien



________________________________________
From: Philippe Mathieu-Daudé <philmd@redhat.com>
Sent: Friday, July 26, 2019 1:03 PM
To: tony.nguyen@bt.com; qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org; walling@linux.ibm.com; sagark@eecs.berkeley.edu; mst@redhat.com; palmer@sifive.com; mark.cave-ayland@ilande.co.uk; laurent@vivier.eu; Alistair.Francis@wdc.com; edgar.iglesias@gmail.com; Aleksandar Rikalo; david@redhat.com; pasic@linux.ibm.com; borntraeger@de.ibm.com; rth@twiddle.net; atar4qemu@gmail.com; ehabkost@redhat.com; qemu-s390x@nongnu.org; qemu-arm@nongnu.org; stefanha@redhat.com; shorne@gmail.com; david@gibson.dropbear.id.au; qemu-riscv@nongnu.org; kbastian@mail.uni-paderborn.de; cohuck@redhat.com; alex.williamson@redhat.com; qemu-ppc@nongnu.org; Aleksandar Markovic; pbonzini@redhat.com; aurelien@aurel32.net
Subject: [EXTERNAL]Re: [Qemu-devel] [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp

On 7/26/19 8:46 AM, tony.nguyen@bt.com wrote:
> No-op MEMOP_SIZE and SIZE_MEMOP macros allows us to later easily
> convert memory_region_dispatch_{read|write} paramter "unsigned size"
> into a size+sign+endianness encoded "MemOp op".
>
> Being a no-op macro, this patch does not introduce any logical change.
>

The last sentence has a bad structure. Possible remedy:

"Being a no-op macro," -> "Relying no-op macros,"

I think this patch should be reorganized (possibly by splitting) so that
the hunks that introduce usage of macros are in a separate patch, which
would leave only changes that directly involve using "MemOp" in this
patch.

Thanks,
Aleksandar


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [EXTERNAL]Re: [Qemu-devel] [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp
@ 2019-07-26 11:23       ` Aleksandar Markovic
  0 siblings, 0 replies; 78+ messages in thread
From: Aleksandar Markovic @ 2019-07-26 11:23 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, Aleksandar Rikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, pbonzini, aurelien



________________________________________
From: Philippe Mathieu-Daudé <philmd@redhat.com>
Sent: Friday, July 26, 2019 1:03 PM
To: tony.nguyen@bt.com; qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org; walling@linux.ibm.com; sagark@eecs.berkeley.edu; mst@redhat.com; palmer@sifive.com; mark.cave-ayland@ilande.co.uk; laurent@vivier.eu; Alistair.Francis@wdc.com; edgar.iglesias@gmail.com; Aleksandar Rikalo; david@redhat.com; pasic@linux.ibm.com; borntraeger@de.ibm.com; rth@twiddle.net; atar4qemu@gmail.com; ehabkost@redhat.com; qemu-s390x@nongnu.org; qemu-arm@nongnu.org; stefanha@redhat.com; shorne@gmail.com; david@gibson.dropbear.id.au; qemu-riscv@nongnu.org; kbastian@mail.uni-paderborn.de; cohuck@redhat.com; alex.williamson@redhat.com; qemu-ppc@nongnu.org; Aleksandar Markovic; pbonzini@redhat.com; aurelien@aurel32.net
Subject: [EXTERNAL]Re: [Qemu-devel] [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp

On 7/26/19 8:46 AM, tony.nguyen@bt.com wrote:
> No-op MEMOP_SIZE and SIZE_MEMOP macros allows us to later easily
> convert memory_region_dispatch_{read|write} paramter "unsigned size"
> into a size+sign+endianness encoded "MemOp op".
>
> Being a no-op macro, this patch does not introduce any logical change.
>

The last sentence has a bad structure. Possible remedy:

"Being a no-op macro," -> "Relying no-op macros,"

I think this patch should be reorganized (possibly by splitting) so that
the hunks that introduce usage of macros are in a separate patch, which
would leave only changes that directly involve using "MemOp" in this
patch.

Thanks,
Aleksandar


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 01/15] tcg: TCGMemOp is now accelerator independent MemOp
  2019-07-26  6:43   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 13:27     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:27 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:43 PM, tony.nguyen@bt.com wrote:
> +#ifdef NEED_CPU_H
> +#ifdef ALIGNED_ONLY
> +    MO_ALIGN = 0,
> +    MO_UNALN = MO_AMASK,

You need the configure patch for TARGET_ALIGNED_ONLY that you posted separately
as patch 1 in order for this to work.

Otherwise,

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 01/15] tcg: TCGMemOp is now accelerator independent MemOp
@ 2019-07-26 13:27     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:27 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:43 PM, tony.nguyen@bt.com wrote:
> +#ifdef NEED_CPU_H
> +#ifdef ALIGNED_ONLY
> +    MO_ALIGN = 0,
> +    MO_UNALN = MO_AMASK,

You need the configure patch for TARGET_ALIGNED_ONLY that you posted separately
as patch 1 in order for this to work.

Otherwise,

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 02/15] memory: Access MemoryRegion with MemOp
  2019-07-26  6:43   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 13:36     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:36 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:43 PM, tony.nguyen@bt.com wrote:
>  } MemOp;
> 
> +/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
> +#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
> +#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
> +

This doesn't thrill me, because for 9 patches these macros don't do what they
say on the tin, but I'll accept it.

I would prefer lower-case and that the real implementation in patch 10 be
inline functions with proper types instead of typeless macros.  In particular,
"unsigned" not "unsigned long" as you imply from "ul" here, since that's what
was used ...

>  MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
>                                          hwaddr addr,
>                                          uint64_t *pval,
> -                                        unsigned size,
> +                                        MemOp op,
>                                          MemTxAttrs attrs);

... here.

With the name case change,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 02/15] memory: Access MemoryRegion with MemOp
@ 2019-07-26 13:36     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:36 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:43 PM, tony.nguyen@bt.com wrote:
>  } MemOp;
> 
> +/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
> +#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
> +#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
> +

This doesn't thrill me, because for 9 patches these macros don't do what they
say on the tin, but I'll accept it.

I would prefer lower-case and that the real implementation in patch 10 be
inline functions with proper types instead of typeless macros.  In particular,
"unsigned" not "unsigned long" as you imply from "ul" here, since that's what
was used ...

>  MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
>                                          hwaddr addr,
>                                          uint64_t *pval,
> -                                        unsigned size,
> +                                        MemOp op,
>                                          MemTxAttrs attrs);

... here.

With the name case change,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 03/15] target/mips: Access MemoryRegion with MemOp
  2019-07-26  6:44   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 13:40     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:40 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:44 PM, tony.nguyen@bt.com wrote:
>          memory_region_dispatch_read(env->itc_tag, index, &env->CP0_TagLo,
> -                                    8, MEMTXATTRS_UNSPECIFIED);
> +                                    SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);

As an example of why I'm not especially keen on the temporary incorrect
implementation of size_memop, you'll need a second pass through these files to
replace this with the proper MO_64.

That said,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 03/15] target/mips: Access MemoryRegion with MemOp
@ 2019-07-26 13:40     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:40 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:44 PM, tony.nguyen@bt.com wrote:
>          memory_region_dispatch_read(env->itc_tag, index, &env->CP0_TagLo,
> -                                    8, MEMTXATTRS_UNSPECIFIED);
> +                                    SIZE_MEMOP(8), MEMTXATTRS_UNSPECIFIED);

As an example of why I'm not especially keen on the temporary incorrect
implementation of size_memop, you'll need a second pass through these files to
replace this with the proper MO_64.

That said,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 04/15] hw/s390x: Access MemoryRegion with MemOp
  2019-07-26  6:44   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 13:42     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:42 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:44 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  hw/s390x/s390-pci-inst.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

>      for (i = 0; i < len / 8; i++) {
>          result = memory_region_dispatch_write(mr, offset + i * 8,
> -                                              ldq_p(buffer + i * 8), 8,
> +                                              ldq_p(buffer + i * 8),
> +                                              SIZE_MEMOP(8),

MO_64, eventually.


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 04/15] hw/s390x: Access MemoryRegion with MemOp
@ 2019-07-26 13:42     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:42 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:44 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  hw/s390x/s390-pci-inst.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

>      for (i = 0; i < len / 8; i++) {
>          result = memory_region_dispatch_write(mr, offset + i * 8,
> -                                              ldq_p(buffer + i * 8), 8,
> +                                              ldq_p(buffer + i * 8),
> +                                              SIZE_MEMOP(8),

MO_64, eventually.


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 05/15] hw/intc/armv7m_nic: Access MemoryRegion with MemOp
  2019-07-26  6:45   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 13:43     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:43 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:45 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  hw/intc/armv7m_nvic.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 05/15] hw/intc/armv7m_nic: Access MemoryRegion with MemOp
@ 2019-07-26 13:43     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:43 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:45 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  hw/intc/armv7m_nvic.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 06/15] hw/virtio: Access MemoryRegion with MemOp
  2019-07-26  6:45   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 13:43     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:43 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:45 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  hw/virtio/virtio-pci.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 06/15] hw/virtio: Access MemoryRegion with MemOp
@ 2019-07-26 13:43     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:43 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:45 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  hw/virtio/virtio-pci.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 07/15] hw/vfio: Access MemoryRegion with MemOp
  2019-07-26  6:46   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 13:43     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:43 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:46 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  hw/vfio/pci-quirks.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 07/15] hw/vfio: Access MemoryRegion with MemOp
@ 2019-07-26 13:43     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:43 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:46 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  hw/vfio/pci-quirks.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 08/15] exec: Access MemoryRegion with MemOp
  2019-07-26  6:46   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 13:46     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:46 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:46 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  exec.c            |  6 ++++--
>  memory_ldst.inc.c | 18 +++++++++---------
>  2 files changed, 13 insertions(+), 11 deletions(-)


Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


>          /* I/O case */
> -        r = memory_region_dispatch_read(mr, addr1, &val, 4, attrs);
> +        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(4), attrs);

MO_32, eventually, as well as

> -        r = memory_region_dispatch_read(mr, addr1, &val, 8, attrs);
> +        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(8), attrs);

MO_64

> -        r = memory_region_dispatch_read(mr, addr1, &val, 1, attrs);
> +        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(1), attrs);

MO_8

> -        r = memory_region_dispatch_read(mr, addr1, &val, 2, attrs);
> +        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(2), attrs);

MO_16, and so on.
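
E.g., once the no-op stubs are filled in, the first hunk would end up as:

        r = memory_region_dispatch_read(mr, addr1, &val, MO_32, attrs);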


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 08/15] exec: Access MemoryRegion with MemOp
@ 2019-07-26 13:46     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 13:46 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:46 PM, tony.nguyen@bt.com wrote:
> No-op SIZE_MEMOP macro allows us to later easily convert
> memory_region_dispatch_{read|write} paramter "unsigned size" into a
> size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  exec.c            |  6 ++++--
>  memory_ldst.inc.c | 18 +++++++++---------
>  2 files changed, 13 insertions(+), 11 deletions(-)


Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


>          /* I/O case */
> -        r = memory_region_dispatch_read(mr, addr1, &val, 4, attrs);
> +        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(4), attrs);

MO_32, eventually, as well as

> -        r = memory_region_dispatch_read(mr, addr1, &val, 8, attrs);
> +        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(8), attrs);

MO_64

> -        r = memory_region_dispatch_read(mr, addr1, &val, 1, attrs);
> +        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(1), attrs);

MO_8

> -        r = memory_region_dispatch_read(mr, addr1, &val, 2, attrs);
> +        r = memory_region_dispatch_read(mr, addr1, &val, SIZE_MEMOP(2), attrs);

MO_16, and so on.


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 02/15] memory: Access MemoryRegion with MemOp
  2019-07-26 13:36     ` [Qemu-riscv] " Richard Henderson
@ 2019-07-26 14:04       ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:04 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/26/19 6:36 AM, Richard Henderson wrote:
> On 7/25/19 11:43 PM, tony.nguyen@bt.com wrote:
>>  } MemOp;
>>
>> +/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
>> +#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
>> +#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
>> +
> 
> This doesn't thrill me, because for 9 patches these macros don't do what they
> say on the tin, but I'll accept it.
> 
> I would prefer lower-case and that the real implementation in patch 10 be
> inline functions with proper types instead of typeless macros.  In particular,
> "unsigned" not "unsigned long" as you imply from "ul" here, since that's what
> was used ...
> 
>>  MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
>>                                          hwaddr addr,
>>                                          uint64_t *pval,
>> -                                        unsigned size,
>> +                                        MemOp op,
>>                                          MemTxAttrs attrs);

Actually, I thought of something that would make me happier:

Do not make any change to memory_region_dispatch_{read,write} now.  Let the
operand continue to be "unsigned size", because that is still what it is
while the no-op macros are in place.

Make the change to the proper type at the same time that you flip the switch
to use the proper conversion function.  This will make patch 10 about 5 lines
longer, but we'll have proper typing at all points in between.


r~

> 
> ... here.
> 
> With the name case change,
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> 
> 
> r~
> 



^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 02/15] memory: Access MemoryRegion with MemOp
@ 2019-07-26 14:04       ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:04 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/26/19 6:36 AM, Richard Henderson wrote:
> On 7/25/19 11:43 PM, tony.nguyen@bt.com wrote:
>>  } MemOp;
>>
>> +/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
>> +#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
>> +#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
>> +
> 
> This doesn't thrill me, because for 9 patches these macros don't do what they
> say on the tin, but I'll accept it.
> 
> I would prefer lower-case and that the real implementation in patch 10 be
> inline functions with proper types instead of typeless macros.  In particular,
> "unsigned" not "unsigned long" as you imply from "ul" here, since that's what
> was used ...
> 
>>  MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
>>                                          hwaddr addr,
>>                                          uint64_t *pval,
>> -                                        unsigned size,
>> +                                        MemOp op,
>>                                          MemTxAttrs attrs);

Actually, I thought of something that would make me happier:

Do not make any change to memory_region_dispatch_{read,write} now.  Let the
operand continue to be "unsigned size", because that is still what it is
while the no-op macros are in place.

Make the change to the proper type at the same time that you flip the switch
to use the proper conversion function.  This will make patch 10 about 5 lines
longer, but we'll have proper typing at all points in between.


r~

> 
> ... here.
> 
> With the name case change,
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> 
> 
> r~
> 



^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp
  2019-07-26  6:46   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 14:14     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:14 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:46 PM, tony.nguyen@bt.com wrote:
> No-op MEMOP_SIZE and SIZE_MEMOP macros allows us to later easily
> convert memory_region_dispatch_{read|write} paramter "unsigned size"
> into a size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  accel/tcg/cputlb.c | 21 ++++++++++-----------
>  1 file changed, 10 insertions(+), 11 deletions(-)
> 
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 523be4c..5d88cec 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,
> 
>  static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                           int mmu_idx, target_ulong addr, uintptr_t retaddr,
> -                         MMUAccessType access_type, int size)
> +                         MMUAccessType access_type, MemOp op)

As I mentioned for patch 2, don't change this now, wait until after patch 10.

> -    r = memory_region_dispatch_read(mr, mr_offset,
> -                                    &val, size, iotlbentry->attrs);
> +    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);

So size_memop here,

> -        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
> +        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
>                                 mmu_idx, iotlbentry->attrs, r, retaddr);

but no memop_size here.

>  static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                        int mmu_idx, uint64_t val, target_ulong addr,
> -                      uintptr_t retaddr, int size)
> +                      uintptr_t retaddr, MemOp op)

Likewise.

>          res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
> -                       mmu_idx, addr, retaddr, access_type, size);
> +                       mmu_idx, addr, retaddr, access_type, SIZE_MEMOP(size));

And when you do come back to change the types after patch 10, at the top of the
function:

-    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    MemOp op = get_memop(oi);
+    unsigned a_bits = get_alignment_bits(op);

and then pass along op directly, which will fix some of the weirdness in patch 11.


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 09/15] cputlb: Access MemoryRegion with MemOp
@ 2019-07-26 14:14     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:14 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:46 PM, tony.nguyen@bt.com wrote:
> No-op MEMOP_SIZE and SIZE_MEMOP macros allows us to later easily
> convert memory_region_dispatch_{read|write} paramter "unsigned size"
> into a size+sign+endianness encoded "MemOp op".
> 
> Being a no-op macro, this patch does not introduce any logical change.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  accel/tcg/cputlb.c | 21 ++++++++++-----------
>  1 file changed, 10 insertions(+), 11 deletions(-)
> 
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 523be4c..5d88cec 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -881,7 +881,7 @@ static void tlb_fill(CPUState *cpu, target_ulong addr, int size,
> 
>  static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                           int mmu_idx, target_ulong addr, uintptr_t retaddr,
> -                         MMUAccessType access_type, int size)
> +                         MMUAccessType access_type, MemOp op)

As I mentioned for patch 2, don't change this now, wait until after patch 10.

> -    r = memory_region_dispatch_read(mr, mr_offset,
> -                                    &val, size, iotlbentry->attrs);
> +    r = memory_region_dispatch_read(mr, mr_offset, &val, op, iotlbentry->attrs);

So size_memop here,

> -        cpu_transaction_failed(cpu, physaddr, addr, size, access_type,
> +        cpu_transaction_failed(cpu, physaddr, addr, MEMOP_SIZE(op), access_type,
>                                 mmu_idx, iotlbentry->attrs, r, retaddr);

but no memop_size here.

>  static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>                        int mmu_idx, uint64_t val, target_ulong addr,
> -                      uintptr_t retaddr, int size)
> +                      uintptr_t retaddr, MemOp op)

Likewise.

>          res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
> -                       mmu_idx, addr, retaddr, access_type, size);
> +                       mmu_idx, addr, retaddr, access_type, SIZE_MEMOP(size));

And when you do come back to change the types after patch 10, at the top of the
function:

-    unsigned a_bits = get_alignment_bits(get_memop(oi));
+    MemOp op = get_memop(oi);
+    unsigned a_bits = get_alignment_bits(op);

and then pass along op directly, which will fix some of the weirdness in patch 11.


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 10/15] memory: Access MemoryRegion with MemOp semantics
  2019-07-26  6:47   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 14:24     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:24 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:47 PM, tony.nguyen@bt.com wrote:
> To convert interfaces of MemoryRegion access, MEMOP_SIZE and
> SIZE_MEMOP no-op stubs were introduced to change syntax while keeping
> the existing semantics.
> 
> Now with interfaces converted, we fill the stubs and use MemOp
> semantics.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  include/exec/memop.h  | 5 ++---
>  include/exec/memory.h | 4 ++--
>  2 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/include/exec/memop.h b/include/exec/memop.h
> index 09c8d20..f2847e8 100644
> --- a/include/exec/memop.h
> +++ b/include/exec/memop.h
> @@ -106,8 +106,7 @@ typedef enum MemOp {
>      MO_SSIZE = MO_SIZE | MO_SIGN,
>  } MemOp;
> 
> -/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
> -#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
> -#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
> +#define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE)) /* MemOp to size.  */
> +#define SIZE_MEMOP(ul)  (ctzl(ul))              /* Size to MemOp.  */

As mentioned, I'd prefer inline functions.

I think it wouldn't go amiss to do

static inline MemOp size_memop(unsigned size)
{
#ifdef CONFIG_DEBUG_TCG
    /* power of 2 up to 8 */
    assert((size & (size - 1)) == 0 && size >= 1 && size <= 8);
#endif
    return ctz32(size);
}
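
The reverse conversion could be an inline function in the same spirit,
mirroring the MEMOP_SIZE macro above (a sketch, using the MO_SIZE mask):

static inline unsigned memop_size(MemOp op)
{
    /* recover the access size in bytes from the encoded MemOp */
    return 1 << (op & MO_SIZE);
}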


> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 0ea4843..975b86a 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -1732,7 +1732,7 @@ void mtree_info(bool flatview, bool dispatch_tree, bool owner);
>   * @mr: #MemoryRegion to access
>   * @addr: address within that region
>   * @pval: pointer to uint64_t which the data is written to
> - * @op: size of the access in bytes
> + * @op: size, sign, and endianness of the memory operation
>   * @attrs: memory transaction attributes to use for the access
>   */
>  MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
> @@ -1747,7 +1747,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
>   * @mr: #MemoryRegion to access
>   * @addr: address within that region
>   * @data: data to write
> - * @op: size of the access in bytes
> + * @op: size, sign, and endianness of the memory operation
>   * @attrs: memory transaction attributes to use for the access
>   */
>  MemTxResult memory_region_dispatch_write(MemoryRegion *mr,

As I mentioned, now is when the actual interface change to these functions
should occur.


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-riscv] [Qemu-devel] [PATCH v5 10/15] memory: Access MemoryRegion with MemOp semantics
@ 2019-07-26 14:24     ` Richard Henderson
  0 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:24 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	laurent, Alistair.Francis, edgar.iglesias, arikalo, david, pasic,
	borntraeger, rth, atar4qemu, ehabkost, qemu-s390x, qemu-arm,
	stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	alex.williamson, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:47 PM, tony.nguyen@bt.com wrote:
> To convert interfaces of MemoryRegion access, MEMOP_SIZE and
> SIZE_MEMOP no-op stubs were introduced to change syntax while keeping
> the existing semantics.
> 
> Now with interfaces converted, we fill the stubs and use MemOp
> semantics.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  include/exec/memop.h  | 5 ++---
>  include/exec/memory.h | 4 ++--
>  2 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/include/exec/memop.h b/include/exec/memop.h
> index 09c8d20..f2847e8 100644
> --- a/include/exec/memop.h
> +++ b/include/exec/memop.h
> @@ -106,8 +106,7 @@ typedef enum MemOp {
>      MO_SSIZE = MO_SIZE | MO_SIGN,
>  } MemOp;
> 
> -/* No-op while memory_region_dispatch_[read|write] is converted to MemOp */
> -#define MEMOP_SIZE(op)  (op)    /* MemOp to size.  */
> -#define SIZE_MEMOP(ul)  (ul)    /* Size to MemOp.  */
> +#define MEMOP_SIZE(op)  (1 << ((op) & MO_SIZE)) /* MemOp to size.  */
> +#define SIZE_MEMOP(ul)  (ctzl(ul))              /* Size to MemOp.  */

As mentioned, I'd prefer inline functions.

I think it wouldn't go amiss to do

static inline MemOp size_memop(unsigned size)
{
#ifdef CONFIG_DEBUG_TCG
    /* power of 2 up to 8 */
    assert((size & (size - 1)) == 0 && size >= 1 && size <= 8);
#endif
    return ctz32(size);
}


> diff --git a/include/exec/memory.h b/include/exec/memory.h
> index 0ea4843..975b86a 100644
> --- a/include/exec/memory.h
> +++ b/include/exec/memory.h
> @@ -1732,7 +1732,7 @@ void mtree_info(bool flatview, bool dispatch_tree, bool owner);
>   * @mr: #MemoryRegion to access
>   * @addr: address within that region
>   * @pval: pointer to uint64_t which the data is written to
> - * @op: size of the access in bytes
> + * @op: size, sign, and endianness of the memory operation
>   * @attrs: memory transaction attributes to use for the access
>   */
>  MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
> @@ -1747,7 +1747,7 @@ MemTxResult memory_region_dispatch_read(MemoryRegion *mr,
>   * @mr: #MemoryRegion to access
>   * @addr: address within that region
>   * @data: data to write
> - * @op: size of the access in bytes
> + * @op: size, sign, and endianness of the memory operation
>   * @attrs: memory transaction attributes to use for the access
>   */
>  MemTxResult memory_region_dispatch_write(MemoryRegion *mr,

As I mentioned, now is when the actual interface change to these functions
should occur.


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 11/15] memory: Single byte swap along the I/O path
  2019-07-26  9:26     ` [Qemu-riscv] " Paolo Bonzini
@ 2019-07-26 14:29       ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:29 UTC (permalink / raw)
  To: Paolo Bonzini, tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, aurelien

On 7/26/19 2:26 AM, Paolo Bonzini wrote:
> On 26/07/19 08:47, tony.nguyen@bt.com wrote:
>> +        op = SIZE_MEMOP(size);
>> +        if (need_bswap(big_endian)) {
>> +            op ^= MO_BSWAP;
>> +        }
> 
> And this has the same issue as the first version.  It should be
> 
> 	op = SIZE_MEMOP(size) | (big_endian ? MO_BE : MO_LE);
> 
> and everything should work.  If it doesn't (and indeed it doesn't :)) it
> means you have bugs somewhere else.

As I mentioned against patch 9, which also touches this area, it should be
using the MemOp that is already passed in to this function instead of building
a new one from scratch.
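
Concretely, something like this sketch, assuming load_helper still has
the TCGMemOpIdx oi argument from which get_memop() recovers the MemOp:

    /* Reuse the MemOp the translator already encoded, rather than
       rebuilding one from size/big_endian.  */
    op = get_memop(oi);
    res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
                   mmu_idx, addr, retaddr, access_type, op);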

But, yes, any failure in that would mean bugs somewhere else.  ;-)


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 11/15] memory: Single byte swap along the I/O path
  2019-07-26  9:39     ` [Qemu-riscv] " Paolo Bonzini
@ 2019-07-26 14:45       ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:45 UTC (permalink / raw)
  To: Paolo Bonzini, tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, aurelien

On 7/26/19 2:39 AM, Paolo Bonzini wrote:
> Then memory_region_endianness_inverted can be:
> 
>   if (mr->ops->endianness == DEVICE_NATIVE_ENDIAN)
>     return (op & MO_BSWAP) != MO_TE;
>   else if (mr->ops->endianness == DEVICE_BIG_ENDIAN)
>     return (op & MO_BSWAP) != MO_BE;
>   else if (mr->ops->endianness == DEVICE_LITTLE_ENDIAN)
>     return (op & MO_BSWAP) != MO_LE;

Possibly outside the scope of this patch set, I'd like to replace enum
device_endian with MemOp, with exactly the above enumerator replacement.

In the meantime, perhaps a conversion function

static MemOp devendian_memop(enum device_endian d)
{
    switch (d) {
    case DEVICE_NATIVE_ENDIAN:
        return MO_TE;
    case DEVICE_BIG_ENDIAN:
        return MO_BE;
    case DEVICE_LITTLE_ENDIAN:
        return MO_LE;
    default:
        g_assert_not_reached();
    }
}

With that, this would simplify to

    return (op & MO_BSWAP) != devendian_memop(mr->ops->endianness);
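
i.e., in full -- a sketch, keeping the proposed name and assuming
mr->ops->endianness still holds an enum device_endian:

static bool memory_region_endianness_inverted(MemoryRegion *mr, MemOp op)
{
    /* Swap iff the request's endianness differs from the device's.  */
    return (op & MO_BSWAP) != devendian_memop(mr->ops->endianness);
}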


> I think the changes should be split in two parts.  Before this patch,
> you modify all callers of memory_region_dispatch_* so that they already
> pass the right endianness op; however, you leave the existing swap in
> place.  So for example in load_helper you'd have in a previous patch
> 
> +        /* FIXME: io_readx ignores MO_BSWAP.  */
> +        op = SIZE_MEMOP(size) | (big_endian ? MO_BE : MO_LE);
>          res = io_readx(env, &env_tlb(env)->d[mmu_idx].iotlb[index],
> -                       mmu_idx, addr, retaddr, access_type,
> SIZE_MEMOP(size));
> +                       mmu_idx, addr, retaddr, access_type, op);
>          return handle_bswap(res, size, big_endian);
> 
> Then, in this patch, you remove the handle_bswap call as well as the
> FIXME comment.

Agreed.
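
The store side would mirror it -- a sketch, assuming store_helper's
existing handle_bswap call:

    /* FIXME: io_writex ignores MO_BSWAP.  */
    op = SIZE_MEMOP(size) | (big_endian ? MO_BE : MO_LE);
    io_writex(env, &env_tlb(env)->d[mmu_idx].iotlb[index], mmu_idx,
              handle_bswap(val, size, big_endian), addr, retaddr, op);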


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 12/15] cpu: TLB_FLAGS_MASK bit to force memory slow path
  2019-07-26  6:48   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 14:48     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:48 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:48 PM, tony.nguyen@bt.com wrote:
> The fast path is taken when TLB_FLAGS_MASK is all zero.
> 
> TLB_FORCE_SLOW is simply a TLB_FLAGS_MASK bit to force the slow path,
> there are no other side effects.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  include/exec/cpu-all.h | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
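
(For reference, a sketch of the shape of such a flag -- the exact bit
position is assumed here, and TLB_FLAGS_MASK must be extended to cover
it:

    /* Force access through the slow path; no other side effects.  */
    #define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))
)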

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 13/15] cputlb: Byte swap memory transaction attribute
  2019-07-26  6:48   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 14:52     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:52 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:48 PM, tony.nguyen@bt.com wrote:
> Notice the new byte_swap memory transaction attribute and force such
> transactions through the memory slow path.
> 
> Required by architectures that can invert the endianness of a memory
> transaction, e.g. SPARC64 with its Invert Endian TTE bit.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  accel/tcg/cputlb.c      | 11 +++++++++++
>  include/exec/memattrs.h |  2 ++
>  2 files changed, 13 insertions(+)
> 
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index e61b1eb..f292a87 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -738,6 +738,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
>           */
>          address |= TLB_RECHECK;
>      }
> +    if (attrs.byte_swap) {
> +        address |= TLB_FORCE_SLOW;
> +    }
>      if (!memory_region_is_ram(section->mr) &&
>          !memory_region_is_romd(section->mr)) {
>          /* IO memory case */
> @@ -891,6 +894,10 @@ static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>      bool locked = false;
>      MemTxResult r;
> 
> +    if (iotlbentry->attrs.byte_swap) {
> +        op ^= MO_BSWAP;
> +    }
> +
>      section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
>      mr = section->mr;
>      mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
> @@ -933,6 +940,10 @@ static void io_writex(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
>      bool locked = false;
>      MemTxResult r;
> 
> +    if (iotlbentry->attrs.byte_swap) {
> +        op ^= MO_BSWAP;
> +    }
> +
>      section = iotlb_to_section(cpu, iotlbentry->addr, iotlbentry->attrs);
>      mr = section->mr;
>      mr_offset = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
> diff --git a/include/exec/memattrs.h b/include/exec/memattrs.h
> index d4a3477..a0644eb 100644
> --- a/include/exec/memattrs.h
> +++ b/include/exec/memattrs.h
> @@ -37,6 +37,8 @@ typedef struct MemTxAttrs {
>      unsigned int user:1;
>      /* Requester ID (for MSI for example) */
>      unsigned int requester_id:16;
> +    /* SPARC64: TTE invert endianness */
> +    unsigned int byte_swap:1;

Don't mention Sparc here, otherwise it seems like it only applies to Sparc,
when it is really a generic feature only currently used by Sparc.

Just say "Invert endianness for this page".
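
For illustration, the attribute would then be set from the target's
tlb_fill path, roughly like this sketch -- TTE_IE is a hypothetical name
for the invert-endian bit test:

    MemTxAttrs attrs = {};

    if (tte & TTE_IE) {        /* hypothetical invert-endian TTE bit */
        attrs.byte_swap = 1;   /* routed to TLB_FORCE_SLOW above */
    }
    tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx,
                            TARGET_PAGE_SIZE);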

With that,

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 14/15] target/sparc: Add TLB entry with attributes
  2019-07-26  6:48   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 14:55     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:55 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:48 PM, tony.nguyen@bt.com wrote:
> Append MemTxAttrs to these interfaces so we can pass along the upcoming
> Invert Endian TTE bit on SPARC64.
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  target/sparc/mmu_helper.c | 32 ++++++++++++++++++--------------
>  1 file changed, 18 insertions(+), 14 deletions(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~



^ permalink raw reply	[flat|nested] 78+ messages in thread

* Re: [Qemu-devel] [PATCH v5 15/15] target/sparc: sun4u Invert Endian TTE bit
  2019-07-26  6:49   ` [Qemu-riscv] " tony.nguyen
@ 2019-07-26 14:56     ` Richard Henderson
  -1 siblings, 0 replies; 78+ messages in thread
From: Richard Henderson @ 2019-07-26 14:56 UTC (permalink / raw)
  To: tony.nguyen, qemu-devel
  Cc: peter.maydell, walling, sagark, mst, palmer, mark.cave-ayland,
	Alistair.Francis, edgar.iglesias, alex.williamson, arikalo,
	david, pasic, borntraeger, rth, atar4qemu, ehabkost, qemu-s390x,
	qemu-arm, stefanha, shorne, david, qemu-riscv, kbastian, cohuck,
	laurent, qemu-ppc, amarkovic, pbonzini, aurelien

On 7/25/19 11:49 PM, tony.nguyen@bt.com wrote:
> This bit configures endianness of PCI MMIO devices. It is used by
> Solaris and OpenBSD sunhme drivers.
> 
> Tested working on OpenBSD.
> 
> Unfortunately Solaris 10 had an unrelated keyboard issue blocking
> testing... another inch towards Solaris 10 on SPARC64 =)
> 
> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com>
> ---
>  target/sparc/cpu.h        | 2 ++
>  target/sparc/mmu_helper.c | 8 +++++++-
>  2 files changed, 9 insertions(+), 1 deletion(-)


Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 78+ messages in thread

end of thread, other threads:[~2019-07-26 14:57 UTC | newest]

Thread overview: 78+ messages
2019-07-26  6:42 [Qemu-devel] [PATCH v5 00/15] Invert Endian bit in SPARCv9 MMU TTE tony.nguyen
2019-07-26  6:42 ` [Qemu-riscv] " tony.nguyen
2019-07-26  6:43 ` [Qemu-devel] [PATCH v5 01/15] tcg: TCGMemOp is now accelerator independent MemOp tony.nguyen
2019-07-26  6:43   ` [Qemu-riscv] " tony.nguyen
2019-07-26  7:43   ` David Gibson
2019-07-26  7:43     ` [Qemu-riscv] " David Gibson
2019-07-26 13:27   ` Richard Henderson
2019-07-26 13:27     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:43 ` [Qemu-devel] [PATCH v5 02/15] memory: Access MemoryRegion with MemOp tony.nguyen
2019-07-26  6:43   ` [Qemu-riscv] " tony.nguyen
2019-07-26 13:36   ` Richard Henderson
2019-07-26 13:36     ` [Qemu-riscv] " Richard Henderson
2019-07-26 14:04     ` Richard Henderson
2019-07-26 14:04       ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:44 ` [Qemu-devel] [PATCH v5 03/15] target/mips: " tony.nguyen
2019-07-26  6:44   ` [Qemu-riscv] " tony.nguyen
2019-07-26 13:40   ` Richard Henderson
2019-07-26 13:40     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:44 ` [Qemu-devel] [PATCH v5 04/15] hw/s390x: " tony.nguyen
2019-07-26  6:44   ` [Qemu-riscv] " tony.nguyen
2019-07-26 13:42   ` Richard Henderson
2019-07-26 13:42     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:45 ` [Qemu-devel] [PATCH v5 05/15] hw/intc/armv7m_nic: " tony.nguyen
2019-07-26  6:45   ` [Qemu-riscv] " tony.nguyen
2019-07-26 13:43   ` Richard Henderson
2019-07-26 13:43     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:45 ` [Qemu-devel] [PATCH v5 06/15] hw/virtio: " tony.nguyen
2019-07-26  6:45   ` [Qemu-riscv] " tony.nguyen
2019-07-26 13:43   ` Richard Henderson
2019-07-26 13:43     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:46 ` [Qemu-devel] [PATCH v5 07/15] hw/vfio: " tony.nguyen
2019-07-26  6:46   ` [Qemu-riscv] " tony.nguyen
2019-07-26 13:43   ` Richard Henderson
2019-07-26 13:43     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:46 ` [Qemu-devel] [PATCH v5 08/15] exec: " tony.nguyen
2019-07-26  6:46   ` [Qemu-riscv] " tony.nguyen
2019-07-26 13:46   ` Richard Henderson
2019-07-26 13:46     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:46 ` [Qemu-devel] [PATCH v5 09/15] cputlb: " tony.nguyen
2019-07-26  6:46   ` [Qemu-riscv] " tony.nguyen
2019-07-26 11:03   ` Philippe Mathieu-Daudé
2019-07-26 11:03     ` [Qemu-riscv] " Philippe Mathieu-Daudé
2019-07-26 11:16     ` [Qemu-devel] [EXTERNAL]Re: " Aleksandar Markovic
2019-07-26 11:16       ` [Qemu-riscv] [EXTERNAL]Re: [Qemu-devel] " Aleksandar Markovic
2019-07-26 11:23     ` [Qemu-devel] [EXTERNAL]Re: " Aleksandar Markovic
2019-07-26 11:23       ` [Qemu-riscv] [EXTERNAL]Re: [Qemu-devel] " Aleksandar Markovic
2019-07-26 14:14   ` Richard Henderson
2019-07-26 14:14     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:47 ` [Qemu-devel] [PATCH v5 10/15] memory: Access MemoryRegion with MemOp semantics tony.nguyen
2019-07-26  6:47   ` [Qemu-riscv] " tony.nguyen
2019-07-26 14:24   ` Richard Henderson
2019-07-26 14:24     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:47 ` [Qemu-devel] [PATCH v5 11/15] memory: Single byte swap along the I/O path tony.nguyen
2019-07-26  6:47   ` [Qemu-riscv] " tony.nguyen
2019-07-26  9:26   ` Paolo Bonzini
2019-07-26  9:26     ` [Qemu-riscv] " Paolo Bonzini
2019-07-26 14:29     ` Richard Henderson
2019-07-26 14:29       ` [Qemu-riscv] " Richard Henderson
2019-07-26  9:39   ` Paolo Bonzini
2019-07-26  9:39     ` [Qemu-riscv] " Paolo Bonzini
2019-07-26 14:45     ` Richard Henderson
2019-07-26 14:45       ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:48 ` [Qemu-devel] [PATCH v5 12/15] cpu: TLB_FLAGS_MASK bit to force memory slow path tony.nguyen
2019-07-26  6:48   ` [Qemu-riscv] " tony.nguyen
2019-07-26 14:48   ` Richard Henderson
2019-07-26 14:48     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:48 ` [Qemu-devel] [PATCH v5 13/15] cputlb: Byte swap memory transaction attribute tony.nguyen
2019-07-26  6:48   ` [Qemu-riscv] " tony.nguyen
2019-07-26 14:52   ` Richard Henderson
2019-07-26 14:52     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:48 ` [Qemu-devel] [PATCH v5 14/15] target/sparc: Add TLB entry with attributes tony.nguyen
2019-07-26  6:48   ` [Qemu-riscv] " tony.nguyen
2019-07-26 14:55   ` Richard Henderson
2019-07-26 14:55     ` [Qemu-riscv] " Richard Henderson
2019-07-26  6:49 ` [Qemu-devel] [PATCH v5 15/15] target/sparc: sun4u Invert Endian TTE bit tony.nguyen
2019-07-26  6:49   ` [Qemu-riscv] " tony.nguyen
2019-07-26 14:56   ` Richard Henderson
2019-07-26 14:56     ` [Qemu-riscv] " Richard Henderson
