* [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images
@ 2019-07-11 22:32 Jan Bobek
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint Jan Bobek
                   ` (18 more replies)
  0 siblings, 19 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

This is v3 of the patch series posted in [1] and [2]. Note that this
is the first fully-featured patch series implementing all desired
functionality, including (V)LDMXCSR and VSIB-based instructions like
VGATHER*.

While implementing the last bits required to support the VGATHERx
instructions, I ran into problems that called for a larger redesign:
there are no more !emit blocks, since their functionality is now
implemented in regular !constraints blocks, and memory constraints are
now specified in !memory blocks, as on other architectures.

I tested these changes on my machine; both master and slave modes work
in both 32-bit and 64-bit modes.

Cheers,
 -Jan

Changes since v2:
  Too many to be listed individually; this patch series might be
  better reviewed on its own.

References:
  1. https://lists.nongnu.org/archive/html/qemu-devel/2019-06/msg04123.html
  2. https://lists.nongnu.org/archive/html/qemu-devel/2019-07/msg00001.html

Jan Bobek (18):
  risugen_common: add helper functions insnv, randint
  risugen_common: split eval_with_fields into extract_fields and
    eval_block
  risugen_x86_asm: add module
  risugen_x86_constraints: add module
  risugen_x86_memory: add module
  risugen_x86: add module
  risugen: allow all byte-aligned instructions
  risugen: add command-line flag --x86_64
  risugen: add --xfeatures option for x86
  x86.risu: add MMX instructions
  x86.risu: add SSE instructions
  x86.risu: add SSE2 instructions
  x86.risu: add SSE3 instructions
  x86.risu: add SSSE3 instructions
  x86.risu: add SSE4.1 and SSE4.2 instructions
  x86.risu: add AES and PCLMULQDQ instructions
  x86.risu: add AVX instructions
  x86.risu: add AVX2 instructions

 risugen                    |   27 +-
 risugen_arm.pm             |    6 +-
 risugen_common.pm          |  117 +-
 risugen_m68k.pm            |    3 +-
 risugen_ppc64.pm           |    6 +-
 risugen_x86.pm             |  518 +++++
 risugen_x86_asm.pm         |  918 ++++++++
 risugen_x86_constraints.pm |  154 ++
 risugen_x86_memory.pm      |   87 +
 x86.risu                   | 4499 ++++++++++++++++++++++++++++++++++++
 10 files changed, 6293 insertions(+), 42 deletions(-)
 create mode 100644 risugen_x86.pm
 create mode 100644 risugen_x86_asm.pm
 create mode 100644 risugen_x86_constraints.pm
 create mode 100644 risugen_x86_memory.pm
 create mode 100644 x86.risu

-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-12  5:48   ` Richard Henderson
  2019-07-12 12:41   ` Alex Bennée
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 02/18] risugen_common: split eval_with_fields into extract_fields and eval_block Jan Bobek
                   ` (17 subsequent siblings)
  18 siblings, 2 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

insnv allows emitting variable-length instructions in little-endian or
big-endian byte order; it subsumes the functionality of the former
insn16() and insn32() functions.

randint can reliably generate signed or unsigned integers of arbitrary
width.
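
For illustration (not part of this patch), a backend that imports
risugen_common might use the new helpers along these lines; the values
shown are arbitrary:

    insnv(value => 0x0F6F, width => 16);           # big-endian by default
    insnv(value => 0x12345678, width => 32, bigendian => 0);
    my $disp8 = randint(width => 8, signed => 1);  # random signed 8-bit value
    my $imm64 = randint(width => 64);              # random unsigned 64-bit value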

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 risugen_common.pm | 55 +++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 48 insertions(+), 7 deletions(-)

diff --git a/risugen_common.pm b/risugen_common.pm
index 71ee996..d63250a 100644
--- a/risugen_common.pm
+++ b/risugen_common.pm
@@ -23,8 +23,9 @@ BEGIN {
     require Exporter;
 
     our @ISA = qw(Exporter);
-    our @EXPORT = qw(open_bin close_bin set_endian insn32 insn16 $bytecount
-                   progress_start progress_update progress_end
+    our @EXPORT = qw(open_bin close_bin set_endian insn32 insn16
+                   $bytecount insnv randint progress_start
+                   progress_update progress_end
                    eval_with_fields is_pow_of_2 sextract ctz
                    dump_insn_details);
 }
@@ -37,7 +38,7 @@ my $bigendian = 0;
 # (default is little endian, 0).
 sub set_endian
 {
-    $bigendian = @_;
+    ($bigendian) = @_;
 }
 
 sub open_bin
@@ -52,18 +53,58 @@ sub close_bin
     close(BIN) or die "can't close output file: $!";
 }
 
+sub insnv(%)
+{
+    my (%args) = @_;
+
+    # Default to big-endian order, so that the instruction bytes are
+    # emitted in the same order as they are written in the
+    # configuration file.
+    $args{bigendian} = 1 unless defined $args{bigendian};
+
+    for (my $bitcur = 0; $bitcur < $args{width}; $bitcur += 8) {
+        my $value = $args{value} >> ($args{bigendian}
+                                     ? $args{width} - $bitcur - 8
+                                     : $bitcur);
+
+        print BIN pack("C", $value & 0xff);
+        $bytecount += 1;
+    }
+}
+
 sub insn32($)
 {
     my ($insn) = @_;
-    print BIN pack($bigendian ? "N" : "V", $insn);
-    $bytecount += 4;
+    insnv(value => $insn, width => 32, bigendian => $bigendian);
 }
 
 sub insn16($)
 {
     my ($insn) = @_;
-    print BIN pack($bigendian ? "n" : "v", $insn);
-    $bytecount += 2;
+    insnv(value => $insn, width => 16, bigendian => $bigendian);
+}
+
+sub randint
+{
+    my (%args) = @_;
+    my $width = $args{width};
+
+    if ($width > 32) {
+        # Generate at most 32 bits at once; Perl's rand() does not
+        # behave well with ranges that are too large.
+        my $lower = randint(%args, width => 32);
+        my $upper = randint(%args, width => $args{width} - 32);
+        # Use arithmetic rather than bitwise operators, since bitwise
+        # ops turn signed integers into unsigned.
+        return $upper * (1 << 32) + $lower;
+    } elsif ($width > 0) {
+        my $halfrange = 1 << ($width - 1);
+        my $value = int(rand(2 * $halfrange));
+        $value -= $halfrange if defined $args{signed} && $args{signed};
+        return $value;
+    } else {
+        return 0;
+    }
 }
 
 # Progress bar implementation
-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 02/18] risugen_common: split eval_with_fields into extract_fields and eval_block
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 03/18] risugen_x86_asm: add module Jan Bobek
                   ` (16 subsequent siblings)
  18 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

extract_fields extracts the named variable fields from an opcode; it
returns a hash which can then be passed as the environment parameter
to eval_block. More importantly, this allows the caller to augment the
block environment with additional variables if they wish to do so.
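
As a small sketch (based on the call sites updated below), a backend
can now add extra variables to the environment before evaluating the
block; $myextra here is purely hypothetical:

    my %env = extract_fields($insn, $rec);
    $env{myextra} = 42;   # visible as $myextra inside the block
    my $v = eval_block($insnname, "constraints", $constraint, \%env);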

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 risugen_arm.pm    |  6 +++--
 risugen_common.pm | 64 ++++++++++++++++++++++++++++-------------------
 risugen_m68k.pm   |  3 ++-
 risugen_ppc64.pm  |  6 +++--
 4 files changed, 48 insertions(+), 31 deletions(-)

diff --git a/risugen_arm.pm b/risugen_arm.pm
index 8d423b1..23a468c 100644
--- a/risugen_arm.pm
+++ b/risugen_arm.pm
@@ -992,7 +992,8 @@ sub gen_one_insn($$)
         if (defined $constraint) {
             # user-specified constraint: evaluate in an environment
             # with variables set corresponding to the variable fields.
-            my $v = eval_with_fields($insnname, $insn, $rec, "constraints", $constraint);
+            my %env = extract_fields($insn, $rec);
+            my $v = eval_block($insnname, "constraints", $constraint, \%env);
             if (!$v) {
                 $constraintfailures++;
                 if ($constraintfailures > 10000) {
@@ -1020,7 +1021,8 @@ sub gen_one_insn($$)
             } else {
                 align(4);
             }
-            $basereg = eval_with_fields($insnname, $insn, $rec, "memory", $memblock);
+            my %env = extract_fields($insn, $rec);
+            $basereg = eval_block($insnname, "memory", $memblock, \%env);
 
             if ($is_aarch64) {
                 data_barrier();
diff --git a/risugen_common.pm b/risugen_common.pm
index d63250a..3f927ef 100644
--- a/risugen_common.pm
+++ b/risugen_common.pm
@@ -25,8 +25,8 @@ BEGIN {
     our @ISA = qw(Exporter);
     our @EXPORT = qw(open_bin close_bin set_endian insn32 insn16
                    $bytecount insnv randint progress_start
-                   progress_update progress_end
-                   eval_with_fields is_pow_of_2 sextract ctz
+                   progress_update progress_end extract_fields
+                   eval_block is_pow_of_2 sextract ctz
                    dump_insn_details);
 }
 
@@ -138,36 +138,48 @@ sub progress_end()
     $| = 0;
 }
 
-sub eval_with_fields($$$$$) {
-    # Evaluate the given block in an environment with Perl variables
-    # set corresponding to the variable fields for the insn.
-    # Return the result of the eval; we die with a useful error
-    # message in case of syntax error.
-    #
-    # At the moment we just evaluate the string in the environment
-    # of the calling package.
-    # What we *ought* to do here is to give the config snippets
-    # their own package, and explicitly import into it only the
-    # functions that we want to be accessible to the config.
-    # That would provide better separation and an explicitly set up
-    # environment that doesn't allow config file code to accidentally
-    # change state it shouldn't have access to, and avoid the need to
-    # use 'caller' to get the package name of our calling function.
-    my ($insnname, $insn, $rec, $blockname, $block) = @_;
+sub extract_fields($$)
+{
+    my ($insn, $rec) = @_;
+
+    my %fields = ();
+    for my $tuple (@{ $rec->{fields} }) {
+        my ($var, $pos, $mask) = @$tuple;
+        $fields{$var} = ($insn >> $pos) & $mask;
+    }
+    return %fields;
+}
+
+# Evaluate the given block in an environment with Perl variables set
+# corresponding to env. Return the result of the eval; we die with a
+# useful error message in case of syntax error.
+#
+# At the moment we just evaluate the string in the environment of the
+# calling package. What we *ought* to do here is to give the config
+# snippets their own package, and explicitly import into it only the
+# functions that we want to be accessible to the config.  That would
+# provide better separation and an explicitly set up environment that
+# doesn't allow config file code to accidentally change state it
+# shouldn't have access to, and avoid the need to use 'caller' to get
+# the package name of our calling function.
+sub eval_block($$$$)
+{
+    my ($insnname, $blockname, $block, $env) = @_;
+
     my $calling_package = caller;
     my $evalstr = "{ package $calling_package; ";
-    for my $tuple (@{ $rec->{fields} }) {
-        my ($var, $pos, $mask) = @$tuple;
-        my $val = ($insn >> $pos) & $mask;
-        $evalstr .= "my (\$$var) = $val; ";
+    for (keys %{$env}) {
+        $evalstr .= "my " unless $_ eq '_';
+        $evalstr .= "(\$$_) = \$env->{$_}; ";
     }
     $evalstr .= $block;
     $evalstr .= "}";
+
     my $v = eval $evalstr;
-    if ($@) {
-        print "Syntax error detected evaluating $insnname $blockname string:\n$block\n$@";
-        exit(1);
-    }
+    die "Syntax error detected evaluating $insnname $blockname string:\n"
+        . "$block\n"
+        . "$@"
+        if ($@);
     return $v;
 }
 
diff --git a/risugen_m68k.pm b/risugen_m68k.pm
index 7d62b13..8c812b5 100644
--- a/risugen_m68k.pm
+++ b/risugen_m68k.pm
@@ -129,7 +129,8 @@ sub gen_one_insn($$)
         if (defined $constraint) {
             # user-specified constraint: evaluate in an environment
             # with variables set corresponding to the variable fields.
-            my $v = eval_with_fields($insnname, $insn, $rec, "constraints", $constraint);
+            my %env = extract_fields($insn, $rec);
+            my $v = eval_block($insnname, "constraints", $constraint, \%env);
             if (!$v) {
                 $constraintfailures++;
                 if ($constraintfailures > 10000) {
diff --git a/risugen_ppc64.pm b/risugen_ppc64.pm
index b241172..40f717e 100644
--- a/risugen_ppc64.pm
+++ b/risugen_ppc64.pm
@@ -311,7 +311,8 @@ sub gen_one_insn($$)
         if (defined $constraint) {
             # user-specified constraint: evaluate in an environment
             # with variables set corresponding to the variable fields.
-            my $v = eval_with_fields($insnname, $insn, $rec, "constraints", $constraint);
+            my %env = extract_fields($insn, $rec);
+            my $v = eval_block($insnname, "constraints", $constraint, \%env);
             if (!$v) {
                 $constraintfailures++;
                 if ($constraintfailures > 10000) {
@@ -335,7 +336,8 @@ sub gen_one_insn($$)
             # Default alignment requirement for ARM is 4 bytes,
             # we use 16 for Aarch64, although often unnecessary and overkill.
             align(16);
-            $basereg = eval_with_fields($insnname, $insn, $rec, "memory", $memblock);
+            my %env = extract_fields($insn, $rec);
+            $basereg = eval_block($insnname, "memory", $memblock, \%env);
         }
 
         insn32($insn);
-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 03/18] risugen_x86_asm: add module
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint Jan Bobek
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 02/18] risugen_common: split eval_with_fields into extract_fields and eval_block Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-12 14:11   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: " Jan Bobek
                   ` (15 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

The module risugen_x86_asm.pm exports named register constants and the
asm_insn_* family of functions, which greatly simplify the emission of
x86 instructions.
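
For illustration (not part of the patch), emitting two simple
instructions might look as follows; the comments show the expected
encodings:

    asm_insn_mov(reg => REG_RCX, imm32 => 0x11223344);         # B9 44 33 22 11
    asm_insn_mov(reg => REG_RDX, base => REG_RAX, disp8 => 8);  # 8B 50 08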

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 risugen_x86_asm.pm | 918 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 918 insertions(+)
 create mode 100644 risugen_x86_asm.pm

diff --git a/risugen_x86_asm.pm b/risugen_x86_asm.pm
new file mode 100644
index 0000000..642f18b
--- /dev/null
+++ b/risugen_x86_asm.pm
@@ -0,0 +1,918 @@
+#!/usr/bin/perl -w
+###############################################################################
+# Copyright (c) 2019 Jan Bobek
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Eclipse Public License v1.0
+# which accompanies this distribution, and is available at
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Contributors:
+#     Jan Bobek - initial implementation
+###############################################################################
+
+# risugen_x86_asm -- risugen_x86's helper module for x86 assembly
+package risugen_x86_asm;
+
+use strict;
+use warnings;
+
+use risugen_common;
+
+our @ISA    = qw(Exporter);
+our @EXPORT = qw(
+    asm_insn asm_insn_ud1 asm_insn_xor asm_insn_sahf asm_insn_lea64
+    asm_insn_call asm_insn_jmp asm_insn_pop asm_insn_mov
+    asm_insn_mov64 asm_insn_movq asm_insn_add asm_insn_add64
+    asm_insn_and asm_insn_and64 asm_insn_neg asm_insn_neg64
+    asm_insn_xchg asm_insn_xchg64 asm_insn_movaps asm_insn_vmovaps
+    asm_insn_movdqu asm_insn_vmovdqu
+    REG_RAX REG_RCX REG_RDX REG_RBX REG_RSP REG_RBP REG_RSI REG_RDI
+    REG_R8 REG_R9 REG_R10 REG_R11 REG_R12 REG_R13 REG_R14 REG_R15
+    );
+
+use constant {
+    VEX_L_128 => 0,
+    VEX_L_256 => 1,
+
+    VEX_P_NONE   => 0b00,
+    VEX_P_DATA16 => 0b01,
+    VEX_P_REP    => 0b10,
+    VEX_P_REPNE  => 0b11,
+
+    VEX_M_0F   => 0b00001,
+    VEX_M_0F38 => 0b00010,
+    VEX_M_0F3A => 0b00011,
+
+    VEX_V_UNUSED => 0b1111,
+
+    REG_RAX => 0,
+    REG_RCX => 1,
+    REG_RDX => 2,
+    REG_RBX => 3,
+    REG_RSP => 4,
+    REG_RBP => 5,
+    REG_RSI => 6,
+    REG_RDI => 7,
+    REG_R8  => 8,
+    REG_R9  => 9,
+    REG_R10 => 10,
+    REG_R11 => 11,
+    REG_R12 => 12,
+    REG_R13 => 13,
+    REG_R14 => 14,
+    REG_R15 => 15,
+
+    MOD_INDIRECT        => 0b00,
+    MOD_INDIRECT_DISP8  => 0b01,
+    MOD_INDIRECT_DISP32 => 0b10,
+    MOD_DIRECT          => 0b11,
+};
+
+sub write_insn_repne(%)
+{
+    insnv(value => 0xF2, width => 8);
+}
+
+sub write_insn_rep(%)
+{
+    insnv(value => 0xF3, width => 8);
+}
+
+sub write_insn_data16(%)
+{
+    insnv(value => 0x66, width => 8);
+}
+
+sub write_insn_rex(%)
+{
+    my (%args) = @_;
+
+    my $rex = 0x40;
+    $rex |= (defined $args{w} && $args{w}) << 3;
+    $rex |= (defined $args{r} && $args{r}) << 2;
+    $rex |= (defined $args{x} && $args{x}) << 1;
+    $rex |= (defined $args{b} && $args{b}) << 0;
+    insnv(value => $rex, width => 8);
+}
+
+sub write_insn_vex(%)
+{
+    my (%args) = @_;
+
+    $args{r} = 1            unless defined $args{r};
+    $args{x} = 1            unless defined $args{x};
+    $args{b} = 1            unless defined $args{b};
+    $args{w} = 0            unless defined $args{w};
+    $args{m} = VEX_M_0F     unless defined $args{m};
+    $args{v} = VEX_V_UNUSED unless defined $args{v};
+    $args{p} = VEX_P_NONE   unless defined $args{p};
+
+    # The Intel manual implies that 2-byte VEX prefix is equivalent to
+    # VEX.X = 1, VEX.B = 1, VEX.W = 0 and VEX.M = VEX_M_0F.
+    if ($args{x} && $args{b} && !$args{w} && $args{m} == VEX_M_0F) {
+        # We can use the 2-byte VEX prefix
+        my $vex = 0xC5 << 8;
+        $vex |= ($args{r} & 0b1)    << 7;
+        $vex |= ($args{v} & 0b1111) << 3;
+        $vex |= ($args{l} & 0b1)    << 2;
+        $vex |= ($args{p} & 0b11)   << 0;
+        insnv(value => $vex, width => 16);
+    } else {
+        # We have to use the 3-byte VEX prefix
+        my $vex = 0xC4 << 16;
+        $vex |= ($args{r} & 0b1)     << 15;
+        $vex |= ($args{x} & 0b1)     << 14;
+        $vex |= ($args{b} & 0b1)     << 13;
+        $vex |= ($args{m} & 0b11111) << 8;
+        $vex |= ($args{w} & 0b1)     << 7;
+        $vex |= ($args{v} & 0b1111)  << 3;
+        $vex |= ($args{l} & 0b1)     << 2;
+        $vex |= ($args{p} & 0b11)    << 0;
+        insnv(value => $vex, width => 24);
+    }
+}
+
+sub write_insn_modrm(%)
+{
+    my (%args) = @_;
+
+    my $modrm = 0;
+    $modrm |= ($args{mod} & 0b11)  << 6;
+    $modrm |= ($args{reg} & 0b111) << 3;
+    $modrm |= ($args{rm}  & 0b111) << 0;
+    insnv(value => $modrm, width => 8);
+}
+
+sub write_insn_sib(%)
+{
+    my (%args) = @_;
+
+    my $sib = 0;
+    $sib |= ($args{ss}    & 0b11)  << 6;
+    $sib |= ($args{index} & 0b111) << 3;
+    $sib |= ($args{base}  & 0b111) << 0;
+    insnv(value => $sib, width => 8);
+}
+
+sub write_insn(%)
+{
+    my (%insn) = @_;
+
+    my @tokens;
+    push @tokens, "EVEX"   if defined $insn{evex};
+    push @tokens, "VEX"    if defined $insn{vex};
+    push @tokens, "REP"    if defined $insn{rep};
+    push @tokens, "REPNE"  if defined $insn{repne};
+    push @tokens, "DATA16" if defined $insn{data16};
+    push @tokens, "REX"    if defined $insn{rex};
+    push @tokens, "OP"     if defined $insn{opcode};
+    push @tokens, "MODRM"  if defined $insn{modrm};
+    push @tokens, "SIB"    if defined $insn{sib};
+    push @tokens, "DISP"   if defined $insn{disp};
+    push @tokens, "IMM"    if defined $insn{imm};
+    push @tokens, "END";
+
+    # (EVEX | VEX | ((REP | REPNE)? DATA16? REX?)) OP (MODRM SIB? DISP?)? IMM? END
+    my $token = shift @tokens;
+
+    if ($token eq "EVEX") {
+        $token = shift @tokens;
+        write_insn_evex(%{$insn{evex}});
+    } elsif ($token eq "VEX") {
+        $token = shift @tokens;
+        write_insn_vex(%{$insn{vex}});
+    } else {
+        if ($token eq "REP") {
+            $token = shift @tokens;
+            write_insn_rep(%{$insn{rep}});
+        } elsif ($token eq "REPNE") {
+            $token = shift @tokens;
+            write_insn_repne(%{$insn{repne}});
+        }
+        if ($token eq "DATA16") {
+            $token = shift @tokens;
+            write_insn_data16(%{$insn{data16}});
+        }
+        if ($token eq "REX") {
+            $token = shift @tokens;
+            write_insn_rex(%{$insn{rex}});
+        }
+    }
+
+    die "Unexpected instruction tokens where OP expected: $token @tokens\n"
+        unless $token eq "OP";
+
+    $token = shift @tokens;
+    insnv(%{$insn{opcode}});
+
+    if ($token eq "MODRM") {
+        $token = shift @tokens;
+        write_insn_modrm(%{$insn{modrm}});
+
+        if ($token eq "SIB") {
+            $token = shift @tokens;
+            write_insn_sib(%{$insn{sib}});
+        }
+        if ($token eq "DISP") {
+            $token = shift @tokens;
+            insnv(%{$insn{disp}}, bigendian => 0);
+        }
+    }
+    if ($token eq "IMM") {
+        $token = shift @tokens;
+        insnv(%{$insn{imm}}, bigendian => 0);
+    }
+
+    die "Unexpected junk tokens at the end of instruction: $token @tokens\n"
+        unless $token eq "END";
+}
+
+sub asm_insn_vex_rxb($)
+{
+    my ($insn) = @_;
+    my $have_rex = defined $insn->{rex};
+
+    my @tokens;
+    push @tokens, "VEX.R"  if defined $insn->{vex}{r};
+    push @tokens, "REX.R"  if $have_rex && defined $insn->{rex}{r};
+    push @tokens, "VEX.X"  if defined $insn->{vex}{x};
+    push @tokens, "REX.X"  if $have_rex && defined $insn->{rex}{x};
+    push @tokens, "VEX.B"  if defined $insn->{vex}{b};
+    push @tokens, "REX.B"  if $have_rex && defined $insn->{rex}{b};
+    push @tokens, "END";
+
+    # (VEX.R | REX.R)? (VEX.X | REX.X)? (VEX.B | REX.B)? END
+    my $token = shift @tokens;
+
+    if ($token eq "VEX.R") {
+        $token = shift @tokens;
+    } elsif ($token eq "REX.R") {
+        $token = shift @tokens;
+        $insn->{vex}{r} = !$insn->{rex}{r};
+        delete $insn->{rex}{r};
+    }
+    if ($token eq "VEX.X") {
+        $token = shift @tokens;
+    } elsif ($token eq "REX.X") {
+        $token = shift @tokens;
+        $insn->{vex}{x} = !$insn->{rex}{x};
+        delete $insn->{rex}{x};
+    }
+    if ($token eq "VEX.B") {
+        $token = shift @tokens;
+    } elsif ($token eq "REX.B") {
+        $token = shift @tokens;
+        $insn->{vex}{b} = !$insn->{rex}{b};
+        delete $insn->{rex}{b};
+    }
+
+    die "unexpected junk at the end of VEX.RXB tokens: $token @tokens\n"
+        unless $token eq "END";
+
+    if ($have_rex) {
+        die "REX not empty"
+            unless !%{$insn->{rex}};
+
+        delete $insn->{rex};
+    }
+}
+
+sub asm_insn_vex_p($)
+{
+    my ($insn) = @_;
+
+    my @tokens;
+    push @tokens, "VEX.P"  if defined $insn->{vex}{p};
+    push @tokens, "DATA16" if defined $insn->{data16};
+    push @tokens, "REP"    if defined $insn->{rep};
+    push @tokens, "REPNE"  if defined $insn->{repne};
+    push @tokens, "END";
+
+    # (VEX.P | DATA16 | REP | REPNE)? END
+    my $token = shift @tokens;
+
+    if ($token eq "VEX.P") {
+        $token = shift @tokens;
+        my $vex_p = $insn->{vex}{p};
+        delete $insn->{vex}{p};
+
+        $insn->{vex}{p} = VEX_P_DATA16 if $vex_p == 0x66;
+        $insn->{vex}{p} = VEX_P_REPNE  if $vex_p == 0xF2;
+        $insn->{vex}{p} = VEX_P_REP    if $vex_p == 0xF3;
+
+        die "invalid value of VEX.P=$vex_p\n"
+            unless defined $insn->{vex}{p};
+    } elsif ($token eq "DATA16") {
+        $token = shift @tokens;
+        $insn->{vex}{p} = VEX_P_DATA16;
+        delete $insn->{data16};
+    } elsif ($token eq "REP") {
+        $token = shift @tokens;
+        $insn->{vex}{p} = VEX_P_REP;
+        delete $insn->{rep};
+    } elsif ($token eq "REPNE") {
+        $token = shift @tokens;
+        $insn->{vex}{p} = VEX_P_REPNE;
+        delete $insn->{repne};
+    }
+
+    die "unexpected junk at the end of VEX.P tokens: $token @tokens\n"
+        unless $token eq "END";
+}
+
+sub asm_insn_vex_m($)
+{
+    my ($insn) = @_;
+    my $opcvalue = $insn->{opcode}{value};
+    my $opcwidth = $insn->{opcode}{width};
+
+    my @tokens;
+    push @tokens, "VEX.M" if defined $insn->{vex}{m};
+    push @tokens, "0F"    if $opcwidth >= 16 && (($opcvalue >> ($opcwidth -  8)) & 0xFF) == 0x0F;
+    push @tokens, "38"    if $opcwidth >= 24 && (($opcvalue >> ($opcwidth - 16)) & 0xFF) == 0x38;
+    push @tokens, "3A"    if $opcwidth >= 24 && (($opcvalue >> ($opcwidth - 16)) & 0xFF) == 0x3A;
+    push @tokens, "END";
+
+    # (VEX.M | 0F (38 | 3A)?) END
+    my $token = shift @tokens;
+
+    if ($token eq "VEX.M") {
+        $token = shift @tokens;
+        my $vex_m = $insn->{vex}{m};
+        delete $insn->{vex}{m};
+
+        $insn->{vex}{m} = VEX_M_0F   if $vex_m == 0x0F;
+        $insn->{vex}{m} = VEX_M_0F38 if $vex_m == 0x0F38;
+        $insn->{vex}{m} = VEX_M_0F3A if $vex_m == 0x0F3A;
+
+        die "invalid value of VEX.M=$vex_m\n"
+            unless defined $insn->{vex}{m};
+    } elsif ($token eq "0F") {
+        $token = shift @tokens;
+
+        if ($token eq "38" || $token eq "3A") {
+            $token = shift @tokens;
+
+            $insn->{vex}{m} = VEX_M_0F38 if $token eq "38";
+            $insn->{vex}{m} = VEX_M_0F3A if $token eq "3A";
+            $insn->{opcode}{value} &= (1 << ($opcwidth - 16)) - 1;
+            $insn->{opcode}{width} -= 16;
+        } else {
+            $insn->{vex}{m} = VEX_M_0F;
+            $insn->{opcode}{value} &= (1 << ($opcwidth - 8)) - 1;
+            $insn->{opcode}{width} -= 8;
+        }
+    } else {
+        die "unexpected vex token where VEX.M or 0F expected: $token @tokens\n";
+    }
+
+    die "unexpected junk at the end of VEX.M tokens: $token @tokens\n"
+        unless $token eq "END";
+}
+
+sub asm_insn_vex_l($)
+{
+    my ($insn) = @_;
+    my $vex_l = $insn->{vex}{l};
+    delete $insn->{vex}{l};
+
+    $insn->{vex}{l} = 0         if $vex_l == 0;
+    $insn->{vex}{l} = VEX_L_128 if $vex_l == 128;
+    $insn->{vex}{l} = VEX_L_256 if $vex_l == 256;
+
+    die "invalid value of VEX.L=$vex_l\n"
+        unless defined $insn->{vex}{l};
+}
+
+sub asm_insn_vex_v($)
+{
+    my ($insn) = @_;
+
+    $insn->{vex}{v} ^= 0b1111 if defined $insn->{vex}{v};
+}
+
+sub asm_insn_vex($)
+{
+    my ($insn) = @_;
+
+    asm_insn_vex_rxb($insn);
+    asm_insn_vex_p($insn);
+    asm_insn_vex_m($insn);
+    asm_insn_vex_l($insn);
+    asm_insn_vex_v($insn);
+}
+
+sub asm_insn_modrm_rex($)
+{
+    my ($insn) = @_;
+
+    asm_insn_val($insn->{modrm}, 'disp');
+
+    my @tokens;
+    push @tokens, "REG"    if defined $insn->{modrm}{reg};
+    push @tokens, "REG2"   if defined $insn->{modrm}{reg2};
+    push @tokens, "BASE"   if defined $insn->{modrm}{base};
+    push @tokens, "DISP"   if defined $insn->{modrm}{disp};
+    push @tokens, "INDEX"  if defined $insn->{modrm}{index};
+    push @tokens, "VINDEX" if defined $insn->{modrm}{vindex};
+    push @tokens, "END";
+
+    # REG (REG2 | (BASE DISP? | DISP) (INDEX | VINDEX)?) END
+    my $token = shift @tokens;
+
+    die "unexpected modrm tokens where REG expected: $token @tokens\n"
+        unless $token eq "REG";
+
+    $token = shift @tokens;
+    my $reg = $insn->{modrm}{reg};
+
+    $insn->{rex}{r}     = 1 if $reg & 0b1000;
+    $insn->{modrm}{reg} = $reg & 0b111;
+
+    if ($token eq "REG2") {
+        $token = shift @tokens;
+        my $reg2 = $insn->{modrm}{reg2};
+        delete $insn->{modrm}{reg2};
+
+        $insn->{rex}{b}     = 1 if $reg2 & 0b1000;
+        $insn->{modrm}{mod} = MOD_DIRECT;
+        $insn->{modrm}{rm}  = $reg2 & 0b111;
+    } else {
+        if ($token eq "BASE") {
+            $token = shift @tokens;
+            my $base = $insn->{modrm}{base};
+            delete $insn->{modrm}{base};
+
+            $insn->{rex}{b}    = 1 if $base & 0b1000;
+            $insn->{modrm}{rm} = $base & 0b111;
+
+            if ($token eq "DISP") {
+                $token = shift @tokens;
+                my $disp = $insn->{modrm}{disp};
+                delete $insn->{modrm}{disp};
+
+                die "displacement too large: $disp->{width}\n"
+                    unless $disp->{width} <= 32;
+
+                if ($disp->{width} <= 8) {
+                    $insn->{modrm}{mod}  = MOD_INDIRECT_DISP8;
+                    $insn->{disp}{width} = 8;
+                } else {
+                    $insn->{modrm}{mod}  = MOD_INDIRECT_DISP32;
+                    $insn->{disp}{width} = 32;
+                }
+
+                $insn->{disp}{value} = $disp->{value};
+            } elsif (($base & 0b111) == REG_RBP) {
+                # Must use explicit displacement for RBP/R13-based
+                # addressing
+                $insn->{modrm}{mod}  = MOD_INDIRECT_DISP8;
+                $insn->{disp}{value} = 0;
+                $insn->{disp}{width} = 8;
+            } else {
+                $insn->{modrm}{mod} = MOD_INDIRECT;
+            }
+        } elsif ($token eq "DISP") {
+            $token = shift @tokens;
+            my $disp = $insn->{modrm}{disp};
+            delete $insn->{modrm}{disp};
+
+            die "displacement too large: $disp->{width}\n"
+                unless $disp->{width} <= 32;
+
+            # Displacement only
+            $insn->{modrm}{mod}  = MOD_INDIRECT;
+            $insn->{modrm}{rm}   = REG_RBP;
+            $insn->{disp}{value} = $disp->{value};
+            $insn->{disp}{width} = 32;
+        } else {
+            die "DISP or BASE expected: $token @tokens\n";
+        }
+
+        if ($token eq "INDEX" || $token eq "VINDEX") {
+            $insn->{modrm}{ss} = 0 unless defined $insn->{modrm}{ss};
+
+            my $index;
+            if ($token eq "VINDEX") {
+                $index = $insn->{modrm}{vindex};
+                delete $insn->{modrm}{vindex};
+            } else {
+                $index = $insn->{modrm}{index};
+                delete $insn->{modrm}{index};
+
+                # RSP cannot be encoded as index register.
+                die "cannot encode RSP as index register\n"
+                    if $index == REG_RSP;
+            }
+
+            $token = shift @tokens;
+            my $ss = $insn->{modrm}{ss};
+            delete $insn->{modrm}{ss};
+
+            $insn->{rex}{x}     = 1 if $index & 0b1000;
+            $insn->{sib}{ss}    = $ss;
+            $insn->{sib}{index} = $index & 0b111;
+            $insn->{sib}{base}  = $insn->{modrm}{rm};
+            $insn->{modrm}{rm}  = REG_RSP; # SIB
+        } elsif ($insn->{modrm}{rm} == REG_RSP) {
+            # Must use SIB for RSP/R12-based addressing
+            $insn->{sib}{ss}    = 0;
+            $insn->{sib}{index} = REG_RSP; # No index
+            $insn->{sib}{base}  = REG_RSP;
+        }
+    }
+
+    die "unexpected junk at the end of modrm tokens: $token @tokens\n"
+        unless $token eq "END";
+}
+
+sub asm_insn_val($$)
+{
+    my ($insn, $k) = @_;
+
+    my @tokens;
+    push @tokens, "K"   if defined $insn->{$k};
+    push @tokens, "K8"  if defined $insn->{$k . "8"};
+    push @tokens, "K16" if defined $insn->{$k . "16"};
+    push @tokens, "K32" if defined $insn->{$k . "32"};
+    push @tokens, "K64" if defined $insn->{$k . "64"};
+    push @tokens, "END";
+
+    # (K | K8 | K16 | K32 | K64)? END
+    my $token = shift @tokens;
+
+    if ($token eq "K") {
+        $token = shift @tokens;
+    } elsif ($token eq "K8") {
+        $token = shift @tokens;
+        my $value = $insn->{$k . "8"};
+        delete $insn->{$k . "8"};
+
+        $insn->{$k}{value} = $value;
+        $insn->{$k}{width} = 8;
+    } elsif ($token eq "K16") {
+        $token = shift @tokens;
+        my $value = $insn->{$k . "16"};
+        delete $insn->{$k . "16"};
+
+        $insn->{$k}{value} = $value;
+        $insn->{$k}{width} = 16;
+    } elsif ($token eq "K32") {
+        $token = shift @tokens;
+        my $value = $insn->{$k . "32"};
+        delete $insn->{$k . "32"};
+
+        $insn->{$k}{value} = $value;
+        $insn->{$k}{width} = 32;
+    } elsif ($token eq "K64") {
+        $token = shift @tokens;
+        my $value = $insn->{$k . "64"};
+        delete $insn->{$k . "64"};
+
+        $insn->{$k}{value} = $value;
+        $insn->{$k}{width} = 64;
+    }
+
+    die "unexpected junk at the end of value tokens: $token @tokens\n"
+        unless $token eq "END";
+}
+
+sub asm_insn(%)
+{
+    my (%insn) = @_;
+
+    asm_insn_val(\%insn, 'opcode');
+    asm_insn_val(\%insn, 'imm');
+    asm_insn_modrm_rex(\%insn) if defined $insn{modrm};
+    asm_insn_vex(\%insn)       if defined $insn{vex};
+    write_insn(%insn);
+}
+
+sub asm_insn_ud1(%)
+{
+    my (%modrm) = @_;
+    asm_insn(opcode16 => 0x0FB9, modrm => \%modrm);
+}
+
+sub asm_insn_xor(%)
+{
+    my (%modrm) = @_;
+
+    my %insn       = ();
+    $insn{opcode8} = 0x33;
+    $insn{modrm}   = \%modrm;
+    asm_insn(%insn);
+}
+
+sub asm_insn_sahf()
+{
+    my %insn       = ();
+    $insn{opcode8} = 0x9E;
+    asm_insn(%insn);
+}
+
+sub asm_insn_lea64(%)
+{
+    my (%modrm) = @_;
+
+    my %insn       = ();
+    $insn{rex}{w}  = 1;
+    $insn{opcode8} = 0x8D;
+    $insn{modrm}   = \%modrm;
+    asm_insn(%insn);
+}
+
+sub asm_insn_call(%)
+{
+    my (%insn) = @_;
+    asm_insn_val(\%insn, 'imm');
+
+    die "imm too large: $insn{imm}{width}"
+        unless $insn{imm}{width} <= 32;
+
+    $insn{opcode8}    = 0xE8;
+    $insn{imm}{width} = 32;
+    asm_insn(%insn);
+}
+
+sub asm_insn_jmp(%)
+{
+    my (%insn) = @_;
+    asm_insn_val(\%insn, 'imm');
+
+    die "imm too large: $insn{imm}{width}"
+        unless $insn{imm}{width} <= 32;
+
+    $insn{opcode8}    = 0xE9;
+    $insn{imm}{width} = 32;
+    asm_insn(%insn);
+}
+
+sub asm_insn_pop(%)
+{
+    my (%args) = @_;
+
+    my %insn       = ();
+    $insn{rex}{b}  = 1 if $args{reg} & 0b1000;
+    $insn{opcode8} = 0x58 | ($args{reg} & 0b111);
+    asm_insn(%insn);
+}
+
+sub asm_insn_mov_(%)
+{
+    my (%args) = @_;
+    my $is_wide64 = $args{w}; delete $args{w};
+    asm_insn_val(\%args, 'imm');
+
+    if (!defined $args{imm}) {
+        # Regular MOV reg, r/m. The W flag differentiates between
+        # 32-bit and 64-bit registers.
+        my %insn       = ();
+        $insn{rex}{w}  = 1 if $is_wide64;
+        $insn{opcode8} = 0x8B;
+        $insn{modrm}   = \%args;
+        asm_insn(%insn);
+    } elsif ($is_wide64
+             && $args{imm}{width} <= 32
+             && $args{imm}{value} < 0) {
+        # Move signed immediate to 64-bit register. This is the right
+        # time to use sign-extending move to save space; no point in
+        # using this opcode for 32-bit registers or for non-negative
+        # values, since both of these cases are better handled by
+        # 0xB8, which is shorter.
+        $args{imm}{width} = 32;
+
+        my %insn           = ();
+        $insn{rex}{w}      = 1;
+        $insn{opcode8}     = 0xC7;
+        $insn{modrm}{reg}  = 0;
+        $insn{modrm}{reg2} = $args{reg};
+        $insn{imm}         = $args{imm};
+        asm_insn(%insn);
+    } elsif ($args{imm}{width} <= (!$is_wide64 ? 32 : 64)) {
+        # Move immediate to 32/64-bit register. Note that this opcode
+        # is zero-extending, since the upper part of the destination
+        # register is automatically zeroed when moving a 32-bit
+        # immediate on x86_64.
+        $args{imm}{width} = ($args{imm}{width} <= 32 ? 32 : 64);
+
+        my %insn       = ();
+        $insn{rex}{w}  = 1 if $args{imm}{width} > 32;
+        $insn{rex}{b}  = 1 if $args{reg} & 0b1000;
+        $insn{opcode8} = 0xB8 | ($args{reg} & 0b111);
+        $insn{imm}     = $args{imm};
+        asm_insn(%insn);
+    } else {
+        die "imm too large: $args{imm}{width}";
+    }
+}
+
+sub asm_insn_mov(%)
+{
+    my (%args) = @_;
+    asm_insn_mov_(%args, w => 0);
+}
+
+sub asm_insn_mov64(%)
+{
+    my (%args) = @_;
+    asm_insn_mov_(%args, w => 1);
+}
+
+# Currently only the MOVQ mm, mm/m64 form is supported (and assumed).
+sub asm_insn_movq(%)
+{
+    my (%modrm) = @_;
+
+    my %insn        = ();
+    $insn{opcode16} = 0x0F6F;
+    $insn{modrm}    = \%modrm;
+    asm_insn(%insn);
+}
+
+sub asm_insn_add_(%)
+{
+    my (%args) = @_;
+    my $is_wide64 = $args{w}; delete $args{w};
+    asm_insn_val(\%args, 'imm');
+
+    if (!defined $args{imm}) {
+        # Regular ADD r/m, reg. The W flag differentiates between
+        # 32-bit and 64-bit registers.
+        my %insn       = ();
+        $insn{rex}{w}  = 1 if $is_wide64;
+        $insn{opcode8} = 0x01;
+        $insn{modrm}   = \%args;
+        asm_insn(%insn);
+    } elsif ($args{imm}{width} <= 8) {
+        # ADD r/m, imm8 with sign-extension.
+        my %insn          = ();
+        $insn{rex}{w}     = 1 if $is_wide64;
+        $insn{opcode8}    = 0x83;
+        $insn{modrm}      = \%args;
+        $insn{modrm}{reg} = 0;
+        $insn{imm}        = $args{imm}; delete $args{imm};
+        $insn{imm}{width} = 8;
+        asm_insn(%insn);
+    } else {
+        die "imm too large: $args{imm}{width}\n";
+    }
+}
+
+sub asm_insn_add(%)
+{
+    my (%args) = @_;
+    asm_insn_add_(%args, w => 0);
+}
+
+sub asm_insn_add64(%)
+{
+    my (%args) = @_;
+    asm_insn_add_(%args, w => 1);
+}
+
+sub asm_insn_and_(%)
+{
+    my (%args) = @_;
+    my $is_wide64 = $args{w}; delete $args{w};
+    asm_insn_val(\%args, 'imm');
+
+    if (!defined $args{imm}) {
+        # Regular AND r/m, reg. The W flag differentiates between
+        # 32-bit and 64-bit registers.
+        my %insn       = ();
+        $insn{rex}{w}  = 1 if $is_wide64;
+        $insn{opcode8} = 0x21;
+        $insn{modrm}   = \%args;
+        asm_insn(%insn);
+    } elsif ($args{imm}{width} <= 8) {
+        # AND r/m, imm8 with sign-extension.
+        my %insn          = ();
+        $insn{rex}{w}     = 1 if $is_wide64;
+        $insn{opcode8}    = 0x83;
+        $insn{modrm}      = \%args;
+        $insn{modrm}{reg} = 4;
+        $insn{imm}        = $args{imm}; delete $args{imm};
+        $insn{imm}{width} = 8;
+        asm_insn(%insn);
+    } else {
+        die "imm too large: $args{imm}{width}\n";
+    }
+}
+
+sub asm_insn_and(%)
+{
+    my (%args) = @_;
+    asm_insn_and_(%args, w => 0);
+}
+
+sub asm_insn_and64(%)
+{
+    my (%args) = @_;
+    asm_insn_and_(%args, w => 1);
+}
+
+sub asm_insn_neg_(%)
+{
+    my (%args) = @_;
+
+    my %insn          = ();
+    $insn{rex}{w}     = 1 if $args{w}; delete $args{w};
+    $insn{opcode8}    = 0xF7;
+    $insn{modrm}      = \%args;
+    $insn{modrm}{reg} = 3;
+    asm_insn(%insn);
+}
+
+sub asm_insn_neg(%)
+{
+    my (%modrm) = @_;
+    asm_insn_neg_(%modrm, w => 0);
+}
+
+sub asm_insn_neg64(%)
+{
+    my (%modrm) = @_;
+    asm_insn_neg_(%modrm, w => 1);
+}
+
+sub asm_insn_xchg_(%)
+{
+    my (%args) = @_;
+
+    if (defined $args{reg2} &&
+        ($args{reg} == REG_RAX || $args{reg2} == REG_RAX)) {
+        # We can use the short form, yay!
+        my $reg = ($args{reg} == REG_RAX ? $args{reg2} : $args{reg});
+
+        my %insn       = ();
+        $insn{rex}{w}  = 1 if $args{w}; delete $args{w};
+        $insn{rex}{b}  = 1 if $reg & 0b1000;
+        $insn{opcode8} = 0x90 | ($reg & 0b111);
+        asm_insn(%insn);
+    } else {
+        my %insn       = ();
+        $insn{rex}{w}  = 1 if $args{w}; delete $args{w};
+        $insn{opcode8} = 0x87;
+        $insn{modrm}   = \%args;
+        asm_insn(%insn);
+    }
+}
+
+sub asm_insn_xchg(%)
+{
+    my (%modrm) = @_;
+    asm_insn_xchg_(%modrm, w => 0);
+}
+
+sub asm_insn_xchg64(%)
+{
+    my (%modrm) = @_;
+    asm_insn_xchg_(%modrm, w => 1);
+}
+
+sub asm_insn_movaps(%)
+{
+    my (%modrm) = @_;
+
+    my %insn        = ();
+    $insn{opcode16} = 0x0F28;
+    $insn{modrm}    = \%modrm;
+    asm_insn(%insn);
+}
+
+sub asm_insn_vmovaps(%)
+{
+    my (%args) = @_;
+
+    my %insn       = ();
+    $insn{vex}{l}  = $args{l}; delete $args{l};
+    $insn{vex}{m}  = 0x0F;
+    $insn{opcode8} = 0x28;
+    $insn{modrm}   = \%args;
+    asm_insn(%insn);
+}
+
+sub asm_insn_movdqu(%)
+{
+    my (%modrm) = @_;
+
+    my %insn        = ();
+    $insn{rep}      = {};
+    $insn{opcode16} = 0x0F6F;
+    $insn{modrm}    = \%modrm;
+    asm_insn(%insn);
+}
+
+sub asm_insn_vmovdqu(%)
+{
+    my (%args) = @_;
+
+    my %insn       = ();
+    $insn{rep}     = {};
+    $insn{vex}{l}  = $args{l}; delete $args{l};
+    $insn{vex}{m}  = 0x0F;
+    $insn{opcode8} = 0x6F;
+    $insn{modrm}   = \%args;
+    asm_insn(%insn);
+}
-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: add module
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (2 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 03/18] risugen_x86_asm: add module Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-12 14:24   ` Richard Henderson
  2019-07-21  1:54   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 05/18] risugen_x86_memory: " Jan Bobek
                   ` (14 subsequent siblings)
  18 siblings, 2 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

The module risugen_x86_constraints.pm provides the environment for
evaluating x86 "!constraints" blocks. This is facilitated by the
single exported function, eval_constraints_block.
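
The exact x86.risu syntax appears later in the series; purely as an
illustration, the body of a !constraints block is Perl code along
these lines, with $_ bound to the instruction being assembled:

    data16($_);            # force a 0x66 prefix
    modrm($_, reg => 5);   # MODRM.REG fixed (opcode extension), rest randomized
    imm($_, width => 8);   # random 8-bit immediate
    1;                     # accept the generated instruction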

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 risugen_x86_constraints.pm | 154 +++++++++++++++++++++++++++++++++++++
 1 file changed, 154 insertions(+)
 create mode 100644 risugen_x86_constraints.pm

diff --git a/risugen_x86_constraints.pm b/risugen_x86_constraints.pm
new file mode 100644
index 0000000..a4ee687
--- /dev/null
+++ b/risugen_x86_constraints.pm
@@ -0,0 +1,154 @@
+#!/usr/bin/perl -w
+###############################################################################
+# Copyright (c) 2019 Jan Bobek
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Eclipse Public License v1.0
+# which accompanies this distribution, and is available at
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Contributors:
+#     Jan Bobek - initial implementation
+###############################################################################
+
+# risugen_x86_constraints -- risugen_x86's helper module for "!constraints" blocks
+package risugen_x86_constraints;
+
+use strict;
+use warnings;
+
+use risugen_common;
+use risugen_x86_asm;
+
+our @ISA    = qw(Exporter);
+our @EXPORT = qw(eval_constraints_block);
+
+my $is_x86_64;
+
+sub data16($%)
+{
+    my ($insn, %data16) = @_;
+    $insn->{data16} = \%data16;
+}
+
+sub rep($%)
+{
+    my ($insn, %rep) = @_;
+    $insn->{rep} = \%rep;
+}
+
+sub repne($%)
+{
+    my ($insn, %repne) = @_;
+    $insn->{repne} = \%repne;
+}
+
+sub rex($%)
+{
+    my ($insn, %rex) = @_;
+    # It doesn't make sense to randomize any REX fields, since REX.W
+    # is opcode-like and REX.R/.X/.B are encoded automatically by
+    # risugen_x86_asm.
+    $insn->{rex} = \%rex;
+}
+
+sub vex($%)
+{
+    my ($insn, %vex) = @_;
+    my $regidw = $is_x86_64 ? 4 : 3;
+
+    # There is no point in randomizing other VEX fields, since
+    # VEX.R/.X/.B are encoded automatically by risugen_x86_asm, and
+    # VEX.M/.P are opcodes.
+    $vex{l} = randint(width => 1) ? 256 : 128 unless defined $vex{l};
+    $vex{v} = randint(width => $regidw)       unless defined $vex{v};
+    $vex{w} = randint(width => 1)             unless defined $vex{w};
+    $insn->{vex} = \%vex;
+}
+
+sub modrm_($%)
+{
+    my ($insn, %args) = @_;
+    my $regidw = $is_x86_64 ? 4 : 3;
+
+    my %modrm = ();
+    if (defined $args{reg}) {
+        # This makes the config file syntax a bit more accommodating
+        # in cases where MODRM.REG is an opcode extension field.
+        $modrm{reg} = $args{reg};
+    } else {
+        $modrm{reg} = randint(width => $regidw);
+    }
+
+    # There is also a displacement-only form, but we don't know
+    # absolute address of the memblock, so we cannot test it.
+    my $form = int(rand(4));
+    if ($form == 0) {
+        $modrm{reg2} = randint(width => $regidw);
+    } else {
+        $modrm{base} = randint(width => $regidw);
+
+        if ($form == 2) {
+            $modrm{base}        = randint(width => $regidw);
+            $modrm{disp}{value} = randint(width => 8, signed => 1);
+            $modrm{disp}{width} = 8;
+        } elsif ($form == 3) {
+            $modrm{base}        = randint(width => $regidw);
+            $modrm{disp}{value} = randint(width => 32, signed => 1);
+            $modrm{disp}{width} = 32;
+        }
+
+        my $have_index = int(rand(2));
+        if ($have_index) {
+            my $indexk      = $args{indexk};
+            $modrm{ss}      = randint(width => 2);
+            $modrm{$indexk} = randint(width => $regidw);
+        }
+    }
+
+    $insn->{modrm} = \%modrm;
+}
+
+sub modrm($%)
+{
+    my ($insn, %args) = @_;
+    modrm_($insn, indexk => 'index', %args);
+}
+
+sub modrm_vsib($%)
+{
+    my ($insn, %args) = @_;
+    modrm_($insn, indexk => 'vindex', %args);
+}
+
+sub imm($%)
+{
+    my ($insn, %args) = @_;
+    $insn->{imm}{value} = randint(%args);
+    $insn->{imm}{width} = $args{width};
+}
+
+sub eval_constraints_block(%)
+{
+    my (%args) = @_;
+    my $rec = $args{rec};
+    my $insn = $args{insn};
+    my $insnname = $rec->{name};
+    my $opcode = $insn->{opcode}{value};
+
+    $is_x86_64 = $args{is_x86_64};
+
+    my $constraint = $rec->{blocks}{"constraints"};
+    if (defined $constraint) {
+        # user-specified constraint: evaluate in an environment
+        # with variables set corresponding to the variable fields.
+        my %env = extract_fields($opcode, $rec);
+        # set the variable $_ to the instruction in question
+        $env{_} = $insn;
+
+        return eval_block($insnname, "constraints", $constraint, \%env);
+    } else {
+        return 1;
+    }
+}
+
+1;
-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 05/18] risugen_x86_memory: add module
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (3 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: " Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-21  1:58   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 06/18] risugen_x86: " Jan Bobek
                   ` (13 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

The module risugen_x86_memory.pm provides the environment for
evaluating x86 "!memory" blocks. This is facilitated by the single
exported function, eval_memory_block.
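
The exact x86.risu syntax appears later in the series; purely as an
illustration, a !memory block body might simply declare the access
performed by the instruction:

    load(size => 16, align => 16);   # 16-byte aligned 16-byte read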

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 risugen_x86_memory.pm | 87 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 risugen_x86_memory.pm

diff --git a/risugen_x86_memory.pm b/risugen_x86_memory.pm
new file mode 100644
index 0000000..6aa6877
--- /dev/null
+++ b/risugen_x86_memory.pm
@@ -0,0 +1,87 @@
+#!/usr/bin/perl -w
+###############################################################################
+# Copyright (c) 2019 Jan Bobek
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Eclipse Public License v1.0
+# which accompanies this distribution, and is available at
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Contributors:
+#     Jan Bobek - initial implementation
+###############################################################################
+
+# risugen_x86_memory -- risugen_x86's helper module for "!memory" blocks
+package risugen_x86_memory;
+
+use strict;
+use warnings;
+
+use risugen_common;
+use risugen_x86_asm;
+
+our @ISA    = qw(Exporter);
+our @EXPORT = qw(eval_memory_block);
+
+my %memory_opts;
+
+sub load(%)
+{
+    my (%args) = @_;
+
+    @memory_opts{keys %args} = values %args;
+    $memory_opts{is_write}   = 0;
+}
+
+sub store(%)
+{
+    my (%args) = @_;
+
+    @memory_opts{keys %args} = values %args;
+    $memory_opts{is_write}   = 1;
+}
+
+sub eval_memory_block(%)
+{
+    my (%args) = @_;
+    my $rec = $args{rec};
+    my $insn = $args{insn};
+    my $insnname = $rec->{name};
+    my $opcode = $insn->{opcode}{value};
+
+    # Setup reasonable defaults
+    %memory_opts           = ();
+    $memory_opts{size}     = 0;
+    $memory_opts{align}    = 1;
+    $memory_opts{disp}     = 0;
+    $memory_opts{ss}       = 0;
+    $memory_opts{value}    = 0;
+    $memory_opts{mask}     = 0;
+    $memory_opts{rollback} = 0;
+    $memory_opts{is_write} = 0;
+
+    if (defined $insn->{modrm}) {
+        my $modrm = $insn->{modrm};
+
+        $memory_opts{ss}     = $modrm->{ss}          if defined $modrm->{ss};
+        $memory_opts{index}  = $modrm->{index}       if defined $modrm->{index};
+        $memory_opts{vindex} = $modrm->{vindex}      if defined $modrm->{vindex};
+        $memory_opts{base}   = $modrm->{base}        if defined $modrm->{base};
+        $memory_opts{disp}   = $modrm->{disp}{value} if defined $modrm->{disp};
+
+        $memory_opts{rollback} = defined $modrm->{base};
+    }
+
+    my $memory = $rec->{blocks}{"memory"};
+    if (defined $memory) {
+        # Evaluate in an environment with variables set corresponding
+        # to the variable fields.
+        my %env = extract_fields($opcode, $rec);
+        # set the variable $_ to the instruction in question
+        $env{_} = $insn;
+
+        eval_block($insnname, "memory", $memory, \%env);
+    }
+    return %memory_opts;
+}
+
+1;
-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 06/18] risugen_x86: add module
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (4 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 05/18] risugen_x86_memory: " Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-21  2:02   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 07/18] risugen: allow all byte-aligned instructions Jan Bobek
                   ` (12 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

risugen_x86.pm is the main backend module for the Intel i386 and
x86_64 architectures; it orchestrates generation of the test code with
support from the rest of the risugen_x86_* modules.
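
The only exported entry point is write_test_code. Very roughly, and
only as a sketch of the helpers defined below, each test image is laid
out along these lines:

    write_memblock_setup();                          # random 8 KiB memory block
    write_random_register_data(vregs => { mm => 1, xmm => 1 });
    # ... per-instruction loop: evaluate !constraints, emit the
    #     instruction, evaluate !memory, compare state ...
    asm_insn_risuop(RISUOP_TESTEND);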

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 risugen_x86.pm | 518 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 518 insertions(+)
 create mode 100644 risugen_x86.pm

diff --git a/risugen_x86.pm b/risugen_x86.pm
new file mode 100644
index 0000000..ae11843
--- /dev/null
+++ b/risugen_x86.pm
@@ -0,0 +1,518 @@
+#!/usr/bin/perl -w
+###############################################################################
+# Copyright (c) 2019 Jan Bobek
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Eclipse Public License v1.0
+# which accompanies this distribution, and is available at
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Contributors:
+#     Jan Bobek - initial implementation
+###############################################################################
+
+# risugen_x86 -- risugen module for Intel i386/x86_64 architectures
+package risugen_x86;
+
+use strict;
+use warnings;
+
+use risugen_common;
+use risugen_x86_asm;
+use risugen_x86_constraints;
+use risugen_x86_memory;
+
+require Exporter;
+
+our @ISA    = qw(Exporter);
+our @EXPORT = qw(write_test_code);
+
+use constant {
+    RISUOP_COMPARE     => 0,        # compare registers
+    RISUOP_TESTEND     => 1,        # end of test, stop
+    RISUOP_SETMEMBLOCK => 2,        # eax is address of memory block (8192 bytes)
+    RISUOP_GETMEMBLOCK => 3,        # add the address of memory block to eax
+    RISUOP_COMPAREMEM  => 4,        # compare memory block
+
+    # Maximum alignment restriction permitted for a memory op.
+    MAXALIGN => 64,
+    MEMBLOCK_LEN => 8192,
+};
+
+my $periodic_reg_random = 1;
+my $is_x86_64 = 0;
+
+sub wrap_int32($)
+{
+    my ($x) = @_;
+    my $r = 1 << 31;
+    return ($x + $r) % (2 * $r) - $r;
+}
+
+sub asm_insn_risuop($)
+{
+    my ($op) = @_;
+    asm_insn_ud1(reg => REG_RAX, reg2 => $op);
+}
+
+sub asm_insn_movT(%)
+{
+    my (%args) = @_;
+
+    if ($is_x86_64) {
+        asm_insn_mov64(%args);
+    } else {
+        asm_insn_mov(%args);
+    }
+}
+
+sub asm_insn_movT_imm(%)
+{
+    my (%args) = @_;
+    my $imm = $args{imm}; delete $args{imm};
+
+    my $is_sint32 = (-0x80000000 <= $imm && $imm <= 0x7fffffff);
+    my $is_uint32 = (0 <= $imm && $imm <= 0xffffffff);
+
+    $args{$is_sint32 || $is_uint32 ? 'imm32' : 'imm64'} = $imm;
+    asm_insn_movT(%args);
+}
+
+sub asm_insn_addT(%)
+{
+    my (%args) = @_;
+
+    if ($is_x86_64) {
+        asm_insn_add64(%args);
+    } else {
+        asm_insn_add(%args);
+    }
+}
+
+sub asm_insn_negT(%)
+{
+    my (%args) = @_;
+
+    if ($is_x86_64) {
+        asm_insn_neg64(%args);
+    } else {
+        asm_insn_neg(%args);
+    }
+}
+
+sub asm_insn_xchgT(%)
+{
+    my (%args) = @_;
+
+    if ($is_x86_64) {
+        asm_insn_xchg64(%args);
+    } else {
+        asm_insn_xchg(%args);
+    }
+}
+
+sub write_random_regdata()
+{
+    my $reg_cnt = $is_x86_64 ? 16 : 8;
+    my $reg_width = $is_x86_64 ? 64 : 32;
+
+    # initialize flags register
+    asm_insn_xor(reg => REG_RAX, reg2 => REG_RAX);
+    asm_insn_sahf();
+
+    # general purpose registers
+    for (my $reg = 0; $reg < $reg_cnt; $reg++) {
+        if ($reg != REG_RSP) {
+            my $imm = randint(width => $reg_width, signed => 1);
+            asm_insn_movT_imm(reg => $reg, imm => $imm);
+        }
+    }
+}
+
+# At the end of this function, we can emit $datalen data-bytes which
+# will be skipped over at runtime, but whose address will be present
+# in EAX and optionally aligned.
+sub prepare_datablock(%)
+{
+    my (%args) = @_;
+    $args{align} = 0 unless defined $args{align} && $args{align} > 1;
+
+    # First, load current EIP/RIP into EAX/RAX. Easy to do on x86_64
+    # thanks to RIP-relative addressing, but on i386 we need to play
+    # some well-known tricks with the CALL instruction. Then, AND the
+    # EAX/RAX register with correct mask to obtain the aligned
+    # address.
+    my $reg = REG_RAX;
+
+    if ($is_x86_64) {
+        my $disp32 = 5;         # 5-byte JMP
+        $disp32 += 4 + ($args{align} - 1) if $args{align}; # 4-byte AND
+
+        asm_insn_lea64(reg => $reg, disp32 => $disp32);
+        asm_insn_and64(reg2 => $reg, imm8 => ~($args{align} - 1))
+            if $args{align};
+    } else {
+        my $imm8 = 1 + 3 + 5;   # 1-byte POP + 3-byte ADD + 5-byte JMP
+        $imm8 += 3 + ($args{align} - 1) if $args{align}; # 3-byte AND
+
+        # displacement = next instruction
+        asm_insn_call(imm32 => 0x00000000);
+        asm_insn_pop(reg => $reg);
+        asm_insn_add(reg2 => $reg, imm8 => $imm8);
+        asm_insn_and(reg2 => $reg, imm8 => ~($args{align} - 1))
+            if $args{align};
+    }
+
+    # JMP over the data blob.
+    asm_insn_jmp(imm32 => $args{datalen});
+}
+
+# Write a block of random data, $datalen bytes long, optionally
+# aligned, and load its address into EAX/RAX.
+sub write_random_datablock(%)
+{
+    my (%args) = @_;
+    prepare_datablock(%args);
+
+    # Generate the random data
+    my $datalen = $args{datalen};
+    for (my $w = 8; 0 < $w; $w /= 2) {
+        for (; $w <= $datalen; $datalen -= $w) {
+            my $value = randint(width => 8 * $w);
+            insnv(value => $value, width => 8 * $w);
+        }
+    }
+}
+
+sub write_random_vregdata(%)
+{
+    my (%args) = @_;
+    $args{ymm} = 0          unless defined $args{ymm};
+    $args{xmm} = $args{ymm} unless defined $args{xmm};
+    $args{mm}  = 0          unless defined $args{mm};
+
+    die "cannot initialize YMM registers only\n"
+        if $args{ymm} && !$args{xmm};
+
+    my $datalen = 0;
+
+    my $mmreg_count = 8;
+    my $mmreg_size  = 8;
+    $datalen += $mmreg_count * $mmreg_size if $args{mm};
+
+    my $xmmreg_count = $is_x86_64 ? 16 : 8;
+    my $xmmreg_size  = 16;
+    $datalen += $xmmreg_count * $xmmreg_size if $args{xmm};
+
+    my $ymmreg_count = $xmmreg_count;
+    my $ymmreg_size  = 32 - $xmmreg_size;
+    $datalen += $ymmreg_count * $ymmreg_size if $args{ymm};
+
+    return unless $datalen > 0;
+
+    # Generate random data blob
+    write_random_datablock(datalen => $datalen + MAXALIGN - 1,
+                           align => MAXALIGN);
+
+    # Load the random data into vector regs.
+    my $offset = 0;
+
+    if ($args{mm}) {
+        for (my $mmreg = 0; $mmreg < $mmreg_count; $mmreg += 1) {
+            asm_insn_movq(reg => $mmreg,
+                          base => REG_RAX,
+                          disp32 => $offset);
+            $offset += $mmreg_size;
+        }
+    }
+    if ($args{ymm}) {
+        for (my $ymmreg = 0; $ymmreg < $ymmreg_count; $ymmreg += 1) {
+            asm_insn_vmovaps(l => ($xmmreg_size + $ymmreg_size) * 8,
+                             reg => $ymmreg,
+                             base => REG_RAX,
+                             disp32 => $offset);
+            $offset += $xmmreg_size + $ymmreg_size;
+        }
+    } elsif ($args{xmm}) {
+        for (my $xmmreg = 0; $xmmreg < $xmmreg_count; $xmmreg += 1) {
+            asm_insn_movaps(reg => $xmmreg,
+                            base => REG_RAX,
+                            disp32 => $offset);
+            $offset += $xmmreg_size;
+        }
+    }
+}
+
+sub write_memblock_setup()
+{
+    # Generate random data blob
+    write_random_datablock(datalen => MEMBLOCK_LEN + MAXALIGN - 1,
+                           align => MAXALIGN);
+
+    # Pointer is in EAX/RAX; set the memblock
+    asm_insn_risuop(RISUOP_SETMEMBLOCK);
+}
+
+sub write_random_register_data(%)
+{
+    my (%args) = @_;
+    write_random_vregdata(%{$args{vregs}}) if defined $args{vregs};
+    write_random_regdata();
+    asm_insn_risuop(RISUOP_COMPARE);
+}
+
+sub write_mem_getoffset(%)
+{
+    my (%args) = @_;
+
+    my @tokens;
+    push @tokens, "BASE"   if defined $args{base};
+    push @tokens, "INDEX"  if defined $args{index};
+    push @tokens, "VINDEX" if defined $args{vindex};
+    push @tokens, "END";
+
+    # (BASE (INDEX | VINDEX)?)? END
+    my $token = shift @tokens;
+
+    if ($token eq "BASE") {
+        $token = shift @tokens;
+        # We must not modify RSP during tests, so it cannot be a
+        # base register.
+        return 0 if $args{base} == REG_RSP;
+
+        if ($token eq "VINDEX") {
+            $token = shift @tokens;
+
+            die "VSIB requested, but addrw undefined"
+                unless defined $args{addrw};
+            die "VSIB requested, but count undefined"
+                unless defined $args{count};
+
+            write_mem_getoffset_base_vindex(%args);
+        } elsif ($token eq "INDEX") {
+            $token = shift @tokens;
+            # RSP cannot be used as an index in regular SIB... And we may
+            # not modify it anyway.
+            return 0 if $args{index} == REG_RSP;
+            # If index and base registers are the same, we may not be able
+            # to honor the alignment requirements.
+            return 0 if $args{index} == $args{base};
+
+            write_mem_getoffset_base_index(%args);
+        } else {
+            write_mem_getoffset_base(%args);
+        }
+    }
+
+    die "unexpected junk at the end of getoffset tokens: $token @tokens\n"
+        unless $token eq "END";
+}
+
+sub write_mem_getoffset_base(%)
+{
+    my (%args) = @_;
+
+    if ($args{mask}) {
+        die "size $args{size} is too large for masking"
+            unless $args{size} <= 8;
+        die "simultaneous alignment and masking not supported"
+            if $args{align} > 1;
+
+        prepare_datablock(datalen => $args{size});
+
+        my $width = $args{size} * 8;
+        my $value = randint(width => $width);
+        $value = ($value & ~$args{mask}) | ($args{value} & $args{mask});
+        insnv(value => $value, width => $width, bigendian => 0);
+
+        my $offset = -$args{disp};
+        $offset = wrap_int32($offset) if !$is_x86_64;
+
+        asm_insn_movT_imm(reg => REG_RDX, imm => $offset);
+        asm_insn_addT(reg2 => REG_RAX, reg => REG_RDX);
+    } else {
+        my $offset = int(rand(MEMBLOCK_LEN - $args{size}));
+        $offset &= ~($args{align} - 1);
+
+        $offset -= $args{disp};
+        $offset = wrap_int32($offset) if !$is_x86_64;
+
+        asm_insn_movT_imm(reg => REG_RAX, imm => $offset);
+        asm_insn_risuop(RISUOP_GETMEMBLOCK);
+    }
+
+    asm_insn_xchgT(reg => $args{base}, reg2 => REG_RAX)
+        unless $args{base} == REG_RAX;
+}
+
+sub write_mem_getoffset_base_index(%)
+{
+    my (%args) = @_;
+
+    my $addrw = ($is_x86_64 ? 64 : 32) - $args{ss} - 1;
+    my $index = randint(width => $addrw, signed => 1);
+    $args{disp} += $index * (1 << $args{ss});
+
+    write_mem_getoffset_base(%args);
+    asm_insn_movT_imm(reg => $args{index}, imm => $index);
+}
+
+sub write_mem_getoffset_base_vindex(%)
+{
+    my (%args) = @_;
+
+    my $addrw = $args{addrw} - $args{ss} - 1;
+    my $base = randint(width => $addrw, signed => 1);
+    $args{disp} += $base * (1 << $args{ss});
+
+    my $datalen = $args{addrw} * $args{count} / 8;
+    prepare_datablock(datalen => $datalen);
+
+    for(my $i = 0; $i < $args{count}; ++$i) {
+        my $index = int(rand(MEMBLOCK_LEN - $args{size}));
+        $index &= ~($args{align} - 1);
+        $index >>= $args{ss};
+
+        insnv(value => $base + $index,
+              width => $args{addrw},
+              bigendian => 0);
+    }
+
+    asm_insn_vmovdqu(l => $args{addrw} * $args{count},
+                     reg => $args{vindex},
+                     base => REG_RAX);
+
+    write_mem_getoffset_base(%args, size => MEMBLOCK_LEN);
+}
+
+sub write_mem_getoffset_rollback(%)
+{
+    my (%args) = @_;
+
+    # The base register contains an address of the form &memblock +
+    # offset. We need to turn it into just offset, otherwise we may
+    # get value mismatches since the memory layout can be different.
+    asm_insn_xchgT(reg => $args{base}, reg2 => REG_RAX)
+        unless $args{base} == REG_RAX;
+    asm_insn_negT(reg2 => REG_RAX);
+    asm_insn_risuop(RISUOP_GETMEMBLOCK);
+
+    # I didn't originally think this was necessary, but there were
+    # random sign-flag mismatch failures on 32-bit, probably due to
+    # the absolute address being randomly in the positive/negative
+    # range of int32 -- the first NEG would then pollute the EFLAGS
+    # register with this information. Using another NEG is a neat
+    # way of overwriting all this information with consistent values.
+    asm_insn_negT(reg2 => REG_RAX);
+}
+
+sub gen_one_insn($)
+{
+    # Given an instruction-details array, generate an instruction
+    my ($rec) = @_;
+    my $insnname = $rec->{name};
+    my $insnwidth = $rec->{width};
+
+    my $constraintfailures = 0;
+
+    my %insn;
+    my %memopts;
+    INSN: while(1) {
+        my $opcode = randint(width => 32);
+        $opcode &= ~$rec->{fixedbitmask};
+        $opcode |= $rec->{fixedbits};
+
+        # This is not 100 % correct, since $opcode is still padded to
+        # 32-bit width. This is necessary so that extract_fields in
+        # eval_constraints_block and eval_memory_block works
+        # correctly, but we need to fix it up before calling asm_insn.
+        %insn                = ();
+        $insn{opcode}{value} = $opcode;
+        $insn{opcode}{width} = $insnwidth;
+
+        my $v = eval_constraints_block(rec => $rec, insn => \%insn,
+                                       is_x86_64 => $is_x86_64);
+        if ($v && !$is_x86_64 && defined $insn{rex}) {
+            # REX.W is part of the opcode; we will never be able to
+            # generate this instruction in 32-bit mode.
+            return 0 if defined $insn{rex}{w} && $insn{rex}{w};
+            $v = 0;
+        }
+        if ($v) {
+            %memopts = eval_memory_block(rec => $rec, insn => \%insn);
+            $v = write_mem_getoffset(%memopts);
+        }
+        if (!$v) {
+            $constraintfailures++;
+            if ($constraintfailures > 10000) {
+                print "10000 consecutive constraint failures for $insnname constraints\n";
+                exit (1);
+            }
+            next INSN;
+        }
+
+        # OK, we got a good one
+        $constraintfailures = 0;
+
+        # Get rid of the extra padding before calling asm_insn; see
+        # above for details.
+        $insn{opcode}{value} >>= 32 - $insnwidth;
+
+        asm_insn(%insn);
+        write_mem_getoffset_rollback(%memopts) if $memopts{rollback};
+        asm_insn_risuop(RISUOP_COMPAREMEM)     if $memopts{is_write};
+        asm_insn_risuop(RISUOP_COMPARE);
+
+        return 1;
+    }
+}
+
+sub write_test_code($)
+{
+    my ($params) = @_;
+
+    my $numinsns = $params->{ 'numinsns' };
+    my $outfile = $params->{ 'outfile' };
+
+    my %insn_details = %{ $params->{ 'details' } };
+    my @keys = @{ $params->{ 'keys' } };
+
+    $is_x86_64 = $params->{ 'x86_64' };
+    my $xfeatures = $params->{ 'xfeatures' };
+
+    my %vregs   = ();
+    $vregs{ymm} = $xfeatures eq 'avx';
+    $vregs{xmm} = $vregs{ymm} || $xfeatures eq 'sse';
+    $vregs{mm}  = $vregs{xmm} || $xfeatures eq 'mmx';
+
+    open_bin($outfile);
+
+    # TODO better random number generator?
+    srand(0);
+
+    print "Generating code using patterns: @keys...\n";
+    progress_start(78, $numinsns);
+
+    write_memblock_setup();
+
+    # memblock setup doesn't clean its registers, so this must come afterwards.
+    write_random_register_data(vregs => \%vregs);
+
+    for (my $i = 0; $i < $numinsns;) {
+        my $insn_enc = $keys[int rand (@keys)];
+
+        next if !gen_one_insn($insn_details{$insn_enc});
+        $i += 1;
+
+        # Rewrite the registers periodically. This avoids the tendency
+        # for the vector registers to decay to NaNs and zeroes.
+        if ($periodic_reg_random && ($i % 100) == 0) {
+            write_random_register_data(vregs => \%vregs);
+        }
+        progress_update($i);
+    }
+    asm_insn_risuop(RISUOP_TESTEND);
+    progress_end();
+    close_bin();
+}
+
+1;
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [Qemu-devel] [RISU PATCH v3 07/18] risugen: allow all byte-aligned instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (5 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 06/18] risugen_x86: " Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 08/18] risugen: add command-line flag --x86_64 Jan Bobek
                   ` (11 subsequent siblings)
  18 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Accept all instructions whose bit length is divisible by 8. Since the
maximum instruction length (as specified in the config file) is 32
bits, this change newly permits instructions that are 8 or 24 bits
long (16-bit instructions were already accepted).

Note that while valid x86 instructions may be up to 15 bytes long, the
length constraint described above only applies to the main opcode
field, which is usually only 1 or 2 bytes long. Therefore, the primary
purpose of this change is to allow 1-byte x86 opcodes.
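
For illustration, the SFENCE pattern added later in this series,
"SFENCE SSE 00001111 10101110 11111000", specifies 24 bits and relies
on this change. Single-byte patterns become useful once the prefix
bytes are emitted by the !constraints block and only the final opcode
byte is described by the pattern, as with the VEX-encoded instructions
added later in the series; a hypothetical entry (the constraint shown
is purely indicative, not the exact interface) might look like:

  # VEX.128.0F.WIG 77: VZEROUPPER
  VZEROUPPER AVX 01110111 \
    !constraints { vex($_); 1 }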

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 risugen | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/risugen b/risugen
index e690b18..0c859aa 100755
--- a/risugen
+++ b/risugen
@@ -229,12 +229,11 @@ sub parse_config_file($)
                 push @fields, [ $var, $bitpos, $bitmask ];
             }
         }
-        if ($bitpos == 16) {
-            # assume this is a half-width thumb instruction
+        if ($bitpos % 8 == 0) {
             # Note that we don't fiddle with the bitmasks or positions,
             # which means the generated insn will be in the high halfword!
-            $insnwidth = 16;
-        } elsif ($bitpos != 0) {
+            $insnwidth -= $bitpos;
+        } else {
             print STDERR "$file:$.: ($insn $enc) not enough bits specified\n";
             exit(1);
         }
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [Qemu-devel] [RISU PATCH v3 08/18] risugen: add command-line flag --x86_64
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (6 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 07/18] risugen: allow all byte-aligned instructions Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-17 17:00   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 09/18] risugen: add --xfeatures option for x86 Jan Bobek
                   ` (10 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

This flag instructs the x86 backend to emit 64-bit (rather than
32-bit) code.
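
For reference, generating a 64-bit test image then becomes (file
names are illustrative; without the flag, 32-bit code is generated
as before):

  ./risugen --x86_64 x86.risu x86_64.out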

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 risugen | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/risugen b/risugen
index 0c859aa..50920eb 100755
--- a/risugen
+++ b/risugen
@@ -10,6 +10,7 @@
 #     Peter Maydell (Linaro) - initial implementation
 #     Claudio Fontana (Linaro) - initial aarch64 support
 #     Jose Ricardo Ziviani (IBM) - initial ppc64 support and arch isolation
+#     Jan Bobek - initial x86 support
 ###############################################################################
 
 # risugen -- generate a test binary file for use with risu
@@ -309,6 +310,7 @@ Valid options:
                    Useful to test before support for FP is available.
     --sve        : enable sve floating point
     --be         : generate instructions in Big-Endian byte order (ppc64 only).
+    --x86_64     : generate 64-bit (rather than 32-bit) code. (x86 only)
     --help       : print this message
 EOT
 }
@@ -321,6 +323,7 @@ sub main()
     my $fp_enabled = 1;
     my $sve_enabled = 0;
     my $big_endian = 0;
+    my $is_x86_64 = 0;
     my ($infile, $outfile);
 
     GetOptions( "help" => sub { usage(); exit(0); },
@@ -338,6 +341,7 @@ sub main()
                 "be" => sub { $big_endian = 1; },
                 "no-fp" => sub { $fp_enabled = 0; },
                 "sve" => sub { $sve_enabled = 1; },
+                "x86_64" => sub { $is_x86_64 = 1; },
         ) or return 1;
     # allow "--pattern re,re" and "--pattern re --pattern re"
     @pattern_re = split(/,/,join(',',@pattern_re));
@@ -371,7 +375,8 @@ sub main()
         'keys' => \@insn_keys,
         'arch' => $full_arch[0],
         'subarch' => $full_arch[1] || '',
-        'bigendian' => $big_endian
+        'bigendian' => $big_endian,
+        'x86_64' => $is_x86_64,
     );
 
     write_test_code(\%params);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [Qemu-devel] [RISU PATCH v3 09/18] risugen: add --xfeatures option for x86
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (7 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 08/18] risugen: add command-line flag --x86_64 Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-17 17:01   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 10/18] x86.risu: add MMX instructions Jan Bobek
                   ` (9 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

The --xfeatures option is modelled after the identically-named option
of RISU itself; it allows the user to specify which vector registers
should be initialized, so that the test image doesn't try to access
registers that may not be present at runtime. Note that it is still
the user's responsibility to filter out the test instructions that
use these registers.
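
As a sketch of the intended workflow (file names illustrative), an
SSE-only image for a 64-bit target could be generated with something
like:

  ./risugen --x86_64 --xfeatures sse --pattern 'SSE' x86.risu sse.out

where --pattern is the existing mechanism for filtering the test
instructions, as mentioned above.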

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 risugen | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/risugen b/risugen
index 50920eb..76424e1 100755
--- a/risugen
+++ b/risugen
@@ -311,6 +311,9 @@ Valid options:
     --sve        : enable sve floating point
     --be         : generate instructions in Big-Endian byte order (ppc64 only).
     --x86_64     : generate 64-bit (rather than 32-bit) code. (x86 only)
+    --xfeatures {none|mmx|sse|avx} : which SIMD registers should be
+                   initialized. The initialization is cumulative,
+                   i.e. AVX includes both MMX and SSE. (x86 only)
     --help       : print this message
 EOT
 }
@@ -324,6 +327,7 @@ sub main()
     my $sve_enabled = 0;
     my $big_endian = 0;
     my $is_x86_64 = 0;
+    my $xfeatures = 'none';
     my ($infile, $outfile);
 
     GetOptions( "help" => sub { usage(); exit(0); },
@@ -342,6 +346,14 @@ sub main()
                 "no-fp" => sub { $fp_enabled = 0; },
                 "sve" => sub { $sve_enabled = 1; },
                 "x86_64" => sub { $is_x86_64 = 1; },
+                "xfeatures=s" => sub {
+                    $xfeatures = $_[1];
+                    die "value for xfeatures must be one of 'none', 'mmx', 'sse', 'avx' (got '$xfeatures')\n"
+                        unless ($xfeatures eq 'none'
+                                || $xfeatures eq 'mmx'
+                                || $xfeatures eq 'sse'
+                                || $xfeatures eq 'avx');
+                },
         ) or return 1;
     # allow "--pattern re,re" and "--pattern re --pattern re"
     @pattern_re = split(/,/,join(',',@pattern_re));
@@ -377,6 +389,7 @@ sub main()
         'subarch' => $full_arch[1] || '',
         'bigendian' => $big_endian,
         'x86_64' => $is_x86_64,
+        'xfeatures' => $xfeatures,
     );
 
     write_test_code(\%params);
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [Qemu-devel] [RISU PATCH v3 10/18] x86.risu: add MMX instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (8 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 09/18] risugen: add --xfeatures option for x86 Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-20  4:30   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 11/18] x86.risu: add SSE instructions Jan Bobek
                   ` (8 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Add an x86 configuration file with all MMX instructions.
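
To make the file easier to review, here is how one entry reads (this
only restates the format used throughout the file): each record names
the instruction, an instruction-group tag and the opcode bit pattern;
the !constraints block is Perl code which picks a ModRM byte via the
modrm() helper, masks the MMX register number down to 3 bits and
returns true iff the resulting encoding is acceptable; the !memory
block declares the size (and, where needed, alignment) of the access
performed by the memory-operand form:

  # NP 0F FC /r: PADDB mm, mm/m64
  PADDB MMX 00001111 11111100 \
    !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
    !memory { load(size => 8); }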

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 x86.risu | 321 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 321 insertions(+)
 create mode 100644 x86.risu

diff --git a/x86.risu b/x86.risu
new file mode 100644
index 0000000..208ac16
--- /dev/null
+++ b/x86.risu
@@ -0,0 +1,321 @@
+###############################################################################
+# Copyright (c) 2019 Jan Bobek
+# All rights reserved. This program and the accompanying materials
+# are made available under the terms of the Eclipse Public License v1.0
+# which accompanies this distribution, and is available at
+# http://www.eclipse.org/legal/epl-v10.html
+#
+# Contributors:
+#     Jan Bobek - initial implementation
+###############################################################################
+
+# Input file for risugen defining x86 instructions
+.mode x86
+
+#
+# Data Transfer Instructions
+# --------------------------
+#
+
+# NP 0F 6E /r: MOVD mm,r/m32
+# NP 0F 7E /r: MOVD r/m32,mm
+MOVD MMX 00001111 011 d 1110 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { $d ? store(size => 4) : load(size => 4); }
+
+# NP REX.W + 0F 6E /r: MOVQ mm,r/m64
+# NP REX.W + 0F 7E /r: MOVQ r/m64,mm
+MOVQ MMX 00001111 011 d 1110 \
+  !constraints { rex($_, w => 1); modrm($_); $_->{modrm}{reg} &= 0b111; !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
+# NP 0F 6F /r: MOVQ mm, mm/m64
+# NP 0F 7F /r: MOVQ mm/m64, mm
+MOVQ_mm MMX 00001111 011 d 1111 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
+#
+# Arithmetic Instructions
+# -----------------------
+#
+
+# NP 0F FC /r: PADDB mm, mm/m64
+PADDB MMX 00001111 11111100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F FD /r: PADDW mm, mm/m64
+PADDW MMX 00001111 11111101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F FE /r: PADDD mm, mm/m64
+PADDD MMX 00001111 11111110 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F EC /r: PADDSB mm, mm/m64
+PADDSB MMX 00001111 11101100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F ED /r: PADDSW mm, mm/m64
+PADDSW MMX 00001111 11101101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F DC /r: PADDUSB mm,mm/m64
+PADDUSB MMX 00001111 11011100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F DD /r: PADDUSW mm,mm/m64
+PADDUSW MMX 00001111 11011101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F F8 /r: PSUBB mm, mm/m64
+PSUBB MMX 00001111 11111000 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F F9 /r: PSUBW mm, mm/m64
+PSUBW MMX 00001111 11111001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F FA /r: PSUBD mm, mm/m64
+PSUBD MMX 00001111 11111010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F E8 /r: PSUBSB mm, mm/m64
+PSUBSB MMX 00001111 11101000 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F E9 /r: PSUBSW mm, mm/m64
+PSUBSW MMX 00001111 11101001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F D8 /r: PSUBUSB mm, mm/m64
+PSUBUSB MMX 00001111 11011000 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F D9 /r: PSUBUSW mm, mm/m64
+PSUBUSW MMX 00001111 11011001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F D5 /r: PMULLW mm, mm/m64
+PMULLW MMX 00001111 11010101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F E5 /r: PMULHW mm, mm/m64
+PMULHW MMX 00001111 11100101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F F5 /r: PMADDWD mm, mm/m64
+PMADDWD MMX 00001111 11110101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+#
+# Comparison Instructions
+# -----------------------
+#
+
+# NP 0F 74 /r: PCMPEQB mm,mm/m64
+PCMPEQB MMX 00001111 01110100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 75 /r: PCMPEQW mm,mm/m64
+PCMPEQW MMX 00001111 01110101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 76 /r: PCMPEQD mm,mm/m64
+PCMPEQD MMX 00001111 01110110 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 64 /r: PCMPGTB mm,mm/m64
+PCMPGTB MMX 00001111 01100100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 65 /r: PCMPGTW mm,mm/m64
+PCMPGTW MMX 00001111 01100101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 66 /r: PCMPGTD mm,mm/m64
+PCMPGTD MMX 00001111 01100110 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+#
+# Logical Instructions
+# --------------------
+#
+
+# NP 0F DB /r: PAND mm, mm/m64
+PAND MMX 00001111 11011011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F DF /r: PANDN mm, mm/m64
+PANDN MMX 00001111 11011111 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F EB /r: POR mm, mm/m64
+POR MMX 00001111 11101011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F EF /r: PXOR mm, mm/m64
+PXOR MMX 00001111 11101111 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+#
+# Shift and Rotate Instructions
+# -----------------------------
+#
+
+# NP 0F F1 /r: PSLLW mm, mm/m64
+PSLLW MMX 00001111 11110001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F F2 /r: PSLLD mm, mm/m64
+PSLLD MMX 00001111 11110010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F F3 /r: PSLLQ mm, mm/m64
+PSLLQ MMX 00001111 11110011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 71 /6 ib: PSLLW mm1, imm8
+PSLLW_imm MMX 00001111 01110001 \
+  !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
+
+# NP 0F 72 /6 ib: PSLLD mm, imm8
+PSLLD_imm MMX 00001111 01110010 \
+  !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
+
+# NP 0F 73 /6 ib: PSLLQ mm, imm8
+PSLLQ_imm MMX 00001111 01110011 \
+  !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
+
+# NP 0F D1 /r: PSRLW mm, mm/m64
+PSRLW MMX 00001111 11010001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F D2 /r: PSRLD mm, mm/m64
+PSRLD MMX 00001111 11010010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F D3 /r: PSRLQ mm, mm/m64
+PSRLQ MMX 00001111 11010011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 71 /2 ib: PSRLW mm, imm8
+PSRLW_imm MMX 00001111 01110001 \
+  !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
+
+# NP 0F 72 /2 ib: PSRLD mm, imm8
+PSRLD_imm MMX 00001111 01110010 \
+  !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
+
+# NP 0F 73 /2 ib: PSRLQ mm, imm8
+PSRLQ_imm MMX 00001111 01110011 \
+  !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
+
+# NP 0F E1 /r: PSRAW mm,mm/m64
+PSRAW MMX 00001111 11100001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F E2 /r: PSRAD mm,mm/m64
+PSRAD MMX 00001111 11100010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 71 /4 ib: PSRAW mm,imm8
+PSRAW_imm MMX 00001111 01110001 \
+  !constraints { modrm($_, reg => 4); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
+
+# NP 0F 72 /4 ib: PSRAD mm,imm8
+PSRAD_imm MMX 00001111 01110010 \
+  !constraints { modrm($_, reg => 4); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
+
+#
+# Shuffle, Unpack, Blend, Insert, Extract, Broadcast, Permute, Gather Instructions
+# --------------------------------------------------------------------------------
+#
+
+# NP 0F 63 /r: PACKSSWB mm1, mm2/m64
+PACKSSWB MMX 00001111 01100011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 6B /r: PACKSSDW mm1, mm2/m64
+PACKSSDW MMX 00001111 01101011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 67 /r: PACKUSWB mm, mm/m64
+PACKUSWB MMX 00001111 01100111 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 68 /r: PUNPCKHBW mm, mm/m64
+PUNPCKHBW MMX 00001111 01101000 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8, align => 8); }
+
+# NP 0F 69 /r: PUNPCKHWD mm, mm/m64
+PUNPCKHWD MMX 00001111 01101001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 6A /r: PUNPCKHDQ mm, mm/m64
+PUNPCKHDQ MMX 00001111 01101010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 60 /r: PUNPCKLBW mm, mm/m32
+PUNPCKLBW MMX 00001111 01100000 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F 61 /r: PUNPCKLWD mm, mm/m32
+PUNPCKLWD MMX 00001111 01100001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F 62 /r: PUNPCKLDQ mm, mm/m32
+PUNPCKLDQ MMX 00001111 01100010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 4); }
+
+#
+# State Management Instructions
+# -----------------------------
+#
+
+# NP 0F 77: EMMS
+EMMS MMX 00001111 01110111
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [Qemu-devel] [RISU PATCH v3 11/18] x86.risu: add SSE instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (9 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 10/18] x86.risu: add MMX instructions Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-20 17:50   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 12/18] x86.risu: add SSE2 instructions Jan Bobek
                   ` (7 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Add SSE instructions to the x86 configuration file.
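
One thing worth noting when reading the diff: scalar and packed forms
share the same opcode bit pattern and are distinguished solely by the
mandatory prefix, which is expressed as a constraint. For instance,
ADDPS and ADDSS below differ only in the rep($_) call (the F3 prefix)
and in the size of the declared load:

  # NP 0F 58 /r: ADDPS xmm1, xmm2/m128
  ADDPS SSE 00001111 01011000 \
    !constraints { modrm($_); 1 } \
    !memory { load(size => 16, align => 16); }

  # F3 0F 58 /r: ADDSS xmm1, xmm2/m32
  ADDSS SSE 00001111 01011000 \
    !constraints { rep($_); modrm($_); 1 } \
    !memory { load(size => 4); }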

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 x86.risu | 318 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 318 insertions(+)

diff --git a/x86.risu b/x86.risu
index 208ac16..2d963fc 100644
--- a/x86.risu
+++ b/x86.risu
@@ -35,6 +35,52 @@ MOVQ_mm MMX 00001111 011 d 1111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# NP 0F 28 /r: MOVAPS xmm1, xmm2/m128
+# NP 0F 29 /r: MOVAPS xmm2/m128, xmm1
+MOVAPS SSE 00001111 0010100 d \
+  !constraints { modrm($_); 1 } \
+  !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
+
+# NP 0F 10 /r: MOVUPS xmm1, xmm2/m128
+# NP 0F 11 /r: MOVUPS xmm2/m128, xmm1
+MOVUPS SSE 00001111 0001000 d \
+  !constraints { modrm($_); 1 } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
+# F3 0F 10 /r: MOVSS xmm1, xmm2/m32
+# F3 0F 11 /r: MOVSS xmm2/m32, xmm1
+MOVSS SSE 00001111 0001000 d \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { $d ? store(size => 4) : load(size => 4); }
+
+# NP 0F 12 /r: MOVLPS xmm1, m64
+# 0F 13 /r: MOVLPS m64, xmm1
+MOVLPS SSE 00001111 0001001 d \
+  !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
+# NP 0F 16 /r: MOVHPS xmm1, m64
+# NP 0F 17 /r: MOVHPS m64, xmm1
+MOVHPS SSE 00001111 0001011 d \
+  !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
+# NP 0F 16 /r: MOVLHPS xmm1, xmm2
+MOVLHPS SSE 00001111 00010110 \
+  !constraints { modrm($_); defined $_->{modrm}{reg2} }
+
+# NP 0F 12 /r: MOVHLPS xmm1, xmm2
+MOVHLPS SSE 00001111 00010010 \
+  !constraints { modrm($_); defined $_->{modrm}{reg2} }
+
+# NP 0F D7 /r: PMOVMSKB reg, mm
+PMOVMSKB SSE 00001111 11010111 \
+  !constraints { modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
+# NP 0F 50 /r: MOVMSKPS reg, xmm
+MOVMSKPS SSE 00001111 01010000 \
+  !constraints { modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 #
 # Arithmetic Instructions
 # -----------------------
@@ -75,6 +121,16 @@ PADDUSW MMX 00001111 11011101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# NP 0F 58 /r: ADDPS xmm1, xmm2/m128
+ADDPS SSE 00001111 01011000 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 58 /r: ADDSS xmm1, xmm2/m32
+ADDSS SSE 00001111 01011000 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # NP 0F F8 /r: PSUBB mm, mm/m64
 PSUBB MMX 00001111 11111000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -110,6 +166,16 @@ PSUBUSW MMX 00001111 11011001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# NP 0F 5C /r: SUBPS xmm1, xmm2/m128
+SUBPS SSE 00001111 01011100 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 5C /r: SUBSS xmm1, xmm2/m32
+SUBSS SSE 00001111 01011100 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # NP 0F D5 /r: PMULLW mm, mm/m64
 PMULLW MMX 00001111 11010101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -120,11 +186,121 @@ PMULHW MMX 00001111 11100101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# NP 0F E4 /r: PMULHUW mm1, mm2/m64
+PMULHUW SSE 00001111 11100100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 59 /r: MULPS xmm1, xmm2/m128
+MULPS SSE 00001111 01011001 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 59 /r: MULSS xmm1,xmm2/m32
+MULSS SSE 00001111 01011001 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # NP 0F F5 /r: PMADDWD mm, mm/m64
 PMADDWD MMX 00001111 11110101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# NP 0F 5E /r: DIVPS xmm1, xmm2/m128
+DIVPS SSE 00001111 01011110 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 5E /r: DIVSS xmm1, xmm2/m32
+DIVSS SSE 00001111 01011110 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F 53 /r: RCPPS xmm1, xmm2/m128
+RCPPS SSE 00001111 01010011 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 53 /r: RCPSS xmm1, xmm2/m32
+RCPSS SSE 00001111 01010011 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F 51 /r: SQRTPS xmm1, xmm2/m128
+SQRTPS SSE 00001111 01010001 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 51 /r: SQRTSS xmm1, xmm2/m32
+SQRTSS SSE 00001111 01010001 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F 52 /r: RSQRTPS xmm1, xmm2/m128
+RSQRTPS SSE 00001111 01010010 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 52 /r: RSQRTSS xmm1, xmm2/m32
+RSQRTSS SSE 00001111 01010010 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F DA /r: PMINUB mm1, mm2/m64
+PMINUB SSE 00001111 11011010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F EA /r: PMINSW mm1, mm2/m64
+PMINSW SSE 00001111 11101010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 5D /r: MINPS xmm1, xmm2/m128
+MINPS SSE 00001111 01011101 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 5D /r: MINSS xmm1,xmm2/m32
+MINSS SSE 00001111 01011101 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F DE /r: PMAXUB mm1, mm2/m64
+PMAXUB SSE 00001111 11011110 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F EE /r: PMAXSW mm1, mm2/m64
+PMAXSW SSE 00001111 11101110 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 5F /r: MAXPS xmm1, xmm2/m128
+MAXPS SSE 00001111 01011111 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 5F /r: MAXSS xmm1, xmm2/m32
+MAXSS SSE 00001111 01011111 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F E0 /r: PAVGB mm1, mm2/m64
+PAVGB SSE 00001111 11100000 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F E3 /r: PAVGW mm1, mm2/m64
+PAVGW SSE 00001111 11100011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F F6 /r: PSADBW mm1, mm2/m64
+PSADBW SSE 00001111 11110110 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
 #
 # Comparison Instructions
 # -----------------------
@@ -160,6 +336,26 @@ PCMPGTD MMX 00001111 01100110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# NP 0F C2 /r ib: CMPPS xmm1, xmm2/m128, imm8
+CMPPS SSE 00001111 11000010 \
+  !constraints { modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F C2 /r ib: CMPSS xmm1, xmm2/m32, imm8
+CMPSS SSE 00001111 11000010 \
+  !constraints { rep($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F 2E /r: UCOMISS xmm1, xmm2/m32
+UCOMISS SSE 00001111 00101110 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# NP 0F 2F /r: COMISS xmm1, xmm2/m32
+COMISS SSE 00001111 00101111 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 #
 # Logical Instructions
 # --------------------
@@ -170,21 +366,41 @@ PAND MMX 00001111 11011011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# NP 0F 54 /r: ANDPS xmm1, xmm2/m128
+ANDPS SSE 00001111 01010100 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F DF /r: PANDN mm, mm/m64
 PANDN MMX 00001111 11011111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# NP 0F 55 /r: ANDNPS xmm1, xmm2/m128
+ANDNPS SSE 00001111 01010101 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F EB /r: POR mm, mm/m64
 POR MMX 00001111 11101011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# NP 0F 56 /r: ORPS xmm1, xmm2/m128
+ORPS SSE 00001111 01010110 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F EF /r: PXOR mm, mm/m64
 PXOR MMX 00001111 11101111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# NP 0F 57 /r: XORPS xmm1, xmm2/m128
+XORPS SSE 00001111 01010111 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 #
 # Shift and Rotate Instructions
 # -----------------------------
@@ -312,6 +528,98 @@ PUNPCKLDQ MMX 00001111 01100010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 4); }
 
+# NP 0F 14 /r: UNPCKLPS xmm1, xmm2/m128
+UNPCKLPS SSE 00001111 00010100 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 15 /r: UNPCKHPS xmm1, xmm2/m128
+UNPCKHPS SSE 00001111 00010101 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 70 /r ib: PSHUFW mm1, mm2/m64, imm8
+PSHUFW SSE 00001111 01110000 \
+  !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F C6 /r ib: SHUFPS xmm1, xmm3/m128, imm8
+SHUFPS SSE 00001111 11000110 \
+  !constraints { modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F C4 /r ib: PINSRW mm, r32/m16, imm8
+PINSRW SSE 00001111 11000100 \
+  !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 2); }
+
+# NP 0F C5 /r ib: PEXTRW reg, mm, imm8
+PEXTRW_reg SSE 00001111 11000101 \
+  !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
+#
+# Conversion Instructions
+# -----------------------
+#
+
+# NP 0F 2A /r: CVTPI2PS xmm, mm/m64
+CVTPI2PS SSE 00001111 00101010 \
+  !constraints { modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 2D /r: CVTPS2PI mm, xmm/m64
+CVTPS2PI SSE 00001111 00101101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 2C /r: CVTTPS2PI mm, xmm/m64
+CVTTPS2PI SSE 00001111 00101100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
+  !memory { load(size => 8); }
+
+#
+# Cacheability Control, Prefetch, and Instruction Ordering Instructions
+# ---------------------------------------------------------------------
+#
+
+# NP 0F F7 /r: MASKMOVQ mm1, mm2
+MASKMOVQ SSE 00001111 11110111 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} } \
+  !memory { load(size => 8, base => REG_RDI, rollback => 1); }
+
+# NP 0F 2B /r: MOVNTPS m128, xmm1
+MOVNTPS SSE 00001111 00101011 \
+  !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 16, align => 16); }
+
+# NP 0F E7 /r: MOVNTQ m64, mm
+MOVNTQ SSE 00001111 11100111 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 8); }
+
+# 0F 18 /1: PREFETCHT0 m8
+PREFETCHT0 SSE 00001111 00011000 \
+  !constraints { modrm($_, reg => 1); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 1); }
+
+# 0F 18 /2: PREFETCHT1 m8
+PREFETCHT1 SSE 00001111 00011000 \
+  !constraints { modrm($_, reg => 2); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 1); }
+
+# 0F 18 /3: PREFETCHT2 m8
+PREFETCHT2 SSE 00001111 00011000 \
+  !constraints { modrm($_, reg => 3); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 1); }
+
+# 0F 18 /0: PREFETCHNTA m8
+PREFETCHNTA SSE 00001111 00011000 \
+  !constraints { modrm($_, reg => 0); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 1); }
+
+# NP 0F AE F8: SFENCE
+SFENCE SSE 00001111 10101110 11111000
+
 #
 # State Management Instructions
 # -----------------------------
@@ -319,3 +627,13 @@ PUNPCKLDQ MMX 00001111 01100010 \
 
 # NP 0F 77: EMMS
 EMMS MMX 00001111 01110111
+
+# NP 0F AE /2: LDMXCSR m32
+LDMXCSR SSE 00001111 10101110 \
+  !constraints { modrm($_, reg => 2); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 4, value => 0x00001f80, mask => 0xffff1f80); }
+
+# NP 0F AE /3: STMXCSR m32
+STMXCSR SSE 00001111 10101110 \
+  !constraints { modrm($_, reg => 3); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 4); }
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [Qemu-devel] [RISU PATCH v3 12/18] x86.risu: add SSE2 instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (10 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 11/18] x86.risu: add SSE instructions Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-20 21:19   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 13/18] x86.risu: add SSE3 instructions Jan Bobek
                   ` (6 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Add SSE2 instructions to the x86 configuration file.
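
A note that may help when reading the diff: the XMM forms reuse the
existing MMX opcode bit patterns and are selected by a mandatory
prefix expressed as a constraint, data16($_) for the 66 prefix and
repne($_) for F2. For example, the SSE2 MOVD added below is the MMX
pattern plus data16():

  # 66 0F 6E /r: MOVD xmm,r/m32
  # 66 0F 7E /r: MOVD r/m32,xmm
  MOVD SSE2 00001111 011 d 1110 \
    !constraints { data16($_); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
    !memory { $d ? store(size => 4) : load(size => 4); }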

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 x86.risu | 734 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 734 insertions(+)

diff --git a/x86.risu b/x86.risu
index 2d963fc..b9d424e 100644
--- a/x86.risu
+++ b/x86.risu
@@ -23,48 +23,120 @@ MOVD MMX 00001111 011 d 1110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { $d ? store(size => 4) : load(size => 4); }
 
+# 66 0F 6E /r: MOVD xmm,r/m32
+# 66 0F 7E /r: MOVD r/m32,xmm
+MOVD SSE2 00001111 011 d 1110 \
+  !constraints { data16($_); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { $d ? store(size => 4) : load(size => 4); }
+
 # NP REX.W + 0F 6E /r: MOVQ mm,r/m64
 # NP REX.W + 0F 7E /r: MOVQ r/m64,mm
 MOVQ MMX 00001111 011 d 1110 \
   !constraints { rex($_, w => 1); modrm($_); $_->{modrm}{reg} &= 0b111; !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# 66 REX.W 0F 6E /r: MOVQ xmm,r/m64
+# 66 REX.W 0F 7E /r: MOVQ r/m64,xmm
+MOVQ SSE2 00001111 011 d 1110 \
+  !constraints { data16($_); rex($_, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
 # NP 0F 6F /r: MOVQ mm, mm/m64
 # NP 0F 7F /r: MOVQ mm/m64, mm
 MOVQ_mm MMX 00001111 011 d 1111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# F3 0F 7E /r: MOVQ xmm1, xmm2/m64
+MOVQ_xmm1 SSE2 00001111 01111110 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F D6 /r: MOVQ xmm2/m64, xmm1
+MOVQ_xmm2 SSE2 00001111 11010110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { store(size => 8); }
+
 # NP 0F 28 /r: MOVAPS xmm1, xmm2/m128
 # NP 0F 29 /r: MOVAPS xmm2/m128, xmm1
 MOVAPS SSE 00001111 0010100 d \
   !constraints { modrm($_); 1 } \
   !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
 
+# 66 0F 28 /r: MOVAPD xmm1, xmm2/m128
+# 66 0F 29 /r: MOVAPD xmm2/m128, xmm1
+MOVAPD SSE2 00001111 0010100 d \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
+
+# 66 0F 6F /r: MOVDQA xmm1, xmm2/m128
+# 66 0F 7F /r: MOVDQA xmm2/m128, xmm1
+MOVDQA SSE2 00001111 011 d 1111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
+
 # NP 0F 10 /r: MOVUPS xmm1, xmm2/m128
 # NP 0F 11 /r: MOVUPS xmm2/m128, xmm1
 MOVUPS SSE 00001111 0001000 d \
   !constraints { modrm($_); 1 } \
   !memory { $d ? store(size => 16) : load(size => 16); }
 
+# 66 0F 10 /r: MOVUPD xmm1, xmm2/m128
+# 66 0F 11 /r: MOVUPD xmm2/m128, xmm1
+MOVUPD SSE2 00001111 0001000 d \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
+# F3 0F 6F /r: MOVDQU xmm1,xmm2/m128
+# F3 0F 7F /r: MOVDQU xmm2/m128,xmm1
+MOVDQU SSE2 00001111 011 d 1111 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
 # F3 0F 10 /r: MOVSS xmm1, xmm2/m32
 # F3 0F 11 /r: MOVSS xmm2/m32, xmm1
 MOVSS SSE 00001111 0001000 d \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { $d ? store(size => 4) : load(size => 4); }
 
+# F2 0F 10 /r: MOVSD xmm1, xmm2/m64
+# F2 0F 11 /r: MOVSD xmm1/m64, xmm2
+MOVSD SSE2 00001111 0001000 d \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { $d ? store(size => 8): load(size => 8); }
+
+# F3 0F D6 /r: MOVQ2DQ xmm, mm
+MOVQ2DQ SSE2 00001111 11010110 \
+  !constraints { rep($_); modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
+
+# F2 0F D6 /r: MOVDQ2Q mm, xmm
+MOVDQ2Q SSE2 00001111 11010110 \
+  !constraints { repne($_); modrm($_); $_->{modrm}{reg} &= 0b111; defined $_->{modrm}{reg2} }
+
 # NP 0F 12 /r: MOVLPS xmm1, m64
 # 0F 13 /r: MOVLPS m64, xmm1
 MOVLPS SSE 00001111 0001001 d \
   !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# 66 0F 12 /r: MOVLPD xmm1,m64
+# 66 0F 13 /r: MOVLPD m64,xmm1
+MOVLPD SSE2 00001111 0001001 d \
+  !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
 # NP 0F 16 /r: MOVHPS xmm1, m64
 # NP 0F 17 /r: MOVHPS m64, xmm1
 MOVHPS SSE 00001111 0001011 d \
   !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# 66 0F 16 /r: MOVHPD xmm1, m64
+# 66 0F 17 /r: MOVHPD m64, xmm1
+MOVHPD SSE2 00001111 0001011 d \
+  !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
 # NP 0F 16 /r: MOVLHPS xmm1, xmm2
 MOVLHPS SSE 00001111 00010110 \
   !constraints { modrm($_); defined $_->{modrm}{reg2} }
@@ -77,10 +149,18 @@ MOVHLPS SSE 00001111 00010010 \
 PMOVMSKB SSE 00001111 11010111 \
   !constraints { modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# 66 0F D7 /r: PMOVMSKB reg, xmm
+PMOVMSKB SSE2 00001111 11010111 \
+  !constraints { data16($_); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 # NP 0F 50 /r: MOVMSKPS reg, xmm
 MOVMSKPS SSE 00001111 01010000 \
   !constraints { modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# 66 0F 50 /r: MOVMSKPD reg, xmm
+MOVMSKPD SSE2 00001111 01010000 \
+  !constraints { data16($_); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 #
 # Arithmetic Instructions
 # -----------------------
@@ -91,131 +171,291 @@ PADDB MMX 00001111 11111100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F FC /r: PADDB xmm1, xmm2/m128
+PADDB SSE2 00001111 11111100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F FD /r: PADDW mm, mm/m64
 PADDW MMX 00001111 11111101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F FD /r: PADDW xmm1, xmm2/m128
+PADDW SSE2 00001111 11111101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F FE /r: PADDD mm, mm/m64
 PADDD MMX 00001111 11111110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F FE /r: PADDD xmm1, xmm2/m128
+PADDD SSE2 00001111 11111110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F D4 /r: PADDQ mm, mm/m64
+PADDQ_mm SSE2 00001111 11010100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F D4 /r: PADDQ xmm1, xmm2/m128
+PADDQ SSE2 00001111 11010100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F EC /r: PADDSB mm, mm/m64
 PADDSB MMX 00001111 11101100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F EC /r: PADDSB xmm1, xmm2/m128
+PADDSB SSE2 00001111 11101100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F ED /r: PADDSW mm, mm/m64
 PADDSW MMX 00001111 11101101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F ED /r: PADDSW xmm1, xmm2/m128
+PADDSW SSE2 00001111 11101101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F DC /r: PADDUSB mm,mm/m64
 PADDUSB MMX 00001111 11011100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F DC /r: PADDUSB xmm1,xmm2/m128
+PADDUSB SSE2 00001111 11011100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F DD /r: PADDUSW mm,mm/m64
 PADDUSW MMX 00001111 11011101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F DD /r: PADDUSW xmm1,xmm2/m128
+PADDUSW SSE2 00001111 11011101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 58 /r: ADDPS xmm1, xmm2/m128
 ADDPS SSE 00001111 01011000 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 58 /r: ADDPD xmm1, xmm2/m128
+ADDPD SSE2 00001111 01011000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F3 0F 58 /r: ADDSS xmm1, xmm2/m32
 ADDSS SSE 00001111 01011000 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# F2 0F 58 /r: ADDSD xmm1, xmm2/m64
+ADDSD SSE2 00001111 01011000 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F F8 /r: PSUBB mm, mm/m64
 PSUBB MMX 00001111 11111000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F F8 /r: PSUBB xmm1, xmm2/m128
+PSUBB SSE2 00001111 11111000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F F9 /r: PSUBW mm, mm/m64
 PSUBW MMX 00001111 11111001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F F9 /r: PSUBW xmm1, xmm2/m128
+PSUBW SSE2 00001111 11111001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F FA /r: PSUBD mm, mm/m64
 PSUBD MMX 00001111 11111010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F FA /r: PSUBD xmm1, xmm2/m128
+PSUBD SSE2 00001111 11111010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F FB /r: PSUBQ mm1, mm2/m64
+PSUBQ_mm SSE2 00001111 11111011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F FB /r: PSUBQ xmm1, xmm2/m128
+PSUBQ SSE2 00001111 11111011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F E8 /r: PSUBSB mm, mm/m64
 PSUBSB MMX 00001111 11101000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F E8 /r: PSUBSB xmm1, xmm2/m128
+PSUBSB SSE2 00001111 11101000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F E9 /r: PSUBSW mm, mm/m64
 PSUBSW MMX 00001111 11101001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F E9 /r: PSUBSW xmm1, xmm2/m128
+PSUBSW SSE2 00001111 11101001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F D8 /r: PSUBUSB mm, mm/m64
 PSUBUSB MMX 00001111 11011000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F D8 /r: PSUBUSB xmm1, xmm2/m128
+PSUBUSB SSE2 00001111 11011000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F D9 /r: PSUBUSW mm, mm/m64
 PSUBUSW MMX 00001111 11011001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F D9 /r: PSUBUSW xmm1, xmm2/m128
+PSUBUSW SSE2 00001111 11011001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 5C /r: SUBPS xmm1, xmm2/m128
 SUBPS SSE 00001111 01011100 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 5C /r: SUBPD xmm1, xmm2/m128
+SUBPD SSE2 00001111 01011100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F3 0F 5C /r: SUBSS xmm1, xmm2/m32
 SUBSS SSE 00001111 01011100 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# F2 0F 5C /r: SUBSD xmm1, xmm2/m64
+SUBSD SSE2 00001111 01011100 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F D5 /r: PMULLW mm, mm/m64
 PMULLW MMX 00001111 11010101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F D5 /r: PMULLW xmm1, xmm2/m128
+PMULLW SSE2 00001111 11010101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F E5 /r: PMULHW mm, mm/m64
 PMULHW MMX 00001111 11100101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F E5 /r: PMULHW xmm1, xmm2/m128
+PMULHW SSE2 00001111 11100101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F E4 /r: PMULHUW mm1, mm2/m64
 PMULHUW SSE 00001111 11100100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F E4 /r: PMULHUW xmm1, xmm2/m128
+PMULHUW SSE2 00001111 11100100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F F4 /r: PMULUDQ mm1, mm2/m64
+PMULUDQ_mm SSE2 00001111 11110100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F F4 /r: PMULUDQ xmm1, xmm2/m128
+PMULUDQ SSE2 00001111 11110100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 59 /r: MULPS xmm1, xmm2/m128
 MULPS SSE 00001111 01011001 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 59 /r: MULPD xmm1, xmm2/m128
+MULPD SSE2 00001111 01011001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F3 0F 59 /r: MULSS xmm1,xmm2/m32
 MULSS SSE 00001111 01011001 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# F2 0F 59 /r: MULSD xmm1,xmm2/m64
+MULSD SSE2 00001111 01011001 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F F5 /r: PMADDWD mm, mm/m64
 PMADDWD MMX 00001111 11110101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F F5 /r: PMADDWD xmm1, xmm2/m128
+PMADDWD SSE2 00001111 11110101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 5E /r: DIVPS xmm1, xmm2/m128
 DIVPS SSE 00001111 01011110 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 5E /r: DIVPD xmm1, xmm2/m128
+DIVPD SSE2 00001111 01011110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F3 0F 5E /r: DIVSS xmm1, xmm2/m32
 DIVSS SSE 00001111 01011110 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# F2 0F 5E /r: DIVSD xmm1, xmm2/m64
+DIVSD SSE2 00001111 01011110 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 53 /r: RCPPS xmm1, xmm2/m128
 RCPPS SSE 00001111 01010011 \
   !constraints { modrm($_); 1 } \
@@ -231,11 +471,21 @@ SQRTPS SSE 00001111 01010001 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 51 /r: SQRTPD xmm1, xmm2/m128
+SQRTPD SSE2 00001111 01010001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F3 0F 51 /r: SQRTSS xmm1, xmm2/m32
 SQRTSS SSE 00001111 01010001 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# F2 0F 51 /r: SQRTSD xmm1,xmm2/m64
+SQRTSD SSE2 00001111 01010001 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 52 /r: RSQRTPS xmm1, xmm2/m128
 RSQRTPS SSE 00001111 01010010 \
   !constraints { modrm($_); 1 } \
@@ -251,56 +501,111 @@ PMINUB SSE 00001111 11011010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F DA /r: PMINUB xmm1, xmm2/m128
+PMINUB SSE2 00001111 11011010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F EA /r: PMINSW mm1, mm2/m64
 PMINSW SSE 00001111 11101010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F EA /r: PMINSW xmm1, xmm2/m128
+PMINSW SSE2 00001111 11101010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 5D /r: MINPS xmm1, xmm2/m128
 MINPS SSE 00001111 01011101 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 5D /r: MINPD xmm1, xmm2/m128
+MINPD SSE2 00001111 01011101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F3 0F 5D /r: MINSS xmm1,xmm2/m32
 MINSS SSE 00001111 01011101 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# F2 0F 5D /r: MINSD xmm1, xmm2/m64
+MINSD SSE2 00001111 01011101 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F DE /r: PMAXUB mm1, mm2/m64
 PMAXUB SSE 00001111 11011110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F DE /r: PMAXUB xmm1, xmm2/m128
+PMAXUB SSE2 00001111 11011110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F EE /r: PMAXSW mm1, mm2/m64
 PMAXSW SSE 00001111 11101110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F EE /r: PMAXSW xmm1, xmm2/m128
+PMAXSW SSE2 00001111 11101110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 5F /r: MAXPS xmm1, xmm2/m128
 MAXPS SSE 00001111 01011111 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 5F /r: MAXPD xmm1, xmm2/m128
+MAXPD SSE2 00001111 01011111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F3 0F 5F /r: MAXSS xmm1, xmm2/m32
 MAXSS SSE 00001111 01011111 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# F2 0F 5F /r: MAXSD xmm1, xmm2/m64
+MAXSD SSE2 00001111 01011111 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F E0 /r: PAVGB mm1, mm2/m64
 PAVGB SSE 00001111 11100000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F E0 /r: PAVGB xmm1, xmm2/m128
+PAVGB SSE2 00001111 11100000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F E3 /r: PAVGW mm1, mm2/m64
 PAVGW SSE 00001111 11100011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F E3 /r: PAVGW xmm1, xmm2/m128
+PAVGW SSE2 00001111 11100011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F F6 /r: PSADBW mm1, mm2/m64
 PSADBW SSE 00001111 11110110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F F6 /r: PSADBW xmm1, xmm2/m128
+PSADBW SSE2 00001111 11110110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 #
 # Comparison Instructions
 # -----------------------
@@ -311,51 +616,101 @@ PCMPEQB MMX 00001111 01110100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 74 /r: PCMPEQB xmm1,xmm2/m128
+PCMPEQB SSE2 00001111 01110100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 75 /r: PCMPEQW mm,mm/m64
 PCMPEQW MMX 00001111 01110101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 75 /r: PCMPEQW xmm1,xmm2/m128
+PCMPEQW SSE2 00001111 01110101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 76 /r: PCMPEQD mm,mm/m64
 PCMPEQD MMX 00001111 01110110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 76 /r: PCMPEQD xmm1,xmm2/m128
+PCMPEQD SSE2 00001111 01110110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 64 /r: PCMPGTB mm,mm/m64
 PCMPGTB MMX 00001111 01100100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 64 /r: PCMPGTB xmm1,xmm2/m128
+PCMPGTB SSE2 00001111 01100100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 65 /r: PCMPGTW mm,mm/m64
 PCMPGTW MMX 00001111 01100101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 65 /r: PCMPGTW xmm1,xmm2/m128
+PCMPGTW SSE2 00001111 01100101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 66 /r: PCMPGTD mm,mm/m64
 PCMPGTD MMX 00001111 01100110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 66 /r: PCMPGTD xmm1,xmm2/m128
+PCMPGTD SSE2 00001111 01100110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F C2 /r ib: CMPPS xmm1, xmm2/m128, imm8
 CMPPS SSE 00001111 11000010 \
   !constraints { modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F C2 /r ib: CMPPD xmm1, xmm2/m128, imm8
+CMPPD SSE2 00001111 11000010 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F3 0F C2 /r ib: CMPSS xmm1, xmm2/m32, imm8
 CMPSS SSE 00001111 11000010 \
   !constraints { rep($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 4); }
 
+# F2 0F C2 /r ib: CMPSD xmm1, xmm2/m64, imm8
+CMPSD SSE2 00001111 11000010 \
+  !constraints { repne($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 2E /r: UCOMISS xmm1, xmm2/m32
 UCOMISS SSE 00001111 00101110 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# 66 0F 2E /r: UCOMISD xmm1, xmm2/m64
+UCOMISD SSE2 00001111 00101110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 2F /r: COMISS xmm1, xmm2/m32
 COMISS SSE 00001111 00101111 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# 66 0F 2F /r: COMISD xmm1, xmm2/m64
+COMISD SSE2 00001111 00101111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 #
 # Logical Instructions
 # --------------------
@@ -366,41 +721,81 @@ PAND MMX 00001111 11011011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F DB /r: PAND xmm1, xmm2/m128
+PAND SSE2 00001111 11011011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 54 /r: ANDPS xmm1, xmm2/m128
 ANDPS SSE 00001111 01010100 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 54 /r: ANDPD xmm1, xmm2/m128
+ANDPD SSE2 00001111 01010100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F DF /r: PANDN mm, mm/m64
 PANDN MMX 00001111 11011111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F DF /r: PANDN xmm1, xmm2/m128
+PANDN SSE2 00001111 11011111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 55 /r: ANDNPS xmm1, xmm2/m128
 ANDNPS SSE 00001111 01010101 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 55 /r: ANDNPD xmm1, xmm2/m128
+ANDNPD SSE2 00001111 01010101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F EB /r: POR mm, mm/m64
 POR MMX 00001111 11101011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F EB /r: POR xmm1, xmm2/m128
+POR SSE2 00001111 11101011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 56 /r: ORPS xmm1, xmm2/m128
 ORPS SSE 00001111 01010110 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 56 /r: ORPD xmm1, xmm2/m128
+ORPD SSE2 00001111 01010110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F EF /r: PXOR mm, mm/m64
 PXOR MMX 00001111 11101111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F EF /r: PXOR xmm1, xmm2/m128
+PXOR SSE2 00001111 11101111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 57 /r: XORPS xmm1, xmm2/m128
 XORPS SSE 00001111 01010111 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 57 /r: XORPD xmm1, xmm2/m128
+XORPD SSE2 00001111 01010111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 #
 # Shift and Rotate Instructions
 # -----------------------------
@@ -411,73 +806,153 @@ PSLLW MMX 00001111 11110001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F F1 /r: PSLLW xmm1, xmm2/m128
+PSLLW SSE2 00001111 11110001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F F2 /r: PSLLD mm, mm/m64
 PSLLD MMX 00001111 11110010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F F2 /r: PSLLD xmm1, xmm2/m128
+PSLLD SSE2 00001111 11110010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F F3 /r: PSLLQ mm, mm/m64
 PSLLQ MMX 00001111 11110011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F F3 /r: PSLLQ xmm1, xmm2/m128
+PSLLQ SSE2 00001111 11110011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 71 /6 ib: PSLLW mm1, imm8
 PSLLW_imm MMX 00001111 01110001 \
   !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
 
+# 66 0F 71 /6 ib: PSLLW xmm1, imm8
+PSLLW_imm SSE2 00001111 01110001 \
+  !constraints { data16($_); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 72 /6 ib: PSLLD mm, imm8
 PSLLD_imm MMX 00001111 01110010 \
   !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
 
+# 66 0F 72 /6 ib: PSLLD xmm1, imm8
+PSLLD_imm SSE2 00001111 01110010 \
+  !constraints { data16($_); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 73 /6 ib: PSLLQ mm, imm8
 PSLLQ_imm MMX 00001111 01110011 \
   !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
 
+# 66 0F 73 /6 ib: PSLLQ xmm1, imm8
+PSLLQ_imm SSE2 00001111 01110011 \
+  !constraints { data16($_); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
+# 66 0F 73 /7 ib: PSLLDQ xmm1, imm8
+PSLLDQ_imm SSE2 00001111 01110011 \
+  !constraints { data16($_); modrm($_, reg => 7); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F D1 /r: PSRLW mm, mm/m64
 PSRLW MMX 00001111 11010001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F D1 /r: PSRLW xmm1, xmm2/m128
+PSRLW SSE2 00001111 11010001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F D2 /r: PSRLD mm, mm/m64
 PSRLD MMX 00001111 11010010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F D2 /r: PSRLD xmm1, xmm2/m128
+PSRLD SSE2 00001111 11010010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F D3 /r: PSRLQ mm, mm/m64
 PSRLQ MMX 00001111 11010011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F D3 /r: PSRLQ xmm1, xmm2/m128
+PSRLQ SSE2 00001111 11010011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 71 /2 ib: PSRLW mm, imm8
 PSRLW_imm MMX 00001111 01110001 \
   !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
 
+# 66 0F 71 /2 ib: PSRLW xmm1, imm8
+PSRLW_imm SSE2 00001111 01110001 \
+  !constraints { data16($_); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 72 /2 ib: PSRLD mm, imm8
 PSRLD_imm MMX 00001111 01110010 \
   !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
 
+# 66 0F 72 /2 ib: PSRLD xmm1, imm8
+PSRLD_imm SSE2 00001111 01110010 \
+  !constraints { data16($_); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 73 /2 ib: PSRLQ mm, imm8
 PSRLQ_imm MMX 00001111 01110011 \
   !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
 
+# 66 0F 73 /2 ib: PSRLQ xmm1, imm8
+PSRLQ_imm SSE2 00001111 01110011 \
+  !constraints { data16($_); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
+# 66 0F 73 /3 ib: PSRLDQ xmm1, imm8
+PSRLDQ_imm SSE2 00001111 01110011 \
+  !constraints { data16($_); modrm($_, reg => 3); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F E1 /r: PSRAW mm,mm/m64
 PSRAW MMX 00001111 11100001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F E1 /r: PSRAW xmm1,xmm2/m128
+PSRAW SSE2 00001111 11100001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F E2 /r: PSRAD mm,mm/m64
 PSRAD MMX 00001111 11100010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F E2 /r: PSRAD xmm1,xmm2/m128
+PSRAD SSE2 00001111 11100010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 71 /4 ib: PSRAW mm,imm8
 PSRAW_imm MMX 00001111 01110001 \
   !constraints { modrm($_, reg => 4); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
 
+# 66 0F 71 /4 ib: PSRAW xmm1,imm8
+PSRAW_imm SSE2 00001111 01110001 \
+  !constraints { data16($_); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 72 /4 ib: PSRAD mm,imm8
 PSRAD_imm MMX 00001111 01110010 \
   !constraints { modrm($_, reg => 4); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
 
+# 66 0F 72 /4 ib: PSRAD xmm1,imm8
+PSRAD_imm SSE2 00001111 01110010 \
+  !constraints { data16($_); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 #
 # Shuffle, Unpack, Blend, Insert, Extract, Broadcast, Permute, Gather Instructions
 # --------------------------------------------------------------------------------
@@ -488,75 +963,169 @@ PACKSSWB MMX 00001111 01100011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 63 /r: PACKSSWB xmm1, xmm2/m128
+PACKSSWB SSE2 00001111 01100011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 6B /r: PACKSSDW mm1, mm2/m64
 PACKSSDW MMX 00001111 01101011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 6B /r: PACKSSDW xmm1, xmm2/m128
+PACKSSDW SSE2 00001111 01101011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 67 /r: PACKUSWB mm, mm/m64
 PACKUSWB MMX 00001111 01100111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 67 /r: PACKUSWB xmm1, xmm2/m128
+PACKUSWB SSE2 00001111 01100111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 68 /r: PUNPCKHBW mm, mm/m64
 PUNPCKHBW MMX 00001111 01101000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8, align => 8); }
 
+# 66 0F 68 /r: PUNPCKHBW xmm1, xmm2/m128
+PUNPCKHBW SSE2 00001111 01101000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 69 /r: PUNPCKHWD mm, mm/m64
 PUNPCKHWD MMX 00001111 01101001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 69 /r: PUNPCKHWD xmm1, xmm2/m128
+PUNPCKHWD SSE2 00001111 01101001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 6A /r: PUNPCKHDQ mm, mm/m64
 PUNPCKHDQ MMX 00001111 01101010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 6A /r: PUNPCKHDQ xmm1, xmm2/m128
+PUNPCKHDQ SSE2 00001111 01101010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 6D /r: PUNPCKHQDQ xmm1, xmm2/m128
+PUNPCKHQDQ SSE2 00001111 01101101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 60 /r: PUNPCKLBW mm, mm/m32
 PUNPCKLBW MMX 00001111 01100000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 4); }
 
+# 66 0F 60 /r: PUNPCKLBW xmm1, xmm2/m128
+PUNPCKLBW SSE2 00001111 01100000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 61 /r: PUNPCKLWD mm, mm/m32
 PUNPCKLWD MMX 00001111 01100001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 4); }
 
+# 66 0F 61 /r: PUNPCKLWD xmm1, xmm2/m128
+PUNPCKLWD SSE2 00001111 01100001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 62 /r: PUNPCKLDQ mm, mm/m32
 PUNPCKLDQ MMX 00001111 01100010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 4); }
 
+# 66 0F 62 /r: PUNPCKLDQ xmm1, xmm2/m128
+PUNPCKLDQ SSE2 00001111 01100010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 6C /r: PUNPCKLQDQ xmm1, xmm2/m128
+PUNPCKLQDQ SSE2 00001111 01101100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 14 /r: UNPCKLPS xmm1, xmm2/m128
 UNPCKLPS SSE 00001111 00010100 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 14 /r: UNPCKLPD xmm1, xmm2/m128
+UNPCKLPD SSE2 00001111 00010100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 15 /r: UNPCKHPS xmm1, xmm2/m128
 UNPCKHPS SSE 00001111 00010101 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 15 /r: UNPCKHPD xmm1, xmm2/m128
+UNPCKHPD SSE2 00001111 00010101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 70 /r ib: PSHUFW mm1, mm2/m64, imm8
 PSHUFW SSE 00001111 01110000 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# F2 0F 70 /r ib: PSHUFLW xmm1, xmm2/m128, imm8
+PSHUFLW SSE2 00001111 01110000 \
+  !constraints { repne($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 70 /r ib: PSHUFHW xmm1, xmm2/m128, imm8
+PSHUFHW SSE2 00001111 01110000 \
+  !constraints { rep($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 70 /r ib: PSHUFD xmm1, xmm2/m128, imm8
+PSHUFD SSE2 00001111 01110000 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F C6 /r ib: SHUFPS xmm1, xmm3/m128, imm8
 SHUFPS SSE 00001111 11000110 \
   !constraints { modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F C6 /r ib: SHUFPD xmm1, xmm2/m128, imm8
+SHUFPD SSE2 00001111 11000110 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F C4 /r ib: PINSRW mm, r32/m16, imm8
 PINSRW SSE 00001111 11000100 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { load(size => 2); }
 
+# 66 0F C4 /r ib: PINSRW xmm, r32/m16, imm8
+PINSRW SSE2 00001111 11000100 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 2); }
+
 # NP 0F C5 /r ib: PEXTRW reg, mm, imm8
 PEXTRW_reg SSE 00001111 11000101 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# 66 0F C5 /r ib: PEXTRW reg, xmm, imm8
+PEXTRW_reg SSE2 00001111 11000101 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 #
 # Conversion Instructions
 # -----------------------
@@ -567,16 +1136,141 @@ CVTPI2PS SSE 00001111 00101010 \
   !constraints { modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
   !memory { load(size => 8); }
 
+# F3 0F 2A /r: CVTSI2SS xmm1,r/m32
+CVTSI2SS SSE2 00001111 00101010 \
+  !constraints { rep($_); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 4); }
+
+# F3 REX.W 0F 2A /r: CVTSI2SS xmm1,r/m64
+CVTSI2SS_64 SSE2 00001111 00101010 \
+  !constraints { rep($_); rex($_, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 8); }
+
+# 66 0F 2A /r: CVTPI2PD xmm, mm/m64
+CVTPI2PD SSE2 00001111 00101010 \
+  !constraints { data16($_); modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# F2 0F 2A /r: CVTSI2SD xmm1,r32/m32
+CVTSI2SD SSE2 00001111 00101010 \
+  !constraints { repne($_); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 4); }
+
+# F2 REX.W 0F 2A /r: CVTSI2SD xmm1,r/m64
+CVTSI2SD_64 SSE2 00001111 00101010 \
+  !constraints { repne($_); rex($_, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 8); }
+
 # NP 0F 2D /r: CVTPS2PI mm, xmm/m64
 CVTPS2PI SSE 00001111 00101101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
   !memory { load(size => 8); }
 
+# F3 0F 2D /r: CVTSS2SI r32,xmm1/m32
+CVTSS2SI SSE2 00001111 00101101 \
+  !constraints { rep($_); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# F3 REX.W 0F 2D /r: CVTSS2SI r64,xmm1/m32
+CVTSS2SI_64 SSE2 00001111 00101101 \
+  !constraints { rep($_); rex($_, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# 66 0F 2D /r: CVTPD2PI mm, xmm/m128
+CVTPD2PI SSE2 00001111 00101101 \
+  !constraints { data16($_); modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F2 0F 2D /r: CVTSD2SI r32,xmm1/m64
+CVTSD2SI SSE2 00001111 00101101 \
+  !constraints { repne($_); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# F2 REX.W 0F 2D /r: CVTSD2SI r64,xmm1/m64
+CVTSD2SI_64 SSE2 00001111 00101101 \
+  !constraints { repne($_); rex($_, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
 # NP 0F 2C /r: CVTTPS2PI mm, xmm/m64
 CVTTPS2PI SSE 00001111 00101100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
   !memory { load(size => 8); }
 
+# F3 0F 2C /r: CVTTSS2SI r32,xmm1/m32
+CVTTSS2SI SSE2 00001111 00101100 \
+  !constraints { rep($_); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# F3 REX.W 0F 2C /r: CVTTSS2SI r64,xmm1/m32
+CVTTSS2SI_64 SSE2 00001111 00101100 \
+  !constraints { rep($_); rex($_, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# 66 0F 2C /r: CVTTPD2PI mm, xmm/m128
+CVTTPD2PI SSE2 00001111 00101100 \
+  !constraints { data16($_); modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F2 0F 2C /r: CVTTSD2SI r32,xmm1/m64
+CVTTSD2SI SSE2 00001111 00101100 \
+  !constraints { repne($_); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# F2 REX.W 0F 2C /r: CVTTSD2SI r64,xmm1/m64
+CVTTSD2SI_64 SSE2 00001111 00101100 \
+  !constraints { repne($_); rex($_, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# F2 0F E6 /r: CVTPD2DQ xmm1, xmm2/m128
+CVTPD2DQ SSE2 00001111 11100110 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F E6 /r: CVTTPD2DQ xmm1, xmm2/m128
+CVTTPD2DQ SSE2 00001111 11100110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F E6 /r: CVTDQ2PD xmm1, xmm2/m64
+CVTDQ2PD SSE2 00001111 11100110 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 5A /r: CVTPS2PD xmm1, xmm2/m64
+CVTPS2PD SSE2 00001111 01011010 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 5A /r: CVTPD2PS xmm1, xmm2/m128
+CVTPD2PS SSE2 00001111 01011010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 5A /r: CVTSS2SD xmm1, xmm2/m32
+CVTSS2SD SSE2 00001111 01011010 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# F2 0F 5A /r: CVTSD2SS xmm1, xmm2/m64
+CVTSD2SS SSE2 00001111 01011010 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# NP 0F 5B /r: CVTDQ2PS xmm1, xmm2/m128
+CVTDQ2PS SSE2 00001111 01011011 \
+  !constraints { modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 5B /r: CVTPS2DQ xmm1, xmm2/m128
+CVTPS2DQ SSE2 00001111 01011011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 5B /r: CVTTPS2DQ xmm1, xmm2/m128
+CVTTPS2DQ SSE2 00001111 01011011 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 #
 # Cacheability Control, Prefetch, and Instruction Ordering Instructions
 # ---------------------------------------------------------------------
@@ -587,16 +1281,41 @@ MASKMOVQ SSE 00001111 11110111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} } \
   !memory { load(size => 8, base => REG_RDI, rollback => 1); }
 
+# 66 0F F7 /r: MASKMOVDQU xmm1, xmm2
+MASKMOVDQU SSE2 00001111 11110111 \
+  !constraints { data16($_); modrm($_); defined $_->{modrm}{reg2} } \
+  !memory { load(size => 16, base => REG_RDI, rollback => 1); }
+
 # NP 0F 2B /r: MOVNTPS m128, xmm1
 MOVNTPS SSE 00001111 00101011 \
   !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { store(size => 16, align => 16); }
 
+# 66 0F 2B /r: MOVNTPD m128, xmm1
+MOVNTPD SSE2 00001111 00101011 \
+  !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 16, align => 16); }
+
+# NP 0F C3 /r: MOVNTI m32, r32
+MOVNTI SSE2 00001111 11000011 \
+  !constraints { modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg} } \
+  !memory { store(size => 4); }
+
+# NP REX.W + 0F C3 /r: MOVNTI m64, r64
+MOVNTI_64 SSE2 00001111 11000011 \
+  !constraints { rex($_, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg} } \
+  !memory { store(size => 8); }
+
 # NP 0F E7 /r: MOVNTQ m64, mm
 MOVNTQ SSE 00001111 11100111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; !defined $_->{modrm}{reg2} } \
   !memory { store(size => 8); }
 
+# 66 0F E7 /r: MOVNTDQ m128, xmm1
+MOVNTDQ SSE2 00001111 11100111 \
+  !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 16, align => 16); }
+
 # 0F 18 /1: PREFETCHT0 m8
 PREFETCHT0 SSE 00001111 00011000 \
   !constraints { modrm($_, reg => 1); !defined $_->{modrm}{reg2} } \
@@ -617,9 +1336,24 @@ PREFETCHNTA SSE 00001111 00011000 \
   !constraints { modrm($_, reg => 0); !defined $_->{modrm}{reg2} } \
   !memory { load(size => 1); }
 
+# NP 0F AE /7: CLFLUSH m8
+CLFLUSH SSE2 00001111 10101110 \
+  !constraints { modrm($_, reg => 7); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 1); }
+
 # NP 0F AE F8: SFENCE
 SFENCE SSE 00001111 10101110 11111000
 
+# NP 0F AE E8: LFENCE
+LFENCE SSE2 00001111 10101110 11101000
+
+# NP 0F AE F0: MFENCE
+MFENCE SSE2 00001111 10101110 11110000
+
+# F3 90: PAUSE
+PAUSE SSE2 10010000 \
+  !constraints { rep($_); 1 }
+
 #
 # State Management Instructions
 # -----------------------------
-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 13/18] x86.risu: add SSE3 instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (11 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 12/18] x86.risu: add SSE2 instructions Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-20 21:27   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 14/18] x86.risu: add SSSE3 instructions Jan Bobek
                   ` (5 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Add SSE3 instructions to the x86 configuration file.
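
As with the existing SSE/SSE2 entries, the bit patterns carry only the
opcode bytes; the mandatory F2/F3/66 prefix is requested from the
!constraints block via repne()/rep()/data16(), and modrm() supplies the
ModRM operand encoding. As a purely illustrative, hand-decoded example
(derived from the opcode comment, not from generator output), one
register-to-register encoding matched by the HADDPS entry is:

  F2 0F 7C C1    HADDPS xmm0, xmm1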

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 x86.risu | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/x86.risu b/x86.risu
index b9d424e..d40b9df 100644
--- a/x86.risu
+++ b/x86.risu
@@ -161,6 +161,26 @@ MOVMSKPS SSE 00001111 01010000 \
 MOVMSKPD SSE2 00001111 01010000 \
   !constraints { data16($_); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# F2 0F F0 /r: LDDQU xmm1, m128
+LDDQU SSE3 00001111 11110000 \
+  !constraints { repne($_); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 16); }
+
+# F3 0F 16 /r: MOVSHDUP xmm1, xmm2/m128
+MOVSHDUP SSE3 00001111 00010110 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F3 0F 12 /r: MOVSLDUP xmm1, xmm2/m128
+MOVSLDUP SSE3 00001111 00010010 \
+  !constraints { rep($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F2 0F 12 /r: MOVDDUP xmm1, xmm2/m64
+MOVDDUP SSE3 00001111 00010010 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 #
 # Arithmetic Instructions
 # -----------------------
@@ -266,6 +286,16 @@ ADDSD SSE2 00001111 01011000 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# F2 0F 7C /r: HADDPS xmm1, xmm2/m128
+HADDPS SSE3 00001111 01111100 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 7C /r: HADDPD xmm1, xmm2/m128
+HADDPD SSE3 00001111 01111100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F F8 /r: PSUBB mm, mm/m64
 PSUBB MMX 00001111 11111000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -366,6 +396,26 @@ SUBSD SSE2 00001111 01011100 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# F2 0F 7D /r: HSUBPS xmm1, xmm2/m128
+HSUBPS SSE3 00001111 01111101 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 7D /r: HSUBPD xmm1, xmm2/m128
+HSUBPD SSE3 00001111 01111101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# F2 0F D0 /r: ADDSUBPS xmm1, xmm2/m128
+ADDSUBPS SSE3 00001111 11010000 \
+  !constraints { repne($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F D0 /r: ADDSUBPD xmm1, xmm2/m128
+ADDSUBPD SSE3 00001111 11010000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F D5 /r: PMULLW mm, mm/m64
 PMULLW MMX 00001111 11010101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 14/18] x86.risu: add SSSE3 instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (12 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 13/18] x86.risu: add SSE3 instructions Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-20 21:52   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 15/18] x86.risu: add SSE4.1 and SSE4.2 instructions Jan Bobek
                   ` (4 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Add SSSE3 instructions to the x86 configuration file.
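
Nearly all of these live in the three-byte 0F 38 opcode map (PALIGNR uses
0F 3A), so the bit patterns carry three opcode bytes instead of two. Each
instruction comes in two forms that share the same opcode: a legacy MMX
form (suffixed _mm below) and an XMM form selected by the mandatory 66
prefix via data16(). Because MMX has only mm0-mm7, the _mm constraints mask
the ModRM register fields down to three bits; a standalone sketch of that
idiom (plain Perl shown for illustration only, not risugen code):

  my $insn = { modrm => { reg => 13, reg2 => 9 } };
  $insn->{modrm}{reg}  &= 0b111;                                  # 13 -> 5 (mm5)
  $insn->{modrm}{reg2} &= 0b111 if defined $insn->{modrm}{reg2};  #  9 -> 1 (mm1)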

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 x86.risu | 160 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 160 insertions(+)

diff --git a/x86.risu b/x86.risu
index d40b9df..6f89a80 100644
--- a/x86.risu
+++ b/x86.risu
@@ -286,6 +286,36 @@ ADDSD SSE2 00001111 01011000 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# NP 0F 38 01 /r: PHADDW mm1, mm2/m64
+PHADDW_mm SSSE3 00001111 00111000 00000001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 01 /r: PHADDW xmm1, xmm2/m128
+PHADDW SSSE3 00001111 00111000 00000001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 38 02 /r: PHADDD mm1, mm2/m64
+PHADDD_mm SSSE3 00001111 00111000 00000010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 02 /r: PHADDD xmm1, xmm2/m128
+PHADDD SSSE3 00001111 00111000 00000010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 38 03 /r: PHADDSW mm1, mm2/m64
+PHADDSW_mm SSSE3 00001111 00111000 00000011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 03 /r: PHADDSW xmm1, xmm2/m128
+PHADDSW SSSE3 00001111 00111000 00000011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F2 0F 7C /r: HADDPS xmm1, xmm2/m128
 HADDPS SSE3 00001111 01111100 \
   !constraints { repne($_); modrm($_); 1 } \
@@ -396,6 +426,36 @@ SUBSD SSE2 00001111 01011100 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# NP 0F 38 05 /r: PHSUBW mm1, mm2/m64
+PHSUBW_mm SSSE3 00001111 00111000 00000101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 05 /r: PHSUBW xmm1, xmm2/m128
+PHSUBW SSSE3 00001111 00111000 00000101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 38 06 /r: PHSUBD mm1, mm2/m64
+PHSUBD_mm SSSE3 00001111 00111000 00000110 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 06 /r: PHSUBD xmm1, xmm2/m128
+PHSUBD SSSE3 00001111 00111000 00000110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 38 07 /r: PHSUBSW mm1, mm2/m64
+PHSUBSW_mm SSSE3 00001111 00111000 00000111 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 07 /r: PHSUBSW xmm1, xmm2/m128
+PHSUBSW SSSE3 00001111 00111000 00000111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # F2 0F 7D /r: HSUBPS xmm1, xmm2/m128
 HSUBPS SSE3 00001111 01111101 \
   !constraints { repne($_); modrm($_); 1 } \
@@ -456,6 +516,16 @@ PMULUDQ SSE2 00001111 11110100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# NP 0F 38 0B /r: PMULHRSW mm1, mm2/m64
+PMULHRSW_mm SSSE3 00001111 00111000 00001011 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 0B /r: PMULHRSW xmm1, xmm2/m128
+PMULHRSW SSSE3 00001111 00111000 00001011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 59 /r: MULPS xmm1, xmm2/m128
 MULPS SSE 00001111 01011001 \
   !constraints { modrm($_); 1 } \
@@ -486,6 +556,16 @@ PMADDWD SSE2 00001111 11110101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# NP 0F 38 04 /r: PMADDUBSW mm1, mm2/m64
+PMADDUBSW_mm SSSE3 00001111 00111000 00000100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 04 /r: PMADDUBSW xmm1, xmm2/m128
+PMADDUBSW SSSE3 00001111 00111000 00000100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 5E /r: DIVPS xmm1, xmm2/m128
 DIVPS SSE 00001111 01011110 \
   !constraints { modrm($_); 1 } \
@@ -656,6 +736,66 @@ PSADBW SSE2 00001111 11110110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# NP 0F 38 1C /r: PABSB mm1, mm2/m64
+PABSB_mm SSSE3 00001111 00111000 00011100 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 1C /r: PABSB xmm1, xmm2/m128
+PABSB SSSE3 00001111 00111000 00011100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 38 1D /r: PABSW mm1, mm2/m64
+PABSW_mm SSSE3 00001111 00111000 00011101 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 1D /r: PABSW xmm1, xmm2/m128
+PABSW SSSE3 00001111 00111000 00011101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 38 1E /r: PABSD mm1, mm2/m64
+PABSD_mm SSSE3 00001111 00111000 00011110 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 1E /r: PABSD xmm1, xmm2/m128
+PABSD SSSE3 00001111 00111000 00011110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 38 08 /r: PSIGNB mm1, mm2/m64
+PSIGNB_mm SSSE3 00001111 00111000 00001000 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 08 /r: PSIGNB xmm1, xmm2/m128
+PSIGNB SSSE3 00001111 00111000 00001000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 38 09 /r: PSIGNW mm1, mm2/m64
+PSIGNW_mm SSSE3 00001111 00111000 00001001 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 09 /r: PSIGNW xmm1, xmm2/m128
+PSIGNW SSSE3 00001111 00111000 00001001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# NP 0F 38 0A /r: PSIGND mm1, mm2/m64
+PSIGND_mm SSSE3 00001111 00111000 00001010 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 0A /r: PSIGND xmm1, xmm2/m128
+PSIGND SSSE3 00001111 00111000 00001010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 #
 # Comparison Instructions
 # -----------------------
@@ -1003,6 +1143,16 @@ PSRAD_imm MMX 00001111 01110010 \
 PSRAD_imm SSE2 00001111 01110010 \
   !constraints { data16($_); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# NP 0F 3A 0F /r ib: PALIGNR mm1, mm2/m64, imm8
+PALIGNR_mm SSSE3 00001111 00111010 00001111 \
+  !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 3A 0F /r ib: PALIGNR xmm1, xmm2/m128, imm8
+PALIGNR SSSE3 00001111 00111010 00001111 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 #
 # Shuffle, Unpack, Blend, Insert, Extract, Broadcast, Permute, Gather Instructions
 # --------------------------------------------------------------------------------
@@ -1128,6 +1278,16 @@ UNPCKHPD SSE2 00001111 00010101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# NP 0F 38 00 /r: PSHUFB mm1, mm2/m64
+PSHUFB_mm SSSE3 00001111 00111000 00000000 \
+  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 00 /r: PSHUFB xmm1, xmm2/m128
+PSHUFB SSSE3 00001111 00111000 00000000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 70 /r ib: PSHUFW mm1, mm2/m64, imm8
 PSHUFW SSE 00001111 01110000 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 15/18] x86.risu: add SSE4.1 and SSE4.2 instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (13 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 14/18] x86.risu: add SSSE3 instructions Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-20 22:28   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 16/18] x86.risu: add AES and PCLMULQDQ instructions Jan Bobek
                   ` (3 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Add SSE4.1 and SSE4.2 instructions to the x86 configuration file.
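
In the 66-prefixed encodings below, the prefix is a mandatory part of the
opcode rather than an operand-size override, which is why data16() appears
even though these instructions have no 16-bit form; unlike the MMX/SSE2
pairs earlier in the file, the SSE4.1/4.2 forms have no MMX counterparts.
A hand-decoded register-form encoding matched by the PMULLD entry
(66 0F 38 40 /r), shown purely as an illustration:

  66 0F 38 40 C1    PMULLD xmm0, xmm1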

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 x86.risu | 270 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 270 insertions(+)

diff --git a/x86.risu b/x86.risu
index 6f89a80..bc6636e 100644
--- a/x86.risu
+++ b/x86.risu
@@ -486,6 +486,11 @@ PMULLW SSE2 00001111 11010101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 38 40 /r: PMULLD xmm1, xmm2/m128
+PMULLD SSE4_1 00001111 00111000 01000000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F E5 /r: PMULHW mm, mm/m64
 PMULHW MMX 00001111 11100101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -506,6 +511,11 @@ PMULHUW SSE2 00001111 11100100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 38 28 /r: PMULDQ xmm1, xmm2/m128
+PMULDQ SSE4_1 00001111 00111000 00101000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F F4 /r: PMULUDQ mm1, mm2/m64
 PMULUDQ_mm SSE2 00001111 11110100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -636,6 +646,21 @@ PMINUB SSE2 00001111 11011010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 38 3A /r: PMINUW xmm1, xmm2/m128
+PMINUW SSE4_1 00001111 00111000 00111010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 3B /r: PMINUD xmm1, xmm2/m128
+PMINUD SSE4_1 00001111 00111000 00111011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 38 /r: PMINSB xmm1, xmm2/m128
+PMINSB SSE4_1 00001111 00111000 00111000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F EA /r: PMINSW mm1, mm2/m64
 PMINSW SSE 00001111 11101010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -646,6 +671,11 @@ PMINSW SSE2 00001111 11101010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 38 39 /r: PMINSD xmm1, xmm2/m128
+PMINSD SSE4_1 00001111 00111000 00111001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 5D /r: MINPS xmm1, xmm2/m128
 MINPS SSE 00001111 01011101 \
   !constraints { modrm($_); 1 } \
@@ -666,6 +696,11 @@ MINSD SSE2 00001111 01011101 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# 66 0F 38 41 /r: PHMINPOSUW xmm1, xmm2/m128
+PHMINPOSUW SSE4_1 00001111 00111000 01000001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F DE /r: PMAXUB mm1, mm2/m64
 PMAXUB SSE 00001111 11011110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -676,6 +711,21 @@ PMAXUB SSE2 00001111 11011110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 38 3E /r: PMAXUW xmm1, xmm2/m128
+PMAXUW SSE4_1 00001111 00111000 00111110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 3F /r: PMAXUD xmm1, xmm2/m128
+PMAXUD SSE4_1 00001111 00111000 00111111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 3C /r: PMAXSB xmm1, xmm2/m128
+PMAXSB SSE4_1 00001111 00111000 00111100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F EE /r: PMAXSW mm1, mm2/m64
 PMAXSW SSE 00001111 11101110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -686,6 +736,11 @@ PMAXSW SSE2 00001111 11101110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 38 3D /r: PMAXSD xmm1, xmm2/m128
+PMAXSD SSE4_1 00001111 00111000 00111101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 5F /r: MAXPS xmm1, xmm2/m128
 MAXPS SSE 00001111 01011111 \
   !constraints { modrm($_); 1 } \
@@ -736,6 +791,11 @@ PSADBW SSE2 00001111 11110110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 3A 42 /r ib: MPSADBW xmm1, xmm2/m128, imm8
+MPSADBW SSE4_1 00001111 00111010 01000010 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 38 1C /r: PABSB mm1, mm2/m64
 PABSB_mm SSSE3 00001111 00111000 00011100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -796,6 +856,36 @@ PSIGND SSSE3 00001111 00111000 00001010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 3A 40 /r ib: DPPS xmm1, xmm2/m128, imm8
+DPPS SSE4_1 00001111 00111010 01000000 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 3A 41 /r ib: DPPD xmm1, xmm2/m128, imm8
+DPPD SSE4_1 00001111 00111010 01000001 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 3A 08 /r ib: ROUNDPS xmm1, xmm2/m128, imm8
+ROUNDPS SSE4_1 00001111 00111010 00001000 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 3A 09 /r ib: ROUNDPD xmm1, xmm2/m128, imm8
+ROUNDPD SSE4_1 00001111 00111010 00001001 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 3A 0A /r ib: ROUNDSS xmm1, xmm2/m32, imm8
+ROUNDSS SSE4_1 00001111 00111010 00001010 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 4); }
+
+# 66 0F 3A 0B /r ib: ROUNDSD xmm1, xmm2/m64, imm8
+ROUNDSD SSE4_1 00001111 00111010 00001011 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 8); }
+
 #
 # Comparison Instructions
 # -----------------------
@@ -831,6 +921,11 @@ PCMPEQD SSE2 00001111 01110110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 38 29 /r: PCMPEQQ xmm1, xmm2/m128
+PCMPEQQ SSE4_1 00001111 00111000 00101001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 64 /r: PCMPGTB mm,mm/m64
 PCMPGTB MMX 00001111 01100100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -861,6 +956,36 @@ PCMPGTD SSE2 00001111 01100110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 38 37 /r: PCMPGTQ xmm1,xmm2/m128
+PCMPGTQ SSE4_2 00001111 00111000 00110111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 3A 60 /r imm8: PCMPESTRM xmm1, xmm2/m128, imm8
+PCMPESTRM SSE4_2 00001111 00111010 01100000 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
+# 66 0F 3A 61 /r imm8: PCMPESTRI xmm1, xmm2/m128, imm8
+PCMPESTRI SSE4_2 00001111 00111010 01100001 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != REG_RCX)); }
+
+# 66 0F 3A 62 /r imm8: PCMPISTRM xmm1, xmm2/m128, imm8
+PCMPISTRM SSE4_2 00001111 00111010 01100010 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
+# 66 0F 3A 63 /r imm8: PCMPISTRI xmm1, xmm2/m128, imm8
+PCMPISTRI SSE4_2 00001111 00111010 01100011 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != REG_RCX)); }
+
+# 66 0F 38 17 /r: PTEST xmm1, xmm2/m128
+PTEST SSE4_1 00001111 00111000 00010111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F C2 /r ib: CMPPS xmm1, xmm2/m128, imm8
 CMPPS SSE 00001111 11000010 \
   !constraints { modrm($_); imm($_, width => 8); 1 } \
@@ -1188,6 +1313,11 @@ PACKUSWB SSE2 00001111 01100111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 38 2B /r: PACKUSDW xmm1, xmm2/m128
+PACKUSDW SSE4_1 00001111 00111000 00101011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 # NP 0F 68 /r: PUNPCKHBW mm, mm/m64
 PUNPCKHBW MMX 00001111 01101000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1318,6 +1448,46 @@ SHUFPD SSE2 00001111 11000110 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# 66 0F 3A 0C /r ib: BLENDPS xmm1, xmm2/m128, imm8
+BLENDPS SSE4_1 00001111 00111010 00001100 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 3A 0D /r ib: BLENDPD xmm1, xmm2/m128, imm8
+BLENDPD SSE4_1 00001111 00111010 00001101 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 14 /r: BLENDVPS xmm1, xmm2/m128, <XMM0>
+BLENDVPS SSE4_1 00001111 00111000 00010100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 15 /r: BLENDVPD xmm1, xmm2/m128, <XMM0>
+BLENDVPD SSE4_1 00001111 00111000 00010101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 10 /r: PBLENDVB xmm1, xmm2/m128, <XMM0>
+PBLENDVB SSE4_1 00001111 00111000 00010000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 3A 0E /r ib: PBLENDW xmm1, xmm2/m128, imm8
+PBLENDW SSE4_1 00001111 00111010 00001110 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 3A 21 /r ib: INSERTPS xmm1, xmm2/m32, imm8
+INSERTPS SSE4_1 00001111 00111010 00100001 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 4); }
+
+# 66 0F 3A 20 /r ib: PINSRB xmm1,r32/m8,imm8
+PINSRB SSE4_1 00001111 00111010 00100000 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 1); }
+
 # NP 0F C4 /r ib: PINSRW mm, r32/m16, imm8
 PINSRW SSE 00001111 11000100 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
@@ -1328,6 +1498,41 @@ PINSRW SSE2 00001111 11000100 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { load(size => 2); }
 
+# 66 0F 3A 22 /r ib: PINSRD xmm1,r/m32,imm8
+PINSRD SSE4_1 00001111 00111010 00100010 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 4); }
+
+# 66 REX.W 0F 3A 22 /r ib: PINSRQ xmm1,r/m64,imm8
+PINSRQ SSE4_1 00001111 00111010 00100010 \
+  !constraints { data16($_); rex($_, w => 1); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 8); }
+
+# 66 0F 3A 17 /r ib: EXTRACTPS reg/m32, xmm1, imm8
+EXTRACTPS SSE4_1 00001111 00111010 00010111 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 4); }
+
+# 66 0F 3A 14 /r ib: PEXTRB reg/m8,xmm2,imm8
+PEXTRB SSE4_1 00001111 00111010 00010100 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 1); }
+
+# 66 0F 3A 15 /r ib: PEXTRW reg/m16, xmm, imm8
+PEXTRW SSE4_1 00001111 00111010 00010101 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 2); }
+
+# 66 0F 3A 16 /r ib: PEXTRD r/m32,xmm2,imm8
+PEXTRD SSE4_1 00001111 00111010 00010110 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 4); }
+
+# 66 REX.W 0F 3A 16 /r ib: PEXTRQ r/m64,xmm2,imm8
+PEXTRQ SSE4_1 00001111 00111010 00010110 \
+  !constraints { data16($_); rex($_, w => 1); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 8); }
+
 # NP 0F C5 /r ib: PEXTRW reg, mm, imm8
 PEXTRW_reg SSE 00001111 11000101 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
@@ -1341,6 +1546,66 @@ PEXTRW_reg SSE2 00001111 11000101 \
 # -----------------------
 #
 
+# 66 0F 38 20 /r: PMOVSXBW xmm1, xmm2/m64
+PMOVSXBW SSE4_1 00001111 00111000 00100000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 21 /r: PMOVSXBD xmm1, xmm2/m32
+PMOVSXBD SSE4_1 00001111 00111000 00100001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# 66 0F 38 22 /r: PMOVSXBQ xmm1, xmm2/m16
+PMOVSXBQ SSE4_1 00001111 00111000 00100010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 2); }
+
+# 66 0F 38 23 /r: PMOVSXWD xmm1, xmm2/m64
+PMOVSXWD SSE4_1 00001111 00111000 00100011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 24 /r: PMOVSXWQ xmm1, xmm2/m32
+PMOVSXWQ SSE4_1 00001111 00111000 00100100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# 66 0F 38 25 /r: PMOVSXDQ xmm1, xmm2/m64
+PMOVSXDQ SSE4_1 00001111 00111000 00100101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 30 /r: PMOVZXBW xmm1, xmm2/m64
+PMOVZXBW SSE4_1 00001111 00111000 00110000 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 31 /r: PMOVZXBD xmm1, xmm2/m32
+PMOVZXBD SSE4_1 00001111 00111000 00110001 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# 66 0F 38 32 /r: PMOVZXBQ xmm1, xmm2/m16
+PMOVZXBQ SSE4_1 00001111 00111000 00110010 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 2); }
+
+# 66 0F 38 33 /r: PMOVZXWD xmm1, xmm2/m64
+PMOVZXWD SSE4_1 00001111 00111000 00110011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# 66 0F 38 34 /r: PMOVZXWQ xmm1, xmm2/m32
+PMOVZXWQ SSE4_1 00001111 00111000 00110100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# 66 0F 38 35 /r: PMOVZXDQ xmm1, xmm2/m64
+PMOVZXDQ SSE4_1 00001111 00111000 00110101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 2A /r: CVTPI2PS xmm, mm/m64
 CVTPI2PS SSE 00001111 00101010 \
   !constraints { modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1526,6 +1791,11 @@ MOVNTDQ SSE2 00001111 11100111 \
   !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { store(size => 16, align => 16); }
 
+# 66 0F 38 2A /r: MOVNTDQA xmm1, m128
+MOVNTDQA SSE4_1 00001111 00111000 00101010 \
+  !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 16, align => 16); }
+
 # 0F 18 /1: PREFETCHT0 m8
 PREFETCHT0 SSE 00001111 00011000 \
   !constraints { modrm($_, reg => 1); !defined $_->{modrm}{reg2} } \
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [Qemu-devel] [RISU PATCH v3 16/18] x86.risu: add AES and PCLMULQDQ instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (14 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 15/18] x86.risu: add SSE4.1 and SSE4.2 instructions Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-20 22:35   ` Richard Henderson
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 17/18] x86.risu: add AVX instructions Jan Bobek
                   ` (2 subsequent siblings)
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Add AES-NI and PCLMULQDQ instructions to the x86 configuration file.

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 x86.risu | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/x86.risu b/x86.risu
index bc6636e..177979a 100644
--- a/x86.risu
+++ b/x86.risu
@@ -886,6 +886,51 @@ ROUNDSD SSE4_1 00001111 00111010 00001011 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 8); }
 
+#
+# AES Instructions
+# ----------------
+#
+
+# 66 0F 38 DE /r: AESDEC xmm1, xmm2/m128
+AESDEC AES 00001111 00111000 11011110 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 DF /r: AESDECLAST xmm1, xmm2/m128
+AESDECLAST AES 00001111 00111000 11011111 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 DC /r: AESENC xmm1, xmm2/m128
+AESENC AES 00001111 00111000 11011100 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 DD /r: AESENCLAST xmm1, xmm2/m128
+AESENCLAST AES 00001111 00111000 11011101 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 38 DB /r: AESIMC xmm1, xmm2/m128
+AESIMC AES 00001111 00111000 11011011 \
+  !constraints { data16($_); modrm($_); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+# 66 0F 3A DF /r ib: AESKEYGENASSIST xmm1, xmm2/m128, imm8
+AESKEYGENASSIST AES 00001111 00111010 11011111 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
+#
+# PCLMULQDQ Instructions
+# ----------------------
+#
+
+# 66 0F 3A 44 /r ib: PCLMULQDQ xmm1, xmm2/m128, imm8
+PCLMULQDQ PCLMULQDQ 00001111 00111010 01000100 \
+  !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, align => 16); }
+
 #
 # Comparison Instructions
 # -----------------------
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [Qemu-devel] [RISU PATCH v3 17/18] x86.risu: add AVX instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (15 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 16/18] x86.risu: add AES and PCLMULQDQ instructions Jan Bobek
@ 2019-07-11 22:32 ` Jan Bobek
  2019-07-21  0:04   ` Richard Henderson
  2019-07-11 22:33 ` [Qemu-devel] [RISU PATCH v3 18/18] x86.risu: add AVX2 instructions Jan Bobek
  2019-07-12 13:34 ` [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Alex Bennée
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:32 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Add AVX instructions to the x86 configuration file.
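
The VEX-encoded forms reuse the layout of the legacy-SSE entries, but
replace the prefix helpers (data16/rep/repne) with a single vex()
constraint selecting the opcode map (m), vector length (l), SIMD
prefix (p) and, where needed, the W bit and the vvvv operand. As an
example, the 128-bit form of VPADDB added below is declared as:

  # VEX.128.66.0F.WIG FC /r: VPADDB xmm1, xmm2, xmm3/m128
  VPADDB AVX 11111100 \
    !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
    !memory { load(size => 16); }

Forms without a third source operand (e.g. VMOVD, VPMOVMSKB, VRCPPS)
pass v => 0 so the vvvv field is not used as an extra source, and most
!memory blocks drop the align => 16 requirement, since apart from the
aligned moves (VMOVAPS, VMOVAPD, VMOVDQA) VEX-encoded memory accesses
do not fault on unaligned addresses.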

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 x86.risu | 1362 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1362 insertions(+)

diff --git a/x86.risu b/x86.risu
index 177979a..03ffc89 100644
--- a/x86.risu
+++ b/x86.risu
@@ -29,6 +29,12 @@ MOVD SSE2 00001111 011 d 1110 \
   !constraints { data16($_); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { $d ? store(size => 4) : load(size => 4); }
 
+# VEX.128.66.0F.W0 6E /r: VMOVD xmm1,r32/m32
+# VEX.128.66.0F.W0 7E /r: VMOVD r32/m32,xmm1
+VMOVD AVX 011 d 1110 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66, w => 0); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { $d ? store(size => 4) : load(size => 4); }
+
 # NP REX.W + 0F 6E /r: MOVQ mm,r/m64
 # NP REX.W + 0F 7E /r: MOVQ r/m64,mm
 MOVQ MMX 00001111 011 d 1110 \
@@ -41,6 +47,12 @@ MOVQ SSE2 00001111 011 d 1110 \
   !constraints { data16($_); rex($_, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# VEX.128.66.0F.W1 6E /r: VMOVQ xmm1,r64/m64
+# VEX.128.66.0F.W1 7E /r: VMOVQ r64/m64,xmm1
+VMOVQ AVX 011 d 1110 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
 # NP 0F 6F /r: MOVQ mm, mm/m64
 # NP 0F 7F /r: MOVQ mm/m64, mm
 MOVQ_mm MMX 00001111 011 d 1111 \
@@ -52,59 +64,121 @@ MOVQ_xmm1 SSE2 00001111 01111110 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.F3.0F.WIG 7E /r: VMOVQ xmm1, xmm2/m64
+VMOVQ_xmm1 AVX 01111110 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0F D6 /r: MOVQ xmm2/m64, xmm1
 MOVQ_xmm2 SSE2 00001111 11010110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { store(size => 8); }
 
+# VEX.128.66.0F.WIG D6 /r: VMOVQ xmm1/m64, xmm2
+VMOVQ_xmm2 AVX 11010110 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { store(size => 8); }
+
 # NP 0F 28 /r: MOVAPS xmm1, xmm2/m128
 # NP 0F 29 /r: MOVAPS xmm2/m128, xmm1
 MOVAPS SSE 00001111 0010100 d \
   !constraints { modrm($_); 1 } \
   !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 28 /r: VMOVAPS xmm1, xmm2/m128
+# VEX.128.0F.WIG 29 /r: VMOVAPS xmm2/m128, xmm1
+VMOVAPS AVX 0010100 d \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
+  !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
+
 # 66 0F 28 /r: MOVAPD xmm1, xmm2/m128
 # 66 0F 29 /r: MOVAPD xmm2/m128, xmm1
 MOVAPD SSE2 00001111 0010100 d \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 28 /r: VMOVAPD xmm1, xmm2/m128
+# VEX.128.66.0F.WIG 29 /r: VMOVAPD xmm2/m128, xmm1
+VMOVAPD AVX 0010100 d \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
+
 # 66 0F 6F /r: MOVDQA xmm1, xmm2/m128
 # 66 0F 7F /r: MOVDQA xmm2/m128, xmm1
 MOVDQA SSE2 00001111 011 d 1111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 6F /r: VMOVDQA xmm1, xmm2/m128
+# VEX.128.66.0F.WIG 7F /r: VMOVDQA xmm2/m128, xmm1
+VMOVDQA AVX 011 d 1111 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
+
 # NP 0F 10 /r: MOVUPS xmm1, xmm2/m128
 # NP 0F 11 /r: MOVUPS xmm2/m128, xmm1
 MOVUPS SSE 00001111 0001000 d \
   !constraints { modrm($_); 1 } \
   !memory { $d ? store(size => 16) : load(size => 16); }
 
+# VEX.128.0F.WIG 10 /r: VMOVUPS xmm1, xmm2/m128
+# VEX.128.0F.WIG 11 /r: VMOVUPS xmm2/m128, xmm1
+VMOVUPS AVX 0001000 d \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
 # 66 0F 10 /r: MOVUPD xmm1, xmm2/m128
 # 66 0F 11 /r: MOVUPD xmm2/m128, xmm1
 MOVUPD SSE2 00001111 0001000 d \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { $d ? store(size => 16) : load(size => 16); }
 
+# VEX.128.66.0F.WIG 10 /r: VMOVUPD xmm1, xmm2/m128
+# VEX.128.66.0F.WIG 11 /r: VMOVUPD xmm2/m128, xmm1
+VMOVUPD AVX 0001000 d \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
 # F3 0F 6F /r: MOVDQU xmm1,xmm2/m128
 # F3 0F 7F /r: MOVDQU xmm2/m128,xmm1
 MOVDQU SSE2 00001111 011 d 1111 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { $d ? store(size => 16) : load(size => 16); }
 
+# VEX.128.F3.0F.WIG 6F /r: VMOVDQU xmm1,xmm2/m128
+# VEX.128.F3.0F.WIG 7F /r: VMOVDQU xmm2/m128,xmm1
+VMOVDQU AVX 011 d 1111 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
 # F3 0F 10 /r: MOVSS xmm1, xmm2/m32
 # F3 0F 11 /r: MOVSS xmm2/m32, xmm1
 MOVSS SSE 00001111 0001000 d \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { $d ? store(size => 4) : load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 10 /r: VMOVSS xmm1, xmm2, xmm3
+# VEX.LIG.F3.0F.WIG 10 /r: VMOVSS xmm1, m32
+# VEX.LIG.F3.0F.WIG 11 /r: VMOVSS xmm1, xmm2, xmm3
+# VEX.LIG.F3.0F.WIG 11 /r: VMOVSS m32, xmm1
+VMOVSS AVX 0001000 d \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); $_->{vex}{v} = 0 unless defined $_->{modrm}{reg2}; 1 } \
+  !memory { $d ? store(size => 4) : load(size => 4); }
+
 # F2 0F 10 /r: MOVSD xmm1, xmm2/m64
 # F2 0F 11 /r: MOVSD xmm1/m64, xmm2
 MOVSD SSE2 00001111 0001000 d \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { $d ? store(size => 8): load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG 10 /r: VMOVSD xmm1, xmm2, xmm3
+# VEX.LIG.F2.0F.WIG 10 /r: VMOVSD xmm1, m64
+# VEX.LIG.F2.0F.WIG 11 /r: VMOVSD xmm1, xmm2, xmm3
+# VEX.LIG.F2.0F.WIG 11 /r: VMOVSD m64, xmm1
+VMOVSD AVX 0001000 d \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); $_->{vex}{v} = 0 unless defined $_->{modrm}{reg2}; 1 } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
 # F3 0F D6 /r: MOVQ2DQ xmm, mm
 MOVQ2DQ SSE2 00001111 11010110 \
   !constraints { rep($_); modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -119,32 +193,64 @@ MOVLPS SSE 00001111 0001001 d \
   !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# VEX.128.0F.WIG 12 /r: VMOVLPS xmm2, xmm1, m64
+# VEX.128.0F.WIG 13 /r: VMOVLPS m64, xmm1
+VMOVLPS AVX 0001001 d \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); $_->{vex}{v} = 0 if $d; !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
 # 66 0F 12 /r: MOVLPD xmm1,m64
 # 66 0F 13 /r: MOVLPD m64,xmm1
 MOVLPD SSE2 00001111 0001001 d \
   !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# VEX.128.66.0F.WIG 12 /r: VMOVLPD xmm2,xmm1,m64
+# VEX.128.66.0F.WIG 13 /r: VMOVLPD m64,xmm1
+VMOVLPD AVX 0001001 d \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); $_->{vex}{v} = 0 if $d; !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
 # NP 0F 16 /r: MOVHPS xmm1, m64
 # NP 0F 17 /r: MOVHPS m64, xmm1
 MOVHPS SSE 00001111 0001011 d \
   !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# VEX.128.0F.WIG 16 /r: VMOVHPS xmm2, xmm1, m64
+# VEX.128.0F.WIG 17 /r: VMOVHPS m64, xmm1
+VMOVHPS AVX 0001011 d \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); $_->{vex}{v} = 0 if $d; !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
 # 66 0F 16 /r: MOVHPD xmm1, m64
 # 66 0F 17 /r: MOVHPD m64, xmm1
 MOVHPD SSE2 00001111 0001011 d \
   !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { $d ? store(size => 8) : load(size => 8); }
 
+# VEX.128.66.0F.WIG 16 /r: VMOVHPD xmm2, xmm1, m64
+# VEX.128.66.0F.WIG 17 /r: VMOVHPD m64, xmm1
+VMOVHPD AVX 0001011 d \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); $_->{vex}{v} = 0 if $d; !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 8) : load(size => 8); }
+
 # NP 0F 16 /r: MOVLHPS xmm1, xmm2
 MOVLHPS SSE 00001111 00010110 \
   !constraints { modrm($_); defined $_->{modrm}{reg2} }
 
+# VEX.128.0F.WIG 16 /r: VMOVLHPS xmm1, xmm2, xmm3
+VMOVLHPS AVX 00010110 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); defined $_->{modrm}{reg2} }
+
 # NP 0F 12 /r: MOVHLPS xmm1, xmm2
 MOVHLPS SSE 00001111 00010010 \
   !constraints { modrm($_); defined $_->{modrm}{reg2} }
 
+# VEX.128.0F.WIG 12 /r: VMOVHLPS xmm1, xmm2, xmm3
+VMOVHLPS AVX 00010010 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); defined $_->{modrm}{reg2} }
+
 # NP 0F D7 /r: PMOVMSKB reg, mm
 PMOVMSKB SSE 00001111 11010111 \
   !constraints { modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
@@ -153,34 +259,66 @@ PMOVMSKB SSE 00001111 11010111 \
 PMOVMSKB SSE2 00001111 11010111 \
   !constraints { data16($_); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG D7 /r: VPMOVMSKB reg, xmm1
+VPMOVMSKB AVX 11010111 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 # NP 0F 50 /r: MOVMSKPS reg, xmm
 MOVMSKPS SSE 00001111 01010000 \
   !constraints { modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# VEX.128.0F.WIG 50 /r: VMOVMSKPS reg, xmm2
+VMOVMSKPS AVX 01010000 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 # 66 0F 50 /r: MOVMSKPD reg, xmm
 MOVMSKPD SSE2 00001111 01010000 \
   !constraints { data16($_); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 50 /r: VMOVMSKPD reg, xmm2
+VMOVMSKPD AVX 01010000 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 # F2 0F F0 /r: LDDQU xmm1, m128
 LDDQU SSE3 00001111 11110000 \
   !constraints { repne($_); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { load(size => 16); }
 
+# VEX.128.F2.0F.WIG F0 /r: VLDDQU xmm1, m128
+VLDDQU AVX 11110000 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF2); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 16); }
+
 # F3 0F 16 /r: MOVSHDUP xmm1, xmm2/m128
 MOVSHDUP SSE3 00001111 00010110 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.F3.0F.WIG 16 /r: VMOVSHDUP xmm1, xmm2/m128
+VMOVSHDUP AVX 00010110 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 12 /r: MOVSLDUP xmm1, xmm2/m128
 MOVSLDUP SSE3 00001111 00010010 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.F3.0F.WIG 12 /r: VMOVSLDUP xmm1, xmm2/m128
+VMOVSLDUP AVX 00010010 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F2 0F 12 /r: MOVDDUP xmm1, xmm2/m64
 MOVDDUP SSE3 00001111 00010010 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.F2.0F.WIG 12 /r: VMOVDDUP xmm1, xmm2/m64
+VMOVDDUP AVX 00010010 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 #
 # Arithmetic Instructions
 # -----------------------
@@ -196,6 +334,11 @@ PADDB SSE2 00001111 11111100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG FC /r: VPADDB xmm1, xmm2, xmm3/m128
+VPADDB AVX 11111100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F FD /r: PADDW mm, mm/m64
 PADDW MMX 00001111 11111101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -206,6 +349,11 @@ PADDW SSE2 00001111 11111101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG FD /r: VPADDW xmm1, xmm2, xmm3/m128
+VPADDW AVX 11111101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F FE /r: PADDD mm, mm/m64
 PADDD MMX 00001111 11111110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -216,6 +364,11 @@ PADDD SSE2 00001111 11111110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG FE /r: VPADDD xmm1, xmm2, xmm3/m128
+VPADDD AVX 11111110 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F D4 /r: PADDQ mm, mm/m64
 PADDQ_mm SSE2 00001111 11010100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -226,6 +379,11 @@ PADDQ SSE2 00001111 11010100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG D4 /r: VPADDQ xmm1, xmm2, xmm3/m128
+VPADDQ AVX 11010100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F EC /r: PADDSB mm, mm/m64
 PADDSB MMX 00001111 11101100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -236,6 +394,11 @@ PADDSB SSE2 00001111 11101100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG EC /r: VPADDSB xmm1, xmm2, xmm3/m128
+VPADDSB AVX 11101100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F ED /r: PADDSW mm, mm/m64
 PADDSW MMX 00001111 11101101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -246,6 +409,11 @@ PADDSW SSE2 00001111 11101101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG ED /r: VPADDSW xmm1, xmm2, xmm3/m128
+VPADDSW AVX 11101101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F DC /r: PADDUSB mm,mm/m64
 PADDUSB MMX 00001111 11011100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -256,6 +424,11 @@ PADDUSB SSE2 00001111 11011100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG DC /r: VPADDUSB xmm1,xmm2,xmm3/m128
+VPADDUSB AVX 11011100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F DD /r: PADDUSW mm,mm/m64
 PADDUSW MMX 00001111 11011101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -266,26 +439,51 @@ PADDUSW SSE2 00001111 11011101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG DD /r: VPADDUSW xmm1,xmm2,xmm3/m128
+VPADDUSW AVX 11011101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 58 /r: ADDPS xmm1, xmm2/m128
 ADDPS SSE 00001111 01011000 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 58 /r: VADDPS xmm1,xmm2, xmm3/m128
+VADDPS AVX 01011000 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 58 /r: ADDPD xmm1, xmm2/m128
 ADDPD SSE2 00001111 01011000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 58 /r: VADDPD xmm1,xmm2, xmm3/m128
+VADDPD AVX 01011000 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 58 /r: ADDSS xmm1, xmm2/m32
 ADDSS SSE 00001111 01011000 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 58 /r: VADDSS xmm1,xmm2, xmm3/m32
+VADDSS AVX 01011000 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # F2 0F 58 /r: ADDSD xmm1, xmm2/m64
 ADDSD SSE2 00001111 01011000 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG 58 /r: VADDSD xmm1, xmm2, xmm3/m64
+VADDSD AVX 01011000 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 38 01 /r: PHADDW mm1, mm2/m64
 PHADDW_mm SSSE3 00001111 00111000 00000001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -296,6 +494,11 @@ PHADDW SSSE3 00001111 00111000 00000001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 01 /r: VPHADDW xmm1, xmm2, xmm3/m128
+VPHADDW AVX 00000001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 02 /r: PHADDD mm1, mm2/m64
 PHADDD_mm SSSE3 00001111 00111000 00000010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -306,6 +509,11 @@ PHADDD SSSE3 00001111 00111000 00000010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 02 /r: VPHADDD xmm1, xmm2, xmm3/m128
+VPHADDD AVX 00000010 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 03 /r: PHADDSW mm1, mm2/m64
 PHADDSW_mm SSSE3 00001111 00111000 00000011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -316,16 +524,31 @@ PHADDSW SSSE3 00001111 00111000 00000011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 03 /r: VPHADDSW xmm1, xmm2, xmm3/m128
+VPHADDSW AVX 00000011 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F2 0F 7C /r: HADDPS xmm1, xmm2/m128
 HADDPS SSE3 00001111 01111100 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.F2.0F.WIG 7C /r: VHADDPS xmm1, xmm2, xmm3/m128
+VHADDPS AVX 01111100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 7C /r: HADDPD xmm1, xmm2/m128
 HADDPD SSE3 00001111 01111100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 7C /r: VHADDPD xmm1,xmm2, xmm3/m128
+VHADDPD AVX 01111100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F F8 /r: PSUBB mm, mm/m64
 PSUBB MMX 00001111 11111000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -336,6 +559,11 @@ PSUBB SSE2 00001111 11111000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG F8 /r: VPSUBB xmm1, xmm2, xmm3/m128
+VPSUBB AVX 11111000 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F F9 /r: PSUBW mm, mm/m64
 PSUBW MMX 00001111 11111001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -346,6 +574,11 @@ PSUBW SSE2 00001111 11111001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG F9 /r: VPSUBW xmm1, xmm2, xmm3/m128
+VPSUBW AVX 11111001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F FA /r: PSUBD mm, mm/m64
 PSUBD MMX 00001111 11111010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -356,6 +589,11 @@ PSUBD SSE2 00001111 11111010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG FA /r: VPSUBD xmm1, xmm2, xmm3/m128
+VPSUBD AVX 11111010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F FB /r: PSUBQ mm1, mm2/m64
 PSUBQ_mm SSE2 00001111 11111011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -366,6 +604,11 @@ PSUBQ SSE2 00001111 11111011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG FB /r: VPSUBQ xmm1, xmm2, xmm3/m128
+VPSUBQ AVX 11111011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F E8 /r: PSUBSB mm, mm/m64
 PSUBSB MMX 00001111 11101000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -376,6 +619,11 @@ PSUBSB SSE2 00001111 11101000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E8 /r: VPSUBSB xmm1, xmm2, xmm3/m128
+VPSUBSB AVX 11101000 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F E9 /r: PSUBSW mm, mm/m64
 PSUBSW MMX 00001111 11101001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -386,6 +634,11 @@ PSUBSW SSE2 00001111 11101001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E9 /r: VPSUBSW xmm1, xmm2, xmm3/m128
+VPSUBSW AVX 11101001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F D8 /r: PSUBUSB mm, mm/m64
 PSUBUSB MMX 00001111 11011000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -396,6 +649,11 @@ PSUBUSB SSE2 00001111 11011000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG D8 /r: VPSUBUSB xmm1, xmm2, xmm3/m128
+VPSUBUSB AVX 11011000 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F D9 /r: PSUBUSW mm, mm/m64
 PSUBUSW MMX 00001111 11011001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -406,26 +664,51 @@ PSUBUSW SSE2 00001111 11011001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG D9 /r: VPSUBUSW xmm1, xmm2, xmm3/m128
+VPSUBUSW AVX 11011001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 5C /r: SUBPS xmm1, xmm2/m128
 SUBPS SSE 00001111 01011100 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 5C /r: VSUBPS xmm1,xmm2, xmm3/m128
+VSUBPS AVX 01011100 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 5C /r: SUBPD xmm1, xmm2/m128
 SUBPD SSE2 00001111 01011100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 5C /r: VSUBPD xmm1,xmm2, xmm3/m128
+VSUBPD AVX 01011100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 5C /r: SUBSS xmm1, xmm2/m32
 SUBSS SSE 00001111 01011100 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 5C /r: VSUBSS xmm1,xmm2, xmm3/m32
+VSUBSS AVX 01011100 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # F2 0F 5C /r: SUBSD xmm1, xmm2/m64
 SUBSD SSE2 00001111 01011100 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG 5C /r: VSUBSD xmm1,xmm2, xmm3/m64
+VSUBSD AVX 01011100 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 38 05 /r: PHSUBW mm1, mm2/m64
 PHSUBW_mm SSSE3 00001111 00111000 00000101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -436,6 +719,11 @@ PHSUBW SSSE3 00001111 00111000 00000101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 05 /r: VPHSUBW xmm1, xmm2, xmm3/m128
+VPHSUBW AVX 00000101 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 06 /r: PHSUBD mm1, mm2/m64
 PHSUBD_mm SSSE3 00001111 00111000 00000110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -446,6 +734,11 @@ PHSUBD SSSE3 00001111 00111000 00000110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 06 /r: VPHSUBD xmm1, xmm2, xmm3/m128
+VPHSUBD AVX 00000110 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 07 /r: PHSUBSW mm1, mm2/m64
 PHSUBSW_mm SSSE3 00001111 00111000 00000111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -456,26 +749,51 @@ PHSUBSW SSSE3 00001111 00111000 00000111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 07 /r: VPHSUBSW xmm1, xmm2, xmm3/m128
+VPHSUBSW AVX 00000111 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F2 0F 7D /r: HSUBPS xmm1, xmm2/m128
 HSUBPS SSE3 00001111 01111101 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.F2.0F.WIG 7D /r: VHSUBPS xmm1, xmm2, xmm3/m128
+VHSUBPS AVX 01111101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 7D /r: HSUBPD xmm1, xmm2/m128
 HSUBPD SSE3 00001111 01111101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 7D /r: VHSUBPD xmm1,xmm2, xmm3/m128
+VHSUBPD AVX 01111101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F2 0F D0 /r: ADDSUBPS xmm1, xmm2/m128
 ADDSUBPS SSE3 00001111 11010000 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.F2.0F.WIG D0 /r: VADDSUBPS xmm1, xmm2, xmm3/m128
+VADDSUBPS AVX 11010000 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F D0 /r: ADDSUBPD xmm1, xmm2/m128
 ADDSUBPD SSE3 00001111 11010000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG D0 /r: VADDSUBPD xmm1, xmm2, xmm3/m128
+VADDSUBPD AVX 11010000 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F D5 /r: PMULLW mm, mm/m64
 PMULLW MMX 00001111 11010101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -486,11 +804,21 @@ PMULLW SSE2 00001111 11010101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG D5 /r: VPMULLW xmm1, xmm2, xmm3/m128
+VPMULLW AVX 11010101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 40 /r: PMULLD xmm1, xmm2/m128
 PMULLD SSE4_1 00001111 00111000 01000000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 40 /r: VPMULLD xmm1, xmm2, xmm3/m128
+VPMULLD AVX 01000000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F E5 /r: PMULHW mm, mm/m64
 PMULHW MMX 00001111 11100101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -501,6 +829,11 @@ PMULHW SSE2 00001111 11100101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E5 /r: VPMULHW xmm1, xmm2, xmm3/m128
+VPMULHW AVX 11100101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F E4 /r: PMULHUW mm1, mm2/m64
 PMULHUW SSE 00001111 11100100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -511,11 +844,21 @@ PMULHUW SSE2 00001111 11100100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E4 /r: VPMULHUW xmm1, xmm2, xmm3/m128
+VPMULHUW AVX 11100100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 28 /r: PMULDQ xmm1, xmm2/m128
 PMULDQ SSE4_1 00001111 00111000 00101000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 28 /r: VPMULDQ xmm1, xmm2, xmm3/m128
+VPMULDQ AVX 00101000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F F4 /r: PMULUDQ mm1, mm2/m64
 PMULUDQ_mm SSE2 00001111 11110100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -526,6 +869,11 @@ PMULUDQ SSE2 00001111 11110100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG F4 /r: VPMULUDQ xmm1, xmm2, xmm3/m128
+VPMULUDQ AVX 11110100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 0B /r: PMULHRSW mm1, mm2/m64
 PMULHRSW_mm SSSE3 00001111 00111000 00001011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -536,26 +884,51 @@ PMULHRSW SSSE3 00001111 00111000 00001011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 0B /r: VPMULHRSW xmm1, xmm2, xmm3/m128
+VPMULHRSW AVX 00001011 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 59 /r: MULPS xmm1, xmm2/m128
 MULPS SSE 00001111 01011001 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 59 /r: VMULPS xmm1,xmm2, xmm3/m128
+VMULPS AVX 01011001 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 59 /r: MULPD xmm1, xmm2/m128
 MULPD SSE2 00001111 01011001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 59 /r: VMULPD xmm1,xmm2, xmm3/m128
+VMULPD AVX 01011001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 59 /r: MULSS xmm1,xmm2/m32
 MULSS SSE 00001111 01011001 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 59 /r: VMULSS xmm1,xmm2, xmm3/m32
+VMULSS AVX 01011001 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # F2 0F 59 /r: MULSD xmm1,xmm2/m64
 MULSD SSE2 00001111 01011001 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG 59 /r: VMULSD xmm1,xmm2, xmm3/m64
+VMULSD AVX 01011001 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F F5 /r: PMADDWD mm, mm/m64
 PMADDWD MMX 00001111 11110101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -566,6 +939,11 @@ PMADDWD SSE2 00001111 11110101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG F5 /r: VPMADDWD xmm1, xmm2, xmm3/m128
+VPMADDWD AVX 11110101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 04 /r: PMADDUBSW mm1, mm2/m64
 PMADDUBSW_mm SSSE3 00001111 00111000 00000100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -576,66 +954,131 @@ PMADDUBSW SSSE3 00001111 00111000 00000100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 04 /r: VPMADDUBSW xmm1, xmm2, xmm3/m128
+VPMADDUBSW AVX 00000100 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 5E /r: DIVPS xmm1, xmm2/m128
 DIVPS SSE 00001111 01011110 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 5E /r: VDIVPS xmm1, xmm2, xmm3/m128
+VDIVPS AVX 01011110 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 5E /r: DIVPD xmm1, xmm2/m128
 DIVPD SSE2 00001111 01011110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 5E /r: VDIVPD xmm1, xmm2, xmm3/m128
+VDIVPD AVX 01011110 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 5E /r: DIVSS xmm1, xmm2/m32
 DIVSS SSE 00001111 01011110 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 5E /r: VDIVSS xmm1, xmm2, xmm3/m32
+VDIVSS AVX 01011110 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # F2 0F 5E /r: DIVSD xmm1, xmm2/m64
 DIVSD SSE2 00001111 01011110 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG 5E /r: VDIVSD xmm1, xmm2, xmm3/m64
+VDIVSD AVX 01011110 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 53 /r: RCPPS xmm1, xmm2/m128
 RCPPS SSE 00001111 01010011 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 53 /r: VRCPPS xmm1, xmm2/m128
+VRCPPS AVX 01010011 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 53 /r: RCPSS xmm1, xmm2/m32
 RCPSS SSE 00001111 01010011 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 53 /r: VRCPSS xmm1, xmm2, xmm3/m32
+VRCPSS AVX 01010011 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # NP 0F 51 /r: SQRTPS xmm1, xmm2/m128
 SQRTPS SSE 00001111 01010001 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 51 /r: VSQRTPS xmm1, xmm2/m128
+VSQRTPS AVX 01010001 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 51 /r: SQRTPD xmm1, xmm2/m128
 SQRTPD SSE2 00001111 01010001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 51 /r: VSQRTPD xmm1, xmm2/m128
+VSQRTPD AVX 01010001 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 51 /r: SQRTSS xmm1, xmm2/m32
 SQRTSS SSE 00001111 01010001 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 51 /r: VSQRTSS xmm1, xmm2, xmm3/m32
+VSQRTSS AVX 01010001 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # F2 0F 51 /r: SQRTSD xmm1,xmm2/m64
 SQRTSD SSE2 00001111 01010001 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG 51 /r: VSQRTSD xmm1,xmm2, xmm3/m64
+VSQRTSD AVX 01010001 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 52 /r: RSQRTPS xmm1, xmm2/m128
 RSQRTPS SSE 00001111 01010010 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 52 /r: VRSQRTPS xmm1, xmm2/m128
+VRSQRTPS AVX 01010010 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 52 /r: RSQRTSS xmm1, xmm2/m32
 RSQRTSS SSE 00001111 01010010 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 52 /r: VRSQRTSS xmm1, xmm2, xmm3/m32
+VRSQRTSS AVX 01010010 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # NP 0F DA /r: PMINUB mm1, mm2/m64
 PMINUB SSE 00001111 11011010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -646,21 +1089,41 @@ PMINUB SSE2 00001111 11011010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F DA /r: VPMINUB xmm1, xmm2, xmm3/m128
+VPMINUB AVX 11011010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 3A /r: PMINUW xmm1, xmm2/m128
 PMINUW SSE4_1 00001111 00111000 00111010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38 3A /r: VPMINUW xmm1, xmm2, xmm3/m128
+VPMINUW AVX 00111010 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 3B /r: PMINUD xmm1, xmm2/m128
 PMINUD SSE4_1 00001111 00111000 00111011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 3B /r: VPMINUD xmm1, xmm2, xmm3/m128
+VPMINUD AVX 00111011 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 38 /r: PMINSB xmm1, xmm2/m128
 PMINSB SSE4_1 00001111 00111000 00111000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38 38 /r: VPMINSB xmm1, xmm2, xmm3/m128
+VPMINSB AVX 00111000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F EA /r: PMINSW mm1, mm2/m64
 PMINSW SSE 00001111 11101010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -671,36 +1134,71 @@ PMINSW SSE2 00001111 11101010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F EA /r: VPMINSW xmm1, xmm2, xmm3/m128
+VPMINSW AVX 11101010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 39 /r: PMINSD xmm1, xmm2/m128
 PMINSD SSE4_1 00001111 00111000 00111001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 39 /r: VPMINSD xmm1, xmm2, xmm3/m128
+VPMINSD AVX 00111001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 5D /r: MINPS xmm1, xmm2/m128
 MINPS SSE 00001111 01011101 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 5D /r: VMINPS xmm1, xmm2, xmm3/m128
+VMINPS AVX 01011101 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 5D /r: MINPD xmm1, xmm2/m128
 MINPD SSE2 00001111 01011101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 5D /r: VMINPD xmm1, xmm2, xmm3/m128
+VMINPD AVX 01011101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 5D /r: MINSS xmm1,xmm2/m32
 MINSS SSE 00001111 01011101 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 5D /r: VMINSS xmm1,xmm2, xmm3/m32
+VMINSS AVX 01011101 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # F2 0F 5D /r: MINSD xmm1, xmm2/m64
 MINSD SSE2 00001111 01011101 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG 5D /r: VMINSD xmm1, xmm2, xmm3/m64
+VMINSD AVX 01011101 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0F 38 41 /r: PHMINPOSUW xmm1, xmm2/m128
 PHMINPOSUW SSE4_1 00001111 00111000 01000001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 41 /r: VPHMINPOSUW xmm1, xmm2/m128
+VPHMINPOSUW AVX 01000001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F DE /r: PMAXUB mm1, mm2/m64
 PMAXUB SSE 00001111 11011110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -711,21 +1209,41 @@ PMAXUB SSE2 00001111 11011110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F DE /r: VPMAXUB xmm1, xmm2, xmm3/m128
+VPMAXUB AVX 11011110 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 3E /r: PMAXUW xmm1, xmm2/m128
 PMAXUW SSE4_1 00001111 00111000 00111110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38 3E /r: VPMAXUW xmm1, xmm2, xmm3/m128
+VPMAXUW AVX 00111110 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 3F /r: PMAXUD xmm1, xmm2/m128
 PMAXUD SSE4_1 00001111 00111000 00111111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 3F /r: VPMAXUD xmm1, xmm2, xmm3/m128
+VPMAXUD AVX 00111111 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 3C /r: PMAXSB xmm1, xmm2/m128
 PMAXSB SSE4_1 00001111 00111000 00111100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 3C /r: VPMAXSB xmm1, xmm2, xmm3/m128
+VPMAXSB AVX 00111100 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F EE /r: PMAXSW mm1, mm2/m64
 PMAXSW SSE 00001111 11101110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -736,31 +1254,61 @@ PMAXSW SSE2 00001111 11101110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG EE /r: VPMAXSW xmm1, xmm2, xmm3/m128
+VPMAXSW AVX 11101110 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 3D /r: PMAXSD xmm1, xmm2/m128
 PMAXSD SSE4_1 00001111 00111000 00111101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 3D /r: VPMAXSD xmm1, xmm2, xmm3/m128
+VPMAXSD AVX 00111101 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 5F /r: MAXPS xmm1, xmm2/m128
 MAXPS SSE 00001111 01011111 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 5F /r: VMAXPS xmm1, xmm2, xmm3/m128
+VMAXPS AVX 01011111 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 5F /r: MAXPD xmm1, xmm2/m128
 MAXPD SSE2 00001111 01011111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 5F /r: VMAXPD xmm1, xmm2, xmm3/m128
+VMAXPD AVX 01011111 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 5F /r: MAXSS xmm1, xmm2/m32
 MAXSS SSE 00001111 01011111 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 5F /r: VMAXSS xmm1, xmm2, xmm3/m32
+VMAXSS AVX 01011111 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # F2 0F 5F /r: MAXSD xmm1, xmm2/m64
 MAXSD SSE2 00001111 01011111 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG 5F /r: VMAXSD xmm1, xmm2, xmm3/m64
+VMAXSD AVX 01011111 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F E0 /r: PAVGB mm1, mm2/m64
 PAVGB SSE 00001111 11100000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -771,6 +1319,11 @@ PAVGB SSE2 00001111 11100000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E0 /r: VPAVGB xmm1, xmm2, xmm3/m128
+VPAVGB AVX 11100000 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F E3 /r: PAVGW mm1, mm2/m64
 PAVGW SSE 00001111 11100011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -781,6 +1334,11 @@ PAVGW SSE2 00001111 11100011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E3 /r: VPAVGW xmm1, xmm2, xmm3/m128
+VPAVGW AVX 11100011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F F6 /r: PSADBW mm1, mm2/m64
 PSADBW SSE 00001111 11110110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -791,11 +1349,21 @@ PSADBW SSE2 00001111 11110110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG F6 /r: VPSADBW xmm1, xmm2, xmm3/m128
+VPSADBW AVX 11110110 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 42 /r ib: MPSADBW xmm1, xmm2/m128, imm8
 MPSADBW SSE4_1 00001111 00111010 01000010 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 42 /r ib: VMPSADBW xmm1, xmm2, xmm3/m128, imm8
+VMPSADBW AVX 01000010 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 1C /r: PABSB mm1, mm2/m64
 PABSB_mm SSSE3 00001111 00111000 00011100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -806,6 +1374,11 @@ PABSB SSSE3 00001111 00111000 00011100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 1C /r: VPABSB xmm1, xmm2/m128
+VPABSB AVX 00011100 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 1D /r: PABSW mm1, mm2/m64
 PABSW_mm SSSE3 00001111 00111000 00011101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -816,6 +1389,11 @@ PABSW SSSE3 00001111 00111000 00011101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 1D /r: VPABSW xmm1, xmm2/m128
+VPABSW AVX 00011101 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 1E /r: PABSD mm1, mm2/m64
 PABSD_mm SSSE3 00001111 00111000 00011110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -826,6 +1404,11 @@ PABSD SSSE3 00001111 00111000 00011110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 1E /r: VPABSD xmm1, xmm2/m128
+VPABSD AVX 00011110 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 08 /r: PSIGNB mm1, mm2/m64
 PSIGNB_mm SSSE3 00001111 00111000 00001000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -836,6 +1419,11 @@ PSIGNB SSSE3 00001111 00111000 00001000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 08 /r: VPSIGNB xmm1, xmm2, xmm3/m128
+VPSIGNB AVX 00001000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 09 /r: PSIGNW mm1, mm2/m64
 PSIGNW_mm SSSE3 00001111 00111000 00001001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -846,6 +1434,11 @@ PSIGNW SSSE3 00001111 00111000 00001001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 09 /r: VPSIGNW xmm1, xmm2, xmm3/m128
+VPSIGNW AVX 00001001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 0A /r: PSIGND mm1, mm2/m64
 PSIGND_mm SSSE3 00001111 00111000 00001010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -856,36 +1449,71 @@ PSIGND SSSE3 00001111 00111000 00001010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 0A /r: VPSIGND xmm1, xmm2, xmm3/m128
+VPSIGND AVX 00001010 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 40 /r ib: DPPS xmm1, xmm2/m128, imm8
 DPPS SSE4_1 00001111 00111010 01000000 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 40 /r ib: VDPPS xmm1,xmm2, xmm3/m128, imm8
+VDPPS AVX 01000000 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 41 /r ib: DPPD xmm1, xmm2/m128, imm8
 DPPD SSE4_1 00001111 00111010 01000001 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 41 /r ib: VDPPD xmm1,xmm2, xmm3/m128, imm8
+VDPPD AVX 01000001 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 08 /r ib: ROUNDPS xmm1, xmm2/m128, imm8
 ROUNDPS SSE4_1 00001111 00111010 00001000 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 08 /r ib: VROUNDPS xmm1, xmm2/m128, imm8
+VROUNDPS AVX 00001000 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 09 /r ib: ROUNDPD xmm1, xmm2/m128, imm8
 ROUNDPD SSE4_1 00001111 00111010 00001001 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 09 /r ib: VROUNDPD xmm1, xmm2/m128, imm8
+VROUNDPD AVX 00001001 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 0A /r ib: ROUNDSS xmm1, xmm2/m32, imm8
 ROUNDSS SSE4_1 00001111 00111010 00001010 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.66.0F3A.WIG 0A /r ib: VROUNDSS xmm1, xmm2, xmm3/m32, imm8
+VROUNDSS AVX 00001010 \
+  !constraints { vex($_, m => 0x0F3A, l => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0F 3A 0B /r ib: ROUNDSD xmm1, xmm2/m64, imm8
 ROUNDSD SSE4_1 00001111 00111010 00001011 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.66.0F3A.WIG 0B /r ib: VROUNDSD xmm1, xmm2, xmm3/m64, imm8
+VROUNDSD AVX 00001011 \
+  !constraints { vex($_, m => 0x0F3A, l => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 8); }
+
 #
 # AES Instructions
 # ----------------
@@ -896,31 +1524,61 @@ AESDEC AES 00001111 00111000 11011110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG DE /r: VAESDEC xmm1, xmm2, xmm3/m128
+VAESDEC AES_AVX 11011110 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 DF /r: AESDECLAST xmm1, xmm2/m128
 AESDECLAST AES 00001111 00111000 11011111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG DF /r: VAESDECLAST xmm1, xmm2, xmm3/m128
+VAESDECLAST AES_AVX 11011111 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 DC /r: AESENC xmm1, xmm2/m128
 AESENC AES 00001111 00111000 11011100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG DC /r: VAESENC xmm1, xmm2, xmm3/m128
+VAESENC AES_AVX 11011100 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 DD /r: AESENCLAST xmm1, xmm2/m128
 AESENCLAST AES 00001111 00111000 11011101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG DD /r: VAESENCLAST xmm1, xmm2, xmm3/m128
+VAESENCLAST AES_AVX 11011101 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 DB /r: AESIMC xmm1, xmm2/m128
 AESIMC AES 00001111 00111000 11011011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG DB /r: VAESIMC xmm1, xmm2/m128
+VAESIMC AES_AVX 11011011 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A DF /r ib: AESKEYGENASSIST xmm1, xmm2/m128, imm8
 AESKEYGENASSIST AES 00001111 00111010 11011111 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG DF /r ib: VAESKEYGENASSIST xmm1, xmm2/m128, imm8
+VAESKEYGENASSIST AES_AVX 11011111 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 #
 # PCLMULQDQ Instructions
 # ----------------------
@@ -931,6 +1589,11 @@ PCLMULQDQ PCLMULQDQ 00001111 00111010 01000100 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 44 /r ib: VPCLMULQDQ xmm1, xmm2, xmm3/m128, imm8
+VPCLMULQDQ PCLMULQDQ_AVX 01000100 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 #
 # Comparison Instructions
 # -----------------------
@@ -946,6 +1609,11 @@ PCMPEQB SSE2 00001111 01110100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 74 /r: VPCMPEQB xmm1,xmm2,xmm3/m128
+VPCMPEQB AVX 01110100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 75 /r: PCMPEQW mm,mm/m64
 PCMPEQW MMX 00001111 01110101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -956,6 +1624,11 @@ PCMPEQW SSE2 00001111 01110101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 75 /r: VPCMPEQW xmm1,xmm2,xmm3/m128
+VPCMPEQW AVX 01110101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 76 /r: PCMPEQD mm,mm/m64
 PCMPEQD MMX 00001111 01110110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -966,11 +1639,21 @@ PCMPEQD SSE2 00001111 01110110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 76 /r: VPCMPEQD xmm1,xmm2,xmm3/m128
+VPCMPEQD AVX 01110110 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 29 /r: PCMPEQQ xmm1, xmm2/m128
 PCMPEQQ SSE4_1 00001111 00111000 00101001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 29 /r: VPCMPEQQ xmm1, xmm2, xmm3/m128
+VPCMPEQQ AVX 00101001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 64 /r: PCMPGTB mm,mm/m64
 PCMPGTB MMX 00001111 01100100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -981,6 +1664,11 @@ PCMPGTB SSE2 00001111 01100100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 64 /r: VPCMPGTB xmm1,xmm2,xmm3/m128
+VPCMPGTB AVX 01100100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 65 /r: PCMPGTW mm,mm/m64
 PCMPGTW MMX 00001111 01100101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -991,6 +1679,11 @@ PCMPGTW SSE2 00001111 01100101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 65 /r: VPCMPGTW xmm1,xmm2,xmm3/m128
+VPCMPGTW AVX 01100101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 66 /r: PCMPGTD mm,mm/m64
 PCMPGTD MMX 00001111 01100110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1001,76 +1694,161 @@ PCMPGTD SSE2 00001111 01100110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 66 /r: VPCMPGTD xmm1,xmm2,xmm3/m128
+VPCMPGTD AVX 01100110 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 37 /r: PCMPGTQ xmm1,xmm2/m128
 PCMPGTQ SSE4_2 00001111 00111000 00110111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 37 /r: VPCMPGTQ xmm1, xmm2, xmm3/m128
+VPCMPGTQ AVX 00110111 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 60 /r imm8: PCMPESTRM xmm1, xmm2/m128, imm8
 PCMPESTRM SSE4_2 00001111 00111010 01100000 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.128.66.0F3A 60 /r ib: VPCMPESTRM xmm1, xmm2/m128, imm8
+VPCMPESTRM AVX 01100000 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 61 /r imm8: PCMPESTRI xmm1, xmm2/m128, imm8
 PCMPESTRI SSE4_2 00001111 00111010 01100001 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != REG_RCX)); }
 
+# VEX.128.66.0F3A 61 /r ib: VPCMPESTRI xmm1, xmm2/m128, imm8
+VPCMPESTRI AVX 01100001 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != REG_RCX)); }
+
 # 66 0F 3A 62 /r imm8: PCMPISTRM xmm1, xmm2/m128, imm8
 PCMPISTRM SSE4_2 00001111 00111010 01100010 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.128.66.0F3A.WIG 62 /r ib: VPCMPISTRM xmm1, xmm2/m128, imm8
+VPCMPISTRM AVX 01100010 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 63 /r imm8: PCMPISTRI xmm1, xmm2/m128, imm8
 PCMPISTRI SSE4_2 00001111 00111010 01100011 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != REG_RCX)); }
 
+# VEX.128.66.0F3A.WIG 63 /r ib: VPCMPISTRI xmm1, xmm2/m128, imm8
+VPCMPISTRI AVX 01100011 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != REG_RCX)); }
+
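+# The rollback condition on the (V)PCMPESTRI/(V)PCMPISTRI entries above is
+# presumably needed because these instructions return their result index in
+# ECX: if RCX also served as the base register of the memory operand, the
+# load address could not be reconstructed afterwards, so the access is only
+# marked for rollback when the base is some other register.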
 # 66 0F 38 17 /r: PTEST xmm1, xmm2/m128
 PTEST SSE4_1 00001111 00111000 00010111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 17 /r: VPTEST xmm1, xmm2/m128
+VPTEST AVX 00010111 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.128.66.0F38.W0 0E /r: VTESTPS xmm1, xmm2/m128
+VTESTPS AVX 00001110 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.128.66.0F38.W0 0F /r: VTESTPD xmm1, xmm2/m128
+VTESTPD AVX 00001111 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F C2 /r ib: CMPPS xmm1, xmm2/m128, imm8
 CMPPS SSE 00001111 11000010 \
   !constraints { modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG C2 /r ib: VCMPPS xmm1, xmm2, xmm3/m128, imm8
+VCMPPS AVX 11000010 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F C2 /r ib: CMPPD xmm1, xmm2/m128, imm8
 CMPPD SSE2 00001111 11000010 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG C2 /r ib: VCMPPD xmm1, xmm2, xmm3/m128, imm8
+VCMPPD AVX 11000010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F C2 /r ib: CMPSS xmm1, xmm2/m32, imm8
 CMPSS SSE 00001111 11000010 \
   !constraints { rep($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG C2 /r ib: VCMPSS xmm1, xmm2, xmm3/m32, imm8
+VCMPSS AVX 11000010 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 4); }
+
 # F2 0F C2 /r ib: CMPSD xmm1, xmm2/m64, imm8
 CMPSD SSE2 00001111 11000010 \
   !constraints { repne($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG C2 /r ib: VCMPSD xmm1, xmm2, xmm3/m64, imm8
+VCMPSD AVX 11000010 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 2E /r: UCOMISS xmm1, xmm2/m32
 UCOMISS SSE 00001111 00101110 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.0F.WIG 2E /r: VUCOMISS xmm1, xmm2/m32
+VUCOMISS AVX 00101110 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0F 2E /r: UCOMISD xmm1, xmm2/m64
 UCOMISD SSE2 00001111 00101110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.66.0F.WIG 2E /r: VUCOMISD xmm1, xmm2/m64
+VUCOMISD AVX 00101110 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 2F /r: COMISS xmm1, xmm2/m32
 COMISS SSE 00001111 00101111 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.0F.WIG 2F /r: VCOMISS xmm1, xmm2/m32
+VCOMISS AVX 00101111 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0F 2F /r: COMISD xmm1, xmm2/m64
 COMISD SSE2 00001111 00101111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.66.0F.WIG 2F /r: VCOMISD xmm1, xmm2/m64
+VCOMISD AVX 00101111 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 #
 # Logical Instructions
 # --------------------
@@ -1086,16 +1864,31 @@ PAND SSE2 00001111 11011011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG DB /r: VPAND xmm1, xmm2, xmm3/m128
+VPAND AVX 11011011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 54 /r: ANDPS xmm1, xmm2/m128
 ANDPS SSE 00001111 01010100 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F 54 /r: VANDPS xmm1,xmm2, xmm3/m128
+VANDPS AVX 01010100 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 54 /r: ANDPD xmm1, xmm2/m128
 ANDPD SSE2 00001111 01010100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F 54 /r: VANDPD xmm1, xmm2, xmm3/m128
+VANDPD AVX 01010100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F DF /r: PANDN mm, mm/m64
 PANDN MMX 00001111 11011111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1106,16 +1899,31 @@ PANDN SSE2 00001111 11011111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG DF /r: VPANDN xmm1, xmm2, xmm3/m128
+VPANDN AVX 11011111 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 55 /r: ANDNPS xmm1, xmm2/m128
 ANDNPS SSE 00001111 01010101 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F 55 /r: VANDNPS xmm1, xmm2, xmm3/m128
+VANDNPS AVX 01010101 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 55 /r: ANDNPD xmm1, xmm2/m128
 ANDNPD SSE2 00001111 01010101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F 55 /r: VANDNPD xmm1, xmm2, xmm3/m128
+VANDNPD AVX 01010101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F EB /r: POR mm, mm/m64
 POR MMX 00001111 11101011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1126,16 +1934,31 @@ POR SSE2 00001111 11101011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG EB /r: VPOR xmm1, xmm2, xmm3/m128
+VPOR AVX 11101011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 56 /r: ORPS xmm1, xmm2/m128
 ORPS SSE 00001111 01010110 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F 56 /r: VORPS xmm1,xmm2, xmm3/m128
+VORPS AVX 01010110 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 56 /r: ORPD xmm1, xmm2/m128
 ORPD SSE2 00001111 01010110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F 56 /r: VORPD xmm1,xmm2, xmm3/m128
+VORPD AVX 01010110 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F EF /r: PXOR mm, mm/m64
 PXOR MMX 00001111 11101111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1146,16 +1969,31 @@ PXOR SSE2 00001111 11101111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG EF /r: VPXOR xmm1, xmm2, xmm3/m128
+VPXOR AVX 11101111 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 57 /r: XORPS xmm1, xmm2/m128
 XORPS SSE 00001111 01010111 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 57 /r: VXORPS xmm1,xmm2, xmm3/m128
+VXORPS AVX 01010111 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 57 /r: XORPD xmm1, xmm2/m128
 XORPD SSE2 00001111 01010111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 57 /r: VXORPD xmm1,xmm2, xmm3/m128
+VXORPD AVX 01010111 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 #
 # Shift and Rotate Instructions
 # -----------------------------
@@ -1171,6 +2009,11 @@ PSLLW SSE2 00001111 11110001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG F1 /r: VPSLLW xmm1, xmm2, xmm3/m128
+VPSLLW AVX 11110001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F F2 /r: PSLLD mm, mm/m64
 PSLLD MMX 00001111 11110010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1181,6 +2024,11 @@ PSLLD SSE2 00001111 11110010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG F2 /r: VPSLLD xmm1, xmm2, xmm3/m128
+VPSLLD AVX 11110010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F F3 /r: PSLLQ mm, mm/m64
 PSLLQ MMX 00001111 11110011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1191,6 +2039,11 @@ PSLLQ SSE2 00001111 11110011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG F3 /r: VPSLLQ xmm1, xmm2, xmm3/m128
+VPSLLQ AVX 11110011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 71 /6 ib: PSLLW mm1, imm8
 PSLLW_imm MMX 00001111 01110001 \
   !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -1199,6 +2052,10 @@ PSLLW_imm MMX 00001111 01110001 \
 PSLLW_imm SSE2 00001111 01110001 \
   !constraints { data16($_); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 71 /6 ib: VPSLLW xmm1, xmm2, imm8
+VPSLLW_imm AVX 01110001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 72 /6 ib: PSLLD mm, imm8
 PSLLD_imm MMX 00001111 01110010 \
   !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -1207,6 +2064,10 @@ PSLLD_imm MMX 00001111 01110010 \
 PSLLD_imm SSE2 00001111 01110010 \
   !constraints { data16($_); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 72 /6 ib: VPSLLD xmm1, xmm2, imm8
+VPSLLD_imm AVX 01110010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 73 /6 ib: PSLLQ mm, imm8
 PSLLQ_imm MMX 00001111 01110011 \
   !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -1215,10 +2076,18 @@ PSLLQ_imm MMX 00001111 01110011 \
 PSLLQ_imm SSE2 00001111 01110011 \
   !constraints { data16($_); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 73 /6 ib: VPSLLQ xmm1, xmm2, imm8
+VPSLLQ_imm AVX 01110011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # 66 0F 73 /7 ib: PSLLDQ xmm1, imm8
 PSLLDQ_imm SSE2 00001111 01110011 \
   !constraints { data16($_); modrm($_, reg => 7); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 73 /7 ib: VPSLLDQ xmm1, xmm2, imm8
+VPSLLDQ_imm AVX 01110011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 7); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F D1 /r: PSRLW mm, mm/m64
 PSRLW MMX 00001111 11010001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1229,6 +2098,11 @@ PSRLW SSE2 00001111 11010001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG D1 /r: VPSRLW xmm1, xmm2, xmm3/m128
+VPSRLW AVX 11010001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F D2 /r: PSRLD mm, mm/m64
 PSRLD MMX 00001111 11010010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1239,6 +2113,11 @@ PSRLD SSE2 00001111 11010010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG D2 /r: VPSRLD xmm1, xmm2, xmm3/m128
+VPSRLD AVX 11010010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F D3 /r: PSRLQ mm, mm/m64
 PSRLQ MMX 00001111 11010011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1249,6 +2128,11 @@ PSRLQ SSE2 00001111 11010011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG D3 /r: VPSRLQ xmm1, xmm2, xmm3/m128
+VPSRLQ AVX 11010011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 71 /2 ib: PSRLW mm, imm8
 PSRLW_imm MMX 00001111 01110001 \
   !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -1257,6 +2141,10 @@ PSRLW_imm MMX 00001111 01110001 \
 PSRLW_imm SSE2 00001111 01110001 \
   !constraints { data16($_); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 71 /2 ib: VPSRLW xmm1, xmm2, imm8
+VPSRLW_imm AVX 01110001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 72 /2 ib: PSRLD mm, imm8
 PSRLD_imm MMX 00001111 01110010 \
   !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -1265,6 +2153,10 @@ PSRLD_imm MMX 00001111 01110010 \
 PSRLD_imm SSE2 00001111 01110010 \
   !constraints { data16($_); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 72 /2 ib: VPSRLD xmm1, xmm2, imm8
+VPSRLD_imm AVX 01110010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 73 /2 ib: PSRLQ mm, imm8
 PSRLQ_imm MMX 00001111 01110011 \
   !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -1273,10 +2165,18 @@ PSRLQ_imm MMX 00001111 01110011 \
 PSRLQ_imm SSE2 00001111 01110011 \
   !constraints { data16($_); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 73 /2 ib: VPSRLQ xmm1, xmm2, imm8
+VPSRLQ_imm AVX 01110011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # 66 0F 73 /3 ib: PSRLDQ xmm1, imm8
 PSRLDQ_imm SSE2 00001111 01110011 \
   !constraints { data16($_); modrm($_, reg => 3); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 73 /3 ib: VPSRLDQ xmm1, xmm2, imm8
+VPSRLDQ_imm AVX 01110011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 3); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F E1 /r: PSRAW mm,mm/m64
 PSRAW MMX 00001111 11100001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1287,6 +2187,11 @@ PSRAW SSE2 00001111 11100001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E1 /r: VPSRAW xmm1,xmm2,xmm3/m128
+VPSRAW AVX 11100001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F E2 /r: PSRAD mm,mm/m64
 PSRAD MMX 00001111 11100010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1297,6 +2202,11 @@ PSRAD SSE2 00001111 11100010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E2 /r: VPSRAD xmm1,xmm2,xmm3/m128
+VPSRAD AVX 11100010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 71 /4 ib: PSRAW mm,imm8
 PSRAW_imm MMX 00001111 01110001 \
   !constraints { modrm($_, reg => 4); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -1305,6 +2215,10 @@ PSRAW_imm MMX 00001111 01110001 \
 PSRAW_imm SSE2 00001111 01110001 \
   !constraints { data16($_); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 71 /4 ib: VPSRAW xmm1,xmm2,imm8
+VPSRAW_imm AVX 01110001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 72 /4 ib: PSRAD mm,imm8
 PSRAD_imm MMX 00001111 01110010 \
   !constraints { modrm($_, reg => 4); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -1313,6 +2227,10 @@ PSRAD_imm MMX 00001111 01110010 \
 PSRAD_imm SSE2 00001111 01110010 \
   !constraints { data16($_); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.WIG 72 /4 ib: VPSRAD xmm1,xmm2,imm8
+VPSRAD_imm AVX 01110010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 3A 0F /r ib: PALIGNR mm1, mm2/m64, imm8
 PALIGNR_mm SSSE3 00001111 00111010 00001111 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1323,6 +2241,11 @@ PALIGNR SSSE3 00001111 00111010 00001111 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 0F /r ib: VPALIGNR xmm1, xmm2, xmm3/m128, imm8
+VPALIGNR AVX 00001111 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 #
 # Shuffle, Unpack, Blend, Insert, Extract, Broadcast, Permute, Gather Instructions
 # --------------------------------------------------------------------------------
@@ -1338,6 +2261,11 @@ PACKSSWB SSE2 00001111 01100011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 63 /r: VPACKSSWB xmm1,xmm2, xmm3/m128
+VPACKSSWB AVX 01100011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 6B /r: PACKSSDW mm1, mm2/m64
 PACKSSDW MMX 00001111 01101011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1348,6 +2276,11 @@ PACKSSDW SSE2 00001111 01101011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 6B /r: VPACKSSDW xmm1,xmm2, xmm3/m128
+VPACKSSDW AVX 01101011 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 67 /r: PACKUSWB mm, mm/m64
 PACKUSWB MMX 00001111 01100111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1358,11 +2291,21 @@ PACKUSWB SSE2 00001111 01100111 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 67 /r: VPACKUSWB xmm1, xmm2, xmm3/m128
+VPACKUSWB AVX 01100111 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 2B /r: PACKUSDW xmm1, xmm2/m128
 PACKUSDW SSE4_1 00001111 00111000 00101011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38 2B /r: VPACKUSDW xmm1,xmm2, xmm3/m128
+VPACKUSDW AVX 00101011 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 68 /r: PUNPCKHBW mm, mm/m64
 PUNPCKHBW MMX 00001111 01101000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1373,6 +2316,11 @@ PUNPCKHBW SSE2 00001111 01101000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 68 /r: VPUNPCKHBW xmm1,xmm2, xmm3/m128
+VPUNPCKHBW AVX 01101000 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 69 /r: PUNPCKHWD mm, mm/m64
 PUNPCKHWD MMX 00001111 01101001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1383,6 +2331,11 @@ PUNPCKHWD SSE2 00001111 01101001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 69 /r: VPUNPCKHWD xmm1,xmm2, xmm3/m128
+VPUNPCKHWD AVX 01101001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 6A /r: PUNPCKHDQ mm, mm/m64
 PUNPCKHDQ MMX 00001111 01101010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1393,11 +2346,21 @@ PUNPCKHDQ SSE2 00001111 01101010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 6A /r: VPUNPCKHDQ xmm1, xmm2, xmm3/m128
+VPUNPCKHDQ AVX 01101010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 6D /r: PUNPCKHQDQ xmm1, xmm2/m128
 PUNPCKHQDQ SSE2 00001111 01101101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 6D /r: VPUNPCKHQDQ xmm1, xmm2, xmm3/m128
+VPUNPCKHQDQ AVX 01101101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 60 /r: PUNPCKLBW mm, mm/m32
 PUNPCKLBW MMX 00001111 01100000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1408,6 +2371,11 @@ PUNPCKLBW SSE2 00001111 01100000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 60 /r: VPUNPCKLBW xmm1,xmm2, xmm3/m128
+VPUNPCKLBW AVX 01100000 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 61 /r: PUNPCKLWD mm, mm/m32
 PUNPCKLWD MMX 00001111 01100001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1418,6 +2386,11 @@ PUNPCKLWD SSE2 00001111 01100001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 61 /r: VPUNPCKLWD xmm1,xmm2, xmm3/m128
+VPUNPCKLWD AVX 01100001 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 62 /r: PUNPCKLDQ mm, mm/m32
 PUNPCKLDQ MMX 00001111 01100010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1428,31 +2401,61 @@ PUNPCKLDQ SSE2 00001111 01100010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 62 /r: VPUNPCKLDQ xmm1, xmm2, xmm3/m128
+VPUNPCKLDQ AVX 01100010 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 6C /r: PUNPCKLQDQ xmm1, xmm2/m128
 PUNPCKLQDQ SSE2 00001111 01101100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 6C /r: VPUNPCKLQDQ xmm1, xmm2, xmm3/m128
+VPUNPCKLQDQ AVX 01101100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 14 /r: UNPCKLPS xmm1, xmm2/m128
 UNPCKLPS SSE 00001111 00010100 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 14 /r: VUNPCKLPS xmm1,xmm2, xmm3/m128
+VUNPCKLPS AVX 00010100 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 14 /r: UNPCKLPD xmm1, xmm2/m128
 UNPCKLPD SSE2 00001111 00010100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 14 /r: VUNPCKLPD xmm1,xmm2, xmm3/m128
+VUNPCKLPD AVX 00010100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 15 /r: UNPCKHPS xmm1, xmm2/m128
 UNPCKHPS SSE 00001111 00010101 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 15 /r: VUNPCKHPS xmm1, xmm2, xmm3/m128
+VUNPCKHPS AVX 00010101 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 15 /r: UNPCKHPD xmm1, xmm2/m128
 UNPCKHPD SSE2 00001111 00010101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 15 /r: VUNPCKHPD xmm1,xmm2, xmm3/m128
+VUNPCKHPD AVX 00010101 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 38 00 /r: PSHUFB mm1, mm2/m64
 PSHUFB_mm SSSE3 00001111 00111000 00000000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1463,6 +2466,11 @@ PSHUFB SSSE3 00001111 00111000 00000000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 00 /r: VPSHUFB xmm1, xmm2, xmm3/m128
+VPSHUFB AVX 00000000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 70 /r ib: PSHUFW mm1, mm2/m64, imm8
 PSHUFW SSE 00001111 01110000 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1473,66 +2481,131 @@ PSHUFLW SSE2 00001111 01110000 \
   !constraints { repne($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.F2.0F.WIG 70 /r ib: VPSHUFLW xmm1, xmm2/m128, imm8
+VPSHUFLW AVX 01110000 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF2); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 70 /r ib: PSHUFHW xmm1, xmm2/m128, imm8
 PSHUFHW SSE2 00001111 01110000 \
   !constraints { rep($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.F3.0F.WIG 70 /r ib: VPSHUFHW xmm1, xmm2/m128, imm8
+VPSHUFHW AVX 01110000 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 70 /r ib: PSHUFD xmm1, xmm2/m128, imm8
 PSHUFD SSE2 00001111 01110000 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 70 /r ib: VPSHUFD xmm1, xmm2/m128, imm8
+VPSHUFD AVX 01110000 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F C6 /r ib: SHUFPS xmm1, xmm3/m128, imm8
 SHUFPS SSE 00001111 11000110 \
   !constraints { modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG C6 /r ib: VSHUFPS xmm1, xmm2, xmm3/m128, imm8
+VSHUFPS AVX 11000110 \
+  !constraints { vex($_, m => 0x0F, l => 128); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F C6 /r ib: SHUFPD xmm1, xmm2/m128, imm8
 SHUFPD SSE2 00001111 11000110 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG C6 /r ib: VSHUFPD xmm1, xmm2, xmm3/m128, imm8
+VSHUFPD AVX 11000110 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 0C /r ib: BLENDPS xmm1, xmm2/m128, imm8
 BLENDPS SSE4_1 00001111 00111010 00001100 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 0C /r ib: VBLENDPS xmm1, xmm2, xmm3/m128, imm8
+VBLENDPS AVX 00001100 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 0D /r ib: BLENDPD xmm1, xmm2/m128, imm8
 BLENDPD SSE4_1 00001111 00111010 00001101 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 0D /r ib: VBLENDPD xmm1, xmm2, xmm3/m128, imm8
+VBLENDPD AVX 00001101 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 14 /r: BLENDVPS xmm1, xmm2/m128, <XMM0>
 BLENDVPS SSE4_1 00001111 00111000 00010100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.W0 4A /r /is4: VBLENDVPS xmm1, xmm2, xmm3/m128, xmm4
+VBLENDVPS AVX 01001010 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 15 /r: BLENDVPD xmm1, xmm2/m128 , <XMM0>
 BLENDVPD SSE4_1 00001111 00111000 00010101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.W0 4B /r /is4: VBLENDVPD xmm1, xmm2, xmm3/m128, xmm4
+VBLENDVPD AVX 01001011 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 38 10 /r: PBLENDVB xmm1, xmm2/m128, <XMM0>
 PBLENDVB SSE4_1 00001111 00111000 00010000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.W0 4C /r /is4: VPBLENDVB xmm1, xmm2, xmm3/m128, xmm4
+VPBLENDVB AVX 01001100 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 0E /r ib: PBLENDW xmm1, xmm2/m128, imm8
 PBLENDW SSE4_1 00001111 00111010 00001110 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F3A.WIG 0E /r ib: VPBLENDW xmm1, xmm2, xmm3/m128, imm8
+VPBLENDW AVX 00001110 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 21 /r ib: INSERTPS xmm1, xmm2/m32, imm8
 INSERTPS SSE4_1 00001111 00111010 00100001 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 4); }
 
+# VEX.128.66.0F3A.WIG 21 /r ib: VINSERTPS xmm1, xmm2, xmm3/m32, imm8
+VINSERTPS AVX 00100001 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0F 3A 20 /r ib: PINSRB xmm1,r32/m8,imm8
 PINSRB SSE4_1 00001111 00111010 00100000 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { load(size => 1); }
 
+# VEX.128.66.0F3A.W0 20 /r ib: VPINSRB xmm1,xmm2,r32/m8,imm8
+VPINSRB AVX 00100000 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 1); }
+
 # NP 0F C4 /r ib: PINSRW mm, r32/m16, imm8
 PINSRW SSE 00001111 11000100 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
@@ -1543,41 +2616,81 @@ PINSRW SSE2 00001111 11000100 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { load(size => 2); }
 
+# VEX.128.66.0F.W0 C4 /r ib: VPINSRW xmm1, xmm2, r32/m16, imm8
+VPINSRW AVX 11000100 \
+  !constraints { vex($_, m => 0x0F, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 2); }
+
 # 66 0F 3A 22 /r ib: PINSRD xmm1,r/m32,imm8
 PINSRD SSE4_1 00001111 00111010 00100010 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { load(size => 4); }
 
+# VEX.128.66.0F3A.W0 22 /r ib: VPINSRD xmm1,xmm2,r/m32,imm8
+VPINSRD AVX 00100010 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 4); }
+
 # 66 REX.W 0F 3A 22 /r ib: PINSRQ xmm1,r/m64,imm8
 PINSRQ SSE4_1 00001111 00111010 00100010 \
   !constraints { data16($_); rex($_, w => 1); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { load(size => 8); }
 
+# VEX.128.66.0F3A.W1 22 /r ib: VPINSRQ xmm1,xmm2,r/m64,imm8
+VPINSRQ AVX 00100010 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 1); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 8); }
+
 # 66 0F 3A 17 /r ib: EXTRACTPS reg/m32, xmm1, imm8
 EXTRACTPS SSE4_1 00001111 00111010 00010111 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { store(size => 4); }
 
+# VEX.128.66.0F3A.WIG 17 /r ib: VEXTRACTPS reg/m32, xmm1, imm8
+VEXTRACTPS AVX 00010111 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 4); }
+
 # 66 0F 3A 14 /r ib: PEXTRB reg/m8,xmm2,imm8
 PEXTRB SSE4_1 00001111 00111010 00010100 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { store(size => 1); }
 
+# VEX.128.66.0F3A.W0 14 /r ib: VPEXTRB reg/m8,xmm2,imm8
+VPEXTRB AVX 00010100 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 1); }
+
 # 66 0F 3A 15 /r ib: PEXTRW reg/m16, xmm, imm8
 PEXTRW SSE4_1 00001111 00111010 00010101 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { store(size => 2); }
 
+# VEX.128.66.0F3A.W0 15 /r ib: VPEXTRW reg/m16, xmm2, imm8
+VPEXTRW AVX 00010101 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 2); }
+
 # 66 0F 3A 16 /r ib: PEXTRD r/m32,xmm2,imm8
 PEXTRD SSE4_1 00001111 00111010 00010110 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { store(size => 4); }
 
+# VEX.128.66.0F3A.W0 16 /r ib: VPEXTRD r32/m32,xmm2,imm8
+VPEXTRD AVX 00010110 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 4); }
+
 # 66 REX.W 0F 3A 16 /r ib: PEXTRQ r/m64,xmm2,imm8
 PEXTRQ SSE4_1 00001111 00111010 00010110 \
   !constraints { data16($_); rex($_, w => 1); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { store(size => 8); }
 
+# VEX.128.66.0F3A.W1 16 /r ib: VPEXTRQ r64/m64,xmm2,imm8
+VPEXTRQ AVX 00010110 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66, w => 1); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { store(size => 8); }
+
 # NP 0F C5 /r ib: PEXTRW reg, mm, imm8
 PEXTRW_reg SSE 00001111 11000101 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
@@ -1586,6 +2699,30 @@ PEXTRW_reg SSE 00001111 11000101 \
 PEXTRW_reg SSE2 00001111 11000101 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# VEX.128.66.0F.W0 C5 /r ib: VPEXTRW reg, xmm1, imm8
+VPEXTRW_reg AVX 11000101 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
+# VEX.128.66.0F38.W0 0C /r: VPERMILPS xmm1, xmm2, xmm3/m128
+VPERMILPS AVX 00001100 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.128.66.0F3A.W0 04 /r ib: VPERMILPS xmm1, xmm2/m128, imm8
+VPERMILPS_imm AVX 00000100 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.128.66.0F38.W0 0D /r: VPERMILPD xmm1, xmm2, xmm3/m128
+VPERMILPD AVX 00001101 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.128.66.0F3A.W0 05 /r ib: VPERMILPD xmm1, xmm2/m128, imm8
+VPERMILPD_imm AVX 00000101 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 #
 # Conversion Instructions
 # -----------------------
@@ -1596,61 +2733,121 @@ PMOVSXBW SSE4_1 00001111 00111000 00100000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.66.0F38.WIG 20 /r: VPMOVSXBW xmm1, xmm2/m64
+VPMOVSXBW AVX 00100000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0f 38 21 /r: PMOVSXBD xmm1, xmm2/m32
 PMOVSXBD SSE4_1 00001111 00111000 00100001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.128.66.0F38.WIG 21 /r: VPMOVSXBD xmm1, xmm2/m32
+VPMOVSXBD AVX 00100001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0f 38 22 /r: PMOVSXBQ xmm1, xmm2/m16
 PMOVSXBQ SSE4_1 00001111 00111000 00100010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 2); }
 
+# VEX.128.66.0F38.WIG 22 /r: VPMOVSXBQ xmm1, xmm2/m16
+VPMOVSXBQ AVX 00100010 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 2); }
+
 # 66 0f 38 23 /r: PMOVSXWD xmm1, xmm2/m64
 PMOVSXWD SSE4_1 00001111 00111000 00100011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.66.0F38.WIG 23 /r: VPMOVSXWD xmm1, xmm2/m64
+VPMOVSXWD AVX 00100011 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0f 38 24 /r: PMOVSXWQ xmm1, xmm2/m32
 PMOVSXWQ SSE4_1 00001111 00111000 00100100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.128.66.0F38.WIG 24 /r: VPMOVSXWQ xmm1, xmm2/m32
+VPMOVSXWQ AVX 00100100 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0f 38 25 /r: PMOVSXDQ xmm1, xmm2/m64
 PMOVSXDQ SSE4_1 00001111 00111000 00100101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.66.0F38.WIG 25 /r: VPMOVSXDQ xmm1, xmm2/m64
+VPMOVSXDQ AVX 00100101 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0f 38 30 /r: PMOVZXBW xmm1, xmm2/m64
 PMOVZXBW SSE4_1 00001111 00111000 00110000 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.66.0F38.WIG 30 /r: VPMOVZXBW xmm1, xmm2/m64
+VPMOVZXBW AVX 00110000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0f 38 31 /r: PMOVZXBD xmm1, xmm2/m32
 PMOVZXBD SSE4_1 00001111 00111000 00110001 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.128.66.0F38.WIG 31 /r: VPMOVZXBD xmm1, xmm2/m32
+VPMOVZXBD AVX 00110001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0f 38 32 /r: PMOVZXBQ xmm1, xmm2/m16
 PMOVZXBQ SSE4_1 00001111 00111000 00110010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 2); }
 
+# VEX.128.66.0F38.WIG 32 /r: VPMOVZXBQ xmm1, xmm2/m16
+VPMOVZXBQ AVX 00110010 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 2); }
+
 # 66 0f 38 33 /r: PMOVZXWD xmm1, xmm2/m64
 PMOVZXWD SSE4_1 00001111 00111000 00110011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.66.0F38.WIG 33 /r: VPMOVZXWD xmm1, xmm2/m64
+VPMOVZXWD AVX 00110011 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0f 38 34 /r: PMOVZXWQ xmm1, xmm2/m32
 PMOVZXWQ SSE4_1 00001111 00111000 00110100 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.128.66.0F38.WIG 34 /r: VPMOVZXWQ xmm1, xmm2/m32
+VPMOVZXWQ AVX 00110100 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0f 38 35 /r: PMOVZXDQ xmm1, xmm2/m64
 PMOVZXDQ SSE4_1 00001111 00111000 00110101 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.66.0F38.WIG 35 /r: VPMOVZXDQ xmm1, xmm2/m64
+VPMOVZXDQ AVX 00110101 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 2A /r: CVTPI2PS xmm, mm/m64
 CVTPI2PS SSE 00001111 00101010 \
   !constraints { modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1666,6 +2863,16 @@ CVTSI2SS_64 SSE2 00001111 00101010 \
   !constraints { rep($_); rex($_, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F3.0F.W0 2A /r: VCVTSI2SS xmm1,xmm2,r/m32
+VCVTSI2SS AVX 00101010 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3, w => 0); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 4); }
+
+# VEX.LIG.F3.0F.W1 2A /r: VCVTSI2SS xmm1,xmm2,r/m64
+VCVTSI2SS_64 AVX 00101010 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 8); }
+
 # 66 0F 2A /r: CVTPI2PD xmm, mm/m64
 CVTPI2PD SSE2 00001111 00101010 \
   !constraints { data16($_); modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1681,6 +2888,16 @@ CVTSI2SD_64 SSE2 00001111 00101010 \
   !constraints { repne($_); rex($_, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.W0 2A /r: VCVTSI2SD xmm1,xmm2,r/m32
+VCVTSI2SD AVX 00101010 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2, w => 0); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 4); }
+
+# VEX.LIG.F2.0F.W1 2A /r: VCVTSI2SD xmm1,xmm2,r/m64
+VCVTSI2SD_64 AVX 00101010 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
+  !memory { load(size => 8); }
+
 # NP 0F 2D /r: CVTPS2PI mm, xmm/m64
 CVTPS2PI SSE 00001111 00101101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
@@ -1696,6 +2913,16 @@ CVTSS2SI_64 SSE2 00001111 00101101 \
   !constraints { rep($_); rex($_, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
   !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
 
+# VEX.LIG.F3.0F.W0 2D /r: VCVTSS2SI r32,xmm1/m32
+VCVTSS2SI AVX 00101101 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0xF3, w => 0); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# VEX.LIG.F3.0F.W1 2D /r: VCVTSS2SI r64,xmm1/m32
+VCVTSS2SI_64 AVX 00101101 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0xF3, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
 # 66 0F 2D /r: CVTPD2PI mm, xmm/m128
 CVTPD2PI SSE2 00001111 00101101 \
   !constraints { data16($_); modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
@@ -1711,6 +2938,16 @@ CVTSD2SI_64 SSE2 00001111 00101101 \
   !constraints { repne($_); rex($_, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
   !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
 
+# VEX.LIG.F2.0F.W0 2D /r: VCVTSD2SI r32,xmm1/m64
+VCVTSD2SI AVX 00101101 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0xF2, w => 0); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# VEX.LIG.F2.0F.W1 2D /r: VCVTSD2SI r64,xmm1/m64
+VCVTSD2SI_64 AVX 00101101 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0xF2, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
 # NP 0F 2C /r: CVTTPS2PI mm, xmm/m64
 CVTTPS2PI SSE 00001111 00101100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
@@ -1726,6 +2963,16 @@ CVTTSS2SI_64 SSE2 00001111 00101100 \
   !constraints { rep($_); rex($_, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
   !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
 
+# VEX.LIG.F3.0F.W0 2C /r: VCVTTSS2SI r32,xmm1/m32
+VCVTTSS2SI AVX 00101100 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0xF3, w => 0); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# VEX.LIG.F3.0F.W1 2C /r: VCVTTSS2SI r64,xmm1/m32
+VCVTTSS2SI_64 AVX 00101100 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0xF3, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 4, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
 # 66 0F 2C /r: CVTTPD2PI mm, xmm/m128
 CVTTPD2PI SSE2 00001111 00101100 \
   !constraints { data16($_); modrm($_); $_->{modrm}{reg} &= 0b111; 1 } \
@@ -1741,56 +2988,116 @@ CVTTSD2SI_64 SSE2 00001111 00101100 \
   !constraints { repne($_); rex($_, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
   !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
 
+# VEX.LIG.F2.0F.W0 2C /r: VCVTTSD2SI r32,xmm1/m64
+VCVTTSD2SI AVX 00101100 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0xF2, w => 0); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
+# VEX.LIG.F2.0F.W1 2C /r: VCVTTSD2SI r64,xmm1/m64
+VCVTTSD2SI_64 AVX 00101100 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0, p => 0xF2, w => 1); modrm($_); $_->{modrm}{reg} != REG_RSP } \
+  !memory { load(size => 8, rollback => (defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg})); }
+
 # F2 0F E6 /r: CVTPD2DQ xmm1, xmm2/m128
 CVTPD2DQ SSE2 00001111 11100110 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.F2.0F.WIG E6 /r: VCVTPD2DQ xmm1, xmm2/m128
+VCVTPD2DQ AVX 11100110 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F E6 /r: CVTTPD2DQ xmm1, xmm2/m128
 CVTTPD2DQ SSE2 00001111 11100110 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E6 /r: VCVTTPD2DQ xmm1, xmm2/m128
+VCVTTPD2DQ AVX 11100110 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F E6 /r: CVTDQ2PD xmm1, xmm2/m64
 CVTDQ2PD SSE2 00001111 11100110 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.F3.0F.WIG E6 /r: VCVTDQ2PD xmm1, xmm2/m64
+VCVTDQ2PD AVX 11100110 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 5A /r: CVTPS2PD xmm1, xmm2/m64
 CVTPS2PD SSE2 00001111 01011010 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.128.0F.WIG 5A /r: VCVTPS2PD xmm1, xmm2/m64
+VCVTPS2PD AVX 01011010 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0F 5A /r: CVTPD2PS xmm1, xmm2/m128
 CVTPD2PS SSE2 00001111 01011010 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 5A /r: VCVTPD2PS xmm1, xmm2/m128
+VCVTPD2PS AVX 01011010 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 5A /r: CVTSS2SD xmm1, xmm2/m32
 CVTSS2SD SSE2 00001111 01011010 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.LIG.F3.0F.WIG 5A /r: VCVTSS2SD xmm1, xmm2, xmm3/m32
+VCVTSS2SD AVX 01011010 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # F2 0F 5A /r: CVTSD2SS xmm1, xmm2/m64
 CVTSD2SS SSE2 00001111 01011010 \
   !constraints { repne($_); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.LIG.F2.0F.WIG 5A /r: VCVTSD2SS xmm1,xmm2, xmm3/m64
+VCVTSD2SS AVX 01011010 \
+  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # NP 0F 5B /r: CVTDQ2PS xmm1, xmm2/m128
 CVTDQ2PS SSE2 00001111 01011011 \
   !constraints { modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 5B /r: VCVTDQ2PS xmm1, xmm2/m128
+VCVTDQ2PS AVX 01011011 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 5B /r: CVTPS2DQ xmm1, xmm2/m128
 CVTPS2DQ SSE2 00001111 01011011 \
   !constraints { data16($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 5B /r: VCVTPS2DQ xmm1, xmm2/m128
+VCVTPS2DQ AVX 01011011 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # F3 0F 5B /r: CVTTPS2DQ xmm1, xmm2/m128
 CVTTPS2DQ SSE2 00001111 01011011 \
   !constraints { rep($_); modrm($_); 1 } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.F3.0F.WIG 5B /r: VCVTTPS2DQ xmm1, xmm2/m128
+VCVTTPS2DQ AVX 01011011 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 #
 # Cacheability Control, Prefetch, and Instruction Ordering Instructions
 # ---------------------------------------------------------------------
@@ -1806,16 +3113,43 @@ MASKMOVDQU SSE2 00001111 11110111 \
   !constraints { data16($_); modrm($_); defined $_->{modrm}{reg2} } \
   !memory { load(size => 16, base => REG_RDI, rollback => 1); }
 
+# VEX.128.66.0F.WIG F7 /r: VMASKMOVDQU xmm1, xmm2
+VMASKMOVDQU AVX 11110111 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); defined $_->{modrm}{reg2} } \
+  !memory { load(size => 16, base => REG_RDI, rollback => 1); }
+
+# VEX.128.66.0F38.W0 2C /r: VMASKMOVPS xmm1, xmm2, m128
+# VEX.128.66.0F38.W0 2E /r: VMASKMOVPS m128, xmm1, xmm2
+VMASKMOVPS AVX 001011 d 0 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
+# VEX.128.66.0F38.W0 2D /r: VMASKMOVPD xmm1, xmm2, m128
+# VEX.128.66.0F38.W0 2F /r: VMASKMOVPD m128, xmm1, xmm2
+VMASKMOVPD AVX 001011 d 1 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
 # NP 0F 2B /r: MOVNTPS m128, xmm1
 MOVNTPS SSE 00001111 00101011 \
   !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { store(size => 16, align => 16); }
 
+# VEX.128.0F.WIG 2B /r: VMOVNTPS m128, xmm1
+VMOVNTPS AVX 00101011 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 16, align => 16); }
+
 # 66 0F 2B /r: MOVNTPD m128, xmm1
 MOVNTPD SSE2 00001111 00101011 \
   !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { store(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG 2B /r: VMOVNTPD m128, xmm1
+VMOVNTPD AVX 00101011 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 16, align => 16); }
+
 # NP 0F C3 /r: MOVNTI m32, r32
 MOVNTI SSE2 00001111 11000011 \
   !constraints { modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg} } \
@@ -1836,11 +3170,21 @@ MOVNTDQ SSE2 00001111 11100111 \
   !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { store(size => 16, align => 16); }
 
+# VEX.128.66.0F.WIG E7 /r: VMOVNTDQ m128, xmm1
+VMOVNTDQ AVX 11100111 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 16, align => 16); }
+
 # 66 0F 38 2A /r: MOVNTDQA xmm1, m128
 MOVNTDQA SSE4_1 00001111 00111000 00101010 \
   !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.128.66.0F38.WIG 2A /r: VMOVNTDQA xmm1, m128
+VMOVNTDQA AVX 00101010 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 16, align => 16); }
+
 # 0F 18 /1: PREFETCHT0 m8
 PREFETCHT0 SSE 00001111 00011000 \
   !constraints { modrm($_, reg => 1); !defined $_->{modrm}{reg2} } \
@@ -1887,12 +3231,30 @@ PAUSE SSE2 10010000 \
 # NP 0F 77: EMMS
 EMMS MMX 00001111 01110111
 
+# VEX.128.0F.WIG 77: VZEROUPPER
+VZEROUPPER AVX 01110111 \
+  !constraints { vex($_, m => 0x0F, l => 128, v => 0); 1 }
+
+# VEX.256.0F.WIG 77: VZEROALL
+VZEROALL AVX 01110111 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); 1 }
+
 # NP 0F AE /2: LDMXCSR m32
 LDMXCSR SSE 00001111 10101110 \
   !constraints { modrm($_, reg => 2); !defined $_->{modrm}{reg2} } \
   !memory { load(size => 4, value => 0x000001f80, mask => 0xffff1f80); }
 
+# VEX.LZ.0F.WIG AE /2: VLDMXCSR m32
+VLDMXCSR AVX 10101110 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0); modrm($_, reg => 2); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 4, value => 0x000001f80, mask => 0xffff1f80); }
+
 # NP 0F AE /3: STMXCSR m32
 STMXCSR SSE 00001111 10101110 \
   !constraints { modrm($_, reg => 3); !defined $_->{modrm}{reg2} } \
   !memory { store(size => 4); }
+
+# VEX.LZ.0F.WIG AE /3: VSTMXCSR m32
+VSTMXCSR AVX 10101110 \
+  !constraints { vex($_, m => 0x0F, l => 0, v => 0); modrm($_, reg => 3); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 4); }
-- 
2.20.1




* [Qemu-devel] [RISU PATCH v3 18/18] x86.risu: add AVX2 instructions
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (16 preceding siblings ...)
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 17/18] x86.risu: add AVX instructions Jan Bobek
@ 2019-07-11 22:33 ` Jan Bobek
  2019-07-21  0:46   ` Richard Henderson
  2019-07-12 13:34 ` [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Alex Bennée
  18 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-11 22:33 UTC (permalink / raw)
  To: qemu-devel; +Cc: Jan Bobek, Alex Bennée, Richard Henderson

Add AVX2 instructions to the configuration file.
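
Most of the new patterns mirror their 128-bit AVX counterparts: the VEX
fields from the opcode comment map onto the vex() constraint arguments,
with the vector length raised to 256 bits and the memory accesses widened
to 32 bytes. A minimal annotated sketch, based on the VPADDB entry added
below (the per-field notes are explanatory comments only, not part of the
actual hunk):

  # VEX.256.66.0F.WIG FC /r: VPADDB ymm1, ymm2, ymm3/m256
  #  - "AVX2" is the instruction-set tag recorded for this pattern
  #  - "11111100" (0xFC) is the opcode byte emitted after the VEX prefix
  #  - vex($_, m => 0x0F, l => 256, p => 0x66) selects the 0F opcode map,
  #    a 256-bit vector length (VEX.L = 1) and the implied 66 prefix
  #  - load(size => 32) requests a 32-byte (ymm-wide) memory operand
  VPADDB AVX2 11111100 \
    !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
    !memory { load(size => 32); }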

Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
---
 x86.risu | 1239 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 1239 insertions(+)

diff --git a/x86.risu b/x86.risu
index 03ffc89..1705a8e 100644
--- a/x86.risu
+++ b/x86.risu
@@ -91,6 +91,12 @@ VMOVAPS AVX 0010100 d \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
   !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
 
+# VEX.256.0F.WIG 28 /r: VMOVAPS ymm1, ymm2/m256
+# VEX.256.0F.WIG 29 /r: VMOVAPS ymm2/m256, ymm1
+VMOVAPS AVX2 0010100 d \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); 1 } \
+  !memory { $d ? store(size => 32, align => 32) : load(size => 32, align => 32); }
+
 # 66 0F 28 /r: MOVAPD xmm1, xmm2/m128
 # 66 0F 29 /r: MOVAPD xmm2/m128, xmm1
 MOVAPD SSE2 00001111 0010100 d \
@@ -103,6 +109,12 @@ VMOVAPD AVX 0010100 d \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
 
+# VEX.256.66.0F.WIG 28 /r: VMOVAPD ymm1, ymm2/m256
+# VEX.256.66.0F.WIG 29 /r: VMOVAPD ymm2/m256, ymm1
+VMOVAPD AVX2 0010100 d \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { $d ? store(size => 32, align => 32) : load(size => 32, align => 32); }
+
 # 66 0F 6F /r: MOVDQA xmm1, xmm2/m128
 # 66 0F 7F /r: MOVDQA xmm2/m128, xmm1
 MOVDQA SSE2 00001111 011 d 1111 \
@@ -115,6 +127,12 @@ VMOVDQA AVX 011 d 1111 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { $d ? store(size => 16, align => 16) : load(size => 16, align => 16); }
 
+# VEX.256.66.0F.WIG 6F /r: VMOVDQA ymm1, ymm2/m256
+# VEX.256.66.0F.WIG 7F /r: VMOVDQA ymm2/m256, ymm1
+VMOVDQA AVX2 011 d 1111 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { $d ? store(size => 32, align => 32) : load(size => 32, align => 32); }
+
 # NP 0F 10 /r: MOVUPS xmm1, xmm2/m128
 # NP 0F 11 /r: MOVUPS xmm2/m128, xmm1
 MOVUPS SSE 00001111 0001000 d \
@@ -127,6 +145,12 @@ VMOVUPS AVX 0001000 d \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
   !memory { $d ? store(size => 16) : load(size => 16); }
 
+# VEX.256.0F.WIG 10 /r: VMOVUPS ymm1, ymm2/m256
+# VEX.256.0F.WIG 11 /r: VMOVUPS ymm2/m256, ymm1
+VMOVUPS AVX2 0001000 d \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); 1 } \
+  !memory { $d ? store(size => 32) : load(size => 32); }
+
 # 66 0F 10 /r: MOVUPD xmm1, xmm2/m128
 # 66 0F 11 /r: MOVUPD xmm2/m128, xmm1
 MOVUPD SSE2 00001111 0001000 d \
@@ -139,6 +163,12 @@ VMOVUPD AVX 0001000 d \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { $d ? store(size => 16) : load(size => 16); }
 
+# VEX.256.66.0F.WIG 10 /r: VMOVUPD ymm1, ymm2/m256
+# VEX.256.66.0F.WIG 11 /r: VMOVUPD ymm2/m256, ymm1
+VMOVUPD AVX2 0001000 d \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { $d ? store(size => 32) : load(size => 32); }
+
 # F3 0F 6F /r: MOVDQU xmm1,xmm2/m128
 # F3 0F 7F /r: MOVDQU xmm2/m128,xmm1
 MOVDQU SSE2 00001111 011 d 1111 \
@@ -151,6 +181,12 @@ VMOVDQU AVX 011 d 1111 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
   !memory { $d ? store(size => 16) : load(size => 16); }
 
+# VEX.256.F3.0F.WIG 6F /r: VMOVDQU ymm1,ymm2/m256
+# VEX.256.F3.0F.WIG 7F /r: VMOVDQU ymm2/m256,ymm1
+VMOVDQU AVX2 011 d 1111 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { $d ? store(size => 32) : load(size => 32); }
+
 # F3 0F 10 /r: MOVSS xmm1, xmm2/m32
 # F3 0F 11 /r: MOVSS xmm2/m32, xmm1
 MOVSS SSE 00001111 0001000 d \
@@ -263,6 +299,10 @@ PMOVMSKB SSE2 00001111 11010111 \
 VPMOVMSKB AVX 11010111 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG D7 /r: VPMOVMSKB reg, ymm1
+VPMOVMSKB AVX2 11010111 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 # NP 0F 50 /r: MOVMSKPS reg, xmm
 MOVMSKPS SSE 00001111 01010000 \
   !constraints { modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
@@ -271,6 +311,10 @@ MOVMSKPS SSE 00001111 01010000 \
 VMOVMSKPS AVX 01010000 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# VEX.256.0F.WIG 50 /r: VMOVMSKPS reg, ymm2
+VMOVMSKPS AVX2 01010000 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 # 66 0F 50 /r: MOVMSKPD reg, xmm
 MOVMSKPD SSE2 00001111 01010000 \
   !constraints { data16($_); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
@@ -279,6 +323,10 @@ MOVMSKPD SSE2 00001111 01010000 \
 VMOVMSKPD AVX 01010000 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 50 /r: VMOVMSKPD reg, ymm2
+VMOVMSKPD AVX2 01010000 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
+
 # F2 0F F0 /r: LDDQU xmm1, m128
 LDDQU SSE3 00001111 11110000 \
   !constraints { repne($_); modrm($_); !defined $_->{modrm}{reg2} } \
@@ -289,6 +337,11 @@ VLDDQU AVX 11110000 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF2); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { load(size => 16); }
 
+# VEX.256.F2.0F.WIG F0 /r: VLDDQU ymm1, m256
+VLDDQU AVX2 11110000 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF2); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 32); }
+
 # F3 0F 16 /r: MOVSHDUP xmm1, xmm2/m128
 MOVSHDUP SSE3 00001111 00010110 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -299,6 +352,11 @@ VMOVSHDUP AVX 00010110 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.F3.0F.WIG 16 /r: VMOVSHDUP ymm1, ymm2/m256
+VMOVSHDUP AVX2 00010110 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 12 /r: MOVSLDUP xmm1, xmm2/m128
 MOVSLDUP SSE3 00001111 00010010 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -309,6 +367,11 @@ VMOVSLDUP AVX 00010010 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.F3.0F.WIG 12 /r: VMOVSLDUP ymm1, ymm2/m256
+VMOVSLDUP AVX2 00010010 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F2 0F 12 /r: MOVDDUP xmm1, xmm2/m64
 MOVDDUP SSE3 00001111 00010010 \
   !constraints { repne($_); modrm($_); 1 } \
@@ -319,6 +382,11 @@ VMOVDDUP AVX 00010010 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF2); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.256.F2.0F.WIG 12 /r: VMOVDDUP ymm1, ymm2/m256
+VMOVDDUP AVX2 00010010 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 #
 # Arithmetic Instructions
 # -----------------------
@@ -339,6 +407,11 @@ VPADDB AVX 11111100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG FC /r: VPADDB ymm1, ymm2, ymm3/m256
+VPADDB AVX2 11111100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F FD /r: PADDW mm, mm/m64
 PADDW MMX 00001111 11111101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -354,6 +427,11 @@ VPADDW AVX 11111101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG FD /r: VPADDW ymm1, ymm2, ymm3/m256
+VPADDW AVX2 11111101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F FE /r: PADDD mm, mm/m64
 PADDD MMX 00001111 11111110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -369,6 +447,11 @@ VPADDD AVX 11111110 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG FE /r: VPADDD ymm1, ymm2, ymm3/m256
+VPADDD AVX2 11111110 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F D4 /r: PADDQ mm, mm/m64
 PADDQ_mm SSE2 00001111 11010100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -384,6 +467,11 @@ VPADDQ AVX 11010100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG D4 /r: VPADDQ ymm1, ymm2, ymm3/m256
+VPADDQ AVX2 11010100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F EC /r: PADDSB mm, mm/m64
 PADDSB MMX 00001111 11101100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -399,6 +487,11 @@ VPADDSB AVX 11101100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG EC /r: VPADDSB ymm1, ymm2, ymm3/m256
+VPADDSB AVX2 11101100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F ED /r: PADDSW mm, mm/m64
 PADDSW MMX 00001111 11101101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -414,6 +507,11 @@ VPADDSW AVX 11101101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG ED /r: VPADDSW ymm1, ymm2, ymm3/m256
+VPADDSW AVX2 11101101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F DC /r: PADDUSB mm,mm/m64
 PADDUSB MMX 00001111 11011100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -429,6 +527,11 @@ VPADDUSB AVX 11011100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG DC /r: VPADDUSB ymm1,ymm2,ymm3/m256
+VPADDUSB AVX2 11011100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F DD /r: PADDUSW mm,mm/m64
 PADDUSW MMX 00001111 11011101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -444,6 +547,11 @@ VPADDUSW AVX 11011101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG DD /r: VPADDUSW ymm1,ymm2,ymm3/m256
+VPADDUSW AVX2 11011101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 58 /r: ADDPS xmm1, xmm2/m128
 ADDPS SSE 00001111 01011000 \
   !constraints { modrm($_); 1 } \
@@ -454,6 +562,11 @@ VADDPS AVX 01011000 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 58 /r: VADDPS ymm1, ymm2, ymm3/m256
+VADDPS AVX2 01011000 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 58 /r: ADDPD xmm1, xmm2/m128
 ADDPD SSE2 00001111 01011000 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -464,6 +577,11 @@ VADDPD AVX 01011000 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 58 /r: VADDPD ymm1, ymm2, ymm3/m256
+VADDPD AVX2 01011000 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 58 /r: ADDSS xmm1, xmm2/m32
 ADDSS SSE 00001111 01011000 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -499,6 +617,11 @@ VPHADDW AVX 00000001 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 01 /r: VPHADDW ymm1, ymm2, ymm3/m256
+VPHADDW AVX2 00000001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 02 /r: PHADDD mm1, mm2/m64
 PHADDD_mm SSSE3 00001111 00111000 00000010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -514,6 +637,11 @@ VPHADDD AVX 00000010 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 02 /r: VPHADDD ymm1, ymm2, ymm3/m256
+VPHADDD AVX2 00000010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 03 /r: PHADDSW mm1, mm2/m64
 PHADDSW_mm SSSE3 00001111 00111000 00000011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -529,6 +657,11 @@ VPHADDSW AVX 00000011 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 03 /r: VPHADDSW ymm1, ymm2, ymm3/m256
+VPHADDSW AVX2 00000011 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F2 0F 7C /r: HADDPS xmm1, xmm2/m128
 HADDPS SSE3 00001111 01111100 \
   !constraints { repne($_); modrm($_); 1 } \
@@ -539,6 +672,11 @@ VHADDPS AVX 01111100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0xF2); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.F2.0F.WIG 7C /r: VHADDPS ymm1, ymm2, ymm3/m256
+VHADDPS AVX2 01111100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 7C /r: HADDPD xmm1, xmm2/m128
 HADDPD SSE3 00001111 01111100 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -549,6 +687,11 @@ VHADDPD AVX 01111100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 7C /r: VHADDPD ymm1, ymm2, ymm3/m256
+VHADDPD AVX2 01111100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F F8 /r: PSUBB mm, mm/m64
 PSUBB MMX 00001111 11111000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -564,6 +707,11 @@ VPSUBB AVX 11111000 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG F8 /r: VPSUBB ymm1, ymm2, ymm3/m256
+VPSUBB AVX2 11111000 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F F9 /r: PSUBW mm, mm/m64
 PSUBW MMX 00001111 11111001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -579,6 +727,11 @@ VPSUBW AVX 11111001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG F9 /r: VPSUBW ymm1, ymm2, ymm3/m256
+VPSUBW AVX2 11111001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F FA /r: PSUBD mm, mm/m64
 PSUBD MMX 00001111 11111010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -594,6 +747,11 @@ VPSUBD AVX 11111010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG FA /r: VPSUBD ymm1, ymm2, ymm3/m256
+VPSUBD AVX2 11111010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F FB /r: PSUBQ mm1, mm2/m64
 PSUBQ_mm SSE2 00001111 11111011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -609,6 +767,11 @@ VPSUBQ AVX 11111011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG FB /r: VPSUBQ ymm1, ymm2, ymm3/m256
+VPSUBQ AVX2 11111011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F E8 /r: PSUBSB mm, mm/m64
 PSUBSB MMX 00001111 11101000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -624,6 +787,11 @@ VPSUBSB AVX 11101000 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG E8 /r: VPSUBSB ymm1, ymm2, ymm3/m256
+VPSUBSB AVX2 11101000 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F E9 /r: PSUBSW mm, mm/m64
 PSUBSW MMX 00001111 11101001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -639,6 +807,11 @@ VPSUBSW AVX 11101001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG E9 /r: VPSUBSW ymm1, ymm2, ymm3/m256
+VPSUBSW AVX2 11101001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F D8 /r: PSUBUSB mm, mm/m64
 PSUBUSB MMX 00001111 11011000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -654,6 +827,11 @@ VPSUBUSB AVX 11011000 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG D8 /r: VPSUBUSB ymm1, ymm2, ymm3/m256
+VPSUBUSB AVX2 11011000 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F D9 /r: PSUBUSW mm, mm/m64
 PSUBUSW MMX 00001111 11011001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -669,6 +847,11 @@ VPSUBUSW AVX 11011001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG D9 /r: VPSUBUSW ymm1, ymm2, ymm3/m256
+VPSUBUSW AVX2 11011001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 5C /r: SUBPS xmm1, xmm2/m128
 SUBPS SSE 00001111 01011100 \
   !constraints { modrm($_); 1 } \
@@ -679,6 +862,11 @@ VSUBPS AVX 01011100 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 5C /r: VSUBPS ymm1, ymm2, ymm3/m256
+VSUBPS AVX2 01011100 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 5C /r: SUBPD xmm1, xmm2/m128
 SUBPD SSE2 00001111 01011100 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -689,6 +877,11 @@ VSUBPD AVX 01011100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 5C /r: VSUBPD ymm1, ymm2, ymm3/m256
+VSUBPD AVX2 01011100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 5C /r: SUBSS xmm1, xmm2/m32
 SUBSS SSE 00001111 01011100 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -724,6 +917,11 @@ VPHSUBW AVX 00000101 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 05 /r: VPHSUBW ymm1, ymm2, ymm3/m256
+VPHSUBW AVX2 00000101 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 06 /r: PHSUBD mm1, mm2/m64
 PHSUBD_mm SSSE3 00001111 00111000 00000110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -739,6 +937,11 @@ VPHSUBD AVX 00000110 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 06 /r: VPHSUBD ymm1, ymm2, ymm3/m256
+VPHSUBD AVX2 00000110 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 07 /r: PHSUBSW mm1, mm2/m64
 PHSUBSW_mm SSSE3 00001111 00111000 00000111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -754,6 +957,11 @@ VPHSUBSW AVX 00000111 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 07 /r: VPHSUBSW ymm1, ymm2, ymm3/m256
+VPHSUBSW AVX2 00000111 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F2 0F 7D /r: HSUBPS xmm1, xmm2/m128
 HSUBPS SSE3 00001111 01111101 \
   !constraints { repne($_); modrm($_); 1 } \
@@ -764,6 +972,11 @@ VHSUBPS AVX 01111101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0xF2); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.F2.0F.WIG 7D /r: VHSUBPS ymm1, ymm2, ymm3/m256
+VHSUBPS AVX2 01111101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 7D /r: HSUBPD xmm1, xmm2/m128
 HSUBPD SSE3 00001111 01111101 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -774,6 +987,11 @@ VHSUBPD AVX 01111101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 7D /r: VHSUBPD ymm1, ymm2, ymm3/m256
+VHSUBPD AVX2 01111101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F2 0F D0 /r: ADDSUBPS xmm1, xmm2/m128
 ADDSUBPS SSE3 00001111 11010000 \
   !constraints { repne($_); modrm($_); 1 } \
@@ -784,6 +1002,11 @@ VADDSUBPS AVX 11010000 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0xF2); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.F2.0F.WIG D0 /r: VADDSUBPS ymm1, ymm2, ymm3/m256
+VADDSUBPS AVX2 11010000 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F D0 /r: ADDSUBPD xmm1, xmm2/m128
 ADDSUBPD SSE3 00001111 11010000 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -794,6 +1017,11 @@ VADDSUBPD AVX 11010000 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG D0 /r: VADDSUBPD ymm1, ymm2, ymm3/m256
+VADDSUBPD AVX2 11010000 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F D5 /r: PMULLW mm, mm/m64
 PMULLW MMX 00001111 11010101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -809,6 +1037,11 @@ VPMULLW AVX 11010101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG D5 /r: VPMULLW ymm1, ymm2, ymm3/m256
+VPMULLW AVX2 11010101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 40 /r: PMULLD xmm1, xmm2/m128
 PMULLD SSE4_1 00001111 00111000 01000000 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -819,6 +1052,11 @@ VPMULLD AVX 01000000 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 40 /r: VPMULLD ymm1, ymm2, ymm3/m256
+VPMULLD AVX2 01000000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F E5 /r: PMULHW mm, mm/m64
 PMULHW MMX 00001111 11100101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -834,6 +1072,11 @@ VPMULHW AVX 11100101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG E5 /r: VPMULHW ymm1, ymm2, ymm3/m256
+VPMULHW AVX2 11100101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F E4 /r: PMULHUW mm1, mm2/m64
 PMULHUW SSE 00001111 11100100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -849,6 +1092,11 @@ VPMULHUW AVX 11100100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG E4 /r: VPMULHUW ymm1, ymm2, ymm3/m256
+VPMULHUW AVX2 11100100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 28 /r: PMULDQ xmm1, xmm2/m128
 PMULDQ SSE4_1 00001111 00111000 00101000 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -859,6 +1107,11 @@ VPMULDQ AVX 00101000 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 28 /r: VPMULDQ ymm1, ymm2, ymm3/m256
+VPMULDQ AVX2 00101000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F F4 /r: PMULUDQ mm1, mm2/m64
 PMULUDQ_mm SSE2 00001111 11110100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -874,6 +1127,11 @@ VPMULUDQ AVX 11110100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG F4 /r: VPMULUDQ ymm1, ymm2, ymm3/m256
+VPMULUDQ AVX2 11110100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 0B /r: PMULHRSW mm1, mm2/m64
 PMULHRSW_mm SSSE3 00001111 00111000 00001011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -889,6 +1147,11 @@ VPMULHRSW AVX 00001011 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 0B /r: VPMULHRSW ymm1, ymm2, ymm3/m256
+VPMULHRSW AVX2 00001011 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 59 /r: MULPS xmm1, xmm2/m128
 MULPS SSE 00001111 01011001 \
   !constraints { modrm($_); 1 } \
@@ -899,6 +1162,11 @@ VMULPS AVX 01011001 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 59 /r: VMULPS ymm1, ymm2, ymm3/m256
+VMULPS AVX2 01011001 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 59 /r: MULPD xmm1, xmm2/m128
 MULPD SSE2 00001111 01011001 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -909,6 +1177,11 @@ VMULPD AVX 01011001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 59 /r: VMULPD ymm1, ymm2, ymm3/m256
+VMULPD AVX2 01011001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 59 /r: MULSS xmm1,xmm2/m32
 MULSS SSE 00001111 01011001 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -944,6 +1217,11 @@ VPMADDWD AVX 11110101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG F5 /r: VPMADDWD ymm1, ymm2, ymm3/m256
+VPMADDWD AVX2 11110101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 04 /r: PMADDUBSW mm1, mm2/m64
 PMADDUBSW_mm SSSE3 00001111 00111000 00000100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -959,6 +1237,11 @@ VPMADDUBSW AVX 00000100 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 04 /r: VPMADDUBSW ymm1, ymm2, ymm3/m256
+VPMADDUBSW AVX2 00000100 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 5E /r: DIVPS xmm1, xmm2/m128
 DIVPS SSE 00001111 01011110 \
   !constraints { modrm($_); 1 } \
@@ -969,6 +1252,11 @@ VDIVPS AVX 01011110 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 5E /r: VDIVPS ymm1, ymm2, ymm3/m256
+VDIVPS AVX2 01011110 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 5E /r: DIVPD xmm1, xmm2/m128
 DIVPD SSE2 00001111 01011110 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -979,6 +1267,11 @@ VDIVPD AVX 01011110 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 5E /r: VDIVPD ymm1, ymm2, ymm3/m256
+VDIVPD AVX2 01011110 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 5E /r: DIVSS xmm1, xmm2/m32
 DIVSS SSE 00001111 01011110 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -1009,6 +1302,11 @@ VRCPPS AVX 01010011 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 53 /r: VRCPPS ymm1, ymm2/m256
+VRCPPS AVX2 01010011 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 53 /r: RCPSS xmm1, xmm2/m32
 RCPSS SSE 00001111 01010011 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -1029,6 +1327,11 @@ VSQRTPS AVX 01010001 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 51 /r: VSQRTPS ymm1, ymm2/m256
+VSQRTPS AVX2 01010001 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 51 /r: SQRTPD xmm1, xmm2/m128
 SQRTPD SSE2 00001111 01010001 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1039,6 +1342,11 @@ VSQRTPD AVX 01010001 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 51 /r: VSQRTPD ymm1, ymm2/m256
+VSQRTPD AVX2 01010001 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 51 /r: SQRTSS xmm1, xmm2/m32
 SQRTSS SSE 00001111 01010001 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -1069,6 +1377,11 @@ VRSQRTPS AVX 01010010 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 52 /r: VRSQRTPS ymm1, ymm2/m256
+VRSQRTPS AVX2 01010010 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 52 /r: RSQRTSS xmm1, xmm2/m32
 RSQRTSS SSE 00001111 01010010 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -1094,6 +1407,11 @@ VPMINUB AVX 11011010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F DA /r: VPMINUB ymm1, ymm2, ymm3/m256
+VPMINUB AVX2 11011010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 3A /r: PMINUW xmm1, xmm2/m128
 PMINUW SSE4_1 00001111 00111000 00111010 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1104,6 +1422,11 @@ VPMINUW AVX 00111010 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38 3A /r: VPMINUW ymm1, ymm2, ymm3/m256
+VPMINUW AVX2 00111010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 3B /r: PMINUD xmm1, xmm2/m128
 PMINUD SSE4_1 00001111 00111000 00111011 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1114,6 +1437,11 @@ VPMINUD AVX 00111011 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 3B /r: VPMINUD ymm1, ymm2, ymm3/m256
+VPMINUD AVX2 00111011 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 38 /r: PMINSB xmm1, xmm2/m128
 PMINSB SSE4_1 00001111 00111000 00111000 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1124,6 +1452,11 @@ VPMINSB AVX 00111000 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38 38 /r: VPMINSB ymm1, ymm2, ymm3/m256
+VPMINSB AVX2 00111000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F EA /r: PMINSW mm1, mm2/m64
 PMINSW SSE 00001111 11101010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1139,6 +1472,11 @@ VPMINSW AVX 11101010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F EA /r: VPMINSW ymm1, ymm2, ymm3/m256
+VPMINSW AVX2 11101010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 39 /r: PMINSD xmm1, xmm2/m128
 PMINSD SSE4_1 00001111 00111000 00111001 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1149,6 +1487,11 @@ VPMINSD AVX 00111001 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 39 /r: VPMINSD ymm1, ymm2, ymm3/m256
+VPMINSD AVX2 00111001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 5D /r: MINPS xmm1, xmm2/m128
 MINPS SSE 00001111 01011101 \
   !constraints { modrm($_); 1 } \
@@ -1159,6 +1502,11 @@ VMINPS AVX 01011101 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 5D /r: VMINPS ymm1, ymm2, ymm3/m256
+VMINPS AVX2 01011101 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 5D /r: MINPD xmm1, xmm2/m128
 MINPD SSE2 00001111 01011101 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1169,6 +1517,11 @@ VMINPD AVX 01011101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 5D /r: VMINPD ymm1, ymm2, ymm3/m256
+VMINPD AVX2 01011101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 5D /r: MINSS xmm1,xmm2/m32
 MINSS SSE 00001111 01011101 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -1214,6 +1567,11 @@ VPMAXUB AVX 11011110 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F DE /r: VPMAXUB ymm1, ymm2, ymm3/m256
+VPMAXUB AVX2 11011110 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 3E /r: PMAXUW xmm1, xmm2/m128
 PMAXUW SSE4_1 00001111 00111000 00111110 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1224,6 +1582,11 @@ VPMAXUW AVX 00111110 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38 3E /r: VPMAXUW ymm1, ymm2, ymm3/m256
+VPMAXUW AVX2 00111110 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 3F /r: PMAXUD xmm1, xmm2/m128
 PMAXUD SSE4_1 00001111 00111000 00111111 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1234,6 +1597,11 @@ VPMAXUD AVX 00111111 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 3F /r: VPMAXUD ymm1, ymm2, ymm3/m256
+VPMAXUD AVX2 00111111 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 3C /r: PMAXSB xmm1, xmm2/m128
 PMAXSB SSE4_1 00001111 00111000 00111100 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1244,6 +1612,11 @@ VPMAXSB AVX 00111100 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 3C /r: VPMAXSB ymm1, ymm2, ymm3/m256
+VPMAXSB AVX2 00111100 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F EE /r: PMAXSW mm1, mm2/m64
 PMAXSW SSE 00001111 11101110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1259,6 +1632,11 @@ VPMAXSW AVX 11101110 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG EE /r: VPMAXSW ymm1, ymm2, ymm3/m256
+VPMAXSW AVX2 11101110 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 3D /r: PMAXSD xmm1, xmm2/m128
 PMAXSD SSE4_1 00001111 00111000 00111101 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1269,6 +1647,11 @@ VPMAXSD AVX 00111101 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 3D /r: VPMAXSD ymm1, ymm2, ymm3/m256
+VPMAXSD AVX2 00111101 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 5F /r: MAXPS xmm1, xmm2/m128
 MAXPS SSE 00001111 01011111 \
   !constraints { modrm($_); 1 } \
@@ -1279,6 +1662,11 @@ VMAXPS AVX 01011111 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 5F /r: VMAXPS ymm1, ymm2, ymm3/m256
+VMAXPS AVX2 01011111 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 5F /r: MAXPD xmm1, xmm2/m128
 MAXPD SSE2 00001111 01011111 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1289,6 +1677,11 @@ VMAXPD AVX 01011111 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 5F /r: VMAXPD ymm1, ymm2, ymm3/m256
+VMAXPD AVX2 01011111 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 5F /r: MAXSS xmm1, xmm2/m32
 MAXSS SSE 00001111 01011111 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -1324,6 +1717,11 @@ VPAVGB AVX 11100000 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG E0 /r: VPAVGB ymm1, ymm2, ymm3/m256
+VPAVGB AVX2 11100000 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F E3 /r: PAVGW mm1, mm2/m64
 PAVGW SSE 00001111 11100011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1339,6 +1737,11 @@ VPAVGW AVX 11100011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG E3 /r: VPAVGW ymm1, ymm2, ymm3/m256
+VPAVGW AVX2 11100011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F F6 /r: PSADBW mm1, mm2/m64
 PSADBW SSE 00001111 11110110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1354,6 +1757,11 @@ VPSADBW AVX 11110110 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG F6 /r: VPSADBW ymm1, ymm2, ymm3/m256
+VPSADBW AVX2 11110110 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 42 /r ib: MPSADBW xmm1, xmm2/m128, imm8
 MPSADBW SSE4_1 00001111 00111010 01000010 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -1364,6 +1772,11 @@ VMPSADBW AVX 01000010 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.WIG 42 /r ib: VMPSADBW ymm1, ymm2, ymm3/m256, imm8
+VMPSADBW AVX2 01000010 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 1C /r: PABSB mm1, mm2/m64
 PABSB_mm SSSE3 00001111 00111000 00011100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1379,6 +1792,11 @@ VPABSB AVX 00011100 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 1C /r: VPABSB ymm1, ymm2/m256
+VPABSB AVX2 00011100 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 1D /r: PABSW mm1, mm2/m64
 PABSW_mm SSSE3 00001111 00111000 00011101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1394,6 +1812,11 @@ VPABSW AVX 00011101 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 1D /r: VPABSW ymm1, ymm2/m256
+VPABSW AVX2 00011101 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 1E /r: PABSD mm1, mm2/m64
 PABSD_mm SSSE3 00001111 00111000 00011110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1409,6 +1832,11 @@ VPABSD AVX 00011110 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 1E /r: VPABSD ymm1, ymm2/m256
+VPABSD AVX2 00011110 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 08 /r: PSIGNB mm1, mm2/m64
 PSIGNB_mm SSSE3 00001111 00111000 00001000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1424,6 +1852,11 @@ VPSIGNB AVX 00001000 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 08 /r: VPSIGNB ymm1, ymm2, ymm3/m256
+VPSIGNB AVX2 00001000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 09 /r: PSIGNW mm1, mm2/m64
 PSIGNW_mm SSSE3 00001111 00111000 00001001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1439,6 +1872,11 @@ VPSIGNW AVX 00001001 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 09 /r: VPSIGNW ymm1, ymm2, ymm3/m256
+VPSIGNW AVX2 00001001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 0A /r: PSIGND mm1, mm2/m64
 PSIGND_mm SSSE3 00001111 00111000 00001010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1454,6 +1892,11 @@ VPSIGND AVX 00001010 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 0A /r: VPSIGND ymm1, ymm2, ymm3/m256
+VPSIGND AVX2 00001010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 40 /r ib: DPPS xmm1, xmm2/m128, imm8
 DPPS SSE4_1 00001111 00111010 01000000 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -1464,6 +1907,11 @@ VDPPS AVX 01000000 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.WIG 40 /r ib: VDPPS ymm1, ymm2, ymm3/m256, imm8
+VDPPS AVX2 01000000 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 41 /r ib: DPPD xmm1, xmm2/m128, imm8
 DPPD SSE4_1 00001111 00111010 01000001 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -1484,6 +1932,11 @@ VROUNDPS AVX 00001000 \
   !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.WIG 08 /r ib: VROUNDPS ymm1, ymm2/m256, imm8
+VROUNDPS AVX2 00001000 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 09 /r ib: ROUNDPD xmm1, xmm2/m128, imm8
 ROUNDPD SSE4_1 00001111 00111010 00001001 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -1494,6 +1947,11 @@ VROUNDPD AVX 00001001 \
   !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.WIG 09 /r ib: VROUNDPD ymm1, ymm2/m256, imm8
+VROUNDPD AVX2 00001001 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 0A /r ib: ROUNDSS xmm1, xmm2/m32, imm8
 ROUNDSS SSE4_1 00001111 00111010 00001010 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -1614,6 +2072,11 @@ VPCMPEQB AVX 01110100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 74 /r: VPCMPEQB ymm1,ymm2,ymm3/m256
+VPCMPEQB AVX2 01110100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 75 /r: PCMPEQW mm,mm/m64
 PCMPEQW MMX 00001111 01110101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1629,6 +2092,11 @@ VPCMPEQW AVX 01110101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 75 /r: VPCMPEQW ymm1,ymm2,ymm3/m256
+VPCMPEQW AVX2 01110101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 76 /r: PCMPEQD mm,mm/m64
 PCMPEQD MMX 00001111 01110110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1644,6 +2112,11 @@ VPCMPEQD AVX 01110110 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 76 /r: VPCMPEQD ymm1,ymm2,ymm3/m256
+VPCMPEQD AVX2 01110110 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 29 /r: PCMPEQQ xmm1, xmm2/m128
 PCMPEQQ SSE4_1 00001111 00111000 00101001 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1654,6 +2127,11 @@ VPCMPEQQ AVX 00101001 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 29 /r: VPCMPEQQ ymm1, ymm2, ymm3/m256
+VPCMPEQQ AVX2 00101001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 64 /r: PCMPGTB mm,mm/m64
 PCMPGTB MMX 00001111 01100100 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1669,6 +2147,11 @@ VPCMPGTB AVX 01100100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 64 /r: VPCMPGTB ymm1,ymm2,ymm3/m256
+VPCMPGTB AVX2 01100100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 65 /r: PCMPGTW mm,mm/m64
 PCMPGTW MMX 00001111 01100101 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1684,6 +2167,11 @@ VPCMPGTW AVX 01100101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 65 /r: VPCMPGTW ymm1,ymm2,ymm3/m256
+VPCMPGTW AVX2 01100101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 66 /r: PCMPGTD mm,mm/m64
 PCMPGTD MMX 00001111 01100110 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1699,6 +2187,11 @@ VPCMPGTD AVX 01100110 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 66 /r: VPCMPGTD ymm1,ymm2,ymm3/m256
+VPCMPGTD AVX2 01100110 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 37 /r: PCMPGTQ xmm1,xmm2/m128
 PCMPGTQ SSE4_2 00001111 00111000 00110111 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1709,6 +2202,11 @@ VPCMPGTQ AVX 00110111 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 37 /r: VPCMPGTQ ymm1, ymm2, ymm3/m256
+VPCMPGTQ AVX2 00110111 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 60 /r imm8: PCMPESTRM xmm1, xmm2/m128, imm8
 PCMPESTRM SSE4_2 00001111 00111010 01100000 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -1759,16 +2257,31 @@ VPTEST AVX 00010111 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 17 /r: VPTEST ymm1, ymm2/m256
+VPTEST AVX2 00010111 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # VEX.128.66.0F38.W0 0E /r: VTESTPS xmm1, xmm2/m128
 VTESTPS AVX 00001110 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.W0 0E /r: VTESTPS ymm1, ymm2/m256
+VTESTPS AVX2 00001110 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # VEX.128.66.0F38.W0 0F /r: VTESTPD xmm1, xmm2/m128
 VTESTPD AVX 00001111 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.W0 0F /r: VTESTPD ymm1, ymm2/m256
+VTESTPD AVX2 00001111 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F C2 /r ib: CMPPS xmm1, xmm2/m128, imm8
 CMPPS SSE 00001111 11000010 \
   !constraints { modrm($_); imm($_, width => 8); 1 } \
@@ -1779,6 +2292,11 @@ VCMPPS AVX 11000010 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG C2 /r ib: VCMPPS ymm1, ymm2, ymm3/m256, imm8
+VCMPPS AVX2 11000010 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F C2 /r ib: CMPPD xmm1, xmm2/m128, imm8
 CMPPD SSE2 00001111 11000010 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -1789,6 +2307,11 @@ VCMPPD AVX 11000010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG C2 /r ib: VCMPPD ymm1, ymm2, ymm3/m256, imm8
+VCMPPD AVX2 11000010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F C2 /r ib: CMPSS xmm1, xmm2/m32, imm8
 CMPSS SSE 00001111 11000010 \
   !constraints { rep($_); modrm($_); imm($_, width => 8); 1 } \
@@ -1869,6 +2392,11 @@ VPAND AVX 11011011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG DB /r: VPAND ymm1, ymm2, ymm3/m256
+VPAND AVX2 11011011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 54 /r: ANDPS xmm1, xmm2/m128
 ANDPS SSE 00001111 01010100 \
   !constraints { modrm($_); 1 } \
@@ -1879,6 +2407,11 @@ VANDPS AVX 01010100 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F 54 /r: VANDPS ymm1, ymm2, ymm3/m256
+VANDPS AVX2 01010100 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 54 /r: ANDPD xmm1, xmm2/m128
 ANDPD SSE2 00001111 01010100 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1889,6 +2422,11 @@ VANDPD AVX 01010100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F 54 /r: VANDPD ymm1, ymm2, ymm3/m256
+VANDPD AVX2 01010100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F DF /r: PANDN mm, mm/m64
 PANDN MMX 00001111 11011111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1904,6 +2442,11 @@ VPANDN AVX 11011111 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG DF /r: VPANDN ymm1, ymm2, ymm3/m256
+VPANDN AVX2 11011111 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 55 /r: ANDNPS xmm1, xmm2/m128
 ANDNPS SSE 00001111 01010101 \
   !constraints { modrm($_); 1 } \
@@ -1914,6 +2457,11 @@ VANDNPS AVX 01010101 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F 55 /r: VANDNPS ymm1, ymm2, ymm3/m256
+VANDNPS AVX2 01010101 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 55 /r: ANDNPD xmm1, xmm2/m128
 ANDNPD SSE2 00001111 01010101 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1924,6 +2472,11 @@ VANDNPD AVX 01010101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F 55 /r: VANDNPD ymm1, ymm2, ymm3/m256
+VANDNPD AVX2 01010101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F EB /r: POR mm, mm/m64
 POR MMX 00001111 11101011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1939,6 +2492,11 @@ VPOR AVX 11101011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG EB /r: VPOR ymm1, ymm2, ymm3/m256
+VPOR AVX2 11101011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 56 /r: ORPS xmm1, xmm2/m128
 ORPS SSE 00001111 01010110 \
   !constraints { modrm($_); 1 } \
@@ -1949,6 +2507,11 @@ VORPS AVX 01010110 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F 56 /r: VORPS ymm1, ymm2, ymm3/m256
+VORPS AVX2 01010110 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 56 /r: ORPD xmm1, xmm2/m128
 ORPD SSE2 00001111 01010110 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1959,6 +2522,11 @@ VORPD AVX 01010110 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F 56 /r: VORPD ymm1, ymm2, ymm3/m256
+VORPD AVX2 01010110 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F EF /r: PXOR mm, mm/m64
 PXOR MMX 00001111 11101111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -1974,6 +2542,11 @@ VPXOR AVX 11101111 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG EF /r: VPXOR ymm1, ymm2, ymm3/m256
+VPXOR AVX2 11101111 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 57 /r: XORPS xmm1, xmm2/m128
 XORPS SSE 00001111 01010111 \
   !constraints { modrm($_); 1 } \
@@ -1984,6 +2557,11 @@ VXORPS AVX 01010111 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 57 /r: VXORPS ymm1, ymm2, ymm3/m256
+VXORPS AVX2 01010111 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 57 /r: XORPD xmm1, xmm2/m128
 XORPD SSE2 00001111 01010111 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -1994,6 +2572,11 @@ VXORPD AVX 01010111 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 57 /r: VXORPD ymm1, ymm2, ymm3/m256
+VXORPD AVX2 01010111 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 #
 # Shift and Rotate Instructions
 # -----------------------------
@@ -2014,6 +2597,11 @@ VPSLLW AVX 11110001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG F1 /r: VPSLLW ymm1, ymm2, xmm3/m128
+VPSLLW AVX2 11110001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F F2 /r: PSLLD mm, mm/m64
 PSLLD MMX 00001111 11110010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2029,6 +2617,11 @@ VPSLLD AVX 11110010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG F2 /r: VPSLLD ymm1, ymm2, xmm3/m128
+VPSLLD AVX2 11110010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F F3 /r: PSLLQ mm, mm/m64
 PSLLQ MMX 00001111 11110011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2044,6 +2637,11 @@ VPSLLQ AVX 11110011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG F3 /r: VPSLLQ ymm1, ymm2, xmm3/m128
+VPSLLQ AVX2 11110011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 71 /6 ib: PSLLW mm1, imm8
 PSLLW_imm MMX 00001111 01110001 \
   !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -2056,6 +2654,10 @@ PSLLW_imm SSE2 00001111 01110001 \
 VPSLLW_imm AVX 01110001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 71 /6 ib: VPSLLW ymm1, ymm2, imm8
+VPSLLW_imm AVX2 01110001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 72 /6 ib: PSLLD mm, imm8
 PSLLD_imm MMX 00001111 01110010 \
   !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -2068,6 +2670,10 @@ PSLLD_imm SSE2 00001111 01110010 \
 VPSLLD_imm AVX 01110010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 72 /6 ib: VPSLLD ymm1, ymm2, imm8
+VPSLLD_imm AVX2 01110010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 73 /6 ib: PSLLQ mm, imm8
 PSLLQ_imm MMX 00001111 01110011 \
   !constraints { modrm($_, reg => 6); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -2080,6 +2686,10 @@ PSLLQ_imm SSE2 00001111 01110011 \
 VPSLLQ_imm AVX 01110011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 73 /6 ib: VPSLLQ ymm1, ymm2, imm8
+VPSLLQ_imm AVX2 01110011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 6); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # 66 0F 73 /7 ib: PSLLDQ xmm1, imm8
 PSLLDQ_imm SSE2 00001111 01110011 \
   !constraints { data16($_); modrm($_, reg => 7); imm($_, width => 8); defined $_->{modrm}{reg2} }
@@ -2088,6 +2698,30 @@ PSLLDQ_imm SSE2 00001111 01110011 \
 VPSLLDQ_imm AVX 01110011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 7); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 73 /7 ib: VPSLLDQ ymm1, ymm2, imm8
+VPSLLDQ_imm AVX2 01110011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 7); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
+# VEX.128.66.0F38.W0 47 /r: VPSLLVD xmm1, xmm2, xmm3/m128
+VPSLLVD_xmm AVX2 01000111 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.256.66.0F38.W0 47 /r: VPSLLVD ymm1, ymm2, ymm3/m256
+VPSLLVD AVX2 01000111 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
+# VEX.128.66.0F38.W1 47 /r: VPSLLVQ xmm1, xmm2, xmm3/m128
+VPSLLVQ_xmm AVX2 01000111 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 1); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.256.66.0F38.W1 47 /r: VPSLLVQ ymm1, ymm2, ymm3/m256
+VPSLLVQ AVX2 01000111 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 1); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F D1 /r: PSRLW mm, mm/m64
 PSRLW MMX 00001111 11010001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2103,6 +2737,11 @@ VPSRLW AVX 11010001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG D1 /r: VPSRLW ymm1, ymm2, xmm3/m128
+VPSRLW AVX2 11010001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F D2 /r: PSRLD mm, mm/m64
 PSRLD MMX 00001111 11010010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2118,6 +2757,11 @@ VPSRLD AVX 11010010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG D2 /r: VPSRLD ymm1, ymm2, xmm3/m128
+VPSRLD AVX2 11010010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F D3 /r: PSRLQ mm, mm/m64
 PSRLQ MMX 00001111 11010011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2133,6 +2777,11 @@ VPSRLQ AVX 11010011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG D3 /r: VPSRLQ ymm1, ymm2, xmm3/m128
+VPSRLQ AVX2 11010011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 71 /2 ib: PSRLW mm, imm8
 PSRLW_imm MMX 00001111 01110001 \
   !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -2145,6 +2794,10 @@ PSRLW_imm SSE2 00001111 01110001 \
 VPSRLW_imm AVX 01110001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 71 /2 ib: VPSRLW ymm1, ymm2, imm8
+VPSRLW_imm AVX2 01110001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 72 /2 ib: PSRLD mm, imm8
 PSRLD_imm MMX 00001111 01110010 \
   !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -2157,6 +2810,10 @@ PSRLD_imm SSE2 00001111 01110010 \
 VPSRLD_imm AVX 01110010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 72 /2 ib: VPSRLD ymm1, ymm2, imm8
+VPSRLD_imm AVX2 01110010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 73 /2 ib: PSRLQ mm, imm8
 PSRLQ_imm MMX 00001111 01110011 \
   !constraints { modrm($_, reg => 2); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -2169,6 +2826,10 @@ PSRLQ_imm SSE2 00001111 01110011 \
 VPSRLQ_imm AVX 01110011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 73 /2 ib: VPSRLQ ymm1, ymm2, imm8
+VPSRLQ_imm AVX2 01110011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 2); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # 66 0F 73 /3 ib: PSRLDQ xmm1, imm8
 PSRLDQ_imm SSE2 00001111 01110011 \
   !constraints { data16($_); modrm($_, reg => 3); imm($_, width => 8); defined $_->{modrm}{reg2} }
@@ -2177,6 +2838,30 @@ PSRLDQ_imm SSE2 00001111 01110011 \
 VPSRLDQ_imm AVX 01110011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 3); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 73 /3 ib: VPSRLDQ ymm1, ymm2, imm8
+VPSRLDQ_imm AVX2 01110011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 3); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
+# VEX.128.66.0F38.W0 45 /r: VPSRLVD xmm1, xmm2, xmm3/m128
+VPSRLVD_xmm AVX2 01000101 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.256.66.0F38.W0 45 /r: VPSRLVD ymm1, ymm2, ymm3/m256
+VPSRLVD AVX2 01000101 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
+# VEX.128.66.0F38.W1 45 /r: VPSRLVQ xmm1, xmm2, xmm3/m128
+VPSRLVQ_xmm AVX2 01000101 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 1); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.256.66.0F38.W1 45 /r: VPSRLVQ ymm1, ymm2, ymm3/m256
+VPSRLVQ AVX2 01000101 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 1); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F E1 /r: PSRAW mm,mm/m64
 PSRAW MMX 00001111 11100001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2192,6 +2877,11 @@ VPSRAW AVX 11100001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG E1 /r: VPSRAW ymm1,ymm2,xmm3/m128
+VPSRAW AVX2 11100001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F E2 /r: PSRAD mm,mm/m64
 PSRAD MMX 00001111 11100010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2207,6 +2897,11 @@ VPSRAD AVX 11100010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG E2 /r: VPSRAD ymm1,ymm2,xmm3/m128
+VPSRAD AVX2 11100010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 71 /4 ib: PSRAW mm,imm8
 PSRAW_imm MMX 00001111 01110001 \
   !constraints { modrm($_, reg => 4); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -2219,6 +2914,10 @@ PSRAW_imm SSE2 00001111 01110001 \
 VPSRAW_imm AVX 01110001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 71 /4 ib: VPSRAW ymm1,ymm2,imm8
+VPSRAW_imm AVX2 01110001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
 # NP 0F 72 /4 ib: PSRAD mm,imm8
 PSRAD_imm MMX 00001111 01110010 \
   !constraints { modrm($_, reg => 4); imm($_, width => 8); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} }
@@ -2231,6 +2930,20 @@ PSRAD_imm SSE2 00001111 01110010 \
 VPSRAD_imm AVX 01110010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F.WIG 72 /4 ib: VPSRAD ymm1,ymm2,imm8
+VPSRAD_imm AVX2 01110010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_, reg => 4); imm($_, width => 8); defined $_->{modrm}{reg2} }
+
+# VEX.128.66.0F38.W0 46 /r: VPSRAVD xmm1, xmm2, xmm3/m128
+VPSRAVD_xmm AVX2 01000110 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.256.66.0F38.W0 46 /r: VPSRAVD ymm1, ymm2, ymm3/m256
+VPSRAVD AVX2 01000110 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 3A 0F /r ib: PALIGNR mm1, mm2/m64, imm8
 PALIGNR_mm SSSE3 00001111 00111010 00001111 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2246,6 +2959,11 @@ VPALIGNR AVX 00001111 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.WIG 0F /r ib: VPALIGNR ymm1, ymm2, ymm3/m256, imm8
+VPALIGNR AVX2 00001111 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 #
 # Shuffle, Unpack, Blend, Insert, Extract, Broadcast, Permute, Gather Instructions
 # --------------------------------------------------------------------------------
@@ -2266,6 +2984,11 @@ VPACKSSWB AVX 01100011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 63 /r: VPACKSSWB ymm1, ymm2, ymm3/m256
+VPACKSSWB AVX2 01100011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 6B /r: PACKSSDW mm1, mm2/m64
 PACKSSDW MMX 00001111 01101011 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2281,6 +3004,11 @@ VPACKSSDW AVX 01101011 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 6B /r: VPACKSSDW ymm1, ymm2, ymm3/m256
+VPACKSSDW AVX2 01101011 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 67 /r: PACKUSWB mm, mm/m64
 PACKUSWB MMX 00001111 01100111 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2296,6 +3024,11 @@ VPACKUSWB AVX 01100111 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 67 /r: VPACKUSWB ymm1, ymm2, ymm3/m256
+VPACKUSWB AVX2 01100111 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 2B /r: PACKUSDW xmm1, xmm2/m128
 PACKUSDW SSE4_1 00001111 00111000 00101011 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2306,6 +3039,11 @@ VPACKUSDW AVX 00101011 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38 2B /r: VPACKUSDW ymm1, ymm2, ymm3/m256
+VPACKUSDW AVX2 00101011 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 68 /r: PUNPCKHBW mm, mm/m64
 PUNPCKHBW MMX 00001111 01101000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2321,6 +3059,11 @@ VPUNPCKHBW AVX 01101000 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 68 /r: VPUNPCKHBW ymm1, ymm2, ymm3/m256
+VPUNPCKHBW AVX2 01101000 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 69 /r: PUNPCKHWD mm, mm/m64
 PUNPCKHWD MMX 00001111 01101001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2336,6 +3079,11 @@ VPUNPCKHWD AVX 01101001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 69 /r: VPUNPCKHWD ymm1, ymm2, ymm3/m256
+VPUNPCKHWD AVX2 01101001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 6A /r: PUNPCKHDQ mm, mm/m64
 PUNPCKHDQ MMX 00001111 01101010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2351,6 +3099,11 @@ VPUNPCKHDQ AVX 01101010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 6A /r: VPUNPCKHDQ ymm1, ymm2, ymm3/m256
+VPUNPCKHDQ AVX2 01101010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 6D /r: PUNPCKHQDQ xmm1, xmm2/m128
 PUNPCKHQDQ SSE2 00001111 01101101 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2361,6 +3114,11 @@ VPUNPCKHQDQ AVX 01101101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 6D /r: VPUNPCKHQDQ ymm1, ymm2, ymm3/m256
+VPUNPCKHQDQ AVX2 01101101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 60 /r: PUNPCKLBW mm, mm/m32
 PUNPCKLBW MMX 00001111 01100000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2376,6 +3134,11 @@ VPUNPCKLBW AVX 01100000 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 60 /r: VPUNPCKLBW ymm1, ymm2, ymm3/m256
+VPUNPCKLBW AVX2 01100000 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 61 /r: PUNPCKLWD mm, mm/m32
 PUNPCKLWD MMX 00001111 01100001 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2391,6 +3154,11 @@ VPUNPCKLWD AVX 01100001 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 61 /r: VPUNPCKLWD ymm1, ymm2, ymm3/m256
+VPUNPCKLWD AVX2 01100001 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 62 /r: PUNPCKLDQ mm, mm/m32
 PUNPCKLDQ MMX 00001111 01100010 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2406,6 +3174,11 @@ VPUNPCKLDQ AVX 01100010 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 62 /r: VPUNPCKLDQ ymm1, ymm2, ymm3/m256
+VPUNPCKLDQ AVX2 01100010 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 6C /r: PUNPCKLQDQ xmm1, xmm2/m128
 PUNPCKLQDQ SSE2 00001111 01101100 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2416,6 +3189,11 @@ VPUNPCKLQDQ AVX 01101100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 6C /r: VPUNPCKLQDQ ymm1, ymm2, ymm3/m256
+VPUNPCKLQDQ AVX2 01101100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 14 /r: UNPCKLPS xmm1, xmm2/m128
 UNPCKLPS SSE 00001111 00010100 \
   !constraints { modrm($_); 1 } \
@@ -2426,6 +3204,11 @@ VUNPCKLPS AVX 00010100 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 14 /r: VUNPCKLPS ymm1,ymm2,ymm3/m256
+VUNPCKLPS AVX2 00010100 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 14 /r: UNPCKLPD xmm1, xmm2/m128
 UNPCKLPD SSE2 00001111 00010100 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2436,6 +3219,11 @@ VUNPCKLPD AVX 00010100 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 14 /r: VUNPCKLPD ymm1, ymm2, ymm3/m256
+VUNPCKLPD AVX2 00010100 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 15 /r: UNPCKHPS xmm1, xmm2/m128
 UNPCKHPS SSE 00001111 00010101 \
   !constraints { modrm($_); 1 } \
@@ -2446,6 +3234,11 @@ VUNPCKHPS AVX 00010101 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 15 /r: VUNPCKHPS ymm1, ymm2, ymm3/m256
+VUNPCKHPS AVX2 00010101 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 15 /r: UNPCKHPD xmm1, xmm2/m128
 UNPCKHPD SSE2 00001111 00010101 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2456,6 +3249,11 @@ VUNPCKHPD AVX 00010101 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 15 /r: VUNPCKHPD ymm1, ymm2, ymm3/m256
+VUNPCKHPD AVX2 00010101 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 38 00 /r: PSHUFB mm1, mm2/m64
 PSHUFB_mm SSSE3 00001111 00111000 00000000 \
   !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2471,6 +3269,11 @@ VPSHUFB AVX 00000000 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.WIG 00 /r: VPSHUFB ymm1, ymm2, ymm3/m256
+VPSHUFB AVX2 00000000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F 70 /r ib: PSHUFW mm1, mm2/m64, imm8
 PSHUFW SSE 00001111 01110000 \
   !constraints { modrm($_); imm($_, width => 8); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -2486,6 +3289,11 @@ VPSHUFLW AVX 01110000 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF2); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.F2.0F.WIG 70 /r ib: VPSHUFLW ymm1, ymm2/m256, imm8
+VPSHUFLW AVX2 01110000 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF2); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 70 /r ib: PSHUFHW xmm1, xmm2/m128, imm8
 PSHUFHW SSE2 00001111 01110000 \
   !constraints { rep($_); modrm($_); imm($_, width => 8); 1 } \
@@ -2496,6 +3304,11 @@ VPSHUFHW AVX 01110000 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.F3.0F.WIG 70 /r ib: VPSHUFHW ymm1, ymm2/m256, imm8
+VPSHUFHW AVX2 01110000 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF3); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 70 /r ib: PSHUFD xmm1, xmm2/m128, imm8
 PSHUFD SSE2 00001111 01110000 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -2506,6 +3319,11 @@ VPSHUFD AVX 01110000 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 70 /r ib: VPSHUFD ymm1, ymm2/m256, imm8
+VPSHUFD AVX2 01110000 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # NP 0F C6 /r ib: SHUFPS xmm1, xmm3/m128, imm8
 SHUFPS SSE 00001111 11000110 \
   !constraints { modrm($_); imm($_, width => 8); 1 } \
@@ -2516,6 +3334,11 @@ VSHUFPS AVX 11000110 \
   !constraints { vex($_, m => 0x0F, l => 128); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG C6 /r ib: VSHUFPS ymm1, ymm2, ymm3/m256, imm8
+VSHUFPS AVX2 11000110 \
+  !constraints { vex($_, m => 0x0F, l => 256); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F C6 /r ib: SHUFPD xmm1, xmm2/m128, imm8
 SHUFPD SSE2 00001111 11000110 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -2526,6 +3349,11 @@ VSHUFPD AVX 11000110 \
   !constraints { vex($_, m => 0x0F, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG C6 /r ib: VSHUFPD ymm1, ymm2, ymm3/m256, imm8
+VSHUFPD AVX2 11000110 \
+  !constraints { vex($_, m => 0x0F, l => 256, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 0C /r ib: BLENDPS xmm1, xmm2/m128, imm8
 BLENDPS SSE4_1 00001111 00111010 00001100 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -2536,6 +3364,11 @@ VBLENDPS AVX 00001100 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.WIG 0C /r ib: VBLENDPS ymm1, ymm2, ymm3/m256, imm8
+VBLENDPS AVX2 00001100 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 0D /r ib: BLENDPD xmm1, xmm2/m128, imm8
 BLENDPD SSE4_1 00001111 00111010 00001101 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -2546,6 +3379,11 @@ VBLENDPD AVX 00001101 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.WIG 0D /r ib: VBLENDPD ymm1, ymm2, ymm3/m256, imm8
+VBLENDPD AVX2 00001101 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 14 /r: BLENDVPS xmm1, xmm2/m128, <XMM0>
 BLENDVPS SSE4_1 00001111 00111000 00010100 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2556,6 +3394,11 @@ VBLENDVPS AVX 01001010 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.W0 4A /r /is4: VBLENDVPS ymm1, ymm2, ymm3/m256, ymm4
+VBLENDVPS AVX2 01001010 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 15 /r: BLENDVPD xmm1, xmm2/m128 , <XMM0>
 BLENDVPD SSE4_1 00001111 00111000 00010101 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2566,6 +3409,11 @@ VBLENDVPD AVX 01001011 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.W0 4B /r /is4: VBLENDVPD ymm1, ymm2, ymm3/m256, ymm4
+VBLENDVPD AVX2 01001011 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 38 10 /r: PBLENDVB xmm1, xmm2/m128, <XMM0>
 PBLENDVB SSE4_1 00001111 00111000 00010000 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2576,6 +3424,11 @@ VPBLENDVB AVX 01001100 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.W0 4C /r /is4: VPBLENDVB ymm1, ymm2, ymm3/m256, ymm4
+VPBLENDVB AVX2 01001100 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 0E /r ib: PBLENDW xmm1, xmm2/m128, imm8
 PBLENDW SSE4_1 00001111 00111010 00001110 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -2586,6 +3439,21 @@ VPBLENDW AVX 00001110 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.WIG 0E /r ib: VPBLENDW ymm1, ymm2, ymm3/m256, imm8
+VPBLENDW AVX2 00001110 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
+# VEX.128.66.0F3A.W0 02 /r ib: VPBLENDD xmm1, xmm2, xmm3/m128, imm8
+VPBLENDD_xmm AVX2 00000010 \
+  !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.256.66.0F3A.W0 02 /r ib: VPBLENDD ymm1, ymm2, ymm3/m256, imm8
+VPBLENDD AVX2 00000010 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 3A 21 /r ib: INSERTPS xmm1, xmm2/m32, imm8
 INSERTPS SSE4_1 00001111 00111010 00100001 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); 1 } \
@@ -2641,6 +3509,16 @@ VPINSRQ AVX 00100010 \
   !constraints { vex($_, m => 0x0F3A, l => 128, p => 0x66, w => 1); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
   !memory { load(size => 8); }
 
+# VEX.256.66.0F3A.W0 18 /r ib: VINSERTF128 ymm1, ymm2, xmm3/m128, imm8
+VINSERTF128 AVX2 00011000 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
+# VEX.256.66.0F3A.W0 38 /r ib: VINSERTI128 ymm1, ymm2, xmm3/m128, imm8
+VINSERTI128 AVX2 00111000 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 3A 17 /r ib: EXTRACTPS reg/m32, xmm1, imm8
 EXTRACTPS SSE4_1 00001111 00111010 00010111 \
   !constraints { data16($_); modrm($_); imm($_, width => 8); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
@@ -2703,26 +3581,231 @@ PEXTRW_reg SSE2 00001111 11000101 \
 VPEXTRW_reg AVX 11000101 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{reg2} }
 
+# VEX.256.66.0F3A.W0 19 /r ib: VEXTRACTF128 xmm1/m128, ymm2, imm8
+VEXTRACTF128 AVX2 00011001 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { store(size => 16); }
+
+# VEX.256.66.0F3A.W0 39 /r ib: VEXTRACTI128 xmm1/m128, ymm2, imm8
+VEXTRACTI128 AVX2 00111001 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { store(size => 16); }
+
+# VEX.128.66.0F38.W0 78 /r: VPBROADCASTB xmm1,xmm2/m8
+VPBROADCASTB_xmm AVX2 01111000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 1); }
+
+# VEX.256.66.0F38.W0 78 /r: VPBROADCASTB ymm1,xmm2/m8
+VPBROADCASTB AVX2 01111000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 1); }
+
+# VEX.128.66.0F38.W0 79 /r: VPBROADCASTW xmm1,xmm2/m16
+VPBROADCASTW_xmm AVX2 01111001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 2); }
+
+# VEX.256.66.0F38.W0 79 /r: VPBROADCASTW ymm1,xmm2/m16
+VPBROADCASTW AVX2 01111001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 2); }
+
+# VEX.128.66.0F38.W0 58 /r: VPBROADCASTD xmm1,xmm2/m32
+VPBROADCASTD_xmm AVX2 01011000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# VEX.256.66.0F38.W0 58 /r: VPBROADCASTD ymm1,xmm2/m32
+VPBROADCASTD AVX2 01011000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# VEX.128.66.0F38.W0 59 /r: VPBROADCASTQ xmm1,xmm2/m64
+VPBROADCASTQ_xmm AVX2 01011001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# VEX.256.66.0F38.W0 59 /r: VPBROADCASTQ ymm1,xmm2/m64
+VPBROADCASTQ AVX2 01011001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# VEX.128.66.0F38.W0 18 /r: VBROADCASTSS xmm1, xmm2/m32
+VBROADCASTSS_xmm AVX2 00011000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# VEX.256.66.0F38.W0 18 /r: VBROADCASTSS ymm1, xmm2/m32
+VBROADCASTSS AVX2 00011000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
+# VEX.256.66.0F38.W0 19 /r: VBROADCASTSD ymm1, xmm2/m64
+VBROADCASTSD AVX2 00011001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
+# VEX.256.66.0F38.W0 1A /r: VBROADCASTF128 ymm1, m128
+VBROADCASTF128 AVX2 00011010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 16); }
+
+# VEX.256.66.0F38.W0 5A /r: VBROADCASTI128 ymm1,m128
+VBROADCASTI128 AVX2 01011010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 16); }
+
+# VEX.256.66.0F3A.W0 06 /r ib: VPERM2F128 ymm1, ymm2, ymm3/m256, imm8
+VPERM2F128 AVX2 00000110 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
+# VEX.256.66.0F3A.W0 46 /r ib: VPERM2I128 ymm1, ymm2, ymm3/m256, imm8
+VPERM2I128 AVX2 01000110 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
+# VEX.256.66.0F38.W0 36 /r: VPERMD ymm1, ymm2, ymm3/m256
+VPERMD AVX2 00110110 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
+# VEX.256.66.0F38.W0 16 /r: VPERMPS ymm1, ymm2, ymm3/m256
+VPERMPS AVX2 00010110 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # VEX.128.66.0F38.W0 0C /r: VPERMILPS xmm1, xmm2, xmm3/m128
 VPERMILPS AVX 00001100 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.W0 0C /r: VPERMILPS ymm1, ymm2, ymm3/m256
+VPERMILPS AVX2 00001100 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # VEX.128.66.0F3A.W0 04 /r ib: VPERMILPS xmm1, xmm2/m128, imm8
 VPERMILPS_imm AVX 00000100 \
   !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.W0 04 /r ib: VPERMILPS ymm1, ymm2/m256, imm8
+VPERMILPS_imm AVX2 00000100 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
 # VEX.128.66.0F38.W0 0D /r: VPERMILPD xmm1, xmm2, xmm3/m128
 VPERMILPD AVX 00001101 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F38.W0 0D /r: VPERMILPD ymm1, ymm2, ymm3/m256
+VPERMILPD AVX2 00001101 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # VEX.128.66.0F3A.W0 05 /r ib: VPERMILPD xmm1, xmm2/m128, imm8
 VPERMILPD_imm AVX 00000101 \
   !constraints { vex($_, m => 0x0F3A, l => 128, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F3A.W0 05 /r ib: VPERMILPD ymm1, ymm2/m256, imm8
+VPERMILPD_imm AVX2 00000101 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, v => 0, p => 0x66, w => 0); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
+# VEX.256.66.0F3A.W1 00 /r ib: VPERMQ ymm1, ymm2/m256, imm8
+VPERMQ AVX2 00000000 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, v => 0, p => 0x66, w => 1); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
+# VEX.256.66.0F3A.W1 01 /r ib: VPERMPD ymm1, ymm2/m256, imm8
+VPERMPD AVX2 00000001 \
+  !constraints { vex($_, m => 0x0F3A, l => 256, v => 0, p => 0x66, w => 1); modrm($_); imm($_, width => 8); 1 } \
+  !memory { load(size => 32); }
+
+# VEX.128.66.0F38.W0 92 /r: VGATHERDPS xmm1, vm32x, xmm2
+VGATHERDPS_xmm AVX2 10010010 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 4, addrw => 32, count => 4); }
+
+# VEX.256.66.0F38.W0 92 /r: VGATHERDPS ymm1, vm32y, ymm2
+VGATHERDPS AVX2 10010010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 4, addrw => 32, count => 8); }
+
+# VEX.128.66.0F38.W1 92 /r: VGATHERDPD xmm1, vm32x, xmm2
+VGATHERDPD_xmm AVX2 10010010 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 1); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 8, addrw => 32, count => 4); }
+
+# VEX.256.66.0F38.W1 92 /r: VGATHERDPD ymm1, vm32x, ymm2
+VGATHERDPD AVX2 10010010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 1); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 8, addrw => 32, count => 4); }
+
+# VEX.128.66.0F38.W0 93 /r: VGATHERQPS xmm1, vm64x, xmm2
+VGATHERQPS_xmm AVX2 10010011 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 4, addrw => 64, count => 2); }
+
+# VEX.256.66.0F38.W0 93 /r: VGATHERQPS xmm1, vm64y, xmm2
+VGATHERQPS AVX2 10010011 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 4, addrw => 64, count => 4); }
+
+# VEX.128.66.0F38.W1 93 /r: VGATHERQPD xmm1, vm64x, xmm2
+VGATHERQPD_xmm AVX2 10010011 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 1); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 8, addrw => 64, count => 2); }
+
+# VEX.256.66.0F38.W1 93 /r: VGATHERQPD ymm1, vm64y, ymm2
+VGATHERQPD AVX2 10010011 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 1); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 8, addrw => 64, count => 4); }
+
+# VEX.128.66.0F38.W0 90 /r: VPGATHERDD xmm1, vm32x, xmm2
+VPGATHERDD_xmm AVX2 10010000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 4, addrw => 32, count => 4); }
+
+# VEX.256.66.0F38.W0 90 /r: VPGATHERDD ymm1, vm32y, ymm2
+VPGATHERDD AVX2 10010000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 4, addrw => 32, count => 8); }
+
+# VEX.128.66.0F38.W1 90 /r: VPGATHERDQ xmm1, vm32x, xmm2
+VPGATHERDQ_xmm AVX2 10010000 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 1); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 8, addrw => 32, count => 4); }
+
+# VEX.256.66.0F38.W1 90 /r: VPGATHERDQ ymm1, vm32x, ymm2
+VPGATHERDQ AVX2 10010000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 1); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 8, addrw => 32, count => 4); }
+
+# VEX.128.66.0F38.W0 91 /r: VPGATHERQD xmm1, vm64x, xmm2
+VPGATHERQD_xmm AVX2 10010001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 4, addrw => 64, count => 2); }
+
+# VEX.256.66.0F38.W0 91 /r: VPGATHERQD xmm1, vm64y, xmm2
+VPGATHERQD AVX2 10010001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 4, addrw => 64, count => 4); }
+
+# VEX.128.66.0F38.W1 91 /r: VPGATHERQQ xmm1, vm64x, xmm2
+VPGATHERQQ_xmm AVX2 10010001 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 1); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 8, addrw => 64, count => 2); }
+
+# VEX.256.66.0F38.W1 91 /r: VPGATHERQQ ymm1, vm64y, ymm2
+VPGATHERQQ AVX2 10010001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 1); modrm_vsib($_); defined $_->{modrm}{vindex} && $_->{vex}{v} != $_->{modrm}{reg} && $_->{vex}{v} != $_->{modrm}{vindex} && $_->{modrm}{reg} != $_->{modrm}{vindex} } \
+  !memory { load(size => 8, addrw => 64, count => 4); }
+
 #
 # Conversion Instructions
 # -----------------------
@@ -2738,6 +3821,11 @@ VPMOVSXBW AVX 00100000 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.256.66.0F38.WIG 20 /r: VPMOVSXBW ymm1, xmm2/m128
+VPMOVSXBW AVX2 00100000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0f 38 21 /r: PMOVSXBD xmm1, xmm2/m32
 PMOVSXBD SSE4_1 00001111 00111000 00100001 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2748,6 +3836,11 @@ VPMOVSXBD AVX 00100001 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.256.66.0F38.WIG 21 /r: VPMOVSXBD ymm1, xmm2/m64
+VPMOVSXBD AVX2 00100001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0f 38 22 /r: PMOVSXBQ xmm1, xmm2/m16
 PMOVSXBQ SSE4_1 00001111 00111000 00100010 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2758,6 +3851,11 @@ VPMOVSXBQ AVX 00100010 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 2); }
 
+# VEX.256.66.0F38.WIG 22 /r: VPMOVSXBQ ymm1, xmm2/m32
+VPMOVSXBQ AVX2 00100010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0f 38 23 /r: PMOVSXWD xmm1, xmm2/m64
 PMOVSXWD SSE4_1 00001111 00111000 00100011 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2768,6 +3866,11 @@ VPMOVSXWD AVX 00100011 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.256.66.0F38.WIG 23 /r: VPMOVSXWD ymm1, xmm2/m128
+VPMOVSXWD AVX2 00100011 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0f 38 24 /r: PMOVSXWQ xmm1, xmm2/m32
 PMOVSXWQ SSE4_1 00001111 00111000 00100100 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2778,6 +3881,11 @@ VPMOVSXWQ AVX 00100100 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.256.66.0F38.WIG 24 /r: VPMOVSXWQ ymm1, xmm2/m64
+VPMOVSXWQ AVX2 00100100 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0f 38 25 /r: PMOVSXDQ xmm1, xmm2/m64
 PMOVSXDQ SSE4_1 00001111 00111000 00100101 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2788,6 +3896,11 @@ VPMOVSXDQ AVX 00100101 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.256.66.0F38.WIG 25 /r: VPMOVSXDQ ymm1, xmm2/m128
+VPMOVSXDQ AVX2 00100101 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0f 38 30 /r: PMOVZXBW xmm1, xmm2/m64
 PMOVZXBW SSE4_1 00001111 00111000 00110000 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2798,6 +3911,11 @@ VPMOVZXBW AVX 00110000 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.256.66.0F38.WIG 30 /r: VPMOVZXBW ymm1, xmm2/m128
+VPMOVZXBW AVX2 00110000 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0f 38 31 /r: PMOVZXBD xmm1, xmm2/m32
 PMOVZXBD SSE4_1 00001111 00111000 00110001 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2808,6 +3926,11 @@ VPMOVZXBD AVX 00110001 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.256.66.0F38.WIG 31 /r: VPMOVZXBD ymm1, xmm2/m64
+VPMOVZXBD AVX2 00110001 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0f 38 32 /r: PMOVZXBQ xmm1, xmm2/m16
 PMOVZXBQ SSE4_1 00001111 00111000 00110010 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2818,6 +3941,11 @@ VPMOVZXBQ AVX 00110010 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 2); }
 
+# VEX.256.66.0F38.WIG 32 /r: VPMOVZXBQ ymm1, xmm2/m32
+VPMOVZXBQ AVX2 00110010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 4); }
+
 # 66 0f 38 33 /r: PMOVZXWD xmm1, xmm2/m64
 PMOVZXWD SSE4_1 00001111 00111000 00110011 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2828,6 +3956,11 @@ VPMOVZXWD AVX 00110011 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.256.66.0F38.WIG 33 /r: VPMOVZXWD ymm1, xmm2/m128
+VPMOVZXWD AVX2 00110011 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0f 38 34 /r: PMOVZXWQ xmm1, xmm2/m32
 PMOVZXWQ SSE4_1 00001111 00111000 00110100 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2838,6 +3971,11 @@ VPMOVZXWQ AVX 00110100 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 4); }
 
+# VEX.256.66.0F38.WIG 34 /r: VPMOVZXWQ ymm1, xmm2/m64
+VPMOVZXWQ AVX2 00110100 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 8); }
+
 # 66 0f 38 35 /r: PMOVZXDQ xmm1, xmm2/m64
 PMOVZXDQ SSE4_1 00001111 00111000 00110101 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -2848,6 +3986,11 @@ VPMOVZXDQ AVX 00110101 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.256.66.0F38.WIG 35 /r: VPMOVZXDQ ymm1, xmm2/m128
+VPMOVZXDQ AVX2 00110101 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 2A /r: CVTPI2PS xmm, mm/m64
 CVTPI2PS SSE 00001111 00101010 \
   !constraints { modrm($_); $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; 1 } \
@@ -3008,6 +4151,11 @@ VCVTPD2DQ AVX 11100110 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF2); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.F2.0F.WIG E6 /r: VCVTPD2DQ xmm1, ymm2/m256
+VCVTPD2DQ AVX2 11100110 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF2); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F E6 /r: CVTTPD2DQ xmm1, xmm2/m128
 CVTTPD2DQ SSE2 00001111 11100110 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -3018,6 +4166,11 @@ VCVTTPD2DQ AVX 11100110 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG E6 /r: VCVTTPD2DQ xmm1, ymm2/m256
+VCVTTPD2DQ AVX2 11100110 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F E6 /r: CVTDQ2PD xmm1, xmm2/m64
 CVTDQ2PD SSE2 00001111 11100110 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -3028,6 +4181,11 @@ VCVTDQ2PD AVX 11100110 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.256.F3.0F.WIG E6 /r: VCVTDQ2PD ymm1, xmm2/m128
+VCVTDQ2PD AVX2 11100110 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # NP 0F 5A /r: CVTPS2PD xmm1, xmm2/m64
 CVTPS2PD SSE2 00001111 01011010 \
   !constraints { modrm($_); 1 } \
@@ -3038,6 +4196,11 @@ VCVTPS2PD AVX 01011010 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
   !memory { load(size => 8); }
 
+# VEX.256.0F.WIG 5A /r: VCVTPS2PD ymm1, xmm2/m128
+VCVTPS2PD AVX2 01011010 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); 1 } \
+  !memory { load(size => 16); }
+
 # 66 0F 5A /r: CVTPD2PS xmm1, xmm2/m128
 CVTPD2PS SSE2 00001111 01011010 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -3048,6 +4211,11 @@ VCVTPD2PS AVX 01011010 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 5A /r: VCVTPD2PS xmm1, ymm2/m256
+VCVTPD2PS AVX2 01011010 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 5A /r: CVTSS2SD xmm1, xmm2/m32
 CVTSS2SD SSE2 00001111 01011010 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -3078,6 +4246,11 @@ VCVTDQ2PS AVX 01011011 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.0F.WIG 5B /r: VCVTDQ2PS ymm1, ymm2/m256
+VCVTDQ2PS AVX2 01011011 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # 66 0F 5B /r: CVTPS2DQ xmm1, xmm2/m128
 CVTPS2DQ SSE2 00001111 01011011 \
   !constraints { data16($_); modrm($_); 1 } \
@@ -3088,6 +4261,11 @@ VCVTPS2DQ AVX 01011011 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.66.0F.WIG 5B /r: VCVTPS2DQ ymm1, ymm2/m256
+VCVTPS2DQ AVX2 01011011 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 # F3 0F 5B /r: CVTTPS2DQ xmm1, xmm2/m128
 CVTTPS2DQ SSE2 00001111 01011011 \
   !constraints { rep($_); modrm($_); 1 } \
@@ -3098,6 +4276,11 @@ VCVTTPS2DQ AVX 01011011 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0xF3); modrm($_); 1 } \
   !memory { load(size => 16); }
 
+# VEX.256.F3.0F.WIG 5B /r: VCVTTPS2DQ ymm1, ymm2/m256
+VCVTTPS2DQ AVX2 01011011 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0xF3); modrm($_); 1 } \
+  !memory { load(size => 32); }
+
 #
 # Cacheability Control, Prefetch, and Instruction Ordering Instructions
 # ---------------------------------------------------------------------
@@ -3124,12 +4307,48 @@ VMASKMOVPS AVX 001011 d 0 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { $d ? store(size => 16) : load(size => 16); }
 
+# VEX.256.66.0F38.W0 2C /r: VMASKMOVPS ymm1, ymm2, m256
+# VEX.256.66.0F38.W0 2E /r: VMASKMOVPS m256, ymm1, ymm2
+VMASKMOVPS AVX2 001011 d 0 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 32) : load(size => 32); }
+
 # VEX.128.66.0F38.W0 2D /r: VMASKMOVPD xmm1, xmm2, m128
 # VEX.128.66.0F38.W0 2F /r: VMASKMOVPD m128, xmm1, xmm2
 VMASKMOVPD AVX 001011 d 1 \
   !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { $d ? store(size => 16) : load(size => 16); }
 
+# VEX.256.66.0F38.W0 2D /r: VMASKMOVPD ymm1, ymm2, m256
+# VEX.256.66.0F38.W0 2F /r: VMASKMOVPD m256, ymm1, ymm2
+VMASKMOVPD AVX2 001011 d 1 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 32) : load(size => 32); }
+
+# VEX.128.66.0F38.W0 8C /r: VPMASKMOVD xmm1, xmm2, m128
+# VEX.128.66.0F38.W0 8E /r: VPMASKMOVD m128, xmm1, xmm2
+VPMASKMOVD_xmm AVX2 100011 d 0 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
+# VEX.256.66.0F38.W0 8C /r: VPMASKMOVD ymm1, ymm2, m256
+# VEX.256.66.0F38.W0 8E /r: VPMASKMOVD m256, ymm1, ymm2
+VPMASKMOVD AVX2 100011 d 0 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 32) : load(size => 32); }
+
+# VEX.128.66.0F38.W1 8C /r: VPMASKMOVQ xmm1, xmm2, m128
+# VEX.128.66.0F38.W1 8E /r: VPMASKMOVQ m128, xmm1, xmm2
+VPMASKMOVQ_xmm AVX2 100011 d 0 \
+  !constraints { vex($_, m => 0x0F38, l => 128, p => 0x66, w => 1); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 16) : load(size => 16); }
+
+# VEX.256.66.0F38.W1 8C /r: VPMASKMOVQ ymm1, ymm2, m256
+# VEX.256.66.0F38.W1 8E /r: VPMASKMOVQ m256, ymm1, ymm2
+VPMASKMOVQ AVX2 100011 d 0 \
+  !constraints { vex($_, m => 0x0F38, l => 256, p => 0x66, w => 1); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { $d ? store(size => 32) : load(size => 32); }
+
 # NP 0F 2B /r: MOVNTPS m128, xmm1
 MOVNTPS SSE 00001111 00101011 \
   !constraints { modrm($_); !defined $_->{modrm}{reg2} } \
@@ -3140,6 +4359,11 @@ VMOVNTPS AVX 00101011 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { store(size => 16, align => 16); }
 
+# VEX.256.0F.WIG 2B /r: VMOVNTPS m256, ymm1
+VMOVNTPS AVX2 00101011 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 32, align => 32); }
+
 # 66 0F 2B /r: MOVNTPD m128, xmm1
 MOVNTPD SSE2 00001111 00101011 \
   !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
@@ -3150,6 +4374,11 @@ VMOVNTPD AVX 00101011 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { store(size => 16, align => 16); }
 
+# VEX.256.66.0F.WIG 2B /r: VMOVNTPD m256, ymm1
+VMOVNTPD AVX2 00101011 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 32, align => 32); }
+
 # NP 0F C3 /r: MOVNTI m32, r32
 MOVNTI SSE2 00001111 11000011 \
   !constraints { modrm($_); $_->{modrm}{reg} != REG_RSP && defined $_->{modrm}{base} && $_->{modrm}{base} != $_->{modrm}{reg} } \
@@ -3175,6 +4404,11 @@ VMOVNTDQ AVX 11100111 \
   !constraints { vex($_, m => 0x0F, l => 128, v => 0, p => 0x66); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { store(size => 16, align => 16); }
 
+# VEX.256.66.0F.WIG E7 /r: VMOVNTDQ m256, ymm1
+VMOVNTDQ AVX2 11100111 \
+  !constraints { vex($_, m => 0x0F, l => 256, v => 0, p => 0x66); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { store(size => 32, align => 32); }
+
 # 66 0F 38 2A /r: MOVNTDQA xmm1, m128
 MOVNTDQA SSE4_1 00001111 00111000 00101010 \
   !constraints { data16($_); modrm($_); !defined $_->{modrm}{reg2} } \
@@ -3185,6 +4419,11 @@ VMOVNTDQA AVX 00101010 \
   !constraints { vex($_, m => 0x0F38, l => 128, v => 0, p => 0x66); modrm($_); !defined $_->{modrm}{reg2} } \
   !memory { load(size => 16, align => 16); }
 
+# VEX.256.66.0F38.WIG 2A /r: VMOVNTDQA ymm1, m256
+VMOVNTDQA AVX2 00101010 \
+  !constraints { vex($_, m => 0x0F38, l => 256, v => 0, p => 0x66); modrm($_); !defined $_->{modrm}{reg2} } \
+  !memory { load(size => 32, align => 32); }
+
 # 0F 18 /1: PREFETCHT0 m8
 PREFETCHT0 SSE 00001111 00011000 \
   !constraints { modrm($_, reg => 1); !defined $_->{modrm}{reg2} } \
-- 
2.20.1



^ permalink raw reply related	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint Jan Bobek
@ 2019-07-12  5:48   ` Richard Henderson
  2019-07-14 21:55     ` Jan Bobek
  2019-07-12 12:41   ` Alex Bennée
  1 sibling, 1 reply; 49+ messages in thread
From: Richard Henderson @ 2019-07-12  5:48 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/12/19 12:32 AM, Jan Bobek wrote:
> insnv allows emitting variable-length instructions in little-endian or
> big-endian byte order; it subsumes functionality of former insn16()
> and insn32() functions.
> 
> randint can reliably generate signed or unsigned integers of arbitrary
> width.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  risugen_common.pm | 55 +++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 48 insertions(+), 7 deletions(-)
> 
> diff --git a/risugen_common.pm b/risugen_common.pm
> index 71ee996..d63250a 100644
> --- a/risugen_common.pm
> +++ b/risugen_common.pm
> @@ -23,8 +23,9 @@ BEGIN {
>      require Exporter;
>  
>      our @ISA = qw(Exporter);
> -    our @EXPORT = qw(open_bin close_bin set_endian insn32 insn16 $bytecount
> -                   progress_start progress_update progress_end
> +    our @EXPORT = qw(open_bin close_bin set_endian insn32 insn16
> +                   $bytecount insnv randint progress_start
> +                   progress_update progress_end
>                     eval_with_fields is_pow_of_2 sextract ctz
>                     dump_insn_details);
>  }
> @@ -37,7 +38,7 @@ my $bigendian = 0;
>  # (default is little endian, 0).
>  sub set_endian
>  {
> -    $bigendian = @_;
> +    ($bigendian) = @_;
>  }
>  
>  sub open_bin
> @@ -52,18 +53,58 @@ sub close_bin
>      close(BIN) or die "can't close output file: $!";
>  }
>  
> +sub insnv(%)
> +{
> +    my (%args) = @_;
> +
> +    # Default to big-endian order, so that the instruction bytes are
> +    # emitted in the same order as they are written in the
> +    # configuration file.
> +    $args{bigendian} = 1 unless defined $args{bigendian};
> +
> +    for (my $bitcur = 0; $bitcur < $args{width}; $bitcur += 8) {
> +        my $value = $args{value} >> ($args{bigendian}
> +                                     ? $args{width} - $bitcur - 8
> +                                     : $bitcur);
> +
> +        print BIN pack("C", $value & 0xff);
> +        $bytecount += 1;
> +    }

Looks like bytecount is no longer used?

Otherwise,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint Jan Bobek
  2019-07-12  5:48   ` Richard Henderson
@ 2019-07-12 12:41   ` Alex Bennée
  1 sibling, 0 replies; 49+ messages in thread
From: Alex Bennée @ 2019-07-12 12:41 UTC (permalink / raw)
  To: Jan Bobek; +Cc: Richard Henderson, qemu-devel


Jan Bobek <jan.bobek@gmail.com> writes:

> insnv allows emitting variable-length instructions in little-endian or
> big-endian byte order; it subsumes functionality of former insn16()
> and insn32() functions.
>
> randint can reliably generate signed or unsigned integers of arbitrary
> width.
>
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  risugen_common.pm | 55 +++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 48 insertions(+), 7 deletions(-)
>
> diff --git a/risugen_common.pm b/risugen_common.pm
> index 71ee996..d63250a 100644
> --- a/risugen_common.pm
> +++ b/risugen_common.pm
> @@ -23,8 +23,9 @@ BEGIN {
>      require Exporter;
>
>      our @ISA = qw(Exporter);
> -    our @EXPORT = qw(open_bin close_bin set_endian insn32 insn16 $bytecount
> -                   progress_start progress_update progress_end
> +    our @EXPORT = qw(open_bin close_bin set_endian insn32 insn16
> +                   $bytecount insnv randint progress_start
> +                   progress_update progress_end
>                     eval_with_fields is_pow_of_2 sextract ctz
>                     dump_insn_details);
>  }
> @@ -37,7 +38,7 @@ my $bigendian = 0;
>  # (default is little endian, 0).
>  sub set_endian
>  {
> -    $bigendian = @_;
> +    ($bigendian) = @_;
>  }
>
>  sub open_bin
> @@ -52,18 +53,58 @@ sub close_bin
>      close(BIN) or die "can't close output file: $!";
>  }
>
> +sub insnv(%)
> +{
> +    my (%args) = @_;
> +
> +    # Default to big-endian order, so that the instruction bytes are
> +    # emitted in the same order as they are written in the
> +    # configuration file.
> +    $args{bigendian} = 1 unless defined $args{bigendian};
> +
> +    for (my $bitcur = 0; $bitcur < $args{width}; $bitcur += 8) {
> +        my $value = $args{value} >> ($args{bigendian}
> +                                     ? $args{width} - $bitcur - 8
> +                                     : $bitcur);
> +
> +        print BIN pack("C", $value & 0xff);
> +        $bytecount += 1;
> +    }
> +}
> +
>  sub insn32($)
>  {
>      my ($insn) = @_;
> -    print BIN pack($bigendian ? "N" : "V", $insn);
> -    $bytecount += 4;
> +    insnv(value => $insn, width => 32, bigendian => $bigendian);
>  }
>
>  sub insn16($)
>  {
>      my ($insn) = @_;
> -    print BIN pack($bigendian ? "n" : "v", $insn);
> -    $bytecount += 2;
> +    insnv(value => $insn, width => 16, bigendian => $bigendian);
> +}
> +
> +sub randint
> +{
> +    my (%args) = @_;
> +    my $width = $args{width};
> +
> +    if ($width > 32) {
> +        # Generate at most 32 bits at once; Perl's rand() does not
> +        # behave well with ranges that are too large.
> +        my $lower = randint(%args, width => 32);
> +        my $upper = randint(%args, width => $args{width} - 32);
> +        # Use arithmetic rather than bitwise operators, since bitwise
> +        # ops turn signed integers into unsigned.
> +        return $upper * (1 << 32) + $lower;
> +    } elsif ($width > 0) {
> +        my $halfrange = 1 << ($width - 1);
> +        my $value = int(rand(2 * $halfrange));
> +        $value -= $halfrange if defined $args{signed} && $args{signed};
> +        return $value;
> +    } else {
> +        return 0;
> +    }
>  }
>
>  # Progress bar implementation


--
Alex Bennée


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images
  2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
                   ` (17 preceding siblings ...)
  2019-07-11 22:33 ` [Qemu-devel] [RISU PATCH v3 18/18] x86.risu: add AVX2 instructions Jan Bobek
@ 2019-07-12 13:34 ` Alex Bennée
  2019-07-14 23:08   ` Jan Bobek
  18 siblings, 1 reply; 49+ messages in thread
From: Alex Bennée @ 2019-07-12 13:34 UTC (permalink / raw)
  To: Jan Bobek; +Cc: Richard Henderson, qemu-devel


Jan Bobek <jan.bobek@gmail.com> writes:

> This is v3 of the patch series posted in [1] and [2]. Note that this
> is the first fully-featured patch series implementing all desired
> functionality, including (V)LDMXCSR and VSIB-based instructions like
> VGATHER*.
>
> While implementing the last bits required in order to support VGATHERx
> instructions, I ran into problems which required a larger redesign;
> namely, there are no more !emit blocks as their functionality is now
> implemented in regular !constraints blocks. Also, memory constraints
> are specified in !memory blocks, similarly to other architectures.
>
> I tested these changes on my machine; both master and slave modes work
> in both 32-bit and 64-bit modes.

Two things I've noticed:

  ./contrib/generate_all.sh -n 1 x86.risu testcases.x86

takes a very long time. I wonder if this is a consequence of constantly
needing to re-query the random number generator?

The other is:

  set -x RISU ./build/i686-linux-gnu/risu
  ./contrib/record_traces.sh testcases.x86/*.risu.bin

fails on the first trace when validating the playback. Might want to
check why that is.

>
> Cheers,
>  -Jan
>
> Changes since v2:
>   Too many to be listed individually; this patch series might be
>   better reviewed on its own.
>
> References:
>   1. https://lists.nongnu.org/archive/html/qemu-devel/2019-06/msg04123.html
>   2. https://lists.nongnu.org/archive/html/qemu-devel/2019-07/msg00001.html
>
> Jan Bobek (18):
>   risugen_common: add helper functions insnv, randint
>   risugen_common: split eval_with_fields into extract_fields and
>     eval_block
>   risugen_x86_asm: add module
>   risugen_x86_constraints: add module
>   risugen_x86_memory: add module
>   risugen_x86: add module
>   risugen: allow all byte-aligned instructions
>   risugen: add command-line flag --x86_64
>   risugen: add --xfeatures option for x86
>   x86.risu: add MMX instructions
>   x86.risu: add SSE instructions
>   x86.risu: add SSE2 instructions
>   x86.risu: add SSE3 instructions
>   x86.risu: add SSSE3 instructions
>   x86.risu: add SSE4.1 and SSE4.2 instructions
>   x86.risu: add AES and PCLMULQDQ instructions
>   x86.risu: add AVX instructions
>   x86.risu: add AVX2 instructions
>
>  risugen                    |   27 +-
>  risugen_arm.pm             |    6 +-
>  risugen_common.pm          |  117 +-
>  risugen_m68k.pm            |    3 +-
>  risugen_ppc64.pm           |    6 +-
>  risugen_x86.pm             |  518 +++++
>  risugen_x86_asm.pm         |  918 ++++++++
>  risugen_x86_constraints.pm |  154 ++
>  risugen_x86_memory.pm      |   87 +
>  x86.risu                   | 4499 ++++++++++++++++++++++++++++++++++++
>  10 files changed, 6293 insertions(+), 42 deletions(-)
>  create mode 100644 risugen_x86.pm
>  create mode 100644 risugen_x86_asm.pm
>  create mode 100644 risugen_x86_constraints.pm
>  create mode 100644 risugen_x86_memory.pm
>  create mode 100644 x86.risu


--
Alex Bennée


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 03/18] risugen_x86_asm: add module
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 03/18] risugen_x86_asm: add module Jan Bobek
@ 2019-07-12 14:11   ` Richard Henderson
  2019-07-14 22:04     ` Jan Bobek
  0 siblings, 1 reply; 49+ messages in thread
From: Richard Henderson @ 2019-07-12 14:11 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/12/19 12:32 AM, Jan Bobek wrote:
> The module risugen_x86_asm.pm exports named register constants and
> asm_insn_* family of functions, which greatly simplify emission of x86
> instructions.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  risugen_x86_asm.pm | 918 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 918 insertions(+)
>  create mode 100644 risugen_x86_asm.pm

Clever use of token lists to make sure all state is processed as expected.  Kudos!

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: add module
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: " Jan Bobek
@ 2019-07-12 14:24   ` Richard Henderson
  2019-07-14 22:39     ` Jan Bobek
  2019-07-21  1:54   ` Richard Henderson
  1 sibling, 1 reply; 49+ messages in thread
From: Richard Henderson @ 2019-07-12 14:24 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/12/19 12:32 AM, Jan Bobek wrote:
> +sub vex($%)
> +{
> +    my ($insn, %vex) = @_;
> +    my $regidw = $is_x86_64 ? 4 : 3;
> +
> +    # There is no point in randomizing other VEX fields, since
> +    # VEX.R/.X/.B are encoded automatically by risugen_x86_asm, and
> +    # VEX.M/.P are opcodes.
> +    $vex{l} = randint(width => 1) ? 256 : 128 unless defined $vex{l};

VEX.L is sort-of opcode-like as well.  It certainly differentiates AVX1 vs
AVX2, and so probably should be constrained somehow.  I can't think of what's
the best way to do that at the moment, since our existing --xstate=foo isn't right.

Perhaps just a FIXME comment for now?

> +sub modrm_($%)
> +{
> +    my ($insn, %args) = @_;
> +    my $regidw = $is_x86_64 ? 4 : 3;
> +
> +    my %modrm = ();
> +    if (defined $args{reg}) {
> +        # This makes the config file syntax a bit more accommodating
> +        # in cases where MODRM.REG is an opcode extension field.
> +        $modrm{reg} = $args{reg};
> +    } else {
> +        $modrm{reg} = randint(width => $regidw);
> +    }
> +
> +    # There is also a displacement-only form, but we don't know
> +    # absolute address of the memblock, so we cannot test it.

32-bit mode has displacement-only, aka absolute; 64-bit replaces that with
rip-relative.  But agreed that the first is impossible to test and the second
is difficult.
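
(Concretely -- purely to illustrate the encodings, not something risugen
needs to emit: "0F 28 05 <disp32>" is MOVAPS xmm0 with ModRM mod=00,
rm=101, i.e. a load from absolute address disp32 in 32-bit mode but from
[RIP + disp32] in 64-bit mode. The first needs the memblock's absolute
address, the second the distance from the instruction itself, so the
first can't be tested and the second is awkward.)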

> +sub modrm($%)
> +{
> +    my ($insn, %args) = @_;
> +    modrm_($insn, indexk => 'index', %args);
> +}

How are you avoiding %rsp as index?
I saw you die for that in the previous patch...


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint
  2019-07-12  5:48   ` Richard Henderson
@ 2019-07-14 21:55     ` Jan Bobek
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-14 21:55 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée


On 7/12/19 1:48 AM, Richard Henderson wrote:
> On 7/12/19 12:32 AM, Jan Bobek wrote:
>> insnv allows emitting variable-length instructions in little-endian or
>> big-endian byte order; it subsumes functionality of former insn16()
>> and insn32() functions.
>>
>> randint can reliably generate signed or unsigned integers of arbitrary
>> width.
>>
>> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
>> ---
>>  risugen_common.pm | 55 +++++++++++++++++++++++++++++++++++++++++------
>>  1 file changed, 48 insertions(+), 7 deletions(-)
>>
>> diff --git a/risugen_common.pm b/risugen_common.pm
>> index 71ee996..d63250a 100644
>> --- a/risugen_common.pm
>> +++ b/risugen_common.pm
>> @@ -23,8 +23,9 @@ BEGIN {
>>      require Exporter;
>>  
>>      our @ISA = qw(Exporter);
>> -    our @EXPORT = qw(open_bin close_bin set_endian insn32 insn16 $bytecount
>> -                   progress_start progress_update progress_end
>> +    our @EXPORT = qw(open_bin close_bin set_endian insn32 insn16
>> +                   $bytecount insnv randint progress_start
>> +                   progress_update progress_end
>>                     eval_with_fields is_pow_of_2 sextract ctz
>>                     dump_insn_details);
>>  }
>> @@ -37,7 +38,7 @@ my $bigendian = 0;
>>  # (default is little endian, 0).
>>  sub set_endian
>>  {
>> -    $bigendian = @_;
>> +    ($bigendian) = @_;
>>  }
>>  
>>  sub open_bin
>> @@ -52,18 +53,58 @@ sub close_bin
>>      close(BIN) or die "can't close output file: $!";
>>  }
>>  
>> +sub insnv(%)
>> +{
>> +    my (%args) = @_;
>> +
>> +    # Default to big-endian order, so that the instruction bytes are
>> +    # emitted in the same order as they are written in the
>> +    # configuration file.
>> +    $args{bigendian} = 1 unless defined $args{bigendian};
>> +
>> +    for (my $bitcur = 0; $bitcur < $args{width}; $bitcur += 8) {
>> +        my $value = $args{value} >> ($args{bigendian}
>> +                                     ? $args{width} - $bitcur - 8
>> +                                     : $bitcur);
>> +
>> +        print BIN pack("C", $value & 0xff);
>> +        $bytecount += 1;
>> +    }
> 
> Looks like bytecount is no longer used?

$bytecount is an exported variable; a quick git grep shows that it is
still being used in risugen_arm.pm (sub thumb_align4).
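
For reference, the use there is just an alignment check before emitting
a 32-bit Thumb instruction -- roughly this shape (quoting from memory,
not verbatim from risugen_arm.pm):

  sub thumb_align4()
  {
      # pad with a Thumb NOP if we're not on a 4-byte boundary
      insn16(0xbf00) if ($bytecount & 3);
  }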

> Otherwise,
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> 
> 
> r~
> 



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 03/18] risugen_x86_asm: add module
  2019-07-12 14:11   ` Richard Henderson
@ 2019-07-14 22:04     ` Jan Bobek
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-14 22:04 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée



On 7/12/19 10:11 AM, Richard Henderson wrote:
> On 7/12/19 12:32 AM, Jan Bobek wrote:
>> The module risugen_x86_asm.pm exports named register constants and
>> asm_insn_* family of functions, which greatly simplify emission of x86
>> instructions.
>>
>> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
>> ---
>>  risugen_x86_asm.pm | 918 +++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 918 insertions(+)
>>  create mode 100644 risugen_x86_asm.pm
> 
> Clever use of token lists to make sure all state is processed as expected.  Kudos!

I was curious what you'd think of this part; thanks a lot, it's much
appreciated!

-Jan

> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> 
> 
> r~
> 



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: add module
  2019-07-12 14:24   ` Richard Henderson
@ 2019-07-14 22:39     ` Jan Bobek
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-14 22:39 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée



On 7/12/19 10:24 AM, Richard Henderson wrote:
> On 7/12/19 12:32 AM, Jan Bobek wrote:
>> +sub vex($%)
>> +{
>> +    my ($insn, %vex) = @_;
>> +    my $regidw = $is_x86_64 ? 4 : 3;
>> +
>> +    # There is no point in randomizing other VEX fields, since
>> +    # VEX.R/.X/.B are encoded automatically by risugen_x86_asm, and
>> +    # VEX.M/.P are opcodes.
>> +    $vex{l} = randint(width => 1) ? 256 : 128 unless defined $vex{l};
> 
> VEX.L is sort-of opcode-like as well.  It certainly differentiates AVX1 vs
> AVX2, and so probably should be constrained somehow.  I can't think of what's
> the best way to do that at the moment, since our existing --xstate=foo isn't right.
> 
> Perhaps just a FIXME comment for now?

So, the instructions that use VEX.L specify it in the !constraints
block in the config file. Originally, I thought some instructions are
supposed to ignore it (denoted by LIG in the Intel manual -- it's the
scalar instructions like ADDSS), so it might be worth randomizing.
However, when I later read the manual pages of some of these
instructions, they said the encoding is supposed to use VEX.L=0
anyway. I didn't check every single one of them, but right now they
are all encoded with VEX.L=0, so I suppose this line can be removed
and we can rely on the caller (the !constraints block) to always
specify it.
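
To illustrate with the config-file syntax (a made-up entry for the sake
of the example, not necessarily what x86.risu will end up containing),
a scalar LIG instruction such as VADDSS would simply pin VEX.L through
the l => 128 argument:

  # VEX.LIG.F3.0F.WIG 58 /r: VADDSS xmm1, xmm2, xmm3/m32
  VADDSS AVX 01011000 \
    !constraints { vex($_, m => 0x0F, l => 128, p => 0xF3); modrm($_); 1 } \
    !memory { load(size => 4); }

so vex() never has to pick l at random for it.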

>> +sub modrm_($%)
>> +{
>> +    my ($insn, %args) = @_;
>> +    my $regidw = $is_x86_64 ? 4 : 3;
>> +
>> +    my %modrm = ();
>> +    if (defined $args{reg}) {
>> +        # This makes the config file syntax a bit more accommodating
>> +        # in cases where MODRM.REG is an opcode extension field.
>> +        $modrm{reg} = $args{reg};
>> +    } else {
>> +        $modrm{reg} = randint(width => $regidw);
>> +    }
>> +
>> +    # There is also a displacement-only form, but we don't know
>> +    # absolute address of the memblock, so we cannot test it.
> 
> 32-bit mode has displacement-only, aka absolute; 64-bit replaces that with
> rip-relative.  But agreed that the first is impossible to test and the second
> is difficult.
> 
>> +sub modrm($%)
>> +{
>> +    my ($insn, %args) = @_;
>> +    modrm_($insn, indexk => 'index', %args);
>> +}
> 
> How are you avoiding %rsp as index?
> I saw you die for that in the previous patch...

See write_mem_getoffset in risugen_x86.pm. I felt there's a better
place for it there, since that's when we actually need to write to it,
so the problem is more exposed.
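
Roughly speaking, that's the point where the base/index registers get
loaded with the memblock offset, so it can simply refuse an %rsp index
up front -- something of this shape (illustrative only, not a verbatim
quote; $index stands for whatever holds the chosen index register, and
REG_RSP is the constant from risugen_x86_asm):

  die "cannot use %rsp as index" if defined $index && $index == REG_RSP;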

-Jan

> 
> r~
> 



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images
  2019-07-12 13:34 ` [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Alex Bennée
@ 2019-07-14 23:08   ` Jan Bobek
  2019-07-15 10:14     ` Alex Bennée
  0 siblings, 1 reply; 49+ messages in thread
From: Jan Bobek @ 2019-07-14 23:08 UTC (permalink / raw)
  To: Alex Bennée; +Cc: Richard Henderson, qemu-devel



On 7/12/19 9:34 AM, Alex Bennée wrote:
> 
> Jan Bobek <jan.bobek@gmail.com> writes:
> 
>> This is v3 of the patch series posted in [1] and [2]. Note that this
>> is the first fully-featured patch series implementing all desired
>> functionality, including (V)LDMXCSR and VSIB-based instructions like
>> VGATHER*.
>>
>> While implementing the last bits required in order to support VGATHERx
>> instructions, I ran into problems which required a larger redesign;
>> namely, there are no more !emit blocks as their functionality is now
>> implemented in regular !constraints blocks. Also, memory constraints
>> are specified in !memory blocks, similarly to other architectures.
>>
>> I tested these changes on my machine; both master and slave modes work
>> in both 32-bit and 64-bit modes.
> 
> Two things I've noticed:
> 
>   ./contrib/generate_all.sh -n 1 x86.risu testcases.x86
> 
> takes a very long time. I wonder if this is a consequence of constantly
> needing to re-query the random number generator?

I believe so. While other architectures can be as cheap as a single rand()
call per instruction, x86 does more like 5-10.

Even worse, there are some instructions which cannot be generated in
32-bit mode at all (those requiring the REX.W prefix, e.g. MMX MOVQ).
When I let the script run for a little while, risugen would get stuck
in an infinite loop, because the only pattern it was allowed to choose
from wasn't valid in 32-bit mode...
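
(To spell out the loop: the generator just keeps re-rolling -- pick a
pattern, run its !constraints block, and if that returns false, throw
the attempt away and try again. A sketch of the idea with made-up
helper names, not the actual risugen code:)

  while (1) {
      my $insn = pick_random_pattern();       # hypothetical helper
      last if eval_constraints_block($insn);  # hypothetical helper
      # with only a single 64-bit-only pattern available, this can
      # never succeed in 32-bit mode, hence the hang
  }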

> The other is:
> 
>   set -x RISU ./build/i686-linux-gnu/risu
>   ./contrib/record_traces.sh testcases.x86/*.risu.bin
> 
> fails on the first trace when validating the playback. Might want to
> check why that is.

The SIMD registers aren't getting initialized; both master and
apprentice need an --xfeatures=XXX parameter for that. Right now the
default is 'none'; unless the instructions are filtered, you'd need
--xfeatures=avx (or --xfeatures=sse, and that only works because on my
laptop, the upper part of ymm registers seems to be always zeroed when
risu starts).

>>
>> Cheers,
>>  -Jan
>>
>> Changes since v2:
>>   Too many to be listed individually; this patch series might be
>>   better reviewed on its own.
>>
>> References:
>>   1. https://lists.nongnu.org/archive/html/qemu-devel/2019-06/msg04123.html
>>   2. https://lists.nongnu.org/archive/html/qemu-devel/2019-07/msg00001.html
>>
>> Jan Bobek (18):
>>   risugen_common: add helper functions insnv, randint
>>   risugen_common: split eval_with_fields into extract_fields and
>>     eval_block
>>   risugen_x86_asm: add module
>>   risugen_x86_constraints: add module
>>   risugen_x86_memory: add module
>>   risugen_x86: add module
>>   risugen: allow all byte-aligned instructions
>>   risugen: add command-line flag --x86_64
>>   risugen: add --xfeatures option for x86
>>   x86.risu: add MMX instructions
>>   x86.risu: add SSE instructions
>>   x86.risu: add SSE2 instructions
>>   x86.risu: add SSE3 instructions
>>   x86.risu: add SSSE3 instructions
>>   x86.risu: add SSE4.1 and SSE4.2 instructions
>>   x86.risu: add AES and PCLMULQDQ instructions
>>   x86.risu: add AVX instructions
>>   x86.risu: add AVX2 instructions
>>
>>  risugen                    |   27 +-
>>  risugen_arm.pm             |    6 +-
>>  risugen_common.pm          |  117 +-
>>  risugen_m68k.pm            |    3 +-
>>  risugen_ppc64.pm           |    6 +-
>>  risugen_x86.pm             |  518 +++++
>>  risugen_x86_asm.pm         |  918 ++++++++
>>  risugen_x86_constraints.pm |  154 ++
>>  risugen_x86_memory.pm      |   87 +
>>  x86.risu                   | 4499 ++++++++++++++++++++++++++++++++++++
>>  10 files changed, 6293 insertions(+), 42 deletions(-)
>>  create mode 100644 risugen_x86.pm
>>  create mode 100644 risugen_x86_asm.pm
>>  create mode 100644 risugen_x86_constraints.pm
>>  create mode 100644 risugen_x86_memory.pm
>>  create mode 100644 x86.risu
> 
> 
> --
> Alex Bennée
> 



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images
  2019-07-14 23:08   ` Jan Bobek
@ 2019-07-15 10:14     ` Alex Bennée
  0 siblings, 0 replies; 49+ messages in thread
From: Alex Bennée @ 2019-07-15 10:14 UTC (permalink / raw)
  To: Jan Bobek; +Cc: Richard Henderson, qemu-devel


Jan Bobek <jan.bobek@gmail.com> writes:

> On 7/12/19 9:34 AM, Alex Bennée wrote:
>>
>> Jan Bobek <jan.bobek@gmail.com> writes:
>>
>>> This is v3 of the patch series posted in [1] and [2]. Note that this
>>> is the first fully-featured patch series implementing all desired
>>> functionality, including (V)LDMXCSR and VSIB-based instructions like
>>> VGATHER*.
>>>
>>> While implementing the last bits required in order to support VGATHERx
>>> instructions, I ran into problems which required a larger redesign;
>>> namely, there are no more !emit blocks as their functionality is now
>>> implemented in regular !constraints blocks. Also, memory constraints
>>> are specified in !memory blocks, similarly to other architectures.
>>>
>>> I tested these changes on my machine; both master and slave modes work
>>> in both 32-bit and 64-bit modes.
>>
>> Two things I've noticed:
>>
>>   ./contrib/generate_all.sh -n 1 x86.risu testcases.x86
>>
>> takes a very long time. I wonder if this is a consequence of constantly
>> needing to re-query the random number generator?
>
> I believe so. While other architectures can be as cheap as a single rand()
> call per instruction, x86 does more like 5-10.

OK

> Even worse, there are some instructions which cannot be generated in
> 32-bit mode (those requiring REX.W prefix, e.g. MMX MOVQ). When I let
> the script run for a little bit, risugen would get stuck in an
> infinite loop, because it could only choose from a single instruction
> which wasn't valid for 32-bit....

The first instruction I see hang is:

  Running: /home/alex/lsrc/tests/risu.git/risugen --xfeatures avx  --pattern CVTSD2SI_64 x86.risu testcases.x86/insn_CVTSD2SI_64__INC.risu.bin
  Generating code using patterns: CVTSD2SI_64 SSE2...
  [                                                                            ]

I wonder if this means we should split x86.risu by mode, or find some
other way of filtering out patterns that are invalid for a mode?

We do have the concept of classes; see the @ annotations in
aarch64.risu. I guess the generate_all script doesn't handle that
nicely yet, though. For now I lumped them all together with:

  ./risugen --pattern "MMX" --xfeatures=sse x86.risu all_mmx.risu.bin
  ./risugen --pattern "SSE" --xfeatures=sse x86.risu all_sse.risu.bin
  ./risugen --pattern "SSE2" --xfeatures=sse x86.risu all_sse2.risu.bin
  ./risugen --pattern "SSE3" --xfeatures=sse x86.risu all_sse3.risu.bin
  ./risugen --pattern "AVX" --xfeatures=avx x86.risu all_avx.risu.bin
  ./risugen --pattern "AVX2" --xfeatures=avx x86.risu all_avx2.risu.bin

>
>> The other is:
>>
>>   set -x RISU ./build/i686-linux-gnu/risu
>>   ./contrib/record_traces.sh testcases.x86/*.risu.bin
>>
>> fails on the first trace when validating the playback. Might want to
>> check why that is.
>
> The SIMD registers aren't getting initialized; both master and
> apprentice need an --xfeatures=XXX parameter for that. Right now the
> default is 'none'; unless the instructions are filtered, you'd need
> --xfeatures=avx (or --xfeatures=sse, and that only works because on my
> laptop, the upper part of ymm registers seems to be always zeroed when
> risu starts).

Ahh OK, I did a lot better with:

  ./contrib/generate_all.sh -n 1 x86.risu testcases.x86 -- --xfeatures avx
  set -x RISU ./build/i686-linux-gnu/risu --xfeatures=avx
  ./contrib/record_traces.sh testcases.x86-avx/*.risu.bin

There are enough failures when you run those against QEMU that we
don't need to worry too much about coverage for now ;-)

>>>
>>> Cheers,
>>>  -Jan
>>>
>>> Changes since v2:
>>>   Too many to be listed individually; this patch series might be
>>>   better reviewed on its own.
>>>
>>> References:
>>>   1. https://lists.nongnu.org/archive/html/qemu-devel/2019-06/msg04123.html
>>>   2. https://lists.nongnu.org/archive/html/qemu-devel/2019-07/msg00001.html
>>>
>>> Jan Bobek (18):
>>>   risugen_common: add helper functions insnv, randint
>>>   risugen_common: split eval_with_fields into extract_fields and
>>>     eval_block
>>>   risugen_x86_asm: add module
>>>   risugen_x86_constraints: add module
>>>   risugen_x86_memory: add module
>>>   risugen_x86: add module
>>>   risugen: allow all byte-aligned instructions
>>>   risugen: add command-line flag --x86_64
>>>   risugen: add --xfeatures option for x86
>>>   x86.risu: add MMX instructions
>>>   x86.risu: add SSE instructions
>>>   x86.risu: add SSE2 instructions
>>>   x86.risu: add SSE3 instructions
>>>   x86.risu: add SSSE3 instructions
>>>   x86.risu: add SSE4.1 and SSE4.2 instructions
>>>   x86.risu: add AES and PCLMULQDQ instructions
>>>   x86.risu: add AVX instructions
>>>   x86.risu: add AVX2 instructions
>>>
>>>  risugen                    |   27 +-
>>>  risugen_arm.pm             |    6 +-
>>>  risugen_common.pm          |  117 +-
>>>  risugen_m68k.pm            |    3 +-
>>>  risugen_ppc64.pm           |    6 +-
>>>  risugen_x86.pm             |  518 +++++
>>>  risugen_x86_asm.pm         |  918 ++++++++
>>>  risugen_x86_constraints.pm |  154 ++
>>>  risugen_x86_memory.pm      |   87 +
>>>  x86.risu                   | 4499 ++++++++++++++++++++++++++++++++++++
>>>  10 files changed, 6293 insertions(+), 42 deletions(-)
>>>  create mode 100644 risugen_x86.pm
>>>  create mode 100644 risugen_x86_asm.pm
>>>  create mode 100644 risugen_x86_constraints.pm
>>>  create mode 100644 risugen_x86_memory.pm
>>>  create mode 100644 x86.risu
>>
>>
>> --
>> Alex Bennée
>>


--
Alex Bennée


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 08/18] risugen: add command-line flag --x86_64
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 08/18] risugen: add command-line flag --x86_64 Jan Bobek
@ 2019-07-17 17:00   ` Richard Henderson
  0 siblings, 0 replies; 49+ messages in thread
From: Richard Henderson @ 2019-07-17 17:00 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> This flag instructs the x86 backend to emit 64-bit (rather than
> 32-bit) code.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  risugen | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 09/18] risugen: add --xfeatures option for x86
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 09/18] risugen: add --xfeatures option for x86 Jan Bobek
@ 2019-07-17 17:01   ` Richard Henderson
  0 siblings, 0 replies; 49+ messages in thread
From: Richard Henderson @ 2019-07-17 17:01 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> The --xfeatures option is modelled after identically-named option to
> RISU itself; it allows the user to specify which vector registers
> should be initialized, so that the test image doesn't try to access
> registers which may not be present at runtime. Note that it is still
> the user's responsibility to filter out the test instructions using
> these registers.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  risugen | 13 +++++++++++++
>  1 file changed, 13 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~



^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 10/18] x86.risu: add MMX instructions
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 10/18] x86.risu: add MMX instructions Jan Bobek
@ 2019-07-20  4:30   ` Richard Henderson
  0 siblings, 0 replies; 49+ messages in thread
From: Richard Henderson @ 2019-07-20  4:30 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> Add an x86 configuration file with all MMX instructions.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  x86.risu | 321 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 321 insertions(+)
>  create mode 100644 x86.risu

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 11/18] x86.risu: add SSE instructions
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 11/18] x86.risu: add SSE instructions Jan Bobek
@ 2019-07-20 17:50   ` Richard Henderson
  2019-07-22 13:57     ` Jan Bobek
  0 siblings, 1 reply; 49+ messages in thread
From: Richard Henderson @ 2019-07-20 17:50 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> +# NP 0F F7 /r: MASKMOVQ mm1, mm2
> +MASKMOVQ SSE 00001111 11110111 \
> +  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} } \
> +  !memory { load(size => 8, base => REG_RDI, rollback => 1); }

This one is a store.
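
Presumably the fix is just flipping the !memory block, i.e. something
like (untested, operands otherwise unchanged):

  !memory { store(size => 8, base => REG_RDI, rollback => 1); }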

Otherwise,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 12/18] x86.risu: add SSE2 instructions
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 12/18] x86.risu: add SSE2 instructions Jan Bobek
@ 2019-07-20 21:19   ` Richard Henderson
  2019-07-22 14:12     ` Jan Bobek
  0 siblings, 1 reply; 49+ messages in thread
From: Richard Henderson @ 2019-07-20 21:19 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> +# F3 0F 2A /r: CVTSI2SS xmm1,r/m32
> +CVTSI2SS SSE2 00001111 00101010 \
> +  !constraints { rep($_); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
> +  !memory { load(size => 4); }
> +
> +# F3 REX.W 0F 2A /r: CVTSI2SS xmm1,r/m64
> +CVTSI2SS_64 SSE2 00001111 00101010 \
> +  !constraints { rep($_); rex($_, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
> +  !memory { load(size => 8); }

Best I can tell, these are SSE1.  Likewise CVTTSI2SS.
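
I.e. presumably just a matter of moving them into the SSE group, e.g.
for the 32-bit form (untested, constraints unchanged):

  # F3 0F 2A /r: CVTSI2SS xmm1,r/m32
  CVTSI2SS SSE 00001111 00101010 \
    !constraints { rep($_); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
    !memory { load(size => 4); }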

Otherwise,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 13/18] x86.risu: add SSE3 instructions
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 13/18] x86.risu: add SSE3 instructions Jan Bobek
@ 2019-07-20 21:27   ` Richard Henderson
  0 siblings, 0 replies; 49+ messages in thread
From: Richard Henderson @ 2019-07-20 21:27 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> Add SSE3 instructions to the x86 configuration file.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  x86.risu | 50 ++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 14/18] x86.risu: add SSSE3 instructions
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 14/18] x86.risu: add SSSE3 instructions Jan Bobek
@ 2019-07-20 21:52   ` Richard Henderson
  0 siblings, 0 replies; 49+ messages in thread
From: Richard Henderson @ 2019-07-20 21:52 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> Add SSSE3 instructions to the x86 configuration file.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  x86.risu | 160 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 160 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 15/18] x86.risu: add SSE4.1 and SSE4.2 instructions
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 15/18] x86.risu: add SSE4.1 and SSE4.2 instructions Jan Bobek
@ 2019-07-20 22:28   ` Richard Henderson
  0 siblings, 0 replies; 49+ messages in thread
From: Richard Henderson @ 2019-07-20 22:28 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> Add SSE4.1 and SSE4.2 instructions to the x86 configuration file.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  x86.risu | 270 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 270 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 16/18] x86.risu: add AES and PCLMULQDQ instructions
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 16/18] x86.risu: add AES and PCLMULQDQ instructions Jan Bobek
@ 2019-07-20 22:35   ` Richard Henderson
  0 siblings, 0 replies; 49+ messages in thread
From: Richard Henderson @ 2019-07-20 22:35 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> Add AES-NI and PCLMULQDQ instructions to the x86 configuration file.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  x86.risu | 45 +++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 45 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 17/18] x86.risu: add AVX instructions
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 17/18] x86.risu: add AVX instructions Jan Bobek
@ 2019-07-21  0:04   ` Richard Henderson
  2019-07-22 14:23     ` Jan Bobek
  0 siblings, 1 reply; 49+ messages in thread
From: Richard Henderson @ 2019-07-21  0:04 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> +# VEX.LIG.F3.0F.WIG 10 /r: VMOVSS xmm1, xmm2, xmm3
> +# VEX.LIG.F3.0F.WIG 10 /r: VMOVSS xmm1, m32
> +# VEX.LIG.F3.0F.WIG 11 /r: VMOVSS xmm1, xmm2, xmm3
> +# VEX.LIG.F3.0F.WIG 11 /r: VMOVSS m32, xmm1
> +VMOVSS AVX 0001000 d \
> +  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); $_->{vex}{v} = 0 unless defined $_->{modrm}{reg2}; 1 } \
> +  !memory { $d ? store(size => 4) : load(size => 4); }

Why the l => 0?  LIG does mean VEX.L ignored, so why not let it get randomized
as you do for WIG?

Not wrong as is... this is the documented value for scalar operands.  But there
is a different document markup, LZ, for required (E)VEX.L == 0.
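
E.g. drop the explicit l and let it be picked at random, along the
lines of (untested sketch; this assumes vex() randomizes L when it is
not specified, the same way it already handles W):

  VMOVSS AVX 0001000 d \
    !constraints { vex($_, m => 0x0F, p => 0xF3); modrm($_); $_->{vex}{v} = 0 unless defined $_->{modrm}{reg2}; 1 } \
    !memory { $d ? store(size => 4) : load(size => 4); }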

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 18/18] x86.risu: add AVX2 instructions
  2019-07-11 22:33 ` [Qemu-devel] [RISU PATCH v3 18/18] x86.risu: add AVX2 instructions Jan Bobek
@ 2019-07-21  0:46   ` Richard Henderson
  2019-07-22 14:41     ` Jan Bobek
  0 siblings, 1 reply; 49+ messages in thread
From: Richard Henderson @ 2019-07-21  0:46 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:33 PM, Jan Bobek wrote:
> +# VEX.256.0F.WIG 28 /r: VMOVAPS ymm1, ymm2/m256
> +# VEX.256.0F.WIG 29 /r: VMOVAPS ymm2/m256, ymm1
> +VMOVAPS AVX2 0010100 d \
> +  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); 1 } \
> +  !memory { $d ? store(size => 32, align => 32) : load(size => 32, align => 32); }

I believe all of the floating-point 256-bit operations are actually AVX1.
Which, I see, would annoyingly require a renaming, since that would put two
VMOVAPS insns into the same group.

I wonder if it's worth calling the two groups AVX128 and AVX256 and ignoring
the actual cpuid to which the insn is assigned?  Whichever way, they're still
tied to the same --xstate value to indicate ymmh.

Or could we fold the two insns together:

VMOVAPS AVX 0010100 d \
!constraints { vex($_, m => 0x0F, v => 0); modrm($_); 1 } \
!memory { my $len = $_->{vex}{l} / 8; \
          $d ? store(size => $len, align => $len) \
             : load(size => $len, align => $len); }


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: add module
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: " Jan Bobek
  2019-07-12 14:24   ` Richard Henderson
@ 2019-07-21  1:54   ` Richard Henderson
  2019-07-22 13:41     ` Jan Bobek
  1 sibling, 1 reply; 49+ messages in thread
From: Richard Henderson @ 2019-07-21  1:54 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> +sub data16($%)
> +{
> +    my ($insn, %data16) = @_;
> +    $insn->{data16} = \%data16;
> +}
> +
> +sub rep($%)
> +{
> +    my ($insn, %rep) = @_;
> +    $insn->{rep} = \%rep;
> +}
> +
> +sub repne($%)
> +{
> +    my ($insn, %repne) = @_;
> +    $insn->{repne} = \%repne;
> +}

What do you think of replacing these with p($_, 0x66), etc?

It kinda matches up with the "p => 0x66" within vex(), and it is easier for the
eye to match up with the comments before each pattern.
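
Something like this, perhaps (just a sketch -- the backend would then
have to consume a single $insn->{prefix} key instead of the three
separate ones):

  sub p($$)
  {
      my ($insn, $prefix) = @_;
      $insn->{prefix} = $prefix;    # 0x66, 0xF3 or 0xF2
  }

so the constraints read p($_, 0x66), p($_, 0xF3) and p($_, 0xF2) and
line up with the opcode comments.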

> +sub modrm($%)
> +{
> +    my ($insn, %args) = @_;
> +    modrm_($insn, indexk => 'index', %args);
> +}
> +
> +sub modrm_vsib($%)
> +{
> +    my ($insn, %args) = @_;
> +    modrm_($insn, indexk => 'vindex', %args);
> +}

I'm thinking of adding a few more exports for very common patterns:

modrm_reg    -- force use of register.
modrm_mem    -- force use of memory.
modrm_mmx_1  -- crop reg1 to 0-7 for mm register.
modrm_mmx_2  -- crop reg2 to 0-7 if in use.
modrm_mmx_12 -- crop both reg1 and reg2.

I think these would significantly shorten some of the !constraints.
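
For instance (rough sketch, built on top of the existing modrm()
helper):

  sub modrm_mmx_12($%)
  {
      my ($insn, %args) = @_;
      modrm($insn, %args);
      $insn->{modrm}{reg} &= 0b111;
      $insn->{modrm}{reg2} &= 0b111 if defined $insn->{modrm}{reg2};
  }

which would collapse the usual reg/reg2 cropping down to one call.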

I'm willing to do these changes myself; for the GSoC project I'd rather you
continue to the next phase instead of iterating on risugen further.


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 05/18] risugen_x86_memory: add module
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 05/18] risugen_x86_memory: " Jan Bobek
@ 2019-07-21  1:58   ` Richard Henderson
  2019-07-22 13:53     ` Jan Bobek
  0 siblings, 1 reply; 49+ messages in thread
From: Richard Henderson @ 2019-07-21  1:58 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> +sub load(%)
> +{
> +    my (%args) = @_;
> +
> +    @memory_opts{keys %args} = values %args;
> +    $memory_opts{is_write}   = 0;
> +}
> +
> +sub store(%)
> +{
> +    my (%args) = @_;
> +
> +    @memory_opts{keys %args} = values %args;
> +    $memory_opts{is_write}   = 1;
> +}

I was thinking maybe we should add a mem() that allows a "store => $d", which
would simplify the "$d ? store(size => x) : load(size => x)" pattern.
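
A minimal sketch, reusing the existing %memory_opts plumbing:

  sub mem(%)
  {
      my (%args) = @_;
      my $is_write = delete($args{store}) ? 1 : 0;

      @memory_opts{keys %args} = values %args;
      $memory_opts{is_write}   = $is_write;
  }

so a pattern could just say mem(store => $d, size => 4).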

Anyway, that's incremental improvement.

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 06/18] risugen_x86: add module
  2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 06/18] risugen_x86: " Jan Bobek
@ 2019-07-21  2:02   ` Richard Henderson
  0 siblings, 0 replies; 49+ messages in thread
From: Richard Henderson @ 2019-07-21  2:02 UTC (permalink / raw)
  To: Jan Bobek, qemu-devel; +Cc: Alex Bennée

On 7/11/19 3:32 PM, Jan Bobek wrote:
> risugen_x86.pm is the main backend module for Intel i386 and x86_64
> architectures; it orchestrates generation of the test code with
> support from the rest of risugen_x86_* modules.
> 
> Signed-off-by: Jan Bobek <jan.bobek@gmail.com>
> ---
>  risugen_x86.pm | 518 +++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 518 insertions(+)
>  create mode 100644 risugen_x86.pm

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>


r~


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: add module
  2019-07-21  1:54   ` Richard Henderson
@ 2019-07-22 13:41     ` Jan Bobek
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-22 13:41 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée


[-- Attachment #1.1: Type: text/plain, Size: 1607 bytes --]

On 7/20/19 9:54 PM, Richard Henderson wrote:
> On 7/11/19 3:32 PM, Jan Bobek wrote:
>> +sub data16($%)
>> +{
>> +    my ($insn, %data16) = @_;
>> +    $insn->{data16} = \%data16;
>> +}
>> +
>> +sub rep($%)
>> +{
>> +    my ($insn, %rep) = @_;
>> +    $insn->{rep} = \%rep;
>> +}
>> +
>> +sub repne($%)
>> +{
>> +    my ($insn, %repne) = @_;
>> +    $insn->{repne} = \%repne;
>> +}
> 
> What do you think of replacing these with p($_, 0x66), etc?
> 
> It kinda matches up with the "p => 0x66" within vex(), and it is easier for the
> eye to match up with the comments before each pattern.

Good idea!

>> +sub modrm($%)
>> +{
>> +    my ($insn, %args) = @_;
>> +    modrm_($insn, indexk => 'index', %args);
>> +}
>> +
>> +sub modrm_vsib($%)
>> +{
>> +    my ($insn, %args) = @_;
>> +    modrm_($insn, indexk => 'vindex', %args);
>> +}
> 
> I'm thinking of adding a few more exports for very common patterns:
> 
> modrm_reg    -- force use of register.
> modrm_mem    -- force use of memory.
> modrm_mmx_1  -- crop reg1 to 0-7 for mm register.
> modrm_mmx_2  -- crop reg2 to 0-7 if in use.
> modrm_mmx_12 -- crop both reg1 and reg2.
> 
> I think these would significantly shorten some of the !constraints.

I agree. I thought of something similar when I was preparing the v3
series; I didn't include it only because it would have further delayed
getting the v3 out.

> I'm willing to do these changes myself; for the GSoC project I'd rather you
> continue to the next phase instead of iterating on risugen further.

Of course, and thank you!

-Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 05/18] risugen_x86_memory: add module
  2019-07-21  1:58   ` Richard Henderson
@ 2019-07-22 13:53     ` Jan Bobek
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-22 13:53 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée


[-- Attachment #1.1: Type: text/plain, Size: 1146 bytes --]

On 7/20/19 9:58 PM, Richard Henderson wrote:
> On 7/11/19 3:32 PM, Jan Bobek wrote:
>> +sub load(%)
>> +{
>> +    my (%args) = @_;
>> +
>> +    @memory_opts{keys %args} = values %args;
>> +    $memory_opts{is_write}   = 0;
>> +}
>> +
>> +sub store(%)
>> +{
>> +    my (%args) = @_;
>> +
>> +    @memory_opts{keys %args} = values %args;
>> +    $memory_opts{is_write}   = 1;
>> +}
> 
> I was thinking maybe we should add a mem() that allows a "store => $d", which
> would simplify the "$d ? store(size => x) : load(size => x)" pattern.
> 
> Anyway, that's incremental improvement.

It's possible. I suppose the reason why I did it like I did was that I
wanted the config file to be more descriptive: if you specify a
constraint like mem(store => 0, ...), it might not be immediately
clear that it actually means a load. It's not an issue when you know
the code, but if somebody were just browsing the x86.risu without
prior knowledge of anything, they might find it more cryptic.

Anyway, so much for my reasoning; I agree that it would make the
conditions simpler, so feel free to change it if you like.

-Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 11/18] x86.risu: add SSE instructions
  2019-07-20 17:50   ` Richard Henderson
@ 2019-07-22 13:57     ` Jan Bobek
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-22 13:57 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée


[-- Attachment #1.1: Type: text/plain, Size: 543 bytes --]

On 7/20/19 1:50 PM, Richard Henderson wrote:
> On 7/11/19 3:32 PM, Jan Bobek wrote:
>> +# NP 0F F7 /r: MASKMOVQ mm1, mm2
>> +MASKMOVQ SSE 00001111 11110111 \
>> +  !constraints { modrm($_); $_->{modrm}{reg} &= 0b111; $_->{modrm}{reg2} &= 0b111 if defined $_->{modrm}{reg2}; defined $_->{modrm}{reg2} } \
>> +  !memory { load(size => 8, base => REG_RDI, rollback => 1); }
> 
> This one is a store.

Yes, indeed. I was pretty sure there must be some mistakes left among
the 900+ instructions. Three cheers for code reviews!

-Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 12/18] x86.risu: add SSE2 instructions
  2019-07-20 21:19   ` Richard Henderson
@ 2019-07-22 14:12     ` Jan Bobek
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-22 14:12 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée


[-- Attachment #1.1: Type: text/plain, Size: 766 bytes --]

On 7/20/19 5:19 PM, Richard Henderson wrote:
> On 7/11/19 3:32 PM, Jan Bobek wrote:
>> +# F3 0F 2A /r: CVTSI2SS xmm1,r/m32
>> +CVTSI2SS SSE2 00001111 00101010 \
>> +  !constraints { rep($_); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
>> +  !memory { load(size => 4); }
>> +
>> +# F3 REX.W 0F 2A /r: CVTSI2SS xmm1,r/m64
>> +CVTSI2SS_64 SSE2 00001111 00101010 \
>> +  !constraints { rep($_); rex($_, w => 1); modrm($_); !(defined $_->{modrm}{reg2} && $_->{modrm}{reg2} == REG_RSP) } \
>> +  !memory { load(size => 8); }
> 
> Best I can tell, these are SSE1.  Likewise CVTTSI2SS.

Yep. I believe you mean CVTTSS2SI :) Both CVTSS2SI and CVTTSS2SI are
incorrectly flagged as SSE2, too (in addition to CVTSI2SS).

-Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 17/18] x86.risu: add AVX instructions
  2019-07-21  0:04   ` Richard Henderson
@ 2019-07-22 14:23     ` Jan Bobek
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-22 14:23 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée


[-- Attachment #1.1: Type: text/plain, Size: 1284 bytes --]

On 7/20/19 8:04 PM, Richard Henderson wrote:
> On 7/11/19 3:32 PM, Jan Bobek wrote:
>> +# VEX.LIG.F3.0F.WIG 10 /r: VMOVSS xmm1, xmm2, xmm3
>> +# VEX.LIG.F3.0F.WIG 10 /r: VMOVSS xmm1, m32
>> +# VEX.LIG.F3.0F.WIG 11 /r: VMOVSS xmm1, xmm2, xmm3
>> +# VEX.LIG.F3.0F.WIG 11 /r: VMOVSS m32, xmm1
>> +VMOVSS AVX 0001000 d \
>> +  !constraints { vex($_, m => 0x0F, l => 0, p => 0xF3); modrm($_); $_->{vex}{v} = 0 unless defined $_->{modrm}{reg2}; 1 } \
>> +  !memory { $d ? store(size => 4) : load(size => 4); }
> 
> Why the l => 0?  LIG does mean VEX.L ignored, so why not let it get randomized
> as you do for WIG?
> 
> Not wrong as is... this is the documented value for scalar operands.  But there
> is a different document markup, LZ, for required (E)VEX.L == 0.

I am aware of LIG vs. LZ. Quoting from the MOVSS manual page:

  Software should ensure VMOVSS is encoded with VEX.L=0. Encoding
  VMOVSS with VEX.L=1 may encounter unpredictable behavior across
  different processor generations.

"Unpredictable behavior" sounded a bit menacing to me, so I opted for
the conservative route. AFAICT all the scalar instructions have this
warning attached; I don't know why they differentiate between LIG and
LZ then, though. Do you think it's irrelevant?

-Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [Qemu-devel] [RISU PATCH v3 18/18] x86.risu: add AVX2 instructions
  2019-07-21  0:46   ` Richard Henderson
@ 2019-07-22 14:41     ` Jan Bobek
  0 siblings, 0 replies; 49+ messages in thread
From: Jan Bobek @ 2019-07-22 14:41 UTC (permalink / raw)
  To: Richard Henderson, qemu-devel; +Cc: Alex Bennée


[-- Attachment #1.1: Type: text/plain, Size: 1418 bytes --]

On 7/20/19 8:46 PM, Richard Henderson wrote:
> On 7/11/19 3:33 PM, Jan Bobek wrote:
>> +# VEX.256.0F.WIG 28 /r: VMOVAPS ymm1, ymm2/m256
>> +# VEX.256.0F.WIG 29 /r: VMOVAPS ymm2/m256, ymm1
>> +VMOVAPS AVX2 0010100 d \
>> +  !constraints { vex($_, m => 0x0F, l => 256, v => 0); modrm($_); 1 } \
>> +  !memory { $d ? store(size => 32, align => 32) : load(size => 32, align => 32); }
> 
> I believe all of the floating-point 256-bit operations are actually AVX1.
> Which, I see, would annoyingly require a renaming, since that would put two
> VMOVAPS insns into the same group.

Yeah.... and it is not just VMOVAPS, obviously.

> I wonder if it's worth calling the two groups AVX128 and AVX256 and ignoring
> the actual cpuid to which the insn is assigned?  Whichever way, they're still
> tied to the same --xstate value to indicate ymmh.

We could do that, but I think I like your idea below even better.

> Or could we fold the two insns together:
> 
> VMOVAPS AVX 0010100 d \
> !constraints { vex($_, m => 0x0F, v => 0); modrm($_); 1 } \
> !memory { my $len = $_->{vex}{l} / 8; \
>           $d ? store(size => $len, align => $len) \
>              : load(size => $len, align => $len); }

This is a really interesting idea. If the inability to differentiate
between the two is acceptable for us, then I think this approach might
be cleaner and more concise, and would remove some redundancy.

-Jan


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 49+ messages in thread

end of thread, other threads:[~2019-07-22 14:41 UTC | newest]

Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-11 22:32 [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Jan Bobek
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 01/18] risugen_common: add helper functions insnv, randint Jan Bobek
2019-07-12  5:48   ` Richard Henderson
2019-07-14 21:55     ` Jan Bobek
2019-07-12 12:41   ` Alex Bennée
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 02/18] risugen_common: split eval_with_fields into extract_fields and eval_block Jan Bobek
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 03/18] risugen_x86_asm: add module Jan Bobek
2019-07-12 14:11   ` Richard Henderson
2019-07-14 22:04     ` Jan Bobek
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 04/18] risugen_x86_constraints: " Jan Bobek
2019-07-12 14:24   ` Richard Henderson
2019-07-14 22:39     ` Jan Bobek
2019-07-21  1:54   ` Richard Henderson
2019-07-22 13:41     ` Jan Bobek
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 05/18] risugen_x86_memory: " Jan Bobek
2019-07-21  1:58   ` Richard Henderson
2019-07-22 13:53     ` Jan Bobek
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 06/18] risugen_x86: " Jan Bobek
2019-07-21  2:02   ` Richard Henderson
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 07/18] risugen: allow all byte-aligned instructions Jan Bobek
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 08/18] risugen: add command-line flag --x86_64 Jan Bobek
2019-07-17 17:00   ` Richard Henderson
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 09/18] risugen: add --xfeatures option for x86 Jan Bobek
2019-07-17 17:01   ` Richard Henderson
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 10/18] x86.risu: add MMX instructions Jan Bobek
2019-07-20  4:30   ` Richard Henderson
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 11/18] x86.risu: add SSE instructions Jan Bobek
2019-07-20 17:50   ` Richard Henderson
2019-07-22 13:57     ` Jan Bobek
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 12/18] x86.risu: add SSE2 instructions Jan Bobek
2019-07-20 21:19   ` Richard Henderson
2019-07-22 14:12     ` Jan Bobek
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 13/18] x86.risu: add SSE3 instructions Jan Bobek
2019-07-20 21:27   ` Richard Henderson
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 14/18] x86.risu: add SSSE3 instructions Jan Bobek
2019-07-20 21:52   ` Richard Henderson
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 15/18] x86.risu: add SSE4.1 and SSE4.2 instructions Jan Bobek
2019-07-20 22:28   ` Richard Henderson
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 16/18] x86.risu: add AES and PCLMULQDQ instructions Jan Bobek
2019-07-20 22:35   ` Richard Henderson
2019-07-11 22:32 ` [Qemu-devel] [RISU PATCH v3 17/18] x86.risu: add AVX instructions Jan Bobek
2019-07-21  0:04   ` Richard Henderson
2019-07-22 14:23     ` Jan Bobek
2019-07-11 22:33 ` [Qemu-devel] [RISU PATCH v3 18/18] x86.risu: add AVX2 instructions Jan Bobek
2019-07-21  0:46   ` Richard Henderson
2019-07-22 14:41     ` Jan Bobek
2019-07-12 13:34 ` [Qemu-devel] [RISU PATCH v3 00/18] Support for generating x86 SIMD test images Alex Bennée
2019-07-14 23:08   ` Jan Bobek
2019-07-15 10:14     ` Alex Bennée
