* [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64
@ 2023-02-01 12:36 Tiezhu Yang
  2023-02-01 12:36 ` [PATCH bpf-next 1/4] bpf: Add new macro " Tiezhu Yang
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Tiezhu Yang @ 2023-02-01 12:36 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko; +Cc: bpf, linux-kernel

The intention of this patchset is to make the code more readable; there are
no functional changes. It is based on bpf-next.

If this patchset makes no sense, please ignore it; sorry for the noise.

Tiezhu Yang (4):
  bpf: Add new macro BPF_ALU32 and BPF_JMP64
  bpf: treewide: Clean up BPF_ALU and BPF_JMP
  bpf: treewide: Clean up BPF_ALU_* and BPF_JMP_*
  bpf: Mark BPF_ALU and BPF_JMP as deprecated

 Documentation/bpf/clang-notes.rst                  |    2 +-
 Documentation/bpf/classic_vs_extended.rst          |   26 +-
 Documentation/bpf/instruction-set.rst              |   28 +-
 Documentation/bpf/verifier.rst                     |   14 +-
 Documentation/networking/cdc_mbim.rst              |    8 +-
 arch/arm/net/bpf_jit_32.c                          |  110 +--
 arch/arm64/net/bpf_jit_comp.c                      |  112 +--
 arch/loongarch/net/bpf_jit.c                       |  108 +--
 arch/mips/net/bpf_jit_comp32.c                     |  106 +--
 arch/mips/net/bpf_jit_comp64.c                     |  106 +--
 arch/powerpc/net/bpf_jit_comp.c                    |    2 +-
 arch/powerpc/net/bpf_jit_comp32.c                  |  152 +--
 arch/powerpc/net/bpf_jit_comp64.c                  |  162 ++--
 arch/riscv/net/bpf_jit_comp32.c                    |  108 +--
 arch/riscv/net/bpf_jit_comp64.c                    |  120 +--
 arch/s390/net/bpf_jit_comp.c                       |  108 +--
 arch/sparc/net/bpf_jit_comp_32.c                   |   72 +-
 arch/sparc/net/bpf_jit_comp_64.c                   |  108 +--
 arch/x86/net/bpf_jit_comp.c                        |  118 +--
 arch/x86/net/bpf_jit_comp32.c                      |  116 +--
 drivers/net/ethernet/netronome/nfp/bpf/jit.c       |  124 +--
 drivers/net/ethernet/netronome/nfp/bpf/main.h      |    8 +-
 drivers/net/ethernet/netronome/nfp/bpf/verifier.c  |    4 +-
 include/linux/bpf_verifier.h                       |   14 +-
 include/linux/filter.h                             |   38 +-
 include/uapi/linux/bpf.h                           |    2 +
 include/uapi/linux/bpf_common.h                    |    4 +-
 kernel/bpf/arraymap.c                              |   14 +-
 kernel/bpf/core.c                                  |   52 +-
 kernel/bpf/disasm.c                                |   22 +-
 kernel/bpf/hashtab.c                               |    8 +-
 kernel/bpf/syscall.c                               |   12 +-
 kernel/bpf/verifier.c                              |  106 +--
 kernel/seccomp.c                                   |   68 +-
 lib/test_bpf.c                                     | 1004 ++++++++++----------
 net/core/filter.c                                  |  242 ++---
 net/xdp/xskmap.c                                   |    4 +-
 samples/bpf/bpf_insn.h                             |   22 +-
 samples/bpf/cookie_uid_helper_example.c            |   12 +-
 samples/bpf/sock_example.c                         |    4 +-
 samples/bpf/test_cgrp2_attach.c                    |    8 +-
 samples/bpf/test_cgrp2_sock.c                      |    4 +-
 samples/seccomp/bpf-direct.c                       |   18 +-
 samples/seccomp/bpf-helper.c                       |    2 +-
 samples/seccomp/bpf-helper.h                       |   56 +-
 samples/seccomp/dropper.c                          |    4 +-
 samples/seccomp/user-trap.c                        |    2 +-
 tools/bpf/bpf_dbg.c                                |  254 ++---
 tools/bpf/bpf_exp.y                                |  130 +--
 tools/bpf/bpftool/cfg.c                            |    4 +-
 tools/bpf/bpftool/feature.c                        |    4 +-
 tools/include/linux/filter.h                       |   34 +-
 tools/include/uapi/linux/bpf.h                     |    2 +
 tools/include/uapi/linux/bpf_common.h              |    4 +-
 tools/lib/bpf/bpf_endian.h                         |    2 +-
 tools/lib/bpf/gen_loader.c                         |   36 +-
 tools/lib/bpf/libbpf.c                             |   14 +-
 tools/lib/bpf/linker.c                             |    2 +-
 tools/lib/bpf/relo_core.c                          |    4 +-
 tools/perf/util/bpf-prologue.c                     |    8 +-
 tools/testing/selftests/bpf/prog_tests/align.c     |   24 +-
 tools/testing/selftests/bpf/prog_tests/btf.c       |   32 +-
 .../selftests/bpf/prog_tests/cgroup_attach_multi.c |    8 +-
 .../bpf/prog_tests/flow_dissector_load_bytes.c     |    4 +-
 tools/testing/selftests/bpf/prog_tests/sockopt.c   |   42 +-
 tools/testing/selftests/bpf/progs/pyperf600.c      |    2 +-
 tools/testing/selftests/bpf/progs/syscall.c        |    2 +-
 tools/testing/selftests/bpf/test_cgroup_storage.c  |    4 +-
 tools/testing/selftests/bpf/test_lru_map.c         |    4 +-
 tools/testing/selftests/bpf/test_sock.c            |   30 +-
 tools/testing/selftests/bpf/test_sock_addr.c       |    6 +-
 tools/testing/selftests/bpf/test_sysctl.c          |  172 ++--
 tools/testing/selftests/bpf/test_verifier.c        |   44 +-
 tools/testing/selftests/bpf/test_verifier_log.c    |    2 +-
 tools/testing/selftests/bpf/verifier/and.c         |   10 +-
 .../testing/selftests/bpf/verifier/array_access.c  |   92 +-
 tools/testing/selftests/bpf/verifier/atomic_and.c  |   14 +-
 .../testing/selftests/bpf/verifier/atomic_bounds.c |    2 +-
 .../selftests/bpf/verifier/atomic_cmpxchg.c        |   12 +-
 .../testing/selftests/bpf/verifier/atomic_fetch.c  |   12 +-
 .../selftests/bpf/verifier/atomic_fetch_add.c      |    8 +-
 tools/testing/selftests/bpf/verifier/atomic_or.c   |   12 +-
 tools/testing/selftests/bpf/verifier/atomic_xchg.c |    4 +-
 tools/testing/selftests/bpf/verifier/atomic_xor.c  |   10 +-
 tools/testing/selftests/bpf/verifier/basic_call.c  |   14 +-
 tools/testing/selftests/bpf/verifier/basic_instr.c |   14 +-
 tools/testing/selftests/bpf/verifier/basic_stack.c |    2 +-
 tools/testing/selftests/bpf/verifier/bounds.c      |  154 +--
 .../selftests/bpf/verifier/bounds_deduction.c      |   24 +-
 .../bpf/verifier/bounds_mix_sign_unsign.c          |  132 +--
 .../testing/selftests/bpf/verifier/bpf_get_stack.c |   16 +-
 .../selftests/bpf/verifier/bpf_loop_inline.c       |   40 +-
 tools/testing/selftests/bpf/verifier/calls.c       |  594 ++++++------
 tools/testing/selftests/bpf/verifier/cfg.c         |   14 +-
 tools/testing/selftests/bpf/verifier/cgroup_skb.c  |    2 +-
 .../selftests/bpf/verifier/cgroup_storage.c        |   28 +-
 tools/testing/selftests/bpf/verifier/ctx.c         |   24 +-
 tools/testing/selftests/bpf/verifier/ctx_sk_msg.c  |    8 +-
 tools/testing/selftests/bpf/verifier/ctx_skb.c     |  130 +--
 tools/testing/selftests/bpf/verifier/d_path.c      |    4 +-
 tools/testing/selftests/bpf/verifier/dead_code.c   |   56 +-
 .../selftests/bpf/verifier/direct_packet_access.c  |   92 +-
 .../testing/selftests/bpf/verifier/div_overflow.c  |    8 +-
 .../testing/selftests/bpf/verifier/event_output.c  |    2 +-
 .../selftests/bpf/verifier/helper_access_var_len.c |   80 +-
 .../selftests/bpf/verifier/helper_packet_access.c  |   84 +-
 .../selftests/bpf/verifier/helper_restricted.c     |   40 +-
 .../selftests/bpf/verifier/helper_value_access.c   |  132 +--
 .../selftests/bpf/verifier/jeq_infer_not_null.c    |   32 +-
 tools/testing/selftests/bpf/verifier/jit.c         |   50 +-
 tools/testing/selftests/bpf/verifier/jmp32.c       |   32 +-
 tools/testing/selftests/bpf/verifier/jset.c        |   40 +-
 tools/testing/selftests/bpf/verifier/jump.c        |  242 ++---
 tools/testing/selftests/bpf/verifier/junk_insn.c   |    2 +-
 tools/testing/selftests/bpf/verifier/ld_abs.c      |   14 +-
 tools/testing/selftests/bpf/verifier/ld_imm64.c    |    6 +-
 tools/testing/selftests/bpf/verifier/leak_ptr.c    |    4 +-
 tools/testing/selftests/bpf/verifier/loops1.c      |   58 +-
 tools/testing/selftests/bpf/verifier/lwt.c         |   16 +-
 tools/testing/selftests/bpf/verifier/map_in_map.c  |   32 +-
 tools/testing/selftests/bpf/verifier/map_kptr.c    |  126 +--
 tools/testing/selftests/bpf/verifier/map_ptr.c     |    4 +-
 .../selftests/bpf/verifier/map_ptr_mixing.c        |   28 +-
 tools/testing/selftests/bpf/verifier/map_ret_val.c |   12 +-
 tools/testing/selftests/bpf/verifier/meta_access.c |   32 +-
 tools/testing/selftests/bpf/verifier/precise.c     |   44 +-
 .../selftests/bpf/verifier/prevent_map_lookup.c    |    4 +-
 tools/testing/selftests/bpf/verifier/raw_stack.c   |   32 +-
 .../selftests/bpf/verifier/raw_tp_writable.c       |    4 +-
 .../testing/selftests/bpf/verifier/ref_tracking.c  |  164 ++--
 tools/testing/selftests/bpf/verifier/regalloc.c    |   64 +-
 tools/testing/selftests/bpf/verifier/ringbuf.c     |   20 +-
 tools/testing/selftests/bpf/verifier/runtime_jit.c |   50 +-
 .../selftests/bpf/verifier/search_pruning.c        |   66 +-
 tools/testing/selftests/bpf/verifier/sock.c        |  114 +--
 tools/testing/selftests/bpf/verifier/spill_fill.c  |   26 +-
 tools/testing/selftests/bpf/verifier/spin_lock.c   |  126 +--
 tools/testing/selftests/bpf/verifier/stack_ptr.c   |    6 +-
 tools/testing/selftests/bpf/verifier/subreg.c      |   96 +-
 tools/testing/selftests/bpf/verifier/uninit.c      |    2 +-
 tools/testing/selftests/bpf/verifier/unpriv.c      |   60 +-
 tools/testing/selftests/bpf/verifier/value.c       |   10 +-
 .../selftests/bpf/verifier/value_adj_spill.c       |    4 +-
 .../selftests/bpf/verifier/value_illegal_alu.c     |   10 +-
 .../testing/selftests/bpf/verifier/value_or_null.c |   46 +-
 .../selftests/bpf/verifier/value_ptr_arith.c       |  266 +++---
 tools/testing/selftests/bpf/verifier/var_off.c     |    6 +-
 tools/testing/selftests/bpf/verifier/xadd.c        |   16 +-
 tools/testing/selftests/bpf/verifier/xdp.c         |    2 +-
 .../bpf/verifier/xdp_direct_packet_access.c        |  228 ++---
 tools/testing/selftests/net/csum.c                 |    6 +-
 tools/testing/selftests/net/gro.c                  |    8 +-
 tools/testing/selftests/net/psock_fanout.c         |   12 +-
 tools/testing/selftests/net/reuseport_bpf.c        |    6 +-
 tools/testing/selftests/net/reuseport_bpf_numa.c   |    4 +-
 tools/testing/selftests/net/toeplitz.c             |    6 +-
 tools/testing/selftests/seccomp/seccomp_bpf.c      |   68 +-
 157 files changed, 4299 insertions(+), 4295 deletions(-)

-- 
2.1.0


* [PATCH bpf-next 1/4] bpf: Add new macro BPF_ALU32 and BPF_JMP64
  2023-02-01 12:36 [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Tiezhu Yang
@ 2023-02-01 12:36 ` Tiezhu Yang
  2023-02-01 14:59   ` Dave Thaler
  2023-02-01 12:36 ` [PATCH bpf-next 2/4] bpf: treewide: Clean up BPF_ALU and BPF_JMP Tiezhu Yang
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Tiezhu Yang @ 2023-02-01 12:36 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko; +Cc: bpf, linux-kernel

In the current code, BPF_ALU actually means BPF_ALU32 and BPF_JMP actually
means BPF_JMP64, which is a little confusing at first glance. Add the new
macros BPF_ALU32 and BPF_JMP64 so that the ambiguous macros BPF_ALU and
BPF_JMP can be replaced with them step by step, and eventually removed from
the uapi header file.
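
For illustration (a minimal sketch, not part of this patch, assuming the
instruction-building macros from include/linux/filter.h), the new names
encode exactly the same opcode byte as the old ones:

  /* 32-bit add, old spelling: class 0x04 is named BPF_ALU */
  struct bpf_insn a = BPF_RAW_INSN(BPF_ALU | BPF_ADD | BPF_X,
                                   BPF_REG_0, BPF_REG_1, 0, 0);
  /* 32-bit add, new spelling: the same class 0x04 is named BPF_ALU32 */
  struct bpf_insn b = BPF_RAW_INSN(BPF_ALU32 | BPF_ADD | BPF_X,
                                   BPF_REG_0, BPF_REG_1, 0, 0);
  /* a.code == b.code == 0x0c, so the change is purely a rename */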

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
 include/uapi/linux/bpf.h       | 2 ++
 tools/include/uapi/linux/bpf.h | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index ba0f0cf..a118c43 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -15,6 +15,8 @@
 
 /* instruction classes */
 #define BPF_JMP32	0x06	/* jmp mode in word width */
+#define BPF_JMP64	0x05	/* jmp mode in double word width */
+#define BPF_ALU32	0x04	/* alu mode in word width */
 #define BPF_ALU64	0x07	/* alu mode in double word width */
 
 /* ld/ldx fields */
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 7f024ac..014b449 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -15,6 +15,8 @@
 
 /* instruction classes */
 #define BPF_JMP32	0x06	/* jmp mode in word width */
+#define BPF_JMP64	0x05	/* jmp mode in double word width */
+#define BPF_ALU32	0x04	/* alu mode in word width */
 #define BPF_ALU64	0x07	/* alu mode in double word width */
 
 /* ld/ldx fields */
-- 
2.1.0


* [PATCH bpf-next 2/4] bpf: treewide: Clean up BPF_ALU and BPF_JMP
  2023-02-01 12:36 [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Tiezhu Yang
  2023-02-01 12:36 ` [PATCH bpf-next 1/4] bpf: Add new macro " Tiezhu Yang
@ 2023-02-01 12:36 ` Tiezhu Yang
  2023-02-02  4:01   ` kernel test robot
  2023-02-01 12:36 ` [PATCH bpf-next 3/4] bpf: treewide: Clean up BPF_ALU_* and BPF_JMP_* Tiezhu Yang
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Tiezhu Yang @ 2023-02-01 12:36 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko; +Cc: bpf, linux-kernel

Replace the ambiguous macros BPF_ALU and BPF_JMP with the new macros
BPF_ALU32 and BPF_JMP64 throughout the kernel tree, using the following
commands:

sed -i "s/\<BPF_ALU\>/BPF_ALU32/g" `grep BPF_ALU -rwl --exclude="bpf_common.h" .`
sed -i "s/\<BPF_JMP\>/BPF_JMP64/g" `grep BPF_JMP -rwl --exclude="bpf_common.h" .`

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
 Documentation/bpf/clang-notes.rst                  |   2 +-
 Documentation/bpf/classic_vs_extended.rst          |  26 +-
 Documentation/bpf/instruction-set.rst              |  28 +-
 Documentation/bpf/verifier.rst                     |  10 +-
 Documentation/networking/cdc_mbim.rst              |   8 +-
 arch/arm/net/bpf_jit_32.c                          | 110 +++---
 arch/arm64/net/bpf_jit_comp.c                      | 112 +++---
 arch/loongarch/net/bpf_jit.c                       | 108 +++---
 arch/mips/net/bpf_jit_comp32.c                     | 106 +++---
 arch/mips/net/bpf_jit_comp64.c                     | 106 +++---
 arch/powerpc/net/bpf_jit_comp.c                    |   2 +-
 arch/powerpc/net/bpf_jit_comp32.c                  | 152 ++++----
 arch/powerpc/net/bpf_jit_comp64.c                  | 162 ++++----
 arch/riscv/net/bpf_jit_comp32.c                    | 108 +++---
 arch/riscv/net/bpf_jit_comp64.c                    | 120 +++---
 arch/s390/net/bpf_jit_comp.c                       | 108 +++---
 arch/sparc/net/bpf_jit_comp_32.c                   |  72 ++--
 arch/sparc/net/bpf_jit_comp_64.c                   | 108 +++---
 arch/x86/net/bpf_jit_comp.c                        | 118 +++---
 arch/x86/net/bpf_jit_comp32.c                      | 116 +++---
 drivers/net/ethernet/netronome/nfp/bpf/jit.c       | 124 +++----
 drivers/net/ethernet/netronome/nfp/bpf/main.h      |   8 +-
 drivers/net/ethernet/netronome/nfp/bpf/verifier.c  |   4 +-
 include/linux/filter.h                             |  28 +-
 kernel/bpf/core.c                                  |  50 +--
 kernel/bpf/disasm.c                                |  22 +-
 kernel/bpf/syscall.c                               |  12 +-
 kernel/bpf/verifier.c                              |  66 ++--
 kernel/seccomp.c                                   |  68 ++--
 lib/test_bpf.c                                     | 408 ++++++++++-----------
 net/core/filter.c                                  | 182 ++++-----
 samples/bpf/bpf_insn.h                             |  14 +-
 samples/bpf/cookie_uid_helper_example.c            |   8 +-
 samples/bpf/sock_example.c                         |   2 +-
 samples/bpf/test_cgrp2_attach.c                    |   4 +-
 samples/bpf/test_cgrp2_sock.c                      |   2 +-
 samples/seccomp/bpf-direct.c                       |  18 +-
 samples/seccomp/bpf-helper.c                       |   2 +-
 samples/seccomp/bpf-helper.h                       |  56 +--
 samples/seccomp/dropper.c                          |   4 +-
 samples/seccomp/user-trap.c                        |   2 +-
 tools/bpf/bpf_dbg.c                                |  34 +-
 tools/bpf/bpf_exp.y                                | 130 +++----
 tools/bpf/bpftool/cfg.c                            |   4 +-
 tools/include/linux/filter.h                       |  24 +-
 tools/lib/bpf/bpf_endian.h                         |   2 +-
 tools/lib/bpf/libbpf.c                             |  14 +-
 tools/lib/bpf/linker.c                             |   2 +-
 tools/lib/bpf/relo_core.c                          |   4 +-
 tools/perf/util/bpf-prologue.c                     |   2 +-
 tools/testing/selftests/bpf/prog_tests/btf.c       |   8 +-
 .../selftests/bpf/prog_tests/cgroup_attach_multi.c |   6 +-
 .../bpf/prog_tests/flow_dissector_load_bytes.c     |   2 +-
 tools/testing/selftests/bpf/prog_tests/sockopt.c   |   2 +-
 tools/testing/selftests/bpf/progs/pyperf600.c      |   2 +-
 tools/testing/selftests/bpf/progs/syscall.c        |   2 +-
 tools/testing/selftests/bpf/test_cgroup_storage.c  |   4 +-
 tools/testing/selftests/bpf/test_verifier.c        |  20 +-
 tools/testing/selftests/bpf/test_verifier_log.c    |   2 +-
 tools/testing/selftests/bpf/verifier/and.c         |   6 +-
 .../testing/selftests/bpf/verifier/array_access.c  |  46 +--
 tools/testing/selftests/bpf/verifier/basic_call.c  |  14 +-
 tools/testing/selftests/bpf/verifier/basic_stack.c |   2 +-
 tools/testing/selftests/bpf/verifier/bounds.c      |  58 +--
 .../bpf/verifier/bounds_mix_sign_unsign.c          |  32 +-
 .../testing/selftests/bpf/verifier/bpf_get_stack.c |   4 +-
 .../selftests/bpf/verifier/bpf_loop_inline.c       |  26 +-
 tools/testing/selftests/bpf/verifier/calls.c       | 360 +++++++++---------
 .../selftests/bpf/verifier/cgroup_storage.c        |  28 +-
 tools/testing/selftests/bpf/verifier/ctx.c         |  24 +-
 tools/testing/selftests/bpf/verifier/ctx_skb.c     |   6 +-
 tools/testing/selftests/bpf/verifier/d_path.c      |   4 +-
 tools/testing/selftests/bpf/verifier/dead_code.c   |  16 +-
 .../selftests/bpf/verifier/direct_packet_access.c  |   2 +-
 .../testing/selftests/bpf/verifier/event_output.c  |   2 +-
 .../selftests/bpf/verifier/helper_access_var_len.c |   6 +-
 .../selftests/bpf/verifier/helper_packet_access.c  |  42 +--
 .../selftests/bpf/verifier/helper_restricted.c     |  24 +-
 tools/testing/selftests/bpf/verifier/jmp32.c       |   6 +-
 tools/testing/selftests/bpf/verifier/jset.c        |   8 +-
 tools/testing/selftests/bpf/verifier/jump.c        |  12 +-
 tools/testing/selftests/bpf/verifier/junk_insn.c   |   2 +-
 tools/testing/selftests/bpf/verifier/ld_abs.c      |   4 +-
 tools/testing/selftests/bpf/verifier/leak_ptr.c    |   2 +-
 tools/testing/selftests/bpf/verifier/loops1.c      |  14 +-
 tools/testing/selftests/bpf/verifier/map_in_map.c  |  20 +-
 tools/testing/selftests/bpf/verifier/map_kptr.c    |  58 +--
 tools/testing/selftests/bpf/verifier/map_ptr.c     |   4 +-
 .../selftests/bpf/verifier/map_ptr_mixing.c        |   8 +-
 tools/testing/selftests/bpf/verifier/map_ret_val.c |   8 +-
 tools/testing/selftests/bpf/verifier/meta_access.c |   2 +-
 tools/testing/selftests/bpf/verifier/precise.c     |  12 +-
 .../selftests/bpf/verifier/prevent_map_lookup.c    |   4 +-
 tools/testing/selftests/bpf/verifier/raw_stack.c   |  32 +-
 .../selftests/bpf/verifier/raw_tp_writable.c       |   2 +-
 .../testing/selftests/bpf/verifier/ref_tracking.c  |  46 +--
 tools/testing/selftests/bpf/verifier/regalloc.c    |   4 +-
 tools/testing/selftests/bpf/verifier/ringbuf.c     |  14 +-
 tools/testing/selftests/bpf/verifier/runtime_jit.c |  26 +-
 .../selftests/bpf/verifier/search_pruning.c        |  10 +-
 tools/testing/selftests/bpf/verifier/spill_fill.c  |   8 +-
 tools/testing/selftests/bpf/verifier/spin_lock.c   |  82 ++---
 tools/testing/selftests/bpf/verifier/subreg.c      |  96 ++---
 tools/testing/selftests/bpf/verifier/unpriv.c      |  16 +-
 .../testing/selftests/bpf/verifier/value_or_null.c |  20 +-
 .../selftests/bpf/verifier/value_ptr_arith.c       |  80 ++--
 tools/testing/selftests/bpf/verifier/var_off.c     |   4 +-
 tools/testing/selftests/bpf/verifier/xadd.c        |   2 +-
 tools/testing/selftests/net/csum.c                 |   6 +-
 tools/testing/selftests/net/gro.c                  |   8 +-
 tools/testing/selftests/net/psock_fanout.c         |  12 +-
 tools/testing/selftests/net/reuseport_bpf.c        |   6 +-
 tools/testing/selftests/net/reuseport_bpf_numa.c   |   4 +-
 tools/testing/selftests/net/toeplitz.c             |   6 +-
 tools/testing/selftests/seccomp/seccomp_bpf.c      |  68 ++--
 115 files changed, 2224 insertions(+), 2224 deletions(-)

diff --git a/Documentation/bpf/clang-notes.rst b/Documentation/bpf/clang-notes.rst
index 528fedd..4306f5d 100644
--- a/Documentation/bpf/clang-notes.rst
+++ b/Documentation/bpf/clang-notes.rst
@@ -17,7 +17,7 @@ Clang can select the eBPF ISA version using ``-mcpu=v3`` for example to select v
 Arithmetic instructions
 =======================
 
-For CPU versions prior to 3, Clang v7.0 and later can enable ``BPF_ALU`` support with
+For CPU versions prior to 3, Clang v7.0 and later can enable ``BPF_ALU32`` support with
 ``-Xclang -target-feature -Xclang +alu32``.  In CPU version 3, support is automatically included.
 
 Atomic operations
diff --git a/Documentation/bpf/classic_vs_extended.rst b/Documentation/bpf/classic_vs_extended.rst
index 2f81a81..6a0e7ea 100644
--- a/Documentation/bpf/classic_vs_extended.rst
+++ b/Documentation/bpf/classic_vs_extended.rst
@@ -261,8 +261,8 @@ Three LSB bits store instruction class which is one of:
   BPF_LDX   0x01          BPF_LDX   0x01
   BPF_ST    0x02          BPF_ST    0x02
   BPF_STX   0x03          BPF_STX   0x03
-  BPF_ALU   0x04          BPF_ALU   0x04
-  BPF_JMP   0x05          BPF_JMP   0x05
+  BPF_ALU32 0x04          BPF_ALU32 0x04
+  BPF_JMP64 0x05          BPF_JMP64 0x05
   BPF_RET   0x06          BPF_JMP32 0x06
   BPF_MISC  0x07          BPF_ALU64 0x07
   ===================     ===============
@@ -286,7 +286,7 @@ The 4th bit encodes the source operand ...
 
 ... and four MSB bits store operation code.
 
-If BPF_CLASS(code) == BPF_ALU or BPF_ALU64 [ in eBPF ], BPF_OP(code) is one of::
+If BPF_CLASS(code) == BPF_ALU32 or BPF_ALU64 [ in eBPF ], BPF_OP(code) is one of::
 
   BPF_ADD   0x00
   BPF_SUB   0x10
@@ -303,9 +303,9 @@ If BPF_CLASS(code) == BPF_ALU or BPF_ALU64 [ in eBPF ], BPF_OP(code) is one of::
   BPF_ARSH  0xc0  /* eBPF only: sign extending shift right */
   BPF_END   0xd0  /* eBPF only: endianness conversion */
 
-If BPF_CLASS(code) == BPF_JMP or BPF_JMP32 [ in eBPF ], BPF_OP(code) is one of::
+If BPF_CLASS(code) == BPF_JMP64 or BPF_JMP32 [ in eBPF ], BPF_OP(code) is one of::
 
-  BPF_JA    0x00  /* BPF_JMP only */
+  BPF_JA    0x00  /* BPF_JMP64 only */
   BPF_JEQ   0x10
   BPF_JGT   0x20
   BPF_JGE   0x30
@@ -313,32 +313,32 @@ If BPF_CLASS(code) == BPF_JMP or BPF_JMP32 [ in eBPF ], BPF_OP(code) is one of::
   BPF_JNE   0x50  /* eBPF only: jump != */
   BPF_JSGT  0x60  /* eBPF only: signed '>' */
   BPF_JSGE  0x70  /* eBPF only: signed '>=' */
-  BPF_CALL  0x80  /* eBPF BPF_JMP only: function call */
-  BPF_EXIT  0x90  /* eBPF BPF_JMP only: function return */
+  BPF_CALL  0x80  /* eBPF BPF_JMP64 only: function call */
+  BPF_EXIT  0x90  /* eBPF BPF_JMP64 only: function return */
   BPF_JLT   0xa0  /* eBPF only: unsigned '<' */
   BPF_JLE   0xb0  /* eBPF only: unsigned '<=' */
   BPF_JSLT  0xc0  /* eBPF only: signed '<' */
   BPF_JSLE  0xd0  /* eBPF only: signed '<=' */
 
-So BPF_ADD | BPF_X | BPF_ALU means 32-bit addition in both classic BPF
+So BPF_ADD | BPF_X | BPF_ALU32 means 32-bit addition in both classic BPF
 and eBPF. There are only two registers in classic BPF, so it means A += X.
 In eBPF it means dst_reg = (u32) dst_reg + (u32) src_reg; similarly,
-BPF_XOR | BPF_K | BPF_ALU means A ^= imm32 in classic BPF and analogous
+BPF_XOR | BPF_K | BPF_ALU32 means A ^= imm32 in classic BPF and analogous
 src_reg = (u32) src_reg ^ (u32) imm32 in eBPF.
 
 Classic BPF is using BPF_MISC class to represent A = X and X = A moves.
-eBPF is using BPF_MOV | BPF_X | BPF_ALU code instead. Since there are no
+eBPF is using BPF_MOV | BPF_X | BPF_ALU32 code instead. Since there are no
 BPF_MISC operations in eBPF, the class 7 is used as BPF_ALU64 to mean
-exactly the same operations as BPF_ALU, but with 64-bit wide operands
+exactly the same operations as BPF_ALU32, but with 64-bit wide operands
 instead. So BPF_ADD | BPF_X | BPF_ALU64 means 64-bit addition, i.e.:
 dst_reg = dst_reg + src_reg
 
 Classic BPF wastes the whole BPF_RET class to represent a single ``ret``
 operation. Classic BPF_RET | BPF_K means copy imm32 into return register
-and perform function exit. eBPF is modeled to match CPU, so BPF_JMP | BPF_EXIT
+and perform function exit. eBPF is modeled to match CPU, so BPF_JMP64 | BPF_EXIT
 in eBPF means function exit only. The eBPF program needs to store return
 value into register R0 before doing a BPF_EXIT. Class 6 in eBPF is used as
-BPF_JMP32 to mean exactly the same operations as BPF_JMP, but with 32-bit wide
+BPF_JMP32 to mean exactly the same operations as BPF_JMP64, but with 32-bit wide
 operands for the comparisons instead.
 
 For load and store instructions the 8-bit 'code' field is divided as::
diff --git a/Documentation/bpf/instruction-set.rst b/Documentation/bpf/instruction-set.rst
index 2d3fe59..6fbf982 100644
--- a/Documentation/bpf/instruction-set.rst
+++ b/Documentation/bpf/instruction-set.rst
@@ -56,8 +56,8 @@ BPF_LD     0x00   non-standard load operations     `Load and store instructions`
 BPF_LDX    0x01   load into register operations    `Load and store instructions`_
 BPF_ST     0x02   store from immediate operations  `Load and store instructions`_
 BPF_STX    0x03   store from register operations   `Load and store instructions`_
-BPF_ALU    0x04   32-bit arithmetic operations     `Arithmetic and jump instructions`_
-BPF_JMP    0x05   64-bit jump operations           `Arithmetic and jump instructions`_
+BPF_ALU32  0x04   32-bit arithmetic operations     `Arithmetic and jump instructions`_
+BPF_JMP64  0x05   64-bit jump operations           `Arithmetic and jump instructions`_
 BPF_JMP32  0x06   32-bit jump operations           `Arithmetic and jump instructions`_
 BPF_ALU64  0x07   64-bit arithmetic operations     `Arithmetic and jump instructions`_
 =========  =====  ===============================  ===================================
@@ -65,7 +65,7 @@ BPF_ALU64  0x07   64-bit arithmetic operations     `Arithmetic and jump instruct
 Arithmetic and jump instructions
 ================================
 
-For arithmetic and jump instructions (``BPF_ALU``, ``BPF_ALU64``, ``BPF_JMP`` and
+For arithmetic and jump instructions (``BPF_ALU32``, ``BPF_ALU64``, ``BPF_JMP64`` and
 ``BPF_JMP32``), the 8-bit 'opcode' field is divided into three parts:
 
 ==============  ======  =================
@@ -89,7 +89,7 @@ The four MSB bits store the operation code.
 Arithmetic instructions
 -----------------------
 
-``BPF_ALU`` uses 32-bit wide operands while ``BPF_ALU64`` uses 64-bit wide operands for
+``BPF_ALU32`` uses 32-bit wide operands while ``BPF_ALU64`` uses 64-bit wide operands for
 otherwise identical operations.
 The 'code' field encodes the operation as below:
 
@@ -116,10 +116,10 @@ Underflow and overflow are allowed during arithmetic operations, meaning
 the 64-bit or 32-bit value will wrap. If eBPF program execution would
 result in division by zero, the destination register is instead set to zero.
 If execution would result in modulo by zero, for ``BPF_ALU64`` the value of
-the destination register is unchanged whereas for ``BPF_ALU`` the upper
+the destination register is unchanged whereas for ``BPF_ALU32`` the upper
 32 bits of the destination register are zeroed.
 
-``BPF_ADD | BPF_X | BPF_ALU`` means::
+``BPF_ADD | BPF_X | BPF_ALU32`` means::
 
   dst_reg = (u32) dst_reg + (u32) src_reg;
 
@@ -127,7 +127,7 @@ the destination register is unchanged whereas for ``BPF_ALU`` the upper
 
   dst_reg = dst_reg + src_reg
 
-``BPF_XOR | BPF_K | BPF_ALU`` means::
+``BPF_XOR | BPF_K | BPF_ALU32`` means::
 
   dst_reg = (u32) dst_reg ^ (u32) imm32
 
@@ -136,7 +136,7 @@ the destination register is unchanged whereas for ``BPF_ALU`` the upper
   dst_reg = dst_reg ^ imm32
 
 Also note that the division and modulo operations are unsigned. Thus, for
-``BPF_ALU``, 'imm' is first interpreted as an unsigned 32-bit value, whereas
+``BPF_ALU32``, 'imm' is first interpreted as an unsigned 32-bit value, whereas
 for ``BPF_ALU64``, 'imm' is first sign extended to 64 bits and the result
 interpreted as an unsigned 64-bit value. There are no instructions for
 signed division or modulo.
@@ -144,7 +144,7 @@ signed division or modulo.
 Byte swap instructions
 ~~~~~~~~~~~~~~~~~~~~~~
 
-The byte swap instructions use an instruction class of ``BPF_ALU`` and a 4-bit
+The byte swap instructions use an instruction class of ``BPF_ALU32`` and a 4-bit
 'code' field of ``BPF_END``.
 
 The byte swap instructions operate on the destination register
@@ -165,25 +165,25 @@ are supported: 16, 32 and 64.
 
 Examples:
 
-``BPF_ALU | BPF_TO_LE | BPF_END`` with imm = 16 means::
+``BPF_ALU32 | BPF_TO_LE | BPF_END`` with imm = 16 means::
 
   dst_reg = htole16(dst_reg)
 
-``BPF_ALU | BPF_TO_BE | BPF_END`` with imm = 64 means::
+``BPF_ALU32 | BPF_TO_BE | BPF_END`` with imm = 64 means::
 
   dst_reg = htobe64(dst_reg)
 
 Jump instructions
 -----------------
 
-``BPF_JMP32`` uses 32-bit wide operands while ``BPF_JMP`` uses 64-bit wide operands for
+``BPF_JMP32`` uses 32-bit wide operands while ``BPF_JMP64`` uses 64-bit wide operands for
 otherwise identical operations.
 The 'code' field encodes the operation as below:
 
 ========  =====  =========================  ============
 code      value  description                notes
 ========  =====  =========================  ============
-BPF_JA    0x00   PC += off                  BPF_JMP only
+BPF_JA    0x00   PC += off                  BPF_JMP64 only
 BPF_JEQ   0x10   PC += off if dst == src
 BPF_JGT   0x20   PC += off if dst > src     unsigned
 BPF_JGE   0x30   PC += off if dst >= src    unsigned
@@ -192,7 +192,7 @@ BPF_JNE   0x50   PC += off if dst != src
 BPF_JSGT  0x60   PC += off if dst > src     signed
 BPF_JSGE  0x70   PC += off if dst >= src    signed
 BPF_CALL  0x80   function call
-BPF_EXIT  0x90   function / program return  BPF_JMP only
+BPF_EXIT  0x90   function / program return  BPF_JMP64 only
 BPF_JLT   0xa0   PC += off if dst < src     unsigned
 BPF_JLE   0xb0   PC += off if dst <= src    unsigned
 BPF_JSLT  0xc0   PC += off if dst < src     signed
diff --git a/Documentation/bpf/verifier.rst b/Documentation/bpf/verifier.rst
index 3afa548..ecf6ead 100644
--- a/Documentation/bpf/verifier.rst
+++ b/Documentation/bpf/verifier.rst
@@ -369,7 +369,7 @@ Program that doesn't initialize stack before passing its address into function::
   BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
   BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
   BPF_LD_MAP_FD(BPF_REG_1, 0),
-  BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+  BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
   BPF_EXIT_INSN(),
 
 Error::
@@ -386,7 +386,7 @@ Program that uses invalid map_fd=0 while calling to map_lookup_elem() function::
   BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
   BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
   BPF_LD_MAP_FD(BPF_REG_1, 0),
-  BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+  BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
   BPF_EXIT_INSN(),
 
 Error::
@@ -405,7 +405,7 @@ map element::
   BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
   BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
   BPF_LD_MAP_FD(BPF_REG_1, 0),
-  BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+  BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
   BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
   BPF_EXIT_INSN(),
 
@@ -426,7 +426,7 @@ accesses the memory with incorrect alignment::
   BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
   BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
   BPF_LD_MAP_FD(BPF_REG_1, 0),
-  BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+  BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
   BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
   BPF_ST_MEM(BPF_DW, BPF_REG_0, 4, 0),
   BPF_EXIT_INSN(),
@@ -451,7 +451,7 @@ to do so in the other side of 'if' branch::
   BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
   BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
   BPF_LD_MAP_FD(BPF_REG_1, 0),
-  BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+  BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
   BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
   BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
   BPF_EXIT_INSN(),
diff --git a/Documentation/networking/cdc_mbim.rst b/Documentation/networking/cdc_mbim.rst
index 0048409..d927ccb 100644
--- a/Documentation/networking/cdc_mbim.rst
+++ b/Documentation/networking/cdc_mbim.rst
@@ -249,16 +249,16 @@ example::
   static struct sock_filter dssfilter[] = {
 	/* use special negative offsets to get VLAN tag */
 	BPF_STMT(BPF_LD|BPF_B|BPF_ABS, SKF_AD_OFF + SKF_AD_VLAN_TAG_PRESENT),
-	BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, 1, 0, 6), /* true */
+	BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, 1, 0, 6), /* true */
 
 	/* verify DSS VLAN range */
 	BPF_STMT(BPF_LD|BPF_H|BPF_ABS, SKF_AD_OFF + SKF_AD_VLAN_TAG),
-	BPF_JUMP(BPF_JMP|BPF_JGE|BPF_K, 256, 0, 4),	/* 256 is first DSS VLAN */
-	BPF_JUMP(BPF_JMP|BPF_JGE|BPF_K, 512, 3, 0),	/* 511 is last DSS VLAN */
+	BPF_JUMP(BPF_JMP64|BPF_JGE|BPF_K, 256, 0, 4),	/* 256 is first DSS VLAN */
+	BPF_JUMP(BPF_JMP64|BPF_JGE|BPF_K, 512, 3, 0),	/* 511 is last DSS VLAN */
 
 	/* verify ethertype */
 	BPF_STMT(BPF_LD|BPF_H|BPF_ABS, 2 * ETH_ALEN),
-	BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, ETH_P_802_3, 0, 1),
+	BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, ETH_P_802_3, 0, 1),
 
 	BPF_STMT(BPF_RET|BPF_K, (u_int)-1),	/* accept */
 	BPF_STMT(BPF_RET|BPF_K, 0),		/* ignore */
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6a1c9fc..f189ed5 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -492,7 +492,7 @@ static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
 #endif
 
 	/*
-	 * For BPF_ALU | BPF_DIV | BPF_K instructions
+	 * For BPF_ALU32 | BPF_DIV | BPF_K instructions
 	 * As ARM_R1 and ARM_R0 contains 1st argument of bpf
 	 * function, we need to save it on caller side to save
 	 * it from getting destroyed within callee.
@@ -1374,8 +1374,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	/* ALU operations */
 
 	/* dst = src */
-	case BPF_ALU | BPF_MOV | BPF_K:
-	case BPF_ALU | BPF_MOV | BPF_X:
+	case BPF_ALU32 | BPF_MOV | BPF_K:
+	case BPF_ALU32 | BPF_MOV | BPF_X:
 	case BPF_ALU64 | BPF_MOV | BPF_K:
 	case BPF_ALU64 | BPF_MOV | BPF_X:
 		switch (BPF_SRC(code)) {
@@ -1401,21 +1401,21 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	/* dst = dst * src/imm */
 	/* dst = dst << src */
 	/* dst = dst >> src */
-	case BPF_ALU | BPF_ADD | BPF_K:
-	case BPF_ALU | BPF_ADD | BPF_X:
-	case BPF_ALU | BPF_SUB | BPF_K:
-	case BPF_ALU | BPF_SUB | BPF_X:
-	case BPF_ALU | BPF_OR | BPF_K:
-	case BPF_ALU | BPF_OR | BPF_X:
-	case BPF_ALU | BPF_AND | BPF_K:
-	case BPF_ALU | BPF_AND | BPF_X:
-	case BPF_ALU | BPF_XOR | BPF_K:
-	case BPF_ALU | BPF_XOR | BPF_X:
-	case BPF_ALU | BPF_MUL | BPF_K:
-	case BPF_ALU | BPF_MUL | BPF_X:
-	case BPF_ALU | BPF_LSH | BPF_X:
-	case BPF_ALU | BPF_RSH | BPF_X:
-	case BPF_ALU | BPF_ARSH | BPF_X:
+	case BPF_ALU32 | BPF_ADD | BPF_K:
+	case BPF_ALU32 | BPF_ADD | BPF_X:
+	case BPF_ALU32 | BPF_SUB | BPF_K:
+	case BPF_ALU32 | BPF_SUB | BPF_X:
+	case BPF_ALU32 | BPF_OR | BPF_K:
+	case BPF_ALU32 | BPF_OR | BPF_X:
+	case BPF_ALU32 | BPF_AND | BPF_K:
+	case BPF_ALU32 | BPF_AND | BPF_X:
+	case BPF_ALU32 | BPF_XOR | BPF_K:
+	case BPF_ALU32 | BPF_XOR | BPF_X:
+	case BPF_ALU32 | BPF_MUL | BPF_K:
+	case BPF_ALU32 | BPF_MUL | BPF_X:
+	case BPF_ALU32 | BPF_LSH | BPF_X:
+	case BPF_ALU32 | BPF_RSH | BPF_X:
+	case BPF_ALU32 | BPF_ARSH | BPF_X:
 	case BPF_ALU64 | BPF_ADD | BPF_K:
 	case BPF_ALU64 | BPF_ADD | BPF_X:
 	case BPF_ALU64 | BPF_SUB | BPF_K:
@@ -1444,10 +1444,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 	/* dst = dst / src(imm) */
 	/* dst = dst % src(imm) */
-	case BPF_ALU | BPF_DIV | BPF_K:
-	case BPF_ALU | BPF_DIV | BPF_X:
-	case BPF_ALU | BPF_MOD | BPF_K:
-	case BPF_ALU | BPF_MOD | BPF_X:
+	case BPF_ALU32 | BPF_DIV | BPF_K:
+	case BPF_ALU32 | BPF_DIV | BPF_X:
+	case BPF_ALU32 | BPF_MOD | BPF_K:
+	case BPF_ALU32 | BPF_MOD | BPF_X:
 		rd_lo = arm_bpf_get_reg32(dst_lo, tmp2[1], ctx);
 		switch (BPF_SRC(code)) {
 		case BPF_X:
@@ -1474,9 +1474,9 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	/* dst = dst << imm */
 	/* dst = dst >> imm */
 	/* dst = dst >> imm (signed) */
-	case BPF_ALU | BPF_LSH | BPF_K:
-	case BPF_ALU | BPF_RSH | BPF_K:
-	case BPF_ALU | BPF_ARSH | BPF_K:
+	case BPF_ALU32 | BPF_LSH | BPF_K:
+	case BPF_ALU32 | BPF_RSH | BPF_K:
+	case BPF_ALU32 | BPF_ARSH | BPF_K:
 		if (unlikely(imm > 31))
 			return -EINVAL;
 		if (imm)
@@ -1515,7 +1515,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		emit_a32_arsh_i64(dst, imm, ctx);
 		break;
 	/* dst = ~dst */
-	case BPF_ALU | BPF_NEG:
+	case BPF_ALU32 | BPF_NEG:
 		emit_a32_alu_i(dst_lo, 0, ctx, BPF_OP(code));
 		if (!ctx->prog->aux->verifier_zext)
 			emit_a32_mov_i(dst_hi, 0, ctx);
@@ -1545,8 +1545,8 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 	/* dst = htole(dst) */
 	/* dst = htobe(dst) */
-	case BPF_ALU | BPF_END | BPF_FROM_LE:
-	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_LE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 		rd = arm_bpf_get_reg64(dst, tmp, ctx);
 		if (BPF_SRC(code) == BPF_FROM_LE)
 			goto emit_bswap_uxt;
@@ -1650,17 +1650,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	/* PC += off if dst < src (signed) */
 	/* PC += off if dst <= src (signed) */
 	/* PC += off if dst & src */
-	case BPF_JMP | BPF_JEQ | BPF_X:
-	case BPF_JMP | BPF_JGT | BPF_X:
-	case BPF_JMP | BPF_JGE | BPF_X:
-	case BPF_JMP | BPF_JNE | BPF_X:
-	case BPF_JMP | BPF_JSGT | BPF_X:
-	case BPF_JMP | BPF_JSGE | BPF_X:
-	case BPF_JMP | BPF_JSET | BPF_X:
-	case BPF_JMP | BPF_JLE | BPF_X:
-	case BPF_JMP | BPF_JLT | BPF_X:
-	case BPF_JMP | BPF_JSLT | BPF_X:
-	case BPF_JMP | BPF_JSLE | BPF_X:
+	case BPF_JMP64 | BPF_JEQ | BPF_X:
+	case BPF_JMP64 | BPF_JGT | BPF_X:
+	case BPF_JMP64 | BPF_JGE | BPF_X:
+	case BPF_JMP64 | BPF_JNE | BPF_X:
+	case BPF_JMP64 | BPF_JSGT | BPF_X:
+	case BPF_JMP64 | BPF_JSGE | BPF_X:
+	case BPF_JMP64 | BPF_JSET | BPF_X:
+	case BPF_JMP64 | BPF_JLE | BPF_X:
+	case BPF_JMP64 | BPF_JLT | BPF_X:
+	case BPF_JMP64 | BPF_JSLT | BPF_X:
+	case BPF_JMP64 | BPF_JSLE | BPF_X:
 	case BPF_JMP32 | BPF_JEQ | BPF_X:
 	case BPF_JMP32 | BPF_JGT | BPF_X:
 	case BPF_JMP32 | BPF_JGE | BPF_X:
@@ -1687,17 +1687,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	/* PC += off if dst < imm (signed) */
 	/* PC += off if dst <= imm (signed) */
 	/* PC += off if dst & imm */
-	case BPF_JMP | BPF_JEQ | BPF_K:
-	case BPF_JMP | BPF_JGT | BPF_K:
-	case BPF_JMP | BPF_JGE | BPF_K:
-	case BPF_JMP | BPF_JNE | BPF_K:
-	case BPF_JMP | BPF_JSGT | BPF_K:
-	case BPF_JMP | BPF_JSGE | BPF_K:
-	case BPF_JMP | BPF_JSET | BPF_K:
-	case BPF_JMP | BPF_JLT | BPF_K:
-	case BPF_JMP | BPF_JLE | BPF_K:
-	case BPF_JMP | BPF_JSLT | BPF_K:
-	case BPF_JMP | BPF_JSLE | BPF_K:
+	case BPF_JMP64 | BPF_JEQ | BPF_K:
+	case BPF_JMP64 | BPF_JGT | BPF_K:
+	case BPF_JMP64 | BPF_JGE | BPF_K:
+	case BPF_JMP64 | BPF_JNE | BPF_K:
+	case BPF_JMP64 | BPF_JSGT | BPF_K:
+	case BPF_JMP64 | BPF_JSGE | BPF_K:
+	case BPF_JMP64 | BPF_JSET | BPF_K:
+	case BPF_JMP64 | BPF_JLT | BPF_K:
+	case BPF_JMP64 | BPF_JLE | BPF_K:
+	case BPF_JMP64 | BPF_JSLT | BPF_K:
+	case BPF_JMP64 | BPF_JSLE | BPF_K:
 	case BPF_JMP32 | BPF_JEQ | BPF_K:
 	case BPF_JMP32 | BPF_JGT | BPF_K:
 	case BPF_JMP32 | BPF_JGE | BPF_K:
@@ -1721,7 +1721,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 
 		/* Check for the condition */
 		emit_ar_r(rd[0], rd[1], rm, rn, ctx, BPF_OP(code),
-			  BPF_CLASS(code) == BPF_JMP);
+			  BPF_CLASS(code) == BPF_JMP64);
 
 		/* Setup JUMP instruction */
 		jmp_offset = bpf2a32_offset(i+off, i, ctx);
@@ -1760,7 +1760,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		}
 		break;
 	/* JMP OFF */
-	case BPF_JMP | BPF_JA:
+	case BPF_JMP64 | BPF_JA:
 	{
 		if (off == 0)
 			break;
@@ -1770,12 +1770,12 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 	}
 	/* tail call */
-	case BPF_JMP | BPF_TAIL_CALL:
+	case BPF_JMP64 | BPF_TAIL_CALL:
 		if (emit_bpf_tail_call(ctx))
 			return -EFAULT;
 		break;
 	/* function call */
-	case BPF_JMP | BPF_CALL:
+	case BPF_JMP64 | BPF_CALL:
 	{
 		const s8 *r0 = bpf2a32[BPF_REG_0];
 		const s8 *r1 = bpf2a32[BPF_REG_1];
@@ -1798,7 +1798,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 	}
 	/* function return */
-	case BPF_JMP | BPF_EXIT:
+	case BPF_JMP64 | BPF_EXIT:
 		/* Optimization: when last instruction is EXIT
 		 * simply fallthrough to epilogue.
 		 */
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 62f805f..9de7a15 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -765,7 +765,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 	const s32 imm = insn->imm;
 	const int i = insn - ctx->prog->insnsi;
 	const bool is64 = BPF_CLASS(code) == BPF_ALU64 ||
-			  BPF_CLASS(code) == BPF_JMP;
+			  BPF_CLASS(code) == BPF_JMP64;
 	u8 jmp_cond;
 	s32 jmp_offset;
 	u32 a64_insn;
@@ -776,64 +776,64 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 
 	switch (code) {
 	/* dst = src */
-	case BPF_ALU | BPF_MOV | BPF_X:
+	case BPF_ALU32 | BPF_MOV | BPF_X:
 	case BPF_ALU64 | BPF_MOV | BPF_X:
 		emit(A64_MOV(is64, dst, src), ctx);
 		break;
 	/* dst = dst OP src */
-	case BPF_ALU | BPF_ADD | BPF_X:
+	case BPF_ALU32 | BPF_ADD | BPF_X:
 	case BPF_ALU64 | BPF_ADD | BPF_X:
 		emit(A64_ADD(is64, dst, dst, src), ctx);
 		break;
-	case BPF_ALU | BPF_SUB | BPF_X:
+	case BPF_ALU32 | BPF_SUB | BPF_X:
 	case BPF_ALU64 | BPF_SUB | BPF_X:
 		emit(A64_SUB(is64, dst, dst, src), ctx);
 		break;
-	case BPF_ALU | BPF_AND | BPF_X:
+	case BPF_ALU32 | BPF_AND | BPF_X:
 	case BPF_ALU64 | BPF_AND | BPF_X:
 		emit(A64_AND(is64, dst, dst, src), ctx);
 		break;
-	case BPF_ALU | BPF_OR | BPF_X:
+	case BPF_ALU32 | BPF_OR | BPF_X:
 	case BPF_ALU64 | BPF_OR | BPF_X:
 		emit(A64_ORR(is64, dst, dst, src), ctx);
 		break;
-	case BPF_ALU | BPF_XOR | BPF_X:
+	case BPF_ALU32 | BPF_XOR | BPF_X:
 	case BPF_ALU64 | BPF_XOR | BPF_X:
 		emit(A64_EOR(is64, dst, dst, src), ctx);
 		break;
-	case BPF_ALU | BPF_MUL | BPF_X:
+	case BPF_ALU32 | BPF_MUL | BPF_X:
 	case BPF_ALU64 | BPF_MUL | BPF_X:
 		emit(A64_MUL(is64, dst, dst, src), ctx);
 		break;
-	case BPF_ALU | BPF_DIV | BPF_X:
+	case BPF_ALU32 | BPF_DIV | BPF_X:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
 		emit(A64_UDIV(is64, dst, dst, src), ctx);
 		break;
-	case BPF_ALU | BPF_MOD | BPF_X:
+	case BPF_ALU32 | BPF_MOD | BPF_X:
 	case BPF_ALU64 | BPF_MOD | BPF_X:
 		emit(A64_UDIV(is64, tmp, dst, src), ctx);
 		emit(A64_MSUB(is64, dst, dst, tmp, src), ctx);
 		break;
-	case BPF_ALU | BPF_LSH | BPF_X:
+	case BPF_ALU32 | BPF_LSH | BPF_X:
 	case BPF_ALU64 | BPF_LSH | BPF_X:
 		emit(A64_LSLV(is64, dst, dst, src), ctx);
 		break;
-	case BPF_ALU | BPF_RSH | BPF_X:
+	case BPF_ALU32 | BPF_RSH | BPF_X:
 	case BPF_ALU64 | BPF_RSH | BPF_X:
 		emit(A64_LSRV(is64, dst, dst, src), ctx);
 		break;
-	case BPF_ALU | BPF_ARSH | BPF_X:
+	case BPF_ALU32 | BPF_ARSH | BPF_X:
 	case BPF_ALU64 | BPF_ARSH | BPF_X:
 		emit(A64_ASRV(is64, dst, dst, src), ctx);
 		break;
 	/* dst = -dst */
-	case BPF_ALU | BPF_NEG:
+	case BPF_ALU32 | BPF_NEG:
 	case BPF_ALU64 | BPF_NEG:
 		emit(A64_NEG(is64, dst, dst), ctx);
 		break;
 	/* dst = BSWAP##imm(dst) */
-	case BPF_ALU | BPF_END | BPF_FROM_LE:
-	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_LE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 #ifdef CONFIG_CPU_BIG_ENDIAN
 		if (BPF_SRC(code) == BPF_FROM_BE)
 			goto emit_bswap_uxt;
@@ -872,12 +872,12 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		}
 		break;
 	/* dst = imm */
-	case BPF_ALU | BPF_MOV | BPF_K:
+	case BPF_ALU32 | BPF_MOV | BPF_K:
 	case BPF_ALU64 | BPF_MOV | BPF_K:
 		emit_a64_mov_i(is64, dst, imm, ctx);
 		break;
 	/* dst = dst OP imm */
-	case BPF_ALU | BPF_ADD | BPF_K:
+	case BPF_ALU32 | BPF_ADD | BPF_K:
 	case BPF_ALU64 | BPF_ADD | BPF_K:
 		if (is_addsub_imm(imm)) {
 			emit(A64_ADD_I(is64, dst, dst, imm), ctx);
@@ -888,7 +888,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			emit(A64_ADD(is64, dst, dst, tmp), ctx);
 		}
 		break;
-	case BPF_ALU | BPF_SUB | BPF_K:
+	case BPF_ALU32 | BPF_SUB | BPF_K:
 	case BPF_ALU64 | BPF_SUB | BPF_K:
 		if (is_addsub_imm(imm)) {
 			emit(A64_SUB_I(is64, dst, dst, imm), ctx);
@@ -899,7 +899,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			emit(A64_SUB(is64, dst, dst, tmp), ctx);
 		}
 		break;
-	case BPF_ALU | BPF_AND | BPF_K:
+	case BPF_ALU32 | BPF_AND | BPF_K:
 	case BPF_ALU64 | BPF_AND | BPF_K:
 		a64_insn = A64_AND_I(is64, dst, dst, imm);
 		if (a64_insn != AARCH64_BREAK_FAULT) {
@@ -909,7 +909,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			emit(A64_AND(is64, dst, dst, tmp), ctx);
 		}
 		break;
-	case BPF_ALU | BPF_OR | BPF_K:
+	case BPF_ALU32 | BPF_OR | BPF_K:
 	case BPF_ALU64 | BPF_OR | BPF_K:
 		a64_insn = A64_ORR_I(is64, dst, dst, imm);
 		if (a64_insn != AARCH64_BREAK_FAULT) {
@@ -919,7 +919,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			emit(A64_ORR(is64, dst, dst, tmp), ctx);
 		}
 		break;
-	case BPF_ALU | BPF_XOR | BPF_K:
+	case BPF_ALU32 | BPF_XOR | BPF_K:
 	case BPF_ALU64 | BPF_XOR | BPF_K:
 		a64_insn = A64_EOR_I(is64, dst, dst, imm);
 		if (a64_insn != AARCH64_BREAK_FAULT) {
@@ -929,52 +929,52 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			emit(A64_EOR(is64, dst, dst, tmp), ctx);
 		}
 		break;
-	case BPF_ALU | BPF_MUL | BPF_K:
+	case BPF_ALU32 | BPF_MUL | BPF_K:
 	case BPF_ALU64 | BPF_MUL | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
 		emit(A64_MUL(is64, dst, dst, tmp), ctx);
 		break;
-	case BPF_ALU | BPF_DIV | BPF_K:
+	case BPF_ALU32 | BPF_DIV | BPF_K:
 	case BPF_ALU64 | BPF_DIV | BPF_K:
 		emit_a64_mov_i(is64, tmp, imm, ctx);
 		emit(A64_UDIV(is64, dst, dst, tmp), ctx);
 		break;
-	case BPF_ALU | BPF_MOD | BPF_K:
+	case BPF_ALU32 | BPF_MOD | BPF_K:
 	case BPF_ALU64 | BPF_MOD | BPF_K:
 		emit_a64_mov_i(is64, tmp2, imm, ctx);
 		emit(A64_UDIV(is64, tmp, dst, tmp2), ctx);
 		emit(A64_MSUB(is64, dst, dst, tmp, tmp2), ctx);
 		break;
-	case BPF_ALU | BPF_LSH | BPF_K:
+	case BPF_ALU32 | BPF_LSH | BPF_K:
 	case BPF_ALU64 | BPF_LSH | BPF_K:
 		emit(A64_LSL(is64, dst, dst, imm), ctx);
 		break;
-	case BPF_ALU | BPF_RSH | BPF_K:
+	case BPF_ALU32 | BPF_RSH | BPF_K:
 	case BPF_ALU64 | BPF_RSH | BPF_K:
 		emit(A64_LSR(is64, dst, dst, imm), ctx);
 		break;
-	case BPF_ALU | BPF_ARSH | BPF_K:
+	case BPF_ALU32 | BPF_ARSH | BPF_K:
 	case BPF_ALU64 | BPF_ARSH | BPF_K:
 		emit(A64_ASR(is64, dst, dst, imm), ctx);
 		break;
 
 	/* JUMP off */
-	case BPF_JMP | BPF_JA:
+	case BPF_JMP64 | BPF_JA:
 		jmp_offset = bpf2a64_offset(i, off, ctx);
 		check_imm26(jmp_offset);
 		emit(A64_B(jmp_offset), ctx);
 		break;
 	/* IF (dst COND src) JUMP off */
-	case BPF_JMP | BPF_JEQ | BPF_X:
-	case BPF_JMP | BPF_JGT | BPF_X:
-	case BPF_JMP | BPF_JLT | BPF_X:
-	case BPF_JMP | BPF_JGE | BPF_X:
-	case BPF_JMP | BPF_JLE | BPF_X:
-	case BPF_JMP | BPF_JNE | BPF_X:
-	case BPF_JMP | BPF_JSGT | BPF_X:
-	case BPF_JMP | BPF_JSLT | BPF_X:
-	case BPF_JMP | BPF_JSGE | BPF_X:
-	case BPF_JMP | BPF_JSLE | BPF_X:
+	case BPF_JMP64 | BPF_JEQ | BPF_X:
+	case BPF_JMP64 | BPF_JGT | BPF_X:
+	case BPF_JMP64 | BPF_JLT | BPF_X:
+	case BPF_JMP64 | BPF_JGE | BPF_X:
+	case BPF_JMP64 | BPF_JLE | BPF_X:
+	case BPF_JMP64 | BPF_JNE | BPF_X:
+	case BPF_JMP64 | BPF_JSGT | BPF_X:
+	case BPF_JMP64 | BPF_JSLT | BPF_X:
+	case BPF_JMP64 | BPF_JSGE | BPF_X:
+	case BPF_JMP64 | BPF_JSLE | BPF_X:
 	case BPF_JMP32 | BPF_JEQ | BPF_X:
 	case BPF_JMP32 | BPF_JGT | BPF_X:
 	case BPF_JMP32 | BPF_JLT | BPF_X:
@@ -1026,21 +1026,21 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		}
 		emit(A64_B_(jmp_cond, jmp_offset), ctx);
 		break;
-	case BPF_JMP | BPF_JSET | BPF_X:
+	case BPF_JMP64 | BPF_JSET | BPF_X:
 	case BPF_JMP32 | BPF_JSET | BPF_X:
 		emit(A64_TST(is64, dst, src), ctx);
 		goto emit_cond_jmp;
 	/* IF (dst COND imm) JUMP off */
-	case BPF_JMP | BPF_JEQ | BPF_K:
-	case BPF_JMP | BPF_JGT | BPF_K:
-	case BPF_JMP | BPF_JLT | BPF_K:
-	case BPF_JMP | BPF_JGE | BPF_K:
-	case BPF_JMP | BPF_JLE | BPF_K:
-	case BPF_JMP | BPF_JNE | BPF_K:
-	case BPF_JMP | BPF_JSGT | BPF_K:
-	case BPF_JMP | BPF_JSLT | BPF_K:
-	case BPF_JMP | BPF_JSGE | BPF_K:
-	case BPF_JMP | BPF_JSLE | BPF_K:
+	case BPF_JMP64 | BPF_JEQ | BPF_K:
+	case BPF_JMP64 | BPF_JGT | BPF_K:
+	case BPF_JMP64 | BPF_JLT | BPF_K:
+	case BPF_JMP64 | BPF_JGE | BPF_K:
+	case BPF_JMP64 | BPF_JLE | BPF_K:
+	case BPF_JMP64 | BPF_JNE | BPF_K:
+	case BPF_JMP64 | BPF_JSGT | BPF_K:
+	case BPF_JMP64 | BPF_JSLT | BPF_K:
+	case BPF_JMP64 | BPF_JSGE | BPF_K:
+	case BPF_JMP64 | BPF_JSLE | BPF_K:
 	case BPF_JMP32 | BPF_JEQ | BPF_K:
 	case BPF_JMP32 | BPF_JGT | BPF_K:
 	case BPF_JMP32 | BPF_JLT | BPF_K:
@@ -1060,7 +1060,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 			emit(A64_CMP(is64, dst, tmp), ctx);
 		}
 		goto emit_cond_jmp;
-	case BPF_JMP | BPF_JSET | BPF_K:
+	case BPF_JMP64 | BPF_JSET | BPF_K:
 	case BPF_JMP32 | BPF_JSET | BPF_K:
 		a64_insn = A64_TST_I(is64, dst, imm);
 		if (a64_insn != AARCH64_BREAK_FAULT) {
@@ -1071,7 +1071,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		}
 		goto emit_cond_jmp;
 	/* function call */
-	case BPF_JMP | BPF_CALL:
+	case BPF_JMP64 | BPF_CALL:
 	{
 		const u8 r0 = bpf2a64[BPF_REG_0];
 		bool func_addr_fixed;
@@ -1086,12 +1086,12 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx,
 		break;
 	}
 	/* tail call */
-	case BPF_JMP | BPF_TAIL_CALL:
+	case BPF_JMP64 | BPF_TAIL_CALL:
 		if (emit_bpf_tail_call(ctx))
 			return -EFAULT;
 		break;
 	/* function return */
-	case BPF_JMP | BPF_EXIT:
+	case BPF_JMP64 | BPF_EXIT:
 		/* Optimization: when last instruction is EXIT,
 		   simply fallthrough to epilogue. */
 		if (i == ctx->prog->len - 1)
@@ -1338,7 +1338,7 @@ static int find_fpb_offset(struct bpf_prog *prog)
 			break;
 
 		case BPF_JMP32:
-		case BPF_JMP:
+		case BPF_JMP64:
 			break;
 
 		case BPF_LDX:
@@ -1352,7 +1352,7 @@ static int find_fpb_offset(struct bpf_prog *prog)
 				offset = off;
 			break;
 
-		case BPF_ALU:
+		case BPF_ALU32:
 		case BPF_ALU64:
 		default:
 			/* fp holds ALU result */
diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
index c4b1947..f18eaaa 100644
--- a/arch/loongarch/net/bpf_jit.c
+++ b/arch/loongarch/net/bpf_jit.c
@@ -462,31 +462,31 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 	const s16 off = insn->off;
 	const s32 imm = insn->imm;
 	const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
-	const bool is32 = BPF_CLASS(insn->code) == BPF_ALU || BPF_CLASS(insn->code) == BPF_JMP32;
+	const bool is32 = BPF_CLASS(insn->code) == BPF_ALU32 || BPF_CLASS(insn->code) == BPF_JMP32;
 
 	switch (code) {
 	/* dst = src */
-	case BPF_ALU | BPF_MOV | BPF_X:
+	case BPF_ALU32 | BPF_MOV | BPF_X:
 	case BPF_ALU64 | BPF_MOV | BPF_X:
 		move_reg(ctx, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
 
 	/* dst = imm */
-	case BPF_ALU | BPF_MOV | BPF_K:
+	case BPF_ALU32 | BPF_MOV | BPF_K:
 	case BPF_ALU64 | BPF_MOV | BPF_K:
 		move_imm(ctx, dst, imm, is32);
 		break;
 
 	/* dst = dst + src */
-	case BPF_ALU | BPF_ADD | BPF_X:
+	case BPF_ALU32 | BPF_ADD | BPF_X:
 	case BPF_ALU64 | BPF_ADD | BPF_X:
 		emit_insn(ctx, addd, dst, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
 
 	/* dst = dst + imm */
-	case BPF_ALU | BPF_ADD | BPF_K:
+	case BPF_ALU32 | BPF_ADD | BPF_K:
 	case BPF_ALU64 | BPF_ADD | BPF_K:
 		if (is_signed_imm12(imm)) {
 			emit_insn(ctx, addid, dst, dst, imm);
@@ -498,14 +498,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst - src */
-	case BPF_ALU | BPF_SUB | BPF_X:
+	case BPF_ALU32 | BPF_SUB | BPF_X:
 	case BPF_ALU64 | BPF_SUB | BPF_X:
 		emit_insn(ctx, subd, dst, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
 
 	/* dst = dst - imm */
-	case BPF_ALU | BPF_SUB | BPF_K:
+	case BPF_ALU32 | BPF_SUB | BPF_K:
 	case BPF_ALU64 | BPF_SUB | BPF_K:
 		if (is_signed_imm12(-imm)) {
 			emit_insn(ctx, addid, dst, dst, -imm);
@@ -517,14 +517,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst * src */
-	case BPF_ALU | BPF_MUL | BPF_X:
+	case BPF_ALU32 | BPF_MUL | BPF_X:
 	case BPF_ALU64 | BPF_MUL | BPF_X:
 		emit_insn(ctx, muld, dst, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
 
 	/* dst = dst * imm */
-	case BPF_ALU | BPF_MUL | BPF_K:
+	case BPF_ALU32 | BPF_MUL | BPF_K:
 	case BPF_ALU64 | BPF_MUL | BPF_K:
 		move_imm(ctx, t1, imm, is32);
 		emit_insn(ctx, muld, dst, dst, t1);
@@ -532,7 +532,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst / src */
-	case BPF_ALU | BPF_DIV | BPF_X:
+	case BPF_ALU32 | BPF_DIV | BPF_X:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
 		emit_zext_32(ctx, dst, is32);
 		move_reg(ctx, t1, src);
@@ -542,7 +542,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst / imm */
-	case BPF_ALU | BPF_DIV | BPF_K:
+	case BPF_ALU32 | BPF_DIV | BPF_K:
 	case BPF_ALU64 | BPF_DIV | BPF_K:
 		move_imm(ctx, t1, imm, is32);
 		emit_zext_32(ctx, dst, is32);
@@ -551,7 +551,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst % src */
-	case BPF_ALU | BPF_MOD | BPF_X:
+	case BPF_ALU32 | BPF_MOD | BPF_X:
 	case BPF_ALU64 | BPF_MOD | BPF_X:
 		emit_zext_32(ctx, dst, is32);
 		move_reg(ctx, t1, src);
@@ -561,7 +561,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst % imm */
-	case BPF_ALU | BPF_MOD | BPF_K:
+	case BPF_ALU32 | BPF_MOD | BPF_K:
 	case BPF_ALU64 | BPF_MOD | BPF_K:
 		move_imm(ctx, t1, imm, is32);
 		emit_zext_32(ctx, dst, is32);
@@ -570,7 +570,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = -dst */
-	case BPF_ALU | BPF_NEG:
+	case BPF_ALU32 | BPF_NEG:
 	case BPF_ALU64 | BPF_NEG:
 		move_imm(ctx, t1, imm, is32);
 		emit_insn(ctx, subd, dst, LOONGARCH_GPR_ZERO, dst);
@@ -578,14 +578,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst & src */
-	case BPF_ALU | BPF_AND | BPF_X:
+	case BPF_ALU32 | BPF_AND | BPF_X:
 	case BPF_ALU64 | BPF_AND | BPF_X:
 		emit_insn(ctx, and, dst, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
 
 	/* dst = dst & imm */
-	case BPF_ALU | BPF_AND | BPF_K:
+	case BPF_ALU32 | BPF_AND | BPF_K:
 	case BPF_ALU64 | BPF_AND | BPF_K:
 		if (is_unsigned_imm12(imm)) {
 			emit_insn(ctx, andi, dst, dst, imm);
@@ -597,14 +597,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst | src */
-	case BPF_ALU | BPF_OR | BPF_X:
+	case BPF_ALU32 | BPF_OR | BPF_X:
 	case BPF_ALU64 | BPF_OR | BPF_X:
 		emit_insn(ctx, or, dst, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
 
 	/* dst = dst | imm */
-	case BPF_ALU | BPF_OR | BPF_K:
+	case BPF_ALU32 | BPF_OR | BPF_K:
 	case BPF_ALU64 | BPF_OR | BPF_K:
 		if (is_unsigned_imm12(imm)) {
 			emit_insn(ctx, ori, dst, dst, imm);
@@ -616,14 +616,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst ^ src */
-	case BPF_ALU | BPF_XOR | BPF_X:
+	case BPF_ALU32 | BPF_XOR | BPF_X:
 	case BPF_ALU64 | BPF_XOR | BPF_X:
 		emit_insn(ctx, xor, dst, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
 
 	/* dst = dst ^ imm */
-	case BPF_ALU | BPF_XOR | BPF_K:
+	case BPF_ALU32 | BPF_XOR | BPF_K:
 	case BPF_ALU64 | BPF_XOR | BPF_K:
 		if (is_unsigned_imm12(imm)) {
 			emit_insn(ctx, xori, dst, dst, imm);
@@ -635,7 +635,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst << src (logical) */
-	case BPF_ALU | BPF_LSH | BPF_X:
+	case BPF_ALU32 | BPF_LSH | BPF_X:
 		emit_insn(ctx, sllw, dst, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
@@ -645,7 +645,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst << imm (logical) */
-	case BPF_ALU | BPF_LSH | BPF_K:
+	case BPF_ALU32 | BPF_LSH | BPF_K:
 		emit_insn(ctx, slliw, dst, dst, imm);
 		emit_zext_32(ctx, dst, is32);
 		break;
@@ -655,7 +655,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst >> src (logical) */
-	case BPF_ALU | BPF_RSH | BPF_X:
+	case BPF_ALU32 | BPF_RSH | BPF_X:
 		emit_insn(ctx, srlw, dst, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
@@ -665,7 +665,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst >> imm (logical) */
-	case BPF_ALU | BPF_RSH | BPF_K:
+	case BPF_ALU32 | BPF_RSH | BPF_K:
 		emit_insn(ctx, srliw, dst, dst, imm);
 		emit_zext_32(ctx, dst, is32);
 		break;
@@ -675,7 +675,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst >> src (arithmetic) */
-	case BPF_ALU | BPF_ARSH | BPF_X:
+	case BPF_ALU32 | BPF_ARSH | BPF_X:
 		emit_insn(ctx, sraw, dst, dst, src);
 		emit_zext_32(ctx, dst, is32);
 		break;
@@ -685,7 +685,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = dst >> imm (arithmetic) */
-	case BPF_ALU | BPF_ARSH | BPF_K:
+	case BPF_ALU32 | BPF_ARSH | BPF_K:
 		emit_insn(ctx, sraiw, dst, dst, imm);
 		emit_zext_32(ctx, dst, is32);
 		break;
@@ -695,7 +695,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* dst = BSWAP##imm(dst) */
-	case BPF_ALU | BPF_END | BPF_FROM_LE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_LE:
 		switch (imm) {
 		case 16:
 			/* zero-extend 16 bits into 64 bits */
@@ -711,7 +711,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		}
 		break;
 
-	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 		switch (imm) {
 		case 16:
 			emit_insn(ctx, revb2h, dst, dst);
@@ -730,16 +730,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* PC += off if dst cond src */
-	case BPF_JMP | BPF_JEQ | BPF_X:
-	case BPF_JMP | BPF_JNE | BPF_X:
-	case BPF_JMP | BPF_JGT | BPF_X:
-	case BPF_JMP | BPF_JGE | BPF_X:
-	case BPF_JMP | BPF_JLT | BPF_X:
-	case BPF_JMP | BPF_JLE | BPF_X:
-	case BPF_JMP | BPF_JSGT | BPF_X:
-	case BPF_JMP | BPF_JSGE | BPF_X:
-	case BPF_JMP | BPF_JSLT | BPF_X:
-	case BPF_JMP | BPF_JSLE | BPF_X:
+	case BPF_JMP64 | BPF_JEQ | BPF_X:
+	case BPF_JMP64 | BPF_JNE | BPF_X:
+	case BPF_JMP64 | BPF_JGT | BPF_X:
+	case BPF_JMP64 | BPF_JGE | BPF_X:
+	case BPF_JMP64 | BPF_JLT | BPF_X:
+	case BPF_JMP64 | BPF_JLE | BPF_X:
+	case BPF_JMP64 | BPF_JSGT | BPF_X:
+	case BPF_JMP64 | BPF_JSGE | BPF_X:
+	case BPF_JMP64 | BPF_JSLT | BPF_X:
+	case BPF_JMP64 | BPF_JSLE | BPF_X:
 	case BPF_JMP32 | BPF_JEQ | BPF_X:
 	case BPF_JMP32 | BPF_JNE | BPF_X:
 	case BPF_JMP32 | BPF_JGT | BPF_X:
@@ -765,16 +765,16 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* PC += off if dst cond imm */
-	case BPF_JMP | BPF_JEQ | BPF_K:
-	case BPF_JMP | BPF_JNE | BPF_K:
-	case BPF_JMP | BPF_JGT | BPF_K:
-	case BPF_JMP | BPF_JGE | BPF_K:
-	case BPF_JMP | BPF_JLT | BPF_K:
-	case BPF_JMP | BPF_JLE | BPF_K:
-	case BPF_JMP | BPF_JSGT | BPF_K:
-	case BPF_JMP | BPF_JSGE | BPF_K:
-	case BPF_JMP | BPF_JSLT | BPF_K:
-	case BPF_JMP | BPF_JSLE | BPF_K:
+	case BPF_JMP64 | BPF_JEQ | BPF_K:
+	case BPF_JMP64 | BPF_JNE | BPF_K:
+	case BPF_JMP64 | BPF_JGT | BPF_K:
+	case BPF_JMP64 | BPF_JGE | BPF_K:
+	case BPF_JMP64 | BPF_JLT | BPF_K:
+	case BPF_JMP64 | BPF_JLE | BPF_K:
+	case BPF_JMP64 | BPF_JSGT | BPF_K:
+	case BPF_JMP64 | BPF_JSGE | BPF_K:
+	case BPF_JMP64 | BPF_JSLT | BPF_K:
+	case BPF_JMP64 | BPF_JSLE | BPF_K:
 	case BPF_JMP32 | BPF_JEQ | BPF_K:
 	case BPF_JMP32 | BPF_JNE | BPF_K:
 	case BPF_JMP32 | BPF_JGT | BPF_K:
@@ -806,7 +806,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* PC += off if dst & src */
-	case BPF_JMP | BPF_JSET | BPF_X:
+	case BPF_JMP64 | BPF_JSET | BPF_X:
 	case BPF_JMP32 | BPF_JSET | BPF_X:
 		jmp_offset = bpf2la_offset(i, off, ctx);
 		emit_insn(ctx, and, t1, dst, src);
@@ -816,7 +816,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* PC += off if dst & imm */
-	case BPF_JMP | BPF_JSET | BPF_K:
+	case BPF_JMP64 | BPF_JSET | BPF_K:
 	case BPF_JMP32 | BPF_JSET | BPF_K:
 		jmp_offset = bpf2la_offset(i, off, ctx);
 		move_imm(ctx, t1, imm, is32);
@@ -827,14 +827,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* PC += off */
-	case BPF_JMP | BPF_JA:
+	case BPF_JMP64 | BPF_JA:
 		jmp_offset = bpf2la_offset(i, off, ctx);
 		if (emit_uncond_jmp(ctx, jmp_offset) < 0)
 			goto toofar;
 		break;
 
 	/* function call */
-	case BPF_JMP | BPF_CALL:
+	case BPF_JMP64 | BPF_CALL:
 		mark_call(ctx);
 		ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass,
 					    &func_addr, &func_addr_fixed);
@@ -847,14 +847,14 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
 		break;
 
 	/* tail call */
-	case BPF_JMP | BPF_TAIL_CALL:
+	case BPF_JMP64 | BPF_TAIL_CALL:
 		mark_tail_call(ctx);
 		if (emit_bpf_tail_call(ctx) < 0)
 			return -EINVAL;
 		break;
 
 	/* function return */
-	case BPF_JMP | BPF_EXIT:
+	case BPF_JMP64 | BPF_EXIT:
 		emit_sext_32(ctx, regmap[BPF_REG_0], true);
 
 		if (i == ctx->prog->len - 1)
diff --git a/arch/mips/net/bpf_jit_comp32.c b/arch/mips/net/bpf_jit_comp32.c
index ace5db3..6bd12f6 100644
--- a/arch/mips/net/bpf_jit_comp32.c
+++ b/arch/mips/net/bpf_jit_comp32.c
@@ -1475,12 +1475,12 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	switch (code) {
 	/* ALU operations */
 	/* dst = imm */
-	case BPF_ALU | BPF_MOV | BPF_K:
+	case BPF_ALU32 | BPF_MOV | BPF_K:
 		emit_mov_i(ctx, lo(dst), imm);
 		emit_zext_ver(ctx, dst);
 		break;
 	/* dst = src */
-	case BPF_ALU | BPF_MOV | BPF_X:
+	case BPF_ALU32 | BPF_MOV | BPF_X:
 		if (imm == 1) {
 			/* Special mov32 for zext */
 			emit_mov_i(ctx, hi(dst), 0);
@@ -1490,7 +1490,7 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 		}
 		break;
 	/* dst = -dst */
-	case BPF_ALU | BPF_NEG:
+	case BPF_ALU32 | BPF_NEG:
 		emit_alu_i(ctx, lo(dst), 0, BPF_NEG);
 		emit_zext_ver(ctx, dst);
 		break;
@@ -1505,17 +1505,17 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* dst = dst * imm */
 	/* dst = dst / imm */
 	/* dst = dst % imm */
-	case BPF_ALU | BPF_OR | BPF_K:
-	case BPF_ALU | BPF_AND | BPF_K:
-	case BPF_ALU | BPF_XOR | BPF_K:
-	case BPF_ALU | BPF_LSH | BPF_K:
-	case BPF_ALU | BPF_RSH | BPF_K:
-	case BPF_ALU | BPF_ARSH | BPF_K:
-	case BPF_ALU | BPF_ADD | BPF_K:
-	case BPF_ALU | BPF_SUB | BPF_K:
-	case BPF_ALU | BPF_MUL | BPF_K:
-	case BPF_ALU | BPF_DIV | BPF_K:
-	case BPF_ALU | BPF_MOD | BPF_K:
+	case BPF_ALU32 | BPF_OR | BPF_K:
+	case BPF_ALU32 | BPF_AND | BPF_K:
+	case BPF_ALU32 | BPF_XOR | BPF_K:
+	case BPF_ALU32 | BPF_LSH | BPF_K:
+	case BPF_ALU32 | BPF_RSH | BPF_K:
+	case BPF_ALU32 | BPF_ARSH | BPF_K:
+	case BPF_ALU32 | BPF_ADD | BPF_K:
+	case BPF_ALU32 | BPF_SUB | BPF_K:
+	case BPF_ALU32 | BPF_MUL | BPF_K:
+	case BPF_ALU32 | BPF_DIV | BPF_K:
+	case BPF_ALU32 | BPF_MOD | BPF_K:
 		if (!valid_alu_i(BPF_OP(code), imm)) {
 			emit_mov_i(ctx, MIPS_R_T6, imm);
 			emit_alu_r(ctx, lo(dst), MIPS_R_T6, BPF_OP(code));
@@ -1535,17 +1535,17 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* dst = dst * src */
 	/* dst = dst / src */
 	/* dst = dst % src */
-	case BPF_ALU | BPF_AND | BPF_X:
-	case BPF_ALU | BPF_OR | BPF_X:
-	case BPF_ALU | BPF_XOR | BPF_X:
-	case BPF_ALU | BPF_LSH | BPF_X:
-	case BPF_ALU | BPF_RSH | BPF_X:
-	case BPF_ALU | BPF_ARSH | BPF_X:
-	case BPF_ALU | BPF_ADD | BPF_X:
-	case BPF_ALU | BPF_SUB | BPF_X:
-	case BPF_ALU | BPF_MUL | BPF_X:
-	case BPF_ALU | BPF_DIV | BPF_X:
-	case BPF_ALU | BPF_MOD | BPF_X:
+	case BPF_ALU32 | BPF_AND | BPF_X:
+	case BPF_ALU32 | BPF_OR | BPF_X:
+	case BPF_ALU32 | BPF_XOR | BPF_X:
+	case BPF_ALU32 | BPF_LSH | BPF_X:
+	case BPF_ALU32 | BPF_RSH | BPF_X:
+	case BPF_ALU32 | BPF_ARSH | BPF_X:
+	case BPF_ALU32 | BPF_ADD | BPF_X:
+	case BPF_ALU32 | BPF_SUB | BPF_X:
+	case BPF_ALU32 | BPF_MUL | BPF_X:
+	case BPF_ALU32 | BPF_DIV | BPF_X:
+	case BPF_ALU32 | BPF_MOD | BPF_X:
 		emit_alu_r(ctx, lo(dst), lo(src), BPF_OP(code));
 		emit_zext_ver(ctx, dst);
 		break;
@@ -1633,8 +1633,8 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 		break;
 	/* dst = htole(dst) */
 	/* dst = htobe(dst) */
-	case BPF_ALU | BPF_END | BPF_FROM_LE:
-	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_LE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 		if (BPF_SRC(code) ==
 #ifdef __BIG_ENDIAN
 		    BPF_FROM_LE
@@ -1814,17 +1814,17 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* PC += off if dst >= src (signed) */
 	/* PC += off if dst < src (signed) */
 	/* PC += off if dst <= src (signed) */
-	case BPF_JMP | BPF_JEQ | BPF_X:
-	case BPF_JMP | BPF_JNE | BPF_X:
-	case BPF_JMP | BPF_JSET | BPF_X:
-	case BPF_JMP | BPF_JGT | BPF_X:
-	case BPF_JMP | BPF_JGE | BPF_X:
-	case BPF_JMP | BPF_JLT | BPF_X:
-	case BPF_JMP | BPF_JLE | BPF_X:
-	case BPF_JMP | BPF_JSGT | BPF_X:
-	case BPF_JMP | BPF_JSGE | BPF_X:
-	case BPF_JMP | BPF_JSLT | BPF_X:
-	case BPF_JMP | BPF_JSLE | BPF_X:
+	case BPF_JMP64 | BPF_JEQ | BPF_X:
+	case BPF_JMP64 | BPF_JNE | BPF_X:
+	case BPF_JMP64 | BPF_JSET | BPF_X:
+	case BPF_JMP64 | BPF_JGT | BPF_X:
+	case BPF_JMP64 | BPF_JGE | BPF_X:
+	case BPF_JMP64 | BPF_JLT | BPF_X:
+	case BPF_JMP64 | BPF_JLE | BPF_X:
+	case BPF_JMP64 | BPF_JSGT | BPF_X:
+	case BPF_JMP64 | BPF_JSGE | BPF_X:
+	case BPF_JMP64 | BPF_JSLT | BPF_X:
+	case BPF_JMP64 | BPF_JSLE | BPF_X:
 		if (off == 0)
 			break;
 		setup_jmp_r(ctx, dst == src, BPF_OP(code), off, &jmp, &rel);
@@ -1843,17 +1843,17 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* PC += off if dst >= imm (signed) */
 	/* PC += off if dst < imm (signed) */
 	/* PC += off if dst <= imm (signed) */
-	case BPF_JMP | BPF_JEQ | BPF_K:
-	case BPF_JMP | BPF_JNE | BPF_K:
-	case BPF_JMP | BPF_JSET | BPF_K:
-	case BPF_JMP | BPF_JGT | BPF_K:
-	case BPF_JMP | BPF_JGE | BPF_K:
-	case BPF_JMP | BPF_JLT | BPF_K:
-	case BPF_JMP | BPF_JLE | BPF_K:
-	case BPF_JMP | BPF_JSGT | BPF_K:
-	case BPF_JMP | BPF_JSGE | BPF_K:
-	case BPF_JMP | BPF_JSLT | BPF_K:
-	case BPF_JMP | BPF_JSLE | BPF_K:
+	case BPF_JMP64 | BPF_JEQ | BPF_K:
+	case BPF_JMP64 | BPF_JNE | BPF_K:
+	case BPF_JMP64 | BPF_JSET | BPF_K:
+	case BPF_JMP64 | BPF_JGT | BPF_K:
+	case BPF_JMP64 | BPF_JGE | BPF_K:
+	case BPF_JMP64 | BPF_JLT | BPF_K:
+	case BPF_JMP64 | BPF_JLE | BPF_K:
+	case BPF_JMP64 | BPF_JSGT | BPF_K:
+	case BPF_JMP64 | BPF_JSGE | BPF_K:
+	case BPF_JMP64 | BPF_JSLT | BPF_K:
+	case BPF_JMP64 | BPF_JSLE | BPF_K:
 		if (off == 0)
 			break;
 		setup_jmp_i(ctx, imm, 64, BPF_OP(code), off, &jmp, &rel);
@@ -1862,24 +1862,24 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 			goto toofar;
 		break;
 	/* PC += off */
-	case BPF_JMP | BPF_JA:
+	case BPF_JMP64 | BPF_JA:
 		if (off == 0)
 			break;
 		if (emit_ja(ctx, off) < 0)
 			goto toofar;
 		break;
 	/* Tail call */
-	case BPF_JMP | BPF_TAIL_CALL:
+	case BPF_JMP64 | BPF_TAIL_CALL:
 		if (emit_tail_call(ctx) < 0)
 			goto invalid;
 		break;
 	/* Function call */
-	case BPF_JMP | BPF_CALL:
+	case BPF_JMP64 | BPF_CALL:
 		if (emit_call(ctx, insn) < 0)
 			goto invalid;
 		break;
 	/* Function return */
-	case BPF_JMP | BPF_EXIT:
+	case BPF_JMP64 | BPF_EXIT:
 		/*
 		 * Optimization: when last instruction is EXIT
 		 * simply continue to epilogue.
diff --git a/arch/mips/net/bpf_jit_comp64.c b/arch/mips/net/bpf_jit_comp64.c
index 0e7c1bd..05416f4 100644
--- a/arch/mips/net/bpf_jit_comp64.c
+++ b/arch/mips/net/bpf_jit_comp64.c
@@ -643,12 +643,12 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	switch (code) {
 	/* ALU operations */
 	/* dst = imm */
-	case BPF_ALU | BPF_MOV | BPF_K:
+	case BPF_ALU32 | BPF_MOV | BPF_K:
 		emit_mov_i(ctx, dst, imm);
 		emit_zext_ver(ctx, dst);
 		break;
 	/* dst = src */
-	case BPF_ALU | BPF_MOV | BPF_X:
+	case BPF_ALU32 | BPF_MOV | BPF_X:
 		if (imm == 1) {
 			/* Special mov32 for zext */
 			emit_zext(ctx, dst);
@@ -658,7 +658,7 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 		}
 		break;
 	/* dst = -dst */
-	case BPF_ALU | BPF_NEG:
+	case BPF_ALU32 | BPF_NEG:
 		emit_sext(ctx, dst, dst);
 		emit_alu_i(ctx, dst, 0, BPF_NEG);
 		emit_zext_ver(ctx, dst);
@@ -667,10 +667,10 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* dst = dst | imm */
 	/* dst = dst ^ imm */
 	/* dst = dst << imm */
-	case BPF_ALU | BPF_OR | BPF_K:
-	case BPF_ALU | BPF_AND | BPF_K:
-	case BPF_ALU | BPF_XOR | BPF_K:
-	case BPF_ALU | BPF_LSH | BPF_K:
+	case BPF_ALU32 | BPF_OR | BPF_K:
+	case BPF_ALU32 | BPF_AND | BPF_K:
+	case BPF_ALU32 | BPF_XOR | BPF_K:
+	case BPF_ALU32 | BPF_LSH | BPF_K:
 		if (!valid_alu_i(BPF_OP(code), imm)) {
 			emit_mov_i(ctx, MIPS_R_T4, imm);
 			emit_alu_r(ctx, dst, MIPS_R_T4, BPF_OP(code));
@@ -686,13 +686,13 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* dst = dst * imm */
 	/* dst = dst / imm */
 	/* dst = dst % imm */
-	case BPF_ALU | BPF_RSH | BPF_K:
-	case BPF_ALU | BPF_ARSH | BPF_K:
-	case BPF_ALU | BPF_ADD | BPF_K:
-	case BPF_ALU | BPF_SUB | BPF_K:
-	case BPF_ALU | BPF_MUL | BPF_K:
-	case BPF_ALU | BPF_DIV | BPF_K:
-	case BPF_ALU | BPF_MOD | BPF_K:
+	case BPF_ALU32 | BPF_RSH | BPF_K:
+	case BPF_ALU32 | BPF_ARSH | BPF_K:
+	case BPF_ALU32 | BPF_ADD | BPF_K:
+	case BPF_ALU32 | BPF_SUB | BPF_K:
+	case BPF_ALU32 | BPF_MUL | BPF_K:
+	case BPF_ALU32 | BPF_DIV | BPF_K:
+	case BPF_ALU32 | BPF_MOD | BPF_K:
 		if (!valid_alu_i(BPF_OP(code), imm)) {
 			emit_sext(ctx, dst, dst);
 			emit_mov_i(ctx, MIPS_R_T4, imm);
@@ -707,10 +707,10 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* dst = dst | src */
 	/* dst = dst ^ src */
 	/* dst = dst << src */
-	case BPF_ALU | BPF_AND | BPF_X:
-	case BPF_ALU | BPF_OR | BPF_X:
-	case BPF_ALU | BPF_XOR | BPF_X:
-	case BPF_ALU | BPF_LSH | BPF_X:
+	case BPF_ALU32 | BPF_AND | BPF_X:
+	case BPF_ALU32 | BPF_OR | BPF_X:
+	case BPF_ALU32 | BPF_XOR | BPF_X:
+	case BPF_ALU32 | BPF_LSH | BPF_X:
 		emit_alu_r(ctx, dst, src, BPF_OP(code));
 		emit_zext_ver(ctx, dst);
 		break;
@@ -721,13 +721,13 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* dst = dst * src */
 	/* dst = dst / src */
 	/* dst = dst % src */
-	case BPF_ALU | BPF_RSH | BPF_X:
-	case BPF_ALU | BPF_ARSH | BPF_X:
-	case BPF_ALU | BPF_ADD | BPF_X:
-	case BPF_ALU | BPF_SUB | BPF_X:
-	case BPF_ALU | BPF_MUL | BPF_X:
-	case BPF_ALU | BPF_DIV | BPF_X:
-	case BPF_ALU | BPF_MOD | BPF_X:
+	case BPF_ALU32 | BPF_RSH | BPF_X:
+	case BPF_ALU32 | BPF_ARSH | BPF_X:
+	case BPF_ALU32 | BPF_ADD | BPF_X:
+	case BPF_ALU32 | BPF_SUB | BPF_X:
+	case BPF_ALU32 | BPF_MUL | BPF_X:
+	case BPF_ALU32 | BPF_DIV | BPF_X:
+	case BPF_ALU32 | BPF_MOD | BPF_X:
 		emit_sext(ctx, dst, dst);
 		emit_sext(ctx, MIPS_R_T4, src);
 		emit_alu_r(ctx, dst, MIPS_R_T4, BPF_OP(code));
@@ -800,8 +800,8 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 		break;
 	/* dst = htole(dst) */
 	/* dst = htobe(dst) */
-	case BPF_ALU | BPF_END | BPF_FROM_LE:
-	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_LE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 		if (BPF_SRC(code) ==
 #ifdef __BIG_ENDIAN
 		    BPF_FROM_LE
@@ -970,17 +970,17 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* PC += off if dst >= src (signed) */
 	/* PC += off if dst < src (signed) */
 	/* PC += off if dst <= src (signed) */
-	case BPF_JMP | BPF_JEQ | BPF_X:
-	case BPF_JMP | BPF_JNE | BPF_X:
-	case BPF_JMP | BPF_JSET | BPF_X:
-	case BPF_JMP | BPF_JGT | BPF_X:
-	case BPF_JMP | BPF_JGE | BPF_X:
-	case BPF_JMP | BPF_JLT | BPF_X:
-	case BPF_JMP | BPF_JLE | BPF_X:
-	case BPF_JMP | BPF_JSGT | BPF_X:
-	case BPF_JMP | BPF_JSGE | BPF_X:
-	case BPF_JMP | BPF_JSLT | BPF_X:
-	case BPF_JMP | BPF_JSLE | BPF_X:
+	case BPF_JMP64 | BPF_JEQ | BPF_X:
+	case BPF_JMP64 | BPF_JNE | BPF_X:
+	case BPF_JMP64 | BPF_JSET | BPF_X:
+	case BPF_JMP64 | BPF_JGT | BPF_X:
+	case BPF_JMP64 | BPF_JGE | BPF_X:
+	case BPF_JMP64 | BPF_JLT | BPF_X:
+	case BPF_JMP64 | BPF_JLE | BPF_X:
+	case BPF_JMP64 | BPF_JSGT | BPF_X:
+	case BPF_JMP64 | BPF_JSGE | BPF_X:
+	case BPF_JMP64 | BPF_JSLT | BPF_X:
+	case BPF_JMP64 | BPF_JSLE | BPF_X:
 		if (off == 0)
 			break;
 		setup_jmp_r(ctx, dst == src, BPF_OP(code), off, &jmp, &rel);
@@ -999,17 +999,17 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 	/* PC += off if dst >= imm (signed) */
 	/* PC += off if dst < imm (signed) */
 	/* PC += off if dst <= imm (signed) */
-	case BPF_JMP | BPF_JEQ | BPF_K:
-	case BPF_JMP | BPF_JNE | BPF_K:
-	case BPF_JMP | BPF_JSET | BPF_K:
-	case BPF_JMP | BPF_JGT | BPF_K:
-	case BPF_JMP | BPF_JGE | BPF_K:
-	case BPF_JMP | BPF_JLT | BPF_K:
-	case BPF_JMP | BPF_JLE | BPF_K:
-	case BPF_JMP | BPF_JSGT | BPF_K:
-	case BPF_JMP | BPF_JSGE | BPF_K:
-	case BPF_JMP | BPF_JSLT | BPF_K:
-	case BPF_JMP | BPF_JSLE | BPF_K:
+	case BPF_JMP64 | BPF_JEQ | BPF_K:
+	case BPF_JMP64 | BPF_JNE | BPF_K:
+	case BPF_JMP64 | BPF_JSET | BPF_K:
+	case BPF_JMP64 | BPF_JGT | BPF_K:
+	case BPF_JMP64 | BPF_JGE | BPF_K:
+	case BPF_JMP64 | BPF_JLT | BPF_K:
+	case BPF_JMP64 | BPF_JLE | BPF_K:
+	case BPF_JMP64 | BPF_JSGT | BPF_K:
+	case BPF_JMP64 | BPF_JSGE | BPF_K:
+	case BPF_JMP64 | BPF_JSLT | BPF_K:
+	case BPF_JMP64 | BPF_JSLE | BPF_K:
 		if (off == 0)
 			break;
 		setup_jmp_i(ctx, imm, 64, BPF_OP(code), off, &jmp, &rel);
@@ -1024,24 +1024,24 @@ int build_insn(const struct bpf_insn *insn, struct jit_context *ctx)
 			goto toofar;
 		break;
 	/* PC += off */
-	case BPF_JMP | BPF_JA:
+	case BPF_JMP64 | BPF_JA:
 		if (off == 0)
 			break;
 		if (emit_ja(ctx, off) < 0)
 			goto toofar;
 		break;
 	/* Tail call */
-	case BPF_JMP | BPF_TAIL_CALL:
+	case BPF_JMP64 | BPF_TAIL_CALL:
 		if (emit_tail_call(ctx) < 0)
 			goto invalid;
 		break;
 	/* Function call */
-	case BPF_JMP | BPF_CALL:
+	case BPF_JMP64 | BPF_CALL:
 		if (emit_call(ctx, insn) < 0)
 			goto invalid;
 		break;
 	/* Function return */
-	case BPF_JMP | BPF_EXIT:
+	case BPF_JMP64 | BPF_EXIT:
 		/*
 		 * Optimization: when last instruction is EXIT
 		 * simply continue to epilogue.
diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
index 43e6341..c26dd9f 100644
--- a/arch/powerpc/net/bpf_jit_comp.c
+++ b/arch/powerpc/net/bpf_jit_comp.c
@@ -43,7 +43,7 @@ static int bpf_jit_fixup_addresses(struct bpf_prog *fp, u32 *image,
 		 * ensure that the JITed instruction sequence for these calls
 		 * are of fixed length by padding them with NOPs.
 		 */
-		if (insn[i].code == (BPF_JMP | BPF_CALL) &&
+		if (insn[i].code == (BPF_JMP64 | BPF_CALL) &&
 		    insn[i].src_reg == BPF_PSEUDO_CALL) {
 			ret = bpf_jit_get_func_addr(fp, &insn[i], true,
 						    &func_addr,
diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
index a379b0c..3e7b9f1 100644
--- a/arch/powerpc/net/bpf_jit_comp32.c
+++ b/arch/powerpc/net/bpf_jit_comp32.c
@@ -327,24 +327,24 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Arithmetic operations: ADD/SUB/MUL/DIV/MOD/NEG
 		 */
-		case BPF_ALU | BPF_ADD | BPF_X: /* (u32) dst += (u32) src */
+		case BPF_ALU32 | BPF_ADD | BPF_X: /* (u32) dst += (u32) src */
 			EMIT(PPC_RAW_ADD(dst_reg, dst_reg, src_reg));
 			break;
 		case BPF_ALU64 | BPF_ADD | BPF_X: /* dst += src */
 			EMIT(PPC_RAW_ADDC(dst_reg, dst_reg, src_reg));
 			EMIT(PPC_RAW_ADDE(dst_reg_h, dst_reg_h, src_reg_h));
 			break;
-		case BPF_ALU | BPF_SUB | BPF_X: /* (u32) dst -= (u32) src */
+		case BPF_ALU32 | BPF_SUB | BPF_X: /* (u32) dst -= (u32) src */
 			EMIT(PPC_RAW_SUB(dst_reg, dst_reg, src_reg));
 			break;
 		case BPF_ALU64 | BPF_SUB | BPF_X: /* dst -= src */
 			EMIT(PPC_RAW_SUBFC(dst_reg, src_reg, dst_reg));
 			EMIT(PPC_RAW_SUBFE(dst_reg_h, src_reg_h, dst_reg_h));
 			break;
-		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+		case BPF_ALU32 | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
 			imm = -imm;
 			fallthrough;
-		case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
+		case BPF_ALU32 | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
 			if (IMM_HA(imm) & 0xffff)
 				EMIT(PPC_RAW_ADDIS(dst_reg, dst_reg, IMM_HA(imm)));
 			if (IMM_L(imm))
@@ -377,10 +377,10 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_ADD(dst_reg_h, dst_reg_h, _R0));
 			EMIT(PPC_RAW_ADD(dst_reg_h, dst_reg_h, tmp_reg));
 			break;
-		case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
+		case BPF_ALU32 | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
 			EMIT(PPC_RAW_MULW(dst_reg, dst_reg, src_reg));
 			break;
-		case BPF_ALU | BPF_MUL | BPF_K: /* (u32) dst *= (u32) imm */
+		case BPF_ALU32 | BPF_MUL | BPF_K: /* (u32) dst *= (u32) imm */
 			if (imm >= -32768 && imm < 32768) {
 				EMIT(PPC_RAW_MULI(dst_reg, dst_reg, imm));
 			} else {
@@ -410,10 +410,10 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_MULW(dst_reg, dst_reg, tmp_reg));
 			EMIT(PPC_RAW_ADD(dst_reg_h, dst_reg_h, _R0));
 			break;
-		case BPF_ALU | BPF_DIV | BPF_X: /* (u32) dst /= (u32) src */
+		case BPF_ALU32 | BPF_DIV | BPF_X: /* (u32) dst /= (u32) src */
 			EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, src_reg));
 			break;
-		case BPF_ALU | BPF_MOD | BPF_X: /* (u32) dst %= (u32) src */
+		case BPF_ALU32 | BPF_MOD | BPF_X: /* (u32) dst %= (u32) src */
 			EMIT(PPC_RAW_DIVWU(_R0, dst_reg, src_reg));
 			EMIT(PPC_RAW_MULW(_R0, src_reg, _R0));
 			EMIT(PPC_RAW_SUB(dst_reg, dst_reg, _R0));
@@ -422,7 +422,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			return -EOPNOTSUPP;
 		case BPF_ALU64 | BPF_MOD | BPF_X: /* dst %= src */
 			return -EOPNOTSUPP;
-		case BPF_ALU | BPF_DIV | BPF_K: /* (u32) dst /= (u32) imm */
+		case BPF_ALU32 | BPF_DIV | BPF_K: /* (u32) dst /= (u32) imm */
 			if (!imm)
 				return -EINVAL;
 			if (imm == 1)
@@ -431,7 +431,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			PPC_LI32(_R0, imm);
 			EMIT(PPC_RAW_DIVWU(dst_reg, dst_reg, _R0));
 			break;
-		case BPF_ALU | BPF_MOD | BPF_K: /* (u32) dst %= (u32) imm */
+		case BPF_ALU32 | BPF_MOD | BPF_K: /* (u32) dst %= (u32) imm */
 			if (!imm)
 				return -EINVAL;
 
@@ -480,7 +480,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_RLWIMI(dst_reg, dst_reg_h, 32 - imm, 0, imm - 1));
 			EMIT(PPC_RAW_SRAWI(dst_reg_h, dst_reg_h, imm));
 			break;
-		case BPF_ALU | BPF_NEG: /* (u32) dst = -dst */
+		case BPF_ALU32 | BPF_NEG: /* (u32) dst = -dst */
 			EMIT(PPC_RAW_NEG(dst_reg, dst_reg));
 			break;
 		case BPF_ALU64 | BPF_NEG: /* dst = -dst */
@@ -495,14 +495,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_AND(dst_reg, dst_reg, src_reg));
 			EMIT(PPC_RAW_AND(dst_reg_h, dst_reg_h, src_reg_h));
 			break;
-		case BPF_ALU | BPF_AND | BPF_X: /* (u32) dst = dst & src */
+		case BPF_ALU32 | BPF_AND | BPF_X: /* (u32) dst = dst & src */
 			EMIT(PPC_RAW_AND(dst_reg, dst_reg, src_reg));
 			break;
 		case BPF_ALU64 | BPF_AND | BPF_K: /* dst = dst & imm */
 			if (imm >= 0)
 				EMIT(PPC_RAW_LI(dst_reg_h, 0));
 			fallthrough;
-		case BPF_ALU | BPF_AND | BPF_K: /* (u32) dst = dst & imm */
+		case BPF_ALU32 | BPF_AND | BPF_K: /* (u32) dst = dst & imm */
 			if (!IMM_H(imm)) {
 				EMIT(PPC_RAW_ANDI(dst_reg, dst_reg, IMM_L(imm)));
 			} else if (!IMM_L(imm)) {
@@ -519,7 +519,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_OR(dst_reg, dst_reg, src_reg));
 			EMIT(PPC_RAW_OR(dst_reg_h, dst_reg_h, src_reg_h));
 			break;
-		case BPF_ALU | BPF_OR | BPF_X: /* dst = (u32) dst | (u32) src */
+		case BPF_ALU32 | BPF_OR | BPF_X: /* dst = (u32) dst | (u32) src */
 			EMIT(PPC_RAW_OR(dst_reg, dst_reg, src_reg));
 			break;
 		case BPF_ALU64 | BPF_OR | BPF_K:/* dst = dst | imm */
@@ -527,7 +527,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			if (imm < 0)
 				EMIT(PPC_RAW_LI(dst_reg_h, -1));
 			fallthrough;
-		case BPF_ALU | BPF_OR | BPF_K:/* dst = (u32) dst | (u32) imm */
+		case BPF_ALU32 | BPF_OR | BPF_K:/* dst = (u32) dst | (u32) imm */
 			if (IMM_L(imm))
 				EMIT(PPC_RAW_ORI(dst_reg, dst_reg, IMM_L(imm)));
 			if (IMM_H(imm))
@@ -542,7 +542,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				EMIT(PPC_RAW_XOR(dst_reg_h, dst_reg_h, src_reg_h));
 			}
 			break;
-		case BPF_ALU | BPF_XOR | BPF_X: /* (u32) dst ^= src */
+		case BPF_ALU32 | BPF_XOR | BPF_X: /* (u32) dst ^= src */
 			if (dst_reg == src_reg)
 				EMIT(PPC_RAW_LI(dst_reg, 0));
 			else
@@ -552,13 +552,13 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			if (imm < 0)
 				EMIT(PPC_RAW_NOR(dst_reg_h, dst_reg_h, dst_reg_h));
 			fallthrough;
-		case BPF_ALU | BPF_XOR | BPF_K: /* (u32) dst ^= (u32) imm */
+		case BPF_ALU32 | BPF_XOR | BPF_K: /* (u32) dst ^= (u32) imm */
 			if (IMM_L(imm))
 				EMIT(PPC_RAW_XORI(dst_reg, dst_reg, IMM_L(imm)));
 			if (IMM_H(imm))
 				EMIT(PPC_RAW_XORIS(dst_reg, dst_reg, IMM_H(imm)));
 			break;
-		case BPF_ALU | BPF_LSH | BPF_X: /* (u32) dst <<= (u32) src */
+		case BPF_ALU32 | BPF_LSH | BPF_X: /* (u32) dst <<= (u32) src */
 			EMIT(PPC_RAW_SLW(dst_reg, dst_reg, src_reg));
 			break;
 		case BPF_ALU64 | BPF_LSH | BPF_X: /* dst <<= src; */
@@ -572,7 +572,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_SLW(dst_reg, dst_reg, src_reg));
 			EMIT(PPC_RAW_OR(dst_reg_h, dst_reg_h, tmp_reg));
 			break;
-		case BPF_ALU | BPF_LSH | BPF_K: /* (u32) dst <<= (u32) imm */
+		case BPF_ALU32 | BPF_LSH | BPF_K: /* (u32) dst <<= (u32) imm */
 			if (!imm)
 				break;
 			EMIT(PPC_RAW_SLWI(dst_reg, dst_reg, imm));
@@ -594,7 +594,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				EMIT(PPC_RAW_LI(dst_reg_h, 0));
 			EMIT(PPC_RAW_LI(dst_reg, 0));
 			break;
-		case BPF_ALU | BPF_RSH | BPF_X: /* (u32) dst >>= (u32) src */
+		case BPF_ALU32 | BPF_RSH | BPF_X: /* (u32) dst >>= (u32) src */
 			EMIT(PPC_RAW_SRW(dst_reg, dst_reg, src_reg));
 			break;
 		case BPF_ALU64 | BPF_RSH | BPF_X: /* dst >>= src */
@@ -608,7 +608,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_SRW(dst_reg_h, dst_reg_h, src_reg));
 			EMIT(PPC_RAW_OR(dst_reg, dst_reg, tmp_reg));
 			break;
-		case BPF_ALU | BPF_RSH | BPF_K: /* (u32) dst >>= (u32) imm */
+		case BPF_ALU32 | BPF_RSH | BPF_K: /* (u32) dst >>= (u32) imm */
 			if (!imm)
 				break;
 			EMIT(PPC_RAW_SRWI(dst_reg, dst_reg, imm));
@@ -630,7 +630,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				EMIT(PPC_RAW_LI(dst_reg, 0));
 			EMIT(PPC_RAW_LI(dst_reg_h, 0));
 			break;
-		case BPF_ALU | BPF_ARSH | BPF_X: /* (s32) dst >>= src */
+		case BPF_ALU32 | BPF_ARSH | BPF_X: /* (s32) dst >>= src */
 			EMIT(PPC_RAW_SRAW(dst_reg, dst_reg, src_reg));
 			break;
 		case BPF_ALU64 | BPF_ARSH | BPF_X: /* (s64) dst >>= src */
@@ -646,7 +646,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_SLW(tmp_reg, tmp_reg, _R0));
 			EMIT(PPC_RAW_OR(dst_reg, dst_reg, tmp_reg));
 			break;
-		case BPF_ALU | BPF_ARSH | BPF_K: /* (s32) dst >>= imm */
+		case BPF_ALU32 | BPF_ARSH | BPF_K: /* (s32) dst >>= imm */
 			if (!imm)
 				break;
 			EMIT(PPC_RAW_SRAWI(dst_reg, dst_reg, imm));
@@ -678,7 +678,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_MR(dst_reg, src_reg));
 			EMIT(PPC_RAW_MR(dst_reg_h, src_reg_h));
 			break;
-		case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
+		case BPF_ALU32 | BPF_MOV | BPF_X: /* (u32) dst = src */
 			/* special mov32 for zext */
 			if (imm == 1)
 				EMIT(PPC_RAW_LI(dst_reg_h, 0));
@@ -689,14 +689,14 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			PPC_LI32(dst_reg, imm);
 			PPC_EX32(dst_reg_h, imm);
 			break;
-		case BPF_ALU | BPF_MOV | BPF_K: /* (u32) dst = imm */
+		case BPF_ALU32 | BPF_MOV | BPF_K: /* (u32) dst = imm */
 			PPC_LI32(dst_reg, imm);
 			break;
 
 		/*
 		 * BPF_FROM_BE/LE
 		 */
-		case BPF_ALU | BPF_END | BPF_FROM_LE:
+		case BPF_ALU32 | BPF_END | BPF_FROM_LE:
 			switch (imm) {
 			case 16:
 				/* Copy 16 bits to upper part */
@@ -732,7 +732,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				break;
 			}
 			break;
-		case BPF_ALU | BPF_END | BPF_FROM_BE:
+		case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 			switch (imm) {
 			case 16:
 				/* zero-extend 16 bits into 32 bits */
@@ -969,7 +969,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Return/Exit
 		 */
-		case BPF_JMP | BPF_EXIT:
+		case BPF_JMP64 | BPF_EXIT:
 			/*
 			 * If this isn't the very last instruction, branch to
 			 * the epilogue. If we _are_ the last instruction,
@@ -986,7 +986,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Call kernel helper or bpf function
 		 */
-		case BPF_JMP | BPF_CALL:
+		case BPF_JMP64 | BPF_CALL:
 			ctx->seen |= SEEN_FUNC;
 
 			ret = bpf_jit_get_func_addr(fp, &insn[i], false,
@@ -1010,64 +1010,64 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Jumps and branches
 		 */
-		case BPF_JMP | BPF_JA:
+		case BPF_JMP64 | BPF_JA:
 			PPC_JMP(addrs[i + 1 + off]);
 			break;
 
-		case BPF_JMP | BPF_JGT | BPF_K:
-		case BPF_JMP | BPF_JGT | BPF_X:
-		case BPF_JMP | BPF_JSGT | BPF_K:
-		case BPF_JMP | BPF_JSGT | BPF_X:
+		case BPF_JMP64 | BPF_JGT | BPF_K:
+		case BPF_JMP64 | BPF_JGT | BPF_X:
+		case BPF_JMP64 | BPF_JSGT | BPF_K:
+		case BPF_JMP64 | BPF_JSGT | BPF_X:
 		case BPF_JMP32 | BPF_JGT | BPF_K:
 		case BPF_JMP32 | BPF_JGT | BPF_X:
 		case BPF_JMP32 | BPF_JSGT | BPF_K:
 		case BPF_JMP32 | BPF_JSGT | BPF_X:
 			true_cond = COND_GT;
 			goto cond_branch;
-		case BPF_JMP | BPF_JLT | BPF_K:
-		case BPF_JMP | BPF_JLT | BPF_X:
-		case BPF_JMP | BPF_JSLT | BPF_K:
-		case BPF_JMP | BPF_JSLT | BPF_X:
+		case BPF_JMP64 | BPF_JLT | BPF_K:
+		case BPF_JMP64 | BPF_JLT | BPF_X:
+		case BPF_JMP64 | BPF_JSLT | BPF_K:
+		case BPF_JMP64 | BPF_JSLT | BPF_X:
 		case BPF_JMP32 | BPF_JLT | BPF_K:
 		case BPF_JMP32 | BPF_JLT | BPF_X:
 		case BPF_JMP32 | BPF_JSLT | BPF_K:
 		case BPF_JMP32 | BPF_JSLT | BPF_X:
 			true_cond = COND_LT;
 			goto cond_branch;
-		case BPF_JMP | BPF_JGE | BPF_K:
-		case BPF_JMP | BPF_JGE | BPF_X:
-		case BPF_JMP | BPF_JSGE | BPF_K:
-		case BPF_JMP | BPF_JSGE | BPF_X:
+		case BPF_JMP64 | BPF_JGE | BPF_K:
+		case BPF_JMP64 | BPF_JGE | BPF_X:
+		case BPF_JMP64 | BPF_JSGE | BPF_K:
+		case BPF_JMP64 | BPF_JSGE | BPF_X:
 		case BPF_JMP32 | BPF_JGE | BPF_K:
 		case BPF_JMP32 | BPF_JGE | BPF_X:
 		case BPF_JMP32 | BPF_JSGE | BPF_K:
 		case BPF_JMP32 | BPF_JSGE | BPF_X:
 			true_cond = COND_GE;
 			goto cond_branch;
-		case BPF_JMP | BPF_JLE | BPF_K:
-		case BPF_JMP | BPF_JLE | BPF_X:
-		case BPF_JMP | BPF_JSLE | BPF_K:
-		case BPF_JMP | BPF_JSLE | BPF_X:
+		case BPF_JMP64 | BPF_JLE | BPF_K:
+		case BPF_JMP64 | BPF_JLE | BPF_X:
+		case BPF_JMP64 | BPF_JSLE | BPF_K:
+		case BPF_JMP64 | BPF_JSLE | BPF_X:
 		case BPF_JMP32 | BPF_JLE | BPF_K:
 		case BPF_JMP32 | BPF_JLE | BPF_X:
 		case BPF_JMP32 | BPF_JSLE | BPF_K:
 		case BPF_JMP32 | BPF_JSLE | BPF_X:
 			true_cond = COND_LE;
 			goto cond_branch;
-		case BPF_JMP | BPF_JEQ | BPF_K:
-		case BPF_JMP | BPF_JEQ | BPF_X:
+		case BPF_JMP64 | BPF_JEQ | BPF_K:
+		case BPF_JMP64 | BPF_JEQ | BPF_X:
 		case BPF_JMP32 | BPF_JEQ | BPF_K:
 		case BPF_JMP32 | BPF_JEQ | BPF_X:
 			true_cond = COND_EQ;
 			goto cond_branch;
-		case BPF_JMP | BPF_JNE | BPF_K:
-		case BPF_JMP | BPF_JNE | BPF_X:
+		case BPF_JMP64 | BPF_JNE | BPF_K:
+		case BPF_JMP64 | BPF_JNE | BPF_X:
 		case BPF_JMP32 | BPF_JNE | BPF_K:
 		case BPF_JMP32 | BPF_JNE | BPF_X:
 			true_cond = COND_NE;
 			goto cond_branch;
-		case BPF_JMP | BPF_JSET | BPF_K:
-		case BPF_JMP | BPF_JSET | BPF_X:
+		case BPF_JMP64 | BPF_JSET | BPF_K:
+		case BPF_JMP64 | BPF_JSET | BPF_X:
 		case BPF_JMP32 | BPF_JSET | BPF_K:
 		case BPF_JMP32 | BPF_JSET | BPF_X:
 			true_cond = COND_NE;
@@ -1075,12 +1075,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 
 cond_branch:
 			switch (code) {
-			case BPF_JMP | BPF_JGT | BPF_X:
-			case BPF_JMP | BPF_JLT | BPF_X:
-			case BPF_JMP | BPF_JGE | BPF_X:
-			case BPF_JMP | BPF_JLE | BPF_X:
-			case BPF_JMP | BPF_JEQ | BPF_X:
-			case BPF_JMP | BPF_JNE | BPF_X:
+			case BPF_JMP64 | BPF_JGT | BPF_X:
+			case BPF_JMP64 | BPF_JLT | BPF_X:
+			case BPF_JMP64 | BPF_JGE | BPF_X:
+			case BPF_JMP64 | BPF_JLE | BPF_X:
+			case BPF_JMP64 | BPF_JEQ | BPF_X:
+			case BPF_JMP64 | BPF_JNE | BPF_X:
 				/* unsigned comparison */
 				EMIT(PPC_RAW_CMPLW(dst_reg_h, src_reg_h));
 				PPC_BCC_SHORT(COND_NE, (ctx->idx + 2) * 4);
@@ -1095,10 +1095,10 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				/* unsigned comparison */
 				EMIT(PPC_RAW_CMPLW(dst_reg, src_reg));
 				break;
-			case BPF_JMP | BPF_JSGT | BPF_X:
-			case BPF_JMP | BPF_JSLT | BPF_X:
-			case BPF_JMP | BPF_JSGE | BPF_X:
-			case BPF_JMP | BPF_JSLE | BPF_X:
+			case BPF_JMP64 | BPF_JSGT | BPF_X:
+			case BPF_JMP64 | BPF_JSLT | BPF_X:
+			case BPF_JMP64 | BPF_JSGE | BPF_X:
+			case BPF_JMP64 | BPF_JSLE | BPF_X:
 				/* signed comparison */
 				EMIT(PPC_RAW_CMPW(dst_reg_h, src_reg_h));
 				PPC_BCC_SHORT(COND_NE, (ctx->idx + 2) * 4);
@@ -1111,7 +1111,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				/* signed comparison */
 				EMIT(PPC_RAW_CMPW(dst_reg, src_reg));
 				break;
-			case BPF_JMP | BPF_JSET | BPF_X:
+			case BPF_JMP64 | BPF_JSET | BPF_X:
 				EMIT(PPC_RAW_AND_DOT(_R0, dst_reg_h, src_reg_h));
 				PPC_BCC_SHORT(COND_NE, (ctx->idx + 2) * 4);
 				EMIT(PPC_RAW_AND_DOT(_R0, dst_reg, src_reg));
@@ -1119,12 +1119,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			case BPF_JMP32 | BPF_JSET | BPF_X: {
 				EMIT(PPC_RAW_AND_DOT(_R0, dst_reg, src_reg));
 				break;
-			case BPF_JMP | BPF_JNE | BPF_K:
-			case BPF_JMP | BPF_JEQ | BPF_K:
-			case BPF_JMP | BPF_JGT | BPF_K:
-			case BPF_JMP | BPF_JLT | BPF_K:
-			case BPF_JMP | BPF_JGE | BPF_K:
-			case BPF_JMP | BPF_JLE | BPF_K:
+			case BPF_JMP64 | BPF_JNE | BPF_K:
+			case BPF_JMP64 | BPF_JEQ | BPF_K:
+			case BPF_JMP64 | BPF_JGT | BPF_K:
+			case BPF_JMP64 | BPF_JLT | BPF_K:
+			case BPF_JMP64 | BPF_JGE | BPF_K:
+			case BPF_JMP64 | BPF_JLE | BPF_K:
 				/*
 				 * Need sign-extended load, so only positive
 				 * values can be used as imm in cmplwi
@@ -1156,10 +1156,10 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				}
 				break;
 			}
-			case BPF_JMP | BPF_JSGT | BPF_K:
-			case BPF_JMP | BPF_JSLT | BPF_K:
-			case BPF_JMP | BPF_JSGE | BPF_K:
-			case BPF_JMP | BPF_JSLE | BPF_K:
+			case BPF_JMP64 | BPF_JSGT | BPF_K:
+			case BPF_JMP64 | BPF_JSLT | BPF_K:
+			case BPF_JMP64 | BPF_JSGE | BPF_K:
+			case BPF_JMP64 | BPF_JSLE | BPF_K:
 				if (imm >= 0 && imm < 65536) {
 					EMIT(PPC_RAW_CMPWI(dst_reg_h, imm < 0 ? -1 : 0));
 					PPC_BCC_SHORT(COND_NE, (ctx->idx + 2) * 4);
@@ -1188,7 +1188,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 					EMIT(PPC_RAW_CMPW(dst_reg, _R0));
 				}
 				break;
-			case BPF_JMP | BPF_JSET | BPF_K:
+			case BPF_JMP64 | BPF_JSET | BPF_K:
 				/* andi does not sign-extend the immediate */
 				if (imm >= 0 && imm < 32768) {
 					/* PPC_ANDI is _only/always_ dot-form */
@@ -1219,7 +1219,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Tail call
 		 */
-		case BPF_JMP | BPF_TAIL_CALL:
+		case BPF_JMP64 | BPF_TAIL_CALL:
 			ctx->seen |= SEEN_TAILCALL;
 			ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
 			if (ret < 0)
@@ -1235,7 +1235,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			pr_err_ratelimited("eBPF filter opcode %04x (@%d) unsupported\n", code, i);
 			return -EOPNOTSUPP;
 		}
-		if (BPF_CLASS(code) == BPF_ALU && !fp->aux->verifier_zext &&
+		if (BPF_CLASS(code) == BPF_ALU32 && !fp->aux->verifier_zext &&
 		    !insn_is_zext(&insn[i + 1]) && !(BPF_OP(code) == BPF_END && imm == 64))
 			EMIT(PPC_RAW_LI(dst_reg_h, 0));
 	}
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 29ee306..3b6ee90 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -396,15 +396,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Arithmetic operations: ADD/SUB/MUL/DIV/MOD/NEG
 		 */
-		case BPF_ALU | BPF_ADD | BPF_X: /* (u32) dst += (u32) src */
+		case BPF_ALU32 | BPF_ADD | BPF_X: /* (u32) dst += (u32) src */
 		case BPF_ALU64 | BPF_ADD | BPF_X: /* dst += src */
 			EMIT(PPC_RAW_ADD(dst_reg, dst_reg, src_reg));
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_SUB | BPF_X: /* (u32) dst -= (u32) src */
+		case BPF_ALU32 | BPF_SUB | BPF_X: /* (u32) dst -= (u32) src */
 		case BPF_ALU64 | BPF_SUB | BPF_X: /* dst -= src */
 			EMIT(PPC_RAW_SUB(dst_reg, dst_reg, src_reg));
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
+		case BPF_ALU32 | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
 		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
 			if (!imm) {
 				goto bpf_alu32_trunc;
@@ -415,7 +415,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				EMIT(PPC_RAW_ADD(dst_reg, dst_reg, tmp1_reg));
 			}
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+		case BPF_ALU32 | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
 		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
 			if (!imm) {
 				goto bpf_alu32_trunc;
@@ -426,27 +426,27 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				EMIT(PPC_RAW_SUB(dst_reg, dst_reg, tmp1_reg));
 			}
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
+		case BPF_ALU32 | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
 		case BPF_ALU64 | BPF_MUL | BPF_X: /* dst *= src */
-			if (BPF_CLASS(code) == BPF_ALU)
+			if (BPF_CLASS(code) == BPF_ALU32)
 				EMIT(PPC_RAW_MULW(dst_reg, dst_reg, src_reg));
 			else
 				EMIT(PPC_RAW_MULD(dst_reg, dst_reg, src_reg));
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_MUL | BPF_K: /* (u32) dst *= (u32) imm */
+		case BPF_ALU32 | BPF_MUL | BPF_K: /* (u32) dst *= (u32) imm */
 		case BPF_ALU64 | BPF_MUL | BPF_K: /* dst *= imm */
 			if (imm >= -32768 && imm < 32768)
 				EMIT(PPC_RAW_MULI(dst_reg, dst_reg, IMM_L(imm)));
 			else {
 				PPC_LI32(tmp1_reg, imm);
-				if (BPF_CLASS(code) == BPF_ALU)
+				if (BPF_CLASS(code) == BPF_ALU32)
 					EMIT(PPC_RAW_MULW(dst_reg, dst_reg, tmp1_reg));
 				else
 					EMIT(PPC_RAW_MULD(dst_reg, dst_reg, tmp1_reg));
 			}
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_DIV | BPF_X: /* (u32) dst /= (u32) src */
-		case BPF_ALU | BPF_MOD | BPF_X: /* (u32) dst %= (u32) src */
+		case BPF_ALU32 | BPF_DIV | BPF_X: /* (u32) dst /= (u32) src */
+		case BPF_ALU32 | BPF_MOD | BPF_X: /* (u32) dst %= (u32) src */
 			if (BPF_OP(code) == BPF_MOD) {
 				EMIT(PPC_RAW_DIVWU(tmp1_reg, dst_reg, src_reg));
 				EMIT(PPC_RAW_MULW(tmp1_reg, src_reg, tmp1_reg));
@@ -463,8 +463,8 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			} else
 				EMIT(PPC_RAW_DIVDU(dst_reg, dst_reg, src_reg));
 			break;
-		case BPF_ALU | BPF_MOD | BPF_K: /* (u32) dst %= (u32) imm */
-		case BPF_ALU | BPF_DIV | BPF_K: /* (u32) dst /= (u32) imm */
+		case BPF_ALU32 | BPF_MOD | BPF_K: /* (u32) dst %= (u32) imm */
+		case BPF_ALU32 | BPF_DIV | BPF_K: /* (u32) dst /= (u32) imm */
 		case BPF_ALU64 | BPF_MOD | BPF_K: /* dst %= imm */
 		case BPF_ALU64 | BPF_DIV | BPF_K: /* dst /= imm */
 			if (imm == 0)
@@ -480,7 +480,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 
 			PPC_LI32(tmp1_reg, imm);
 			switch (BPF_CLASS(code)) {
-			case BPF_ALU:
+			case BPF_ALU32:
 				if (BPF_OP(code) == BPF_MOD) {
 					EMIT(PPC_RAW_DIVWU(tmp2_reg, dst_reg, tmp1_reg));
 					EMIT(PPC_RAW_MULW(tmp1_reg, tmp1_reg, tmp2_reg));
@@ -498,7 +498,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				break;
 			}
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_NEG: /* (u32) dst = -dst */
+		case BPF_ALU32 | BPF_NEG: /* (u32) dst = -dst */
 		case BPF_ALU64 | BPF_NEG: /* dst = -dst */
 			EMIT(PPC_RAW_NEG(dst_reg, dst_reg));
 			goto bpf_alu32_trunc;
@@ -506,11 +506,11 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Logical operations: AND/OR/XOR/[A]LSH/[A]RSH
 		 */
-		case BPF_ALU | BPF_AND | BPF_X: /* (u32) dst = dst & src */
+		case BPF_ALU32 | BPF_AND | BPF_X: /* (u32) dst = dst & src */
 		case BPF_ALU64 | BPF_AND | BPF_X: /* dst = dst & src */
 			EMIT(PPC_RAW_AND(dst_reg, dst_reg, src_reg));
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_AND | BPF_K: /* (u32) dst = dst & imm */
+		case BPF_ALU32 | BPF_AND | BPF_K: /* (u32) dst = dst & imm */
 		case BPF_ALU64 | BPF_AND | BPF_K: /* dst = dst & imm */
 			if (!IMM_H(imm))
 				EMIT(PPC_RAW_ANDI(dst_reg, dst_reg, IMM_L(imm)));
@@ -520,11 +520,11 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				EMIT(PPC_RAW_AND(dst_reg, dst_reg, tmp1_reg));
 			}
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_OR | BPF_X: /* dst = (u32) dst | (u32) src */
+		case BPF_ALU32 | BPF_OR | BPF_X: /* dst = (u32) dst | (u32) src */
 		case BPF_ALU64 | BPF_OR | BPF_X: /* dst = dst | src */
 			EMIT(PPC_RAW_OR(dst_reg, dst_reg, src_reg));
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_OR | BPF_K:/* dst = (u32) dst | (u32) imm */
+		case BPF_ALU32 | BPF_OR | BPF_K:/* dst = (u32) dst | (u32) imm */
 		case BPF_ALU64 | BPF_OR | BPF_K:/* dst = dst | imm */
 			if (imm < 0 && BPF_CLASS(code) == BPF_ALU64) {
 				/* Sign-extended */
@@ -537,11 +537,11 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 					EMIT(PPC_RAW_ORIS(dst_reg, dst_reg, IMM_H(imm)));
 			}
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_XOR | BPF_X: /* (u32) dst ^= src */
+		case BPF_ALU32 | BPF_XOR | BPF_X: /* (u32) dst ^= src */
 		case BPF_ALU64 | BPF_XOR | BPF_X: /* dst ^= src */
 			EMIT(PPC_RAW_XOR(dst_reg, dst_reg, src_reg));
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_XOR | BPF_K: /* (u32) dst ^= (u32) imm */
+		case BPF_ALU32 | BPF_XOR | BPF_K: /* (u32) dst ^= (u32) imm */
 		case BPF_ALU64 | BPF_XOR | BPF_K: /* dst ^= imm */
 			if (imm < 0 && BPF_CLASS(code) == BPF_ALU64) {
 				/* Sign-extended */
@@ -554,7 +554,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 					EMIT(PPC_RAW_XORIS(dst_reg, dst_reg, IMM_H(imm)));
 			}
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_LSH | BPF_X: /* (u32) dst <<= (u32) src */
+		case BPF_ALU32 | BPF_LSH | BPF_X: /* (u32) dst <<= (u32) src */
 			/* slw clears top 32 bits */
 			EMIT(PPC_RAW_SLW(dst_reg, dst_reg, src_reg));
 			/* skip zero extension move, but set address map. */
@@ -564,7 +564,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		case BPF_ALU64 | BPF_LSH | BPF_X: /* dst <<= src; */
 			EMIT(PPC_RAW_SLD(dst_reg, dst_reg, src_reg));
 			break;
-		case BPF_ALU | BPF_LSH | BPF_K: /* (u32) dst <<== (u32) imm */
+		case BPF_ALU32 | BPF_LSH | BPF_K: /* (u32) dst <<= (u32) imm */
 			/* with imm 0, we still need to clear top 32 bits */
 			EMIT(PPC_RAW_SLWI(dst_reg, dst_reg, imm));
 			if (insn_is_zext(&insn[i + 1]))
@@ -574,7 +574,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			if (imm != 0)
 				EMIT(PPC_RAW_SLDI(dst_reg, dst_reg, imm));
 			break;
-		case BPF_ALU | BPF_RSH | BPF_X: /* (u32) dst >>= (u32) src */
+		case BPF_ALU32 | BPF_RSH | BPF_X: /* (u32) dst >>= (u32) src */
 			EMIT(PPC_RAW_SRW(dst_reg, dst_reg, src_reg));
 			if (insn_is_zext(&insn[i + 1]))
 				addrs[++i] = ctx->idx * 4;
@@ -582,7 +582,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		case BPF_ALU64 | BPF_RSH | BPF_X: /* dst >>= src */
 			EMIT(PPC_RAW_SRD(dst_reg, dst_reg, src_reg));
 			break;
-		case BPF_ALU | BPF_RSH | BPF_K: /* (u32) dst >>= (u32) imm */
+		case BPF_ALU32 | BPF_RSH | BPF_K: /* (u32) dst >>= (u32) imm */
 			EMIT(PPC_RAW_SRWI(dst_reg, dst_reg, imm));
 			if (insn_is_zext(&insn[i + 1]))
 				addrs[++i] = ctx->idx * 4;
@@ -591,13 +591,13 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			if (imm != 0)
 				EMIT(PPC_RAW_SRDI(dst_reg, dst_reg, imm));
 			break;
-		case BPF_ALU | BPF_ARSH | BPF_X: /* (s32) dst >>= src */
+		case BPF_ALU32 | BPF_ARSH | BPF_X: /* (s32) dst >>= src */
 			EMIT(PPC_RAW_SRAW(dst_reg, dst_reg, src_reg));
 			goto bpf_alu32_trunc;
 		case BPF_ALU64 | BPF_ARSH | BPF_X: /* (s64) dst >>= src */
 			EMIT(PPC_RAW_SRAD(dst_reg, dst_reg, src_reg));
 			break;
-		case BPF_ALU | BPF_ARSH | BPF_K: /* (s32) dst >>= imm */
+		case BPF_ALU32 | BPF_ARSH | BPF_K: /* (s32) dst >>= imm */
 			EMIT(PPC_RAW_SRAWI(dst_reg, dst_reg, imm));
 			goto bpf_alu32_trunc;
 		case BPF_ALU64 | BPF_ARSH | BPF_K: /* (s64) dst >>= imm */
@@ -608,7 +608,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * MOV
 		 */
-		case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
+		case BPF_ALU32 | BPF_MOV | BPF_X: /* (u32) dst = src */
 		case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
 			if (imm == 1) {
 				/* special mov32 for zext */
@@ -617,7 +617,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			}
 			EMIT(PPC_RAW_MR(dst_reg, src_reg));
 			goto bpf_alu32_trunc;
-		case BPF_ALU | BPF_MOV | BPF_K: /* (u32) dst = imm */
+		case BPF_ALU32 | BPF_MOV | BPF_K: /* (u32) dst = imm */
 		case BPF_ALU64 | BPF_MOV | BPF_K: /* dst = (s64) imm */
 			PPC_LI32(dst_reg, imm);
 			if (imm < 0)
@@ -628,15 +628,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 
 bpf_alu32_trunc:
 		/* Truncate to 32-bits */
-		if (BPF_CLASS(code) == BPF_ALU && !fp->aux->verifier_zext)
+		if (BPF_CLASS(code) == BPF_ALU32 && !fp->aux->verifier_zext)
 			EMIT(PPC_RAW_RLWINM(dst_reg, dst_reg, 0, 0, 31));
 		break;
 
 		/*
 		 * BPF_FROM_BE/LE
 		 */
-		case BPF_ALU | BPF_END | BPF_FROM_LE:
-		case BPF_ALU | BPF_END | BPF_FROM_BE:
+		case BPF_ALU32 | BPF_END | BPF_FROM_LE:
+		case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 #ifdef __BIG_ENDIAN__
 			if (BPF_SRC(code) == BPF_FROM_BE)
 				goto emit_clear;
@@ -947,7 +947,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Return/Exit
 		 */
-		case BPF_JMP | BPF_EXIT:
+		case BPF_JMP64 | BPF_EXIT:
 			/*
 			 * If this isn't the very last instruction, branch to
 			 * the epilogue. If we _are_ the last instruction,
@@ -964,7 +964,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Call kernel helper or bpf function
 		 */
-		case BPF_JMP | BPF_CALL:
+		case BPF_JMP64 | BPF_CALL:
 			ctx->seen |= SEEN_FUNC;
 
 			ret = bpf_jit_get_func_addr(fp, &insn[i], false,
@@ -987,64 +987,64 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Jumps and branches
 		 */
-		case BPF_JMP | BPF_JA:
+		case BPF_JMP64 | BPF_JA:
 			PPC_JMP(addrs[i + 1 + off]);
 			break;
 
-		case BPF_JMP | BPF_JGT | BPF_K:
-		case BPF_JMP | BPF_JGT | BPF_X:
-		case BPF_JMP | BPF_JSGT | BPF_K:
-		case BPF_JMP | BPF_JSGT | BPF_X:
+		case BPF_JMP64 | BPF_JGT | BPF_K:
+		case BPF_JMP64 | BPF_JGT | BPF_X:
+		case BPF_JMP64 | BPF_JSGT | BPF_K:
+		case BPF_JMP64 | BPF_JSGT | BPF_X:
 		case BPF_JMP32 | BPF_JGT | BPF_K:
 		case BPF_JMP32 | BPF_JGT | BPF_X:
 		case BPF_JMP32 | BPF_JSGT | BPF_K:
 		case BPF_JMP32 | BPF_JSGT | BPF_X:
 			true_cond = COND_GT;
 			goto cond_branch;
-		case BPF_JMP | BPF_JLT | BPF_K:
-		case BPF_JMP | BPF_JLT | BPF_X:
-		case BPF_JMP | BPF_JSLT | BPF_K:
-		case BPF_JMP | BPF_JSLT | BPF_X:
+		case BPF_JMP64 | BPF_JLT | BPF_K:
+		case BPF_JMP64 | BPF_JLT | BPF_X:
+		case BPF_JMP64 | BPF_JSLT | BPF_K:
+		case BPF_JMP64 | BPF_JSLT | BPF_X:
 		case BPF_JMP32 | BPF_JLT | BPF_K:
 		case BPF_JMP32 | BPF_JLT | BPF_X:
 		case BPF_JMP32 | BPF_JSLT | BPF_K:
 		case BPF_JMP32 | BPF_JSLT | BPF_X:
 			true_cond = COND_LT;
 			goto cond_branch;
-		case BPF_JMP | BPF_JGE | BPF_K:
-		case BPF_JMP | BPF_JGE | BPF_X:
-		case BPF_JMP | BPF_JSGE | BPF_K:
-		case BPF_JMP | BPF_JSGE | BPF_X:
+		case BPF_JMP64 | BPF_JGE | BPF_K:
+		case BPF_JMP64 | BPF_JGE | BPF_X:
+		case BPF_JMP64 | BPF_JSGE | BPF_K:
+		case BPF_JMP64 | BPF_JSGE | BPF_X:
 		case BPF_JMP32 | BPF_JGE | BPF_K:
 		case BPF_JMP32 | BPF_JGE | BPF_X:
 		case BPF_JMP32 | BPF_JSGE | BPF_K:
 		case BPF_JMP32 | BPF_JSGE | BPF_X:
 			true_cond = COND_GE;
 			goto cond_branch;
-		case BPF_JMP | BPF_JLE | BPF_K:
-		case BPF_JMP | BPF_JLE | BPF_X:
-		case BPF_JMP | BPF_JSLE | BPF_K:
-		case BPF_JMP | BPF_JSLE | BPF_X:
+		case BPF_JMP64 | BPF_JLE | BPF_K:
+		case BPF_JMP64 | BPF_JLE | BPF_X:
+		case BPF_JMP64 | BPF_JSLE | BPF_K:
+		case BPF_JMP64 | BPF_JSLE | BPF_X:
 		case BPF_JMP32 | BPF_JLE | BPF_K:
 		case BPF_JMP32 | BPF_JLE | BPF_X:
 		case BPF_JMP32 | BPF_JSLE | BPF_K:
 		case BPF_JMP32 | BPF_JSLE | BPF_X:
 			true_cond = COND_LE;
 			goto cond_branch;
-		case BPF_JMP | BPF_JEQ | BPF_K:
-		case BPF_JMP | BPF_JEQ | BPF_X:
+		case BPF_JMP64 | BPF_JEQ | BPF_K:
+		case BPF_JMP64 | BPF_JEQ | BPF_X:
 		case BPF_JMP32 | BPF_JEQ | BPF_K:
 		case BPF_JMP32 | BPF_JEQ | BPF_X:
 			true_cond = COND_EQ;
 			goto cond_branch;
-		case BPF_JMP | BPF_JNE | BPF_K:
-		case BPF_JMP | BPF_JNE | BPF_X:
+		case BPF_JMP64 | BPF_JNE | BPF_K:
+		case BPF_JMP64 | BPF_JNE | BPF_X:
 		case BPF_JMP32 | BPF_JNE | BPF_K:
 		case BPF_JMP32 | BPF_JNE | BPF_X:
 			true_cond = COND_NE;
 			goto cond_branch;
-		case BPF_JMP | BPF_JSET | BPF_K:
-		case BPF_JMP | BPF_JSET | BPF_X:
+		case BPF_JMP64 | BPF_JSET | BPF_K:
+		case BPF_JMP64 | BPF_JSET | BPF_X:
 		case BPF_JMP32 | BPF_JSET | BPF_K:
 		case BPF_JMP32 | BPF_JSET | BPF_X:
 			true_cond = COND_NE;
@@ -1052,12 +1052,12 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 
 cond_branch:
 			switch (code) {
-			case BPF_JMP | BPF_JGT | BPF_X:
-			case BPF_JMP | BPF_JLT | BPF_X:
-			case BPF_JMP | BPF_JGE | BPF_X:
-			case BPF_JMP | BPF_JLE | BPF_X:
-			case BPF_JMP | BPF_JEQ | BPF_X:
-			case BPF_JMP | BPF_JNE | BPF_X:
+			case BPF_JMP64 | BPF_JGT | BPF_X:
+			case BPF_JMP64 | BPF_JLT | BPF_X:
+			case BPF_JMP64 | BPF_JGE | BPF_X:
+			case BPF_JMP64 | BPF_JLE | BPF_X:
+			case BPF_JMP64 | BPF_JEQ | BPF_X:
+			case BPF_JMP64 | BPF_JNE | BPF_X:
 			case BPF_JMP32 | BPF_JGT | BPF_X:
 			case BPF_JMP32 | BPF_JLT | BPF_X:
 			case BPF_JMP32 | BPF_JGE | BPF_X:
@@ -1070,10 +1070,10 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				else
 					EMIT(PPC_RAW_CMPLD(dst_reg, src_reg));
 				break;
-			case BPF_JMP | BPF_JSGT | BPF_X:
-			case BPF_JMP | BPF_JSLT | BPF_X:
-			case BPF_JMP | BPF_JSGE | BPF_X:
-			case BPF_JMP | BPF_JSLE | BPF_X:
+			case BPF_JMP64 | BPF_JSGT | BPF_X:
+			case BPF_JMP64 | BPF_JSLT | BPF_X:
+			case BPF_JMP64 | BPF_JSGE | BPF_X:
+			case BPF_JMP64 | BPF_JSLE | BPF_X:
 			case BPF_JMP32 | BPF_JSGT | BPF_X:
 			case BPF_JMP32 | BPF_JSLT | BPF_X:
 			case BPF_JMP32 | BPF_JSGE | BPF_X:
@@ -1084,21 +1084,21 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				else
 					EMIT(PPC_RAW_CMPD(dst_reg, src_reg));
 				break;
-			case BPF_JMP | BPF_JSET | BPF_X:
+			case BPF_JMP64 | BPF_JSET | BPF_X:
 			case BPF_JMP32 | BPF_JSET | BPF_X:
-				if (BPF_CLASS(code) == BPF_JMP) {
+				if (BPF_CLASS(code) == BPF_JMP64) {
 					EMIT(PPC_RAW_AND_DOT(tmp1_reg, dst_reg, src_reg));
 				} else {
 					EMIT(PPC_RAW_AND(tmp1_reg, dst_reg, src_reg));
 					EMIT(PPC_RAW_RLWINM_DOT(tmp1_reg, tmp1_reg, 0, 0, 31));
 				}
 				break;
-			case BPF_JMP | BPF_JNE | BPF_K:
-			case BPF_JMP | BPF_JEQ | BPF_K:
-			case BPF_JMP | BPF_JGT | BPF_K:
-			case BPF_JMP | BPF_JLT | BPF_K:
-			case BPF_JMP | BPF_JGE | BPF_K:
-			case BPF_JMP | BPF_JLE | BPF_K:
+			case BPF_JMP64 | BPF_JNE | BPF_K:
+			case BPF_JMP64 | BPF_JEQ | BPF_K:
+			case BPF_JMP64 | BPF_JGT | BPF_K:
+			case BPF_JMP64 | BPF_JLT | BPF_K:
+			case BPF_JMP64 | BPF_JGE | BPF_K:
+			case BPF_JMP64 | BPF_JLE | BPF_K:
 			case BPF_JMP32 | BPF_JNE | BPF_K:
 			case BPF_JMP32 | BPF_JEQ | BPF_K:
 			case BPF_JMP32 | BPF_JGT | BPF_K:
@@ -1128,10 +1128,10 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				}
 				break;
 			}
-			case BPF_JMP | BPF_JSGT | BPF_K:
-			case BPF_JMP | BPF_JSLT | BPF_K:
-			case BPF_JMP | BPF_JSGE | BPF_K:
-			case BPF_JMP | BPF_JSLE | BPF_K:
+			case BPF_JMP64 | BPF_JSGT | BPF_K:
+			case BPF_JMP64 | BPF_JSLT | BPF_K:
+			case BPF_JMP64 | BPF_JSGE | BPF_K:
+			case BPF_JMP64 | BPF_JSLE | BPF_K:
 			case BPF_JMP32 | BPF_JSGT | BPF_K:
 			case BPF_JMP32 | BPF_JSLT | BPF_K:
 			case BPF_JMP32 | BPF_JSGE | BPF_K:
@@ -1157,7 +1157,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				}
 				break;
 			}
-			case BPF_JMP | BPF_JSET | BPF_K:
+			case BPF_JMP64 | BPF_JSET | BPF_K:
 			case BPF_JMP32 | BPF_JSET | BPF_K:
 				/* andi does not sign-extend the immediate */
 				if (imm >= 0 && imm < 32768)
@@ -1165,7 +1165,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 					EMIT(PPC_RAW_ANDI(tmp1_reg, dst_reg, imm));
 				else {
 					PPC_LI32(tmp1_reg, imm);
-					if (BPF_CLASS(code) == BPF_JMP) {
+					if (BPF_CLASS(code) == BPF_JMP64) {
 						EMIT(PPC_RAW_AND_DOT(tmp1_reg, dst_reg,
 								     tmp1_reg));
 					} else {
@@ -1182,7 +1182,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 		/*
 		 * Tail call
 		 */
-		case BPF_JMP | BPF_TAIL_CALL:
+		case BPF_JMP64 | BPF_TAIL_CALL:
 			ctx->seen |= SEEN_TAILCALL;
 			ret = bpf_jit_emit_tail_call(image, ctx, addrs[i + 1]);
 			if (ret < 0)
diff --git a/arch/riscv/net/bpf_jit_comp32.c b/arch/riscv/net/bpf_jit_comp32.c
index 529a83b..b8674ba 100644
--- a/arch/riscv/net/bpf_jit_comp32.c
+++ b/arch/riscv/net/bpf_jit_comp32.c
@@ -955,7 +955,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		      bool extra_pass)
 {
 	bool is64 = BPF_CLASS(insn->code) == BPF_ALU64 ||
-		BPF_CLASS(insn->code) == BPF_JMP;
+		BPF_CLASS(insn->code) == BPF_JMP64;
 	int s, e, rvoff, i = insn - ctx->prog->insnsi;
 	u8 code = insn->code;
 	s16 off = insn->off;
@@ -1012,7 +1012,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_alu_i64(dst, imm, ctx, BPF_OP(code));
 		break;
 
-	case BPF_ALU | BPF_MOV | BPF_X:
+	case BPF_ALU32 | BPF_MOV | BPF_X:
 		if (imm == 1) {
 			/* Special mov32 for zext. */
 			emit_zext64(dst, ctx);
@@ -1020,24 +1020,24 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		}
 		fallthrough;
 
-	case BPF_ALU | BPF_ADD | BPF_X:
-	case BPF_ALU | BPF_SUB | BPF_X:
-	case BPF_ALU | BPF_AND | BPF_X:
-	case BPF_ALU | BPF_OR | BPF_X:
-	case BPF_ALU | BPF_XOR | BPF_X:
+	case BPF_ALU32 | BPF_ADD | BPF_X:
+	case BPF_ALU32 | BPF_SUB | BPF_X:
+	case BPF_ALU32 | BPF_AND | BPF_X:
+	case BPF_ALU32 | BPF_OR | BPF_X:
+	case BPF_ALU32 | BPF_XOR | BPF_X:
 
-	case BPF_ALU | BPF_MUL | BPF_X:
-	case BPF_ALU | BPF_MUL | BPF_K:
+	case BPF_ALU32 | BPF_MUL | BPF_X:
+	case BPF_ALU32 | BPF_MUL | BPF_K:
 
-	case BPF_ALU | BPF_DIV | BPF_X:
-	case BPF_ALU | BPF_DIV | BPF_K:
+	case BPF_ALU32 | BPF_DIV | BPF_X:
+	case BPF_ALU32 | BPF_DIV | BPF_K:
 
-	case BPF_ALU | BPF_MOD | BPF_X:
-	case BPF_ALU | BPF_MOD | BPF_K:
+	case BPF_ALU32 | BPF_MOD | BPF_X:
+	case BPF_ALU32 | BPF_MOD | BPF_K:
 
-	case BPF_ALU | BPF_LSH | BPF_X:
-	case BPF_ALU | BPF_RSH | BPF_X:
-	case BPF_ALU | BPF_ARSH | BPF_X:
+	case BPF_ALU32 | BPF_LSH | BPF_X:
+	case BPF_ALU32 | BPF_RSH | BPF_X:
+	case BPF_ALU32 | BPF_ARSH | BPF_X:
 		if (BPF_SRC(code) == BPF_K) {
 			emit_imm32(tmp2, imm, ctx);
 			src = tmp2;
@@ -1045,15 +1045,15 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_alu_r32(dst, src, ctx, BPF_OP(code));
 		break;
 
-	case BPF_ALU | BPF_MOV | BPF_K:
-	case BPF_ALU | BPF_ADD | BPF_K:
-	case BPF_ALU | BPF_SUB | BPF_K:
-	case BPF_ALU | BPF_AND | BPF_K:
-	case BPF_ALU | BPF_OR | BPF_K:
-	case BPF_ALU | BPF_XOR | BPF_K:
-	case BPF_ALU | BPF_LSH | BPF_K:
-	case BPF_ALU | BPF_RSH | BPF_K:
-	case BPF_ALU | BPF_ARSH | BPF_K:
+	case BPF_ALU32 | BPF_MOV | BPF_K:
+	case BPF_ALU32 | BPF_ADD | BPF_K:
+	case BPF_ALU32 | BPF_SUB | BPF_K:
+	case BPF_ALU32 | BPF_AND | BPF_K:
+	case BPF_ALU32 | BPF_OR | BPF_K:
+	case BPF_ALU32 | BPF_XOR | BPF_K:
+	case BPF_ALU32 | BPF_LSH | BPF_K:
+	case BPF_ALU32 | BPF_RSH | BPF_K:
+	case BPF_ALU32 | BPF_ARSH | BPF_K:
 		/*
 		 * mul,div,mod are handled in the BPF_X case since there are
 		 * no RISC-V I-type equivalents.
@@ -1061,7 +1061,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_alu_i32(dst, imm, ctx, BPF_OP(code));
 		break;
 
-	case BPF_ALU | BPF_NEG:
+	case BPF_ALU32 | BPF_NEG:
 		/*
 		 * src is ignored---choose tmp2 as a dummy register since it
 		 * is not on the stack.
@@ -1069,7 +1069,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_alu_r32(dst, tmp2, ctx, BPF_OP(code));
 		break;
 
-	case BPF_ALU | BPF_END | BPF_FROM_LE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_LE:
 	{
 		const s8 *rd = bpf_get_reg64(dst, tmp1, ctx);
 
@@ -1094,7 +1094,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 	}
 
-	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 	{
 		const s8 *rd = bpf_get_reg64(dst, tmp1, ctx);
 
@@ -1128,12 +1128,12 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 	}
 
-	case BPF_JMP | BPF_JA:
+	case BPF_JMP64 | BPF_JA:
 		rvoff = rv_offset(i, off, ctx);
 		emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
 		break;
 
-	case BPF_JMP | BPF_CALL:
+	case BPF_JMP64 | BPF_CALL:
 	{
 		bool fixed;
 		int ret;
@@ -1147,63 +1147,63 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 	}
 
-	case BPF_JMP | BPF_TAIL_CALL:
+	case BPF_JMP64 | BPF_TAIL_CALL:
 		if (emit_bpf_tail_call(i, ctx))
 			return -1;
 		break;
 
-	case BPF_JMP | BPF_JEQ | BPF_X:
-	case BPF_JMP | BPF_JEQ | BPF_K:
+	case BPF_JMP64 | BPF_JEQ | BPF_X:
+	case BPF_JMP64 | BPF_JEQ | BPF_K:
 	case BPF_JMP32 | BPF_JEQ | BPF_X:
 	case BPF_JMP32 | BPF_JEQ | BPF_K:
 
-	case BPF_JMP | BPF_JNE | BPF_X:
-	case BPF_JMP | BPF_JNE | BPF_K:
+	case BPF_JMP64 | BPF_JNE | BPF_X:
+	case BPF_JMP64 | BPF_JNE | BPF_K:
 	case BPF_JMP32 | BPF_JNE | BPF_X:
 	case BPF_JMP32 | BPF_JNE | BPF_K:
 
-	case BPF_JMP | BPF_JLE | BPF_X:
-	case BPF_JMP | BPF_JLE | BPF_K:
+	case BPF_JMP64 | BPF_JLE | BPF_X:
+	case BPF_JMP64 | BPF_JLE | BPF_K:
 	case BPF_JMP32 | BPF_JLE | BPF_X:
 	case BPF_JMP32 | BPF_JLE | BPF_K:
 
-	case BPF_JMP | BPF_JLT | BPF_X:
-	case BPF_JMP | BPF_JLT | BPF_K:
+	case BPF_JMP64 | BPF_JLT | BPF_X:
+	case BPF_JMP64 | BPF_JLT | BPF_K:
 	case BPF_JMP32 | BPF_JLT | BPF_X:
 	case BPF_JMP32 | BPF_JLT | BPF_K:
 
-	case BPF_JMP | BPF_JGE | BPF_X:
-	case BPF_JMP | BPF_JGE | BPF_K:
+	case BPF_JMP64 | BPF_JGE | BPF_X:
+	case BPF_JMP64 | BPF_JGE | BPF_K:
 	case BPF_JMP32 | BPF_JGE | BPF_X:
 	case BPF_JMP32 | BPF_JGE | BPF_K:
 
-	case BPF_JMP | BPF_JGT | BPF_X:
-	case BPF_JMP | BPF_JGT | BPF_K:
+	case BPF_JMP64 | BPF_JGT | BPF_X:
+	case BPF_JMP64 | BPF_JGT | BPF_K:
 	case BPF_JMP32 | BPF_JGT | BPF_X:
 	case BPF_JMP32 | BPF_JGT | BPF_K:
 
-	case BPF_JMP | BPF_JSLE | BPF_X:
-	case BPF_JMP | BPF_JSLE | BPF_K:
+	case BPF_JMP64 | BPF_JSLE | BPF_X:
+	case BPF_JMP64 | BPF_JSLE | BPF_K:
 	case BPF_JMP32 | BPF_JSLE | BPF_X:
 	case BPF_JMP32 | BPF_JSLE | BPF_K:
 
-	case BPF_JMP | BPF_JSLT | BPF_X:
-	case BPF_JMP | BPF_JSLT | BPF_K:
+	case BPF_JMP64 | BPF_JSLT | BPF_X:
+	case BPF_JMP64 | BPF_JSLT | BPF_K:
 	case BPF_JMP32 | BPF_JSLT | BPF_X:
 	case BPF_JMP32 | BPF_JSLT | BPF_K:
 
-	case BPF_JMP | BPF_JSGE | BPF_X:
-	case BPF_JMP | BPF_JSGE | BPF_K:
+	case BPF_JMP64 | BPF_JSGE | BPF_X:
+	case BPF_JMP64 | BPF_JSGE | BPF_K:
 	case BPF_JMP32 | BPF_JSGE | BPF_X:
 	case BPF_JMP32 | BPF_JSGE | BPF_K:
 
-	case BPF_JMP | BPF_JSGT | BPF_X:
-	case BPF_JMP | BPF_JSGT | BPF_K:
+	case BPF_JMP64 | BPF_JSGT | BPF_X:
+	case BPF_JMP64 | BPF_JSGT | BPF_K:
 	case BPF_JMP32 | BPF_JSGT | BPF_X:
 	case BPF_JMP32 | BPF_JSGT | BPF_K:
 
-	case BPF_JMP | BPF_JSET | BPF_X:
-	case BPF_JMP | BPF_JSET | BPF_K:
+	case BPF_JMP64 | BPF_JSET | BPF_X:
+	case BPF_JMP64 | BPF_JSET | BPF_K:
 	case BPF_JMP32 | BPF_JSET | BPF_X:
 	case BPF_JMP32 | BPF_JSET | BPF_K:
 		rvoff = rv_offset(i, off, ctx);
@@ -1221,7 +1221,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 			emit_branch_r32(dst, src, rvoff, ctx, BPF_OP(code));
 		break;
 
-	case BPF_JMP | BPF_EXIT:
+	case BPF_JMP64 | BPF_EXIT:
 		if (i == ctx->prog->len - 1)
 			break;
 
diff --git a/arch/riscv/net/bpf_jit_comp64.c b/arch/riscv/net/bpf_jit_comp64.c
index f2417ac..5c1b9b0 100644
--- a/arch/riscv/net/bpf_jit_comp64.c
+++ b/arch/riscv/net/bpf_jit_comp64.c
@@ -381,17 +381,17 @@ static void init_regs(u8 *rd, u8 *rs, const struct bpf_insn *insn,
 	u8 code = insn->code;
 
 	switch (code) {
-	case BPF_JMP | BPF_JA:
-	case BPF_JMP | BPF_CALL:
-	case BPF_JMP | BPF_EXIT:
-	case BPF_JMP | BPF_TAIL_CALL:
+	case BPF_JMP64 | BPF_JA:
+	case BPF_JMP64 | BPF_CALL:
+	case BPF_JMP64 | BPF_EXIT:
+	case BPF_JMP64 | BPF_TAIL_CALL:
 		break;
 	default:
 		*rd = bpf_to_rv_reg(insn->dst_reg, ctx);
 	}
 
-	if (code & (BPF_ALU | BPF_X) || code & (BPF_ALU64 | BPF_X) ||
-	    code & (BPF_JMP | BPF_X) || code & (BPF_JMP32 | BPF_X) ||
+	if (code & (BPF_ALU32 | BPF_X) || code & (BPF_ALU64 | BPF_X) ||
+	    code & (BPF_JMP64 | BPF_X) || code & (BPF_JMP32 | BPF_X) ||
 	    code & BPF_LDX || code & BPF_STX)
 		*rs = bpf_to_rv_reg(insn->src_reg, ctx);
 }
@@ -626,7 +626,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		      bool extra_pass)
 {
 	bool is64 = BPF_CLASS(insn->code) == BPF_ALU64 ||
-		    BPF_CLASS(insn->code) == BPF_JMP;
+		    BPF_CLASS(insn->code) == BPF_JMP64;
 	int s, e, rvoff, ret, i = insn - ctx->prog->insnsi;
 	struct bpf_prog_aux *aux = ctx->prog->aux;
 	u8 rd = -1, rs = -1, code = insn->code;
@@ -637,7 +637,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 
 	switch (code) {
 	/* dst = src */
-	case BPF_ALU | BPF_MOV | BPF_X:
+	case BPF_ALU32 | BPF_MOV | BPF_X:
 	case BPF_ALU64 | BPF_MOV | BPF_X:
 		if (imm == 1) {
 			/* Special mov32 for zext */
@@ -650,13 +650,13 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 
 	/* dst = dst OP src */
-	case BPF_ALU | BPF_ADD | BPF_X:
+	case BPF_ALU32 | BPF_ADD | BPF_X:
 	case BPF_ALU64 | BPF_ADD | BPF_X:
 		emit_add(rd, rd, rs, ctx);
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_SUB | BPF_X:
+	case BPF_ALU32 | BPF_SUB | BPF_X:
 	case BPF_ALU64 | BPF_SUB | BPF_X:
 		if (is64)
 			emit_sub(rd, rd, rs, ctx);
@@ -666,55 +666,55 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_AND | BPF_X:
+	case BPF_ALU32 | BPF_AND | BPF_X:
 	case BPF_ALU64 | BPF_AND | BPF_X:
 		emit_and(rd, rd, rs, ctx);
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_OR | BPF_X:
+	case BPF_ALU32 | BPF_OR | BPF_X:
 	case BPF_ALU64 | BPF_OR | BPF_X:
 		emit_or(rd, rd, rs, ctx);
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_XOR | BPF_X:
+	case BPF_ALU32 | BPF_XOR | BPF_X:
 	case BPF_ALU64 | BPF_XOR | BPF_X:
 		emit_xor(rd, rd, rs, ctx);
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_MUL | BPF_X:
+	case BPF_ALU32 | BPF_MUL | BPF_X:
 	case BPF_ALU64 | BPF_MUL | BPF_X:
 		emit(is64 ? rv_mul(rd, rd, rs) : rv_mulw(rd, rd, rs), ctx);
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_DIV | BPF_X:
+	case BPF_ALU32 | BPF_DIV | BPF_X:
 	case BPF_ALU64 | BPF_DIV | BPF_X:
 		emit(is64 ? rv_divu(rd, rd, rs) : rv_divuw(rd, rd, rs), ctx);
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_MOD | BPF_X:
+	case BPF_ALU32 | BPF_MOD | BPF_X:
 	case BPF_ALU64 | BPF_MOD | BPF_X:
 		emit(is64 ? rv_remu(rd, rd, rs) : rv_remuw(rd, rd, rs), ctx);
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_LSH | BPF_X:
+	case BPF_ALU32 | BPF_LSH | BPF_X:
 	case BPF_ALU64 | BPF_LSH | BPF_X:
 		emit(is64 ? rv_sll(rd, rd, rs) : rv_sllw(rd, rd, rs), ctx);
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_RSH | BPF_X:
+	case BPF_ALU32 | BPF_RSH | BPF_X:
 	case BPF_ALU64 | BPF_RSH | BPF_X:
 		emit(is64 ? rv_srl(rd, rd, rs) : rv_srlw(rd, rd, rs), ctx);
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_ARSH | BPF_X:
+	case BPF_ALU32 | BPF_ARSH | BPF_X:
 	case BPF_ALU64 | BPF_ARSH | BPF_X:
 		emit(is64 ? rv_sra(rd, rd, rs) : rv_sraw(rd, rd, rs), ctx);
 		if (!is64 && !aux->verifier_zext)
@@ -722,7 +722,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 
 	/* dst = -dst */
-	case BPF_ALU | BPF_NEG:
+	case BPF_ALU32 | BPF_NEG:
 	case BPF_ALU64 | BPF_NEG:
 		emit_sub(rd, RV_REG_ZERO, rd, ctx);
 		if (!is64 && !aux->verifier_zext)
@@ -730,7 +730,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 
 	/* dst = BSWAP##imm(dst) */
-	case BPF_ALU | BPF_END | BPF_FROM_LE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_LE:
 		switch (imm) {
 		case 16:
 			emit_slli(rd, rd, 48, ctx);
@@ -746,7 +746,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		}
 		break;
 
-	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 		emit_li(RV_REG_T2, 0, ctx);
 
 		emit_andi(RV_REG_T1, rd, 0xff, ctx);
@@ -795,7 +795,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 
 	/* dst = imm */
-	case BPF_ALU | BPF_MOV | BPF_K:
+	case BPF_ALU32 | BPF_MOV | BPF_K:
 	case BPF_ALU64 | BPF_MOV | BPF_K:
 		emit_imm(rd, imm, ctx);
 		if (!is64 && !aux->verifier_zext)
@@ -803,7 +803,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 
 	/* dst = dst OP imm */
-	case BPF_ALU | BPF_ADD | BPF_K:
+	case BPF_ALU32 | BPF_ADD | BPF_K:
 	case BPF_ALU64 | BPF_ADD | BPF_K:
 		if (is_12b_int(imm)) {
 			emit_addi(rd, rd, imm, ctx);
@@ -814,7 +814,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_SUB | BPF_K:
+	case BPF_ALU32 | BPF_SUB | BPF_K:
 	case BPF_ALU64 | BPF_SUB | BPF_K:
 		if (is_12b_int(-imm)) {
 			emit_addi(rd, rd, -imm, ctx);
@@ -825,7 +825,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_AND | BPF_K:
+	case BPF_ALU32 | BPF_AND | BPF_K:
 	case BPF_ALU64 | BPF_AND | BPF_K:
 		if (is_12b_int(imm)) {
 			emit_andi(rd, rd, imm, ctx);
@@ -836,7 +836,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_OR | BPF_K:
+	case BPF_ALU32 | BPF_OR | BPF_K:
 	case BPF_ALU64 | BPF_OR | BPF_K:
 		if (is_12b_int(imm)) {
 			emit(rv_ori(rd, rd, imm), ctx);
@@ -847,7 +847,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_XOR | BPF_K:
+	case BPF_ALU32 | BPF_XOR | BPF_K:
 	case BPF_ALU64 | BPF_XOR | BPF_K:
 		if (is_12b_int(imm)) {
 			emit(rv_xori(rd, rd, imm), ctx);
@@ -858,7 +858,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_MUL | BPF_K:
+	case BPF_ALU32 | BPF_MUL | BPF_K:
 	case BPF_ALU64 | BPF_MUL | BPF_K:
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_mul(rd, rd, RV_REG_T1) :
@@ -866,7 +866,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_DIV | BPF_K:
+	case BPF_ALU32 | BPF_DIV | BPF_K:
 	case BPF_ALU64 | BPF_DIV | BPF_K:
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_divu(rd, rd, RV_REG_T1) :
@@ -874,7 +874,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_MOD | BPF_K:
+	case BPF_ALU32 | BPF_MOD | BPF_K:
 	case BPF_ALU64 | BPF_MOD | BPF_K:
 		emit_imm(RV_REG_T1, imm, ctx);
 		emit(is64 ? rv_remu(rd, rd, RV_REG_T1) :
@@ -882,14 +882,14 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_LSH | BPF_K:
+	case BPF_ALU32 | BPF_LSH | BPF_K:
 	case BPF_ALU64 | BPF_LSH | BPF_K:
 		emit_slli(rd, rd, imm, ctx);
 
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_RSH | BPF_K:
+	case BPF_ALU32 | BPF_RSH | BPF_K:
 	case BPF_ALU64 | BPF_RSH | BPF_K:
 		if (is64)
 			emit_srli(rd, rd, imm, ctx);
@@ -899,7 +899,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		if (!is64 && !aux->verifier_zext)
 			emit_zext_32(rd, ctx);
 		break;
-	case BPF_ALU | BPF_ARSH | BPF_K:
+	case BPF_ALU32 | BPF_ARSH | BPF_K:
 	case BPF_ALU64 | BPF_ARSH | BPF_K:
 		if (is64)
 			emit_srai(rd, rd, imm, ctx);
@@ -911,7 +911,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 
 	/* JUMP off */
-	case BPF_JMP | BPF_JA:
+	case BPF_JMP64 | BPF_JA:
 		rvoff = rv_offset(i, off, ctx);
 		ret = emit_jump_and_link(RV_REG_ZERO, rvoff, false, ctx);
 		if (ret)
@@ -919,27 +919,27 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 
 	/* IF (dst COND src) JUMP off */
-	case BPF_JMP | BPF_JEQ | BPF_X:
+	case BPF_JMP64 | BPF_JEQ | BPF_X:
 	case BPF_JMP32 | BPF_JEQ | BPF_X:
-	case BPF_JMP | BPF_JGT | BPF_X:
+	case BPF_JMP64 | BPF_JGT | BPF_X:
 	case BPF_JMP32 | BPF_JGT | BPF_X:
-	case BPF_JMP | BPF_JLT | BPF_X:
+	case BPF_JMP64 | BPF_JLT | BPF_X:
 	case BPF_JMP32 | BPF_JLT | BPF_X:
-	case BPF_JMP | BPF_JGE | BPF_X:
+	case BPF_JMP64 | BPF_JGE | BPF_X:
 	case BPF_JMP32 | BPF_JGE | BPF_X:
-	case BPF_JMP | BPF_JLE | BPF_X:
+	case BPF_JMP64 | BPF_JLE | BPF_X:
 	case BPF_JMP32 | BPF_JLE | BPF_X:
-	case BPF_JMP | BPF_JNE | BPF_X:
+	case BPF_JMP64 | BPF_JNE | BPF_X:
 	case BPF_JMP32 | BPF_JNE | BPF_X:
-	case BPF_JMP | BPF_JSGT | BPF_X:
+	case BPF_JMP64 | BPF_JSGT | BPF_X:
 	case BPF_JMP32 | BPF_JSGT | BPF_X:
-	case BPF_JMP | BPF_JSLT | BPF_X:
+	case BPF_JMP64 | BPF_JSLT | BPF_X:
 	case BPF_JMP32 | BPF_JSLT | BPF_X:
-	case BPF_JMP | BPF_JSGE | BPF_X:
+	case BPF_JMP64 | BPF_JSGE | BPF_X:
 	case BPF_JMP32 | BPF_JSGE | BPF_X:
-	case BPF_JMP | BPF_JSLE | BPF_X:
+	case BPF_JMP64 | BPF_JSLE | BPF_X:
 	case BPF_JMP32 | BPF_JSLE | BPF_X:
-	case BPF_JMP | BPF_JSET | BPF_X:
+	case BPF_JMP64 | BPF_JSET | BPF_X:
 	case BPF_JMP32 | BPF_JSET | BPF_X:
 		rvoff = rv_offset(i, off, ctx);
 		if (!is64) {
@@ -966,25 +966,25 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 
 	/* IF (dst COND imm) JUMP off */
-	case BPF_JMP | BPF_JEQ | BPF_K:
+	case BPF_JMP64 | BPF_JEQ | BPF_K:
 	case BPF_JMP32 | BPF_JEQ | BPF_K:
-	case BPF_JMP | BPF_JGT | BPF_K:
+	case BPF_JMP64 | BPF_JGT | BPF_K:
 	case BPF_JMP32 | BPF_JGT | BPF_K:
-	case BPF_JMP | BPF_JLT | BPF_K:
+	case BPF_JMP64 | BPF_JLT | BPF_K:
 	case BPF_JMP32 | BPF_JLT | BPF_K:
-	case BPF_JMP | BPF_JGE | BPF_K:
+	case BPF_JMP64 | BPF_JGE | BPF_K:
 	case BPF_JMP32 | BPF_JGE | BPF_K:
-	case BPF_JMP | BPF_JLE | BPF_K:
+	case BPF_JMP64 | BPF_JLE | BPF_K:
 	case BPF_JMP32 | BPF_JLE | BPF_K:
-	case BPF_JMP | BPF_JNE | BPF_K:
+	case BPF_JMP64 | BPF_JNE | BPF_K:
 	case BPF_JMP32 | BPF_JNE | BPF_K:
-	case BPF_JMP | BPF_JSGT | BPF_K:
+	case BPF_JMP64 | BPF_JSGT | BPF_K:
 	case BPF_JMP32 | BPF_JSGT | BPF_K:
-	case BPF_JMP | BPF_JSLT | BPF_K:
+	case BPF_JMP64 | BPF_JSLT | BPF_K:
 	case BPF_JMP32 | BPF_JSLT | BPF_K:
-	case BPF_JMP | BPF_JSGE | BPF_K:
+	case BPF_JMP64 | BPF_JSGE | BPF_K:
 	case BPF_JMP32 | BPF_JSGE | BPF_K:
-	case BPF_JMP | BPF_JSLE | BPF_K:
+	case BPF_JMP64 | BPF_JSLE | BPF_K:
 	case BPF_JMP32 | BPF_JSLE | BPF_K:
 		rvoff = rv_offset(i, off, ctx);
 		s = ctx->ninsns;
@@ -1008,7 +1008,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		emit_branch(BPF_OP(code), rd, rs, rvoff, ctx);
 		break;
 
-	case BPF_JMP | BPF_JSET | BPF_K:
+	case BPF_JMP64 | BPF_JSET | BPF_K:
 	case BPF_JMP32 | BPF_JSET | BPF_K:
 		rvoff = rv_offset(i, off, ctx);
 		s = ctx->ninsns;
@@ -1030,7 +1030,7 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 
 	/* function call */
-	case BPF_JMP | BPF_CALL:
+	case BPF_JMP64 | BPF_CALL:
 	{
 		bool fixed;
 		u64 addr;
@@ -1046,13 +1046,13 @@ int bpf_jit_emit_insn(const struct bpf_insn *insn, struct rv_jit_context *ctx,
 		break;
 	}
 	/* tail call */
-	case BPF_JMP | BPF_TAIL_CALL:
+	case BPF_JMP64 | BPF_TAIL_CALL:
 		if (emit_bpf_tail_call(i, ctx))
 			return -1;
 		break;
 
 	/* function return */
-	case BPF_JMP | BPF_EXIT:
+	case BPF_JMP64 | BPF_EXIT:
 		if (i == ctx->prog->len - 1)
 			break;
 
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index d0846ba..d6bff91 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -795,7 +795,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_MOV
 	 */
-	case BPF_ALU | BPF_MOV | BPF_X: /* dst = (u32) src */
+	case BPF_ALU32 | BPF_MOV | BPF_X: /* dst = (u32) src */
 		/* llgfr %dst,%src */
 		EMIT4(0xb9160000, dst_reg, src_reg);
 		if (insn_is_zext(&insn[1]))
@@ -805,7 +805,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* lgr %dst,%src */
 		EMIT4(0xb9040000, dst_reg, src_reg);
 		break;
-	case BPF_ALU | BPF_MOV | BPF_K: /* dst = (u32) imm */
+	case BPF_ALU32 | BPF_MOV | BPF_K: /* dst = (u32) imm */
 		/* llilf %dst,imm */
 		EMIT6_IMM(0xc00f0000, dst_reg, imm);
 		if (insn_is_zext(&insn[1]))
@@ -832,7 +832,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_ADD
 	 */
-	case BPF_ALU | BPF_ADD | BPF_X: /* dst = (u32) dst + (u32) src */
+	case BPF_ALU32 | BPF_ADD | BPF_X: /* dst = (u32) dst + (u32) src */
 		/* ar %dst,%src */
 		EMIT2(0x1a00, dst_reg, src_reg);
 		EMIT_ZERO(dst_reg);
@@ -841,7 +841,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* agr %dst,%src */
 		EMIT4(0xb9080000, dst_reg, src_reg);
 		break;
-	case BPF_ALU | BPF_ADD | BPF_K: /* dst = (u32) dst + (u32) imm */
+	case BPF_ALU32 | BPF_ADD | BPF_K: /* dst = (u32) dst + (u32) imm */
 		if (imm != 0) {
 			/* alfi %dst,imm */
 			EMIT6_IMM(0xc20b0000, dst_reg, imm);
@@ -857,7 +857,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_SUB
 	 */
-	case BPF_ALU | BPF_SUB | BPF_X: /* dst = (u32) dst - (u32) src */
+	case BPF_ALU32 | BPF_SUB | BPF_X: /* dst = (u32) dst - (u32) src */
 		/* sr %dst,%src */
 		EMIT2(0x1b00, dst_reg, src_reg);
 		EMIT_ZERO(dst_reg);
@@ -866,7 +866,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* sgr %dst,%src */
 		EMIT4(0xb9090000, dst_reg, src_reg);
 		break;
-	case BPF_ALU | BPF_SUB | BPF_K: /* dst = (u32) dst - (u32) imm */
+	case BPF_ALU32 | BPF_SUB | BPF_K: /* dst = (u32) dst - (u32) imm */
 		if (imm != 0) {
 			/* alfi %dst,-imm */
 			EMIT6_IMM(0xc20b0000, dst_reg, -imm);
@@ -887,7 +887,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_MUL
 	 */
-	case BPF_ALU | BPF_MUL | BPF_X: /* dst = (u32) dst * (u32) src */
+	case BPF_ALU32 | BPF_MUL | BPF_X: /* dst = (u32) dst * (u32) src */
 		/* msr %dst,%src */
 		EMIT4(0xb2520000, dst_reg, src_reg);
 		EMIT_ZERO(dst_reg);
@@ -896,7 +896,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* msgr %dst,%src */
 		EMIT4(0xb90c0000, dst_reg, src_reg);
 		break;
-	case BPF_ALU | BPF_MUL | BPF_K: /* dst = (u32) dst * (u32) imm */
+	case BPF_ALU32 | BPF_MUL | BPF_K: /* dst = (u32) dst * (u32) imm */
 		if (imm != 1) {
 			/* msfi %r5,imm */
 			EMIT6_IMM(0xc2010000, dst_reg, imm);
@@ -912,8 +912,8 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_DIV / BPF_MOD
 	 */
-	case BPF_ALU | BPF_DIV | BPF_X: /* dst = (u32) dst / (u32) src */
-	case BPF_ALU | BPF_MOD | BPF_X: /* dst = (u32) dst % (u32) src */
+	case BPF_ALU32 | BPF_DIV | BPF_X: /* dst = (u32) dst / (u32) src */
+	case BPF_ALU32 | BPF_MOD | BPF_X: /* dst = (u32) dst % (u32) src */
 	{
 		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
 
@@ -944,8 +944,8 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		EMIT4(0xb9040000, dst_reg, rc_reg);
 		break;
 	}
-	case BPF_ALU | BPF_DIV | BPF_K: /* dst = (u32) dst / (u32) imm */
-	case BPF_ALU | BPF_MOD | BPF_K: /* dst = (u32) dst % (u32) imm */
+	case BPF_ALU32 | BPF_DIV | BPF_K: /* dst = (u32) dst / (u32) imm */
+	case BPF_ALU32 | BPF_MOD | BPF_K: /* dst = (u32) dst % (u32) imm */
 	{
 		int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
 
@@ -1013,7 +1013,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_AND
 	 */
-	case BPF_ALU | BPF_AND | BPF_X: /* dst = (u32) dst & (u32) src */
+	case BPF_ALU32 | BPF_AND | BPF_X: /* dst = (u32) dst & (u32) src */
 		/* nr %dst,%src */
 		EMIT2(0x1400, dst_reg, src_reg);
 		EMIT_ZERO(dst_reg);
@@ -1022,7 +1022,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* ngr %dst,%src */
 		EMIT4(0xb9800000, dst_reg, src_reg);
 		break;
-	case BPF_ALU | BPF_AND | BPF_K: /* dst = (u32) dst & (u32) imm */
+	case BPF_ALU32 | BPF_AND | BPF_K: /* dst = (u32) dst & (u32) imm */
 		/* nilf %dst,imm */
 		EMIT6_IMM(0xc00b0000, dst_reg, imm);
 		EMIT_ZERO(dst_reg);
@@ -1045,7 +1045,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_OR
 	 */
-	case BPF_ALU | BPF_OR | BPF_X: /* dst = (u32) dst | (u32) src */
+	case BPF_ALU32 | BPF_OR | BPF_X: /* dst = (u32) dst | (u32) src */
 		/* or %dst,%src */
 		EMIT2(0x1600, dst_reg, src_reg);
 		EMIT_ZERO(dst_reg);
@@ -1054,7 +1054,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* ogr %dst,%src */
 		EMIT4(0xb9810000, dst_reg, src_reg);
 		break;
-	case BPF_ALU | BPF_OR | BPF_K: /* dst = (u32) dst | (u32) imm */
+	case BPF_ALU32 | BPF_OR | BPF_K: /* dst = (u32) dst | (u32) imm */
 		/* oilf %dst,imm */
 		EMIT6_IMM(0xc00d0000, dst_reg, imm);
 		EMIT_ZERO(dst_reg);
@@ -1077,7 +1077,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_XOR
 	 */
-	case BPF_ALU | BPF_XOR | BPF_X: /* dst = (u32) dst ^ (u32) src */
+	case BPF_ALU32 | BPF_XOR | BPF_X: /* dst = (u32) dst ^ (u32) src */
 		/* xr %dst,%src */
 		EMIT2(0x1700, dst_reg, src_reg);
 		EMIT_ZERO(dst_reg);
@@ -1086,7 +1086,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* xgr %dst,%src */
 		EMIT4(0xb9820000, dst_reg, src_reg);
 		break;
-	case BPF_ALU | BPF_XOR | BPF_K: /* dst = (u32) dst ^ (u32) imm */
+	case BPF_ALU32 | BPF_XOR | BPF_K: /* dst = (u32) dst ^ (u32) imm */
 		if (imm != 0) {
 			/* xilf %dst,imm */
 			EMIT6_IMM(0xc0070000, dst_reg, imm);
@@ -1111,7 +1111,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_LSH
 	 */
-	case BPF_ALU | BPF_LSH | BPF_X: /* dst = (u32) dst << (u32) src */
+	case BPF_ALU32 | BPF_LSH | BPF_X: /* dst = (u32) dst << (u32) src */
 		/* sll %dst,0(%src) */
 		EMIT4_DISP(0x89000000, dst_reg, src_reg, 0);
 		EMIT_ZERO(dst_reg);
@@ -1120,7 +1120,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* sllg %dst,%dst,0(%src) */
 		EMIT6_DISP_LH(0xeb000000, 0x000d, dst_reg, dst_reg, src_reg, 0);
 		break;
-	case BPF_ALU | BPF_LSH | BPF_K: /* dst = (u32) dst << (u32) imm */
+	case BPF_ALU32 | BPF_LSH | BPF_K: /* dst = (u32) dst << (u32) imm */
 		if (imm != 0) {
 			/* sll %dst,imm(%r0) */
 			EMIT4_DISP(0x89000000, dst_reg, REG_0, imm);
@@ -1136,7 +1136,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_RSH
 	 */
-	case BPF_ALU | BPF_RSH | BPF_X: /* dst = (u32) dst >> (u32) src */
+	case BPF_ALU32 | BPF_RSH | BPF_X: /* dst = (u32) dst >> (u32) src */
 		/* srl %dst,0(%src) */
 		EMIT4_DISP(0x88000000, dst_reg, src_reg, 0);
 		EMIT_ZERO(dst_reg);
@@ -1145,7 +1145,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* srlg %dst,%dst,0(%src) */
 		EMIT6_DISP_LH(0xeb000000, 0x000c, dst_reg, dst_reg, src_reg, 0);
 		break;
-	case BPF_ALU | BPF_RSH | BPF_K: /* dst = (u32) dst >> (u32) imm */
+	case BPF_ALU32 | BPF_RSH | BPF_K: /* dst = (u32) dst >> (u32) imm */
 		if (imm != 0) {
 			/* srl %dst,imm(%r0) */
 			EMIT4_DISP(0x88000000, dst_reg, REG_0, imm);
@@ -1161,7 +1161,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_ARSH
 	 */
-	case BPF_ALU | BPF_ARSH | BPF_X: /* ((s32) dst) >>= src */
+	case BPF_ALU32 | BPF_ARSH | BPF_X: /* ((s32) dst) >>= src */
 		/* sra %dst,%dst,0(%src) */
 		EMIT4_DISP(0x8a000000, dst_reg, src_reg, 0);
 		EMIT_ZERO(dst_reg);
@@ -1170,7 +1170,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		/* srag %dst,%dst,0(%src) */
 		EMIT6_DISP_LH(0xeb000000, 0x000a, dst_reg, dst_reg, src_reg, 0);
 		break;
-	case BPF_ALU | BPF_ARSH | BPF_K: /* ((s32) dst >> imm */
+	case BPF_ALU32 | BPF_ARSH | BPF_K: /* ((s32) dst >> imm */
 		if (imm != 0) {
 			/* sra %dst,imm(%r0) */
 			EMIT4_DISP(0x8a000000, dst_reg, REG_0, imm);
@@ -1186,7 +1186,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_NEG
 	 */
-	case BPF_ALU | BPF_NEG: /* dst = (u32) -dst */
+	case BPF_ALU32 | BPF_NEG: /* dst = (u32) -dst */
 		/* lcr %dst,%dst */
 		EMIT2(0x1300, dst_reg, dst_reg);
 		EMIT_ZERO(dst_reg);
@@ -1198,7 +1198,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	/*
 	 * BPF_FROM_BE/LE
 	 */
-	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 		/* s390 is big endian, therefore only clear high order bytes */
 		switch (imm) {
 		case 16: /* dst = (u16) cpu_to_be16(dst) */
@@ -1216,7 +1216,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 			break;
 		}
 		break;
-	case BPF_ALU | BPF_END | BPF_FROM_LE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_LE:
 		switch (imm) {
 		case 16: /* dst = (u16) cpu_to_le16(dst) */
 			/* lrvr %dst,%dst */
@@ -1397,9 +1397,9 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		EMIT6_DISP_LH(0xe3000000, 0x0004, dst_reg, src_reg, REG_0, off);
 		break;
 	/*
-	 * BPF_JMP / CALL
+	 * BPF_JMP64 / CALL
 	 */
-	case BPF_JMP | BPF_CALL:
+	case BPF_JMP64 | BPF_CALL:
 	{
 		const struct btf_func_model *m;
 		bool func_addr_fixed;
@@ -1449,7 +1449,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		EMIT4(0xb9040000, BPF_REG_0, REG_2);
 		break;
 	}
-	case BPF_JMP | BPF_TAIL_CALL: {
+	case BPF_JMP64 | BPF_TAIL_CALL: {
 		int patch_1_clrj, patch_2_clij, patch_3_brc;
 
 		/*
@@ -1539,7 +1539,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		}
 		break;
 	}
-	case BPF_JMP | BPF_EXIT: /* return b0 */
+	case BPF_JMP64 | BPF_EXIT: /* return b0 */
 		last = (i == fp->len - 1) ? 1 : 0;
 		if (last)
 			break;
@@ -1570,50 +1570,50 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 	 * instruction itself (loop) and for BPF with offset 0 we
 	 * branch to the instruction behind the branch.
 	 */
-	case BPF_JMP | BPF_JA: /* if (true) */
+	case BPF_JMP64 | BPF_JA: /* if (true) */
 		mask = 0xf000; /* j */
 		goto branch_oc;
-	case BPF_JMP | BPF_JSGT | BPF_K: /* ((s64) dst > (s64) imm) */
+	case BPF_JMP64 | BPF_JSGT | BPF_K: /* ((s64) dst > (s64) imm) */
 	case BPF_JMP32 | BPF_JSGT | BPF_K: /* ((s32) dst > (s32) imm) */
 		mask = 0x2000; /* jh */
 		goto branch_ks;
-	case BPF_JMP | BPF_JSLT | BPF_K: /* ((s64) dst < (s64) imm) */
+	case BPF_JMP64 | BPF_JSLT | BPF_K: /* ((s64) dst < (s64) imm) */
 	case BPF_JMP32 | BPF_JSLT | BPF_K: /* ((s32) dst < (s32) imm) */
 		mask = 0x4000; /* jl */
 		goto branch_ks;
-	case BPF_JMP | BPF_JSGE | BPF_K: /* ((s64) dst >= (s64) imm) */
+	case BPF_JMP64 | BPF_JSGE | BPF_K: /* ((s64) dst >= (s64) imm) */
 	case BPF_JMP32 | BPF_JSGE | BPF_K: /* ((s32) dst >= (s32) imm) */
 		mask = 0xa000; /* jhe */
 		goto branch_ks;
-	case BPF_JMP | BPF_JSLE | BPF_K: /* ((s64) dst <= (s64) imm) */
+	case BPF_JMP64 | BPF_JSLE | BPF_K: /* ((s64) dst <= (s64) imm) */
 	case BPF_JMP32 | BPF_JSLE | BPF_K: /* ((s32) dst <= (s32) imm) */
 		mask = 0xc000; /* jle */
 		goto branch_ks;
-	case BPF_JMP | BPF_JGT | BPF_K: /* (dst_reg > imm) */
+	case BPF_JMP64 | BPF_JGT | BPF_K: /* (dst_reg > imm) */
 	case BPF_JMP32 | BPF_JGT | BPF_K: /* ((u32) dst_reg > (u32) imm) */
 		mask = 0x2000; /* jh */
 		goto branch_ku;
-	case BPF_JMP | BPF_JLT | BPF_K: /* (dst_reg < imm) */
+	case BPF_JMP64 | BPF_JLT | BPF_K: /* (dst_reg < imm) */
 	case BPF_JMP32 | BPF_JLT | BPF_K: /* ((u32) dst_reg < (u32) imm) */
 		mask = 0x4000; /* jl */
 		goto branch_ku;
-	case BPF_JMP | BPF_JGE | BPF_K: /* (dst_reg >= imm) */
+	case BPF_JMP64 | BPF_JGE | BPF_K: /* (dst_reg >= imm) */
 	case BPF_JMP32 | BPF_JGE | BPF_K: /* ((u32) dst_reg >= (u32) imm) */
 		mask = 0xa000; /* jhe */
 		goto branch_ku;
-	case BPF_JMP | BPF_JLE | BPF_K: /* (dst_reg <= imm) */
+	case BPF_JMP64 | BPF_JLE | BPF_K: /* (dst_reg <= imm) */
 	case BPF_JMP32 | BPF_JLE | BPF_K: /* ((u32) dst_reg <= (u32) imm) */
 		mask = 0xc000; /* jle */
 		goto branch_ku;
-	case BPF_JMP | BPF_JNE | BPF_K: /* (dst_reg != imm) */
+	case BPF_JMP64 | BPF_JNE | BPF_K: /* (dst_reg != imm) */
 	case BPF_JMP32 | BPF_JNE | BPF_K: /* ((u32) dst_reg != (u32) imm) */
 		mask = 0x7000; /* jne */
 		goto branch_ku;
-	case BPF_JMP | BPF_JEQ | BPF_K: /* (dst_reg == imm) */
+	case BPF_JMP64 | BPF_JEQ | BPF_K: /* (dst_reg == imm) */
 	case BPF_JMP32 | BPF_JEQ | BPF_K: /* ((u32) dst_reg == (u32) imm) */
 		mask = 0x8000; /* je */
 		goto branch_ku;
-	case BPF_JMP | BPF_JSET | BPF_K: /* (dst_reg & imm) */
+	case BPF_JMP64 | BPF_JSET | BPF_K: /* (dst_reg & imm) */
 	case BPF_JMP32 | BPF_JSET | BPF_K: /* ((u32) dst_reg & (u32) imm) */
 		mask = 0x7000; /* jnz */
 		if (BPF_CLASS(insn->code) == BPF_JMP32) {
@@ -1629,47 +1629,47 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
 		}
 		goto branch_oc;
 
-	case BPF_JMP | BPF_JSGT | BPF_X: /* ((s64) dst > (s64) src) */
+	case BPF_JMP64 | BPF_JSGT | BPF_X: /* ((s64) dst > (s64) src) */
 	case BPF_JMP32 | BPF_JSGT | BPF_X: /* ((s32) dst > (s32) src) */
 		mask = 0x2000; /* jh */
 		goto branch_xs;
-	case BPF_JMP | BPF_JSLT | BPF_X: /* ((s64) dst < (s64) src) */
+	case BPF_JMP64 | BPF_JSLT | BPF_X: /* ((s64) dst < (s64) src) */
 	case BPF_JMP32 | BPF_JSLT | BPF_X: /* ((s32) dst < (s32) src) */
 		mask = 0x4000; /* jl */
 		goto branch_xs;
-	case BPF_JMP | BPF_JSGE | BPF_X: /* ((s64) dst >= (s64) src) */
+	case BPF_JMP64 | BPF_JSGE | BPF_X: /* ((s64) dst >= (s64) src) */
 	case BPF_JMP32 | BPF_JSGE | BPF_X: /* ((s32) dst >= (s32) src) */
 		mask = 0xa000; /* jhe */
 		goto branch_xs;
-	case BPF_JMP | BPF_JSLE | BPF_X: /* ((s64) dst <= (s64) src) */
+	case BPF_JMP64 | BPF_JSLE | BPF_X: /* ((s64) dst <= (s64) src) */
 	case BPF_JMP32 | BPF_JSLE | BPF_X: /* ((s32) dst <= (s32) src) */
 		mask = 0xc000; /* jle */
 		goto branch_xs;
-	case BPF_JMP | BPF_JGT | BPF_X: /* (dst > src) */
+	case BPF_JMP64 | BPF_JGT | BPF_X: /* (dst > src) */
 	case BPF_JMP32 | BPF_JGT | BPF_X: /* ((u32) dst > (u32) src) */
 		mask = 0x2000; /* jh */
 		goto branch_xu;
-	case BPF_JMP | BPF_JLT | BPF_X: /* (dst < src) */
+	case BPF_JMP64 | BPF_JLT | BPF_X: /* (dst < src) */
 	case BPF_JMP32 | BPF_JLT | BPF_X: /* ((u32) dst < (u32) src) */
 		mask = 0x4000; /* jl */
 		goto branch_xu;
-	case BPF_JMP | BPF_JGE | BPF_X: /* (dst >= src) */
+	case BPF_JMP64 | BPF_JGE | BPF_X: /* (dst >= src) */
 	case BPF_JMP32 | BPF_JGE | BPF_X: /* ((u32) dst >= (u32) src) */
 		mask = 0xa000; /* jhe */
 		goto branch_xu;
-	case BPF_JMP | BPF_JLE | BPF_X: /* (dst <= src) */
+	case BPF_JMP64 | BPF_JLE | BPF_X: /* (dst <= src) */
 	case BPF_JMP32 | BPF_JLE | BPF_X: /* ((u32) dst <= (u32) src) */
 		mask = 0xc000; /* jle */
 		goto branch_xu;
-	case BPF_JMP | BPF_JNE | BPF_X: /* (dst != src) */
+	case BPF_JMP64 | BPF_JNE | BPF_X: /* (dst != src) */
 	case BPF_JMP32 | BPF_JNE | BPF_X: /* ((u32) dst != (u32) src) */
 		mask = 0x7000; /* jne */
 		goto branch_xu;
-	case BPF_JMP | BPF_JEQ | BPF_X: /* (dst == src) */
+	case BPF_JMP64 | BPF_JEQ | BPF_X: /* (dst == src) */
 	case BPF_JMP32 | BPF_JEQ | BPF_X: /* ((u32) dst == (u32) src) */
 		mask = 0x8000; /* je */
 		goto branch_xu;
-	case BPF_JMP | BPF_JSET | BPF_X: /* (dst & src) */
+	case BPF_JMP64 | BPF_JSET | BPF_X: /* (dst & src) */
 	case BPF_JMP32 | BPF_JSET | BPF_X: /* ((u32) dst & (u32) src) */
 	{
 		bool is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32;
diff --git a/arch/sparc/net/bpf_jit_comp_32.c b/arch/sparc/net/bpf_jit_comp_32.c
index a74e500..44ff90f 100644
--- a/arch/sparc/net/bpf_jit_comp_32.c
+++ b/arch/sparc/net/bpf_jit_comp_32.c
@@ -396,56 +396,56 @@ void bpf_jit_compile(struct bpf_prog *fp)
 			int ilen;
 
 			switch (code) {
-			case BPF_ALU | BPF_ADD | BPF_X:	/* A += X; */
+			case BPF_ALU32 | BPF_ADD | BPF_X:	/* A += X; */
 				emit_alu_X(ADD);
 				break;
-			case BPF_ALU | BPF_ADD | BPF_K:	/* A += K; */
+			case BPF_ALU32 | BPF_ADD | BPF_K:	/* A += K; */
 				emit_alu_K(ADD, K);
 				break;
-			case BPF_ALU | BPF_SUB | BPF_X:	/* A -= X; */
+			case BPF_ALU32 | BPF_SUB | BPF_X:	/* A -= X; */
 				emit_alu_X(SUB);
 				break;
-			case BPF_ALU | BPF_SUB | BPF_K:	/* A -= K */
+			case BPF_ALU32 | BPF_SUB | BPF_K:	/* A -= K */
 				emit_alu_K(SUB, K);
 				break;
-			case BPF_ALU | BPF_AND | BPF_X:	/* A &= X */
+			case BPF_ALU32 | BPF_AND | BPF_X:	/* A &= X */
 				emit_alu_X(AND);
 				break;
-			case BPF_ALU | BPF_AND | BPF_K:	/* A &= K */
+			case BPF_ALU32 | BPF_AND | BPF_K:	/* A &= K */
 				emit_alu_K(AND, K);
 				break;
-			case BPF_ALU | BPF_OR | BPF_X:	/* A |= X */
+			case BPF_ALU32 | BPF_OR | BPF_X:	/* A |= X */
 				emit_alu_X(OR);
 				break;
-			case BPF_ALU | BPF_OR | BPF_K:	/* A |= K */
+			case BPF_ALU32 | BPF_OR | BPF_K:	/* A |= K */
 				emit_alu_K(OR, K);
 				break;
 			case BPF_ANC | SKF_AD_ALU_XOR_X: /* A ^= X; */
-			case BPF_ALU | BPF_XOR | BPF_X:
+			case BPF_ALU32 | BPF_XOR | BPF_X:
 				emit_alu_X(XOR);
 				break;
-			case BPF_ALU | BPF_XOR | BPF_K:	/* A ^= K */
+			case BPF_ALU32 | BPF_XOR | BPF_K:	/* A ^= K */
 				emit_alu_K(XOR, K);
 				break;
-			case BPF_ALU | BPF_LSH | BPF_X:	/* A <<= X */
+			case BPF_ALU32 | BPF_LSH | BPF_X:	/* A <<= X */
 				emit_alu_X(SLL);
 				break;
-			case BPF_ALU | BPF_LSH | BPF_K:	/* A <<= K */
+			case BPF_ALU32 | BPF_LSH | BPF_K:	/* A <<= K */
 				emit_alu_K(SLL, K);
 				break;
-			case BPF_ALU | BPF_RSH | BPF_X:	/* A >>= X */
+			case BPF_ALU32 | BPF_RSH | BPF_X:	/* A >>= X */
 				emit_alu_X(SRL);
 				break;
-			case BPF_ALU | BPF_RSH | BPF_K:	/* A >>= K */
+			case BPF_ALU32 | BPF_RSH | BPF_K:	/* A >>= K */
 				emit_alu_K(SRL, K);
 				break;
-			case BPF_ALU | BPF_MUL | BPF_X:	/* A *= X; */
+			case BPF_ALU32 | BPF_MUL | BPF_X:	/* A *= X; */
 				emit_alu_X(MUL);
 				break;
-			case BPF_ALU | BPF_MUL | BPF_K:	/* A *= K */
+			case BPF_ALU32 | BPF_MUL | BPF_K:	/* A *= K */
 				emit_alu_K(MUL, K);
 				break;
-			case BPF_ALU | BPF_DIV | BPF_K:	/* A /= K with K != 0*/
+			case BPF_ALU32 | BPF_DIV | BPF_K:	/* A /= K with K != 0*/
 				if (K == 1)
 					break;
 				emit_write_y(G0);
@@ -458,7 +458,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
 				emit_nop();
 				emit_alu_K(DIV, K);
 				break;
-			case BPF_ALU | BPF_DIV | BPF_X:	/* A /= X; */
+			case BPF_ALU32 | BPF_DIV | BPF_X:	/* A /= X; */
 				emit_cmpi(r_X, 0);
 				if (pc_ret0 > 0) {
 					t_offset = addrs[pc_ret0 - 1];
@@ -480,7 +480,7 @@ void bpf_jit_compile(struct bpf_prog *fp)
 				emit_nop();
 				emit_alu_X(DIV);
 				break;
-			case BPF_ALU | BPF_NEG:
+			case BPF_ALU32 | BPF_NEG:
 				emit_neg();
 				break;
 			case BPF_RET | BPF_K:
@@ -629,7 +629,7 @@ common_load_ind:		seen |= SEEN_DATAREF | SEEN_XREG;
 			case BPF_LD | BPF_B | BPF_IND:
 				func = bpf_jit_load_byte;
 				goto common_load_ind;
-			case BPF_JMP | BPF_JA:
+			case BPF_JMP64 | BPF_JA:
 				emit_jump(addrs[i + K]);
 				emit_nop();
 				break;
@@ -640,14 +640,14 @@ common_load_ind:		seen |= SEEN_DATAREF | SEEN_XREG;
 		f_op = FOP;		\
 		goto cond_branch
 
-			COND_SEL(BPF_JMP | BPF_JGT | BPF_K, BGU, BLEU);
-			COND_SEL(BPF_JMP | BPF_JGE | BPF_K, BGEU, BLU);
-			COND_SEL(BPF_JMP | BPF_JEQ | BPF_K, BE, BNE);
-			COND_SEL(BPF_JMP | BPF_JSET | BPF_K, BNE, BE);
-			COND_SEL(BPF_JMP | BPF_JGT | BPF_X, BGU, BLEU);
-			COND_SEL(BPF_JMP | BPF_JGE | BPF_X, BGEU, BLU);
-			COND_SEL(BPF_JMP | BPF_JEQ | BPF_X, BE, BNE);
-			COND_SEL(BPF_JMP | BPF_JSET | BPF_X, BNE, BE);
+			COND_SEL(BPF_JMP64 | BPF_JGT | BPF_K, BGU, BLEU);
+			COND_SEL(BPF_JMP64 | BPF_JGE | BPF_K, BGEU, BLU);
+			COND_SEL(BPF_JMP64 | BPF_JEQ | BPF_K, BE, BNE);
+			COND_SEL(BPF_JMP64 | BPF_JSET | BPF_K, BNE, BE);
+			COND_SEL(BPF_JMP64 | BPF_JGT | BPF_X, BGU, BLEU);
+			COND_SEL(BPF_JMP64 | BPF_JGE | BPF_X, BGEU, BLU);
+			COND_SEL(BPF_JMP64 | BPF_JEQ | BPF_X, BE, BNE);
+			COND_SEL(BPF_JMP64 | BPF_JSET | BPF_X, BNE, BE);
 
 cond_branch:			f_offset = addrs[i + filter[i].jf];
 				t_offset = addrs[i + filter[i].jt];
@@ -660,19 +660,19 @@ cond_branch:			f_offset = addrs[i + filter[i].jf];
 				}
 
 				switch (code) {
-				case BPF_JMP | BPF_JGT | BPF_X:
-				case BPF_JMP | BPF_JGE | BPF_X:
-				case BPF_JMP | BPF_JEQ | BPF_X:
+				case BPF_JMP64 | BPF_JGT | BPF_X:
+				case BPF_JMP64 | BPF_JGE | BPF_X:
+				case BPF_JMP64 | BPF_JEQ | BPF_X:
 					seen |= SEEN_XREG;
 					emit_cmp(r_A, r_X);
 					break;
-				case BPF_JMP | BPF_JSET | BPF_X:
+				case BPF_JMP64 | BPF_JSET | BPF_X:
 					seen |= SEEN_XREG;
 					emit_btst(r_A, r_X);
 					break;
-				case BPF_JMP | BPF_JEQ | BPF_K:
-				case BPF_JMP | BPF_JGT | BPF_K:
-				case BPF_JMP | BPF_JGE | BPF_K:
+				case BPF_JMP64 | BPF_JEQ | BPF_K:
+				case BPF_JMP64 | BPF_JGT | BPF_K:
+				case BPF_JMP64 | BPF_JGE | BPF_K:
 					if (is_simm13(K)) {
 						emit_cmpi(r_A, K);
 					} else {
@@ -680,7 +680,7 @@ cond_branch:			f_offset = addrs[i + filter[i].jf];
 						emit_cmp(r_A, r_TMP);
 					}
 					break;
-				case BPF_JMP | BPF_JSET | BPF_K:
+				case BPF_JMP64 | BPF_JSET | BPF_K:
 					if (is_simm13(K)) {
 						emit_btsti(r_A, K);
 					} else {
diff --git a/arch/sparc/net/bpf_jit_comp_64.c b/arch/sparc/net/bpf_jit_comp_64.c
index fa0759b..5de22bc 100644
--- a/arch/sparc/net/bpf_jit_comp_64.c
+++ b/arch/sparc/net/bpf_jit_comp_64.c
@@ -906,7 +906,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 
 	switch (code) {
 	/* dst = src */
-	case BPF_ALU | BPF_MOV | BPF_X:
+	case BPF_ALU32 | BPF_MOV | BPF_X:
 		emit_alu3_K(SRL, src, 0, dst, ctx);
 		if (insn_is_zext(&insn[1]))
 			return 1;
@@ -915,33 +915,33 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		emit_reg_move(src, dst, ctx);
 		break;
 	/* dst = dst OP src */
-	case BPF_ALU | BPF_ADD | BPF_X:
+	case BPF_ALU32 | BPF_ADD | BPF_X:
 	case BPF_ALU64 | BPF_ADD | BPF_X:
 		emit_alu(ADD, src, dst, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_SUB | BPF_X:
+	case BPF_ALU32 | BPF_SUB | BPF_X:
 	case BPF_ALU64 | BPF_SUB | BPF_X:
 		emit_alu(SUB, src, dst, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_AND | BPF_X:
+	case BPF_ALU32 | BPF_AND | BPF_X:
 	case BPF_ALU64 | BPF_AND | BPF_X:
 		emit_alu(AND, src, dst, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_OR | BPF_X:
+	case BPF_ALU32 | BPF_OR | BPF_X:
 	case BPF_ALU64 | BPF_OR | BPF_X:
 		emit_alu(OR, src, dst, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_XOR | BPF_X:
+	case BPF_ALU32 | BPF_XOR | BPF_X:
 	case BPF_ALU64 | BPF_XOR | BPF_X:
 		emit_alu(XOR, src, dst, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_MUL | BPF_X:
+	case BPF_ALU32 | BPF_MUL | BPF_X:
 		emit_alu(MUL, src, dst, ctx);
 		goto do_alu32_trunc;
 	case BPF_ALU64 | BPF_MUL | BPF_X:
 		emit_alu(MULX, src, dst, ctx);
 		break;
-	case BPF_ALU | BPF_DIV | BPF_X:
+	case BPF_ALU32 | BPF_DIV | BPF_X:
 		emit_write_y(G0, ctx);
 		emit_alu(DIV, src, dst, ctx);
 		if (insn_is_zext(&insn[1]))
@@ -950,7 +950,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	case BPF_ALU64 | BPF_DIV | BPF_X:
 		emit_alu(UDIVX, src, dst, ctx);
 		break;
-	case BPF_ALU | BPF_MOD | BPF_X: {
+	case BPF_ALU32 | BPF_MOD | BPF_X: {
 		const u8 tmp = bpf2sparc[TMP_REG_1];
 
 		ctx->tmp_1_used = true;
@@ -971,13 +971,13 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		emit_alu3(SUB, dst, tmp, dst, ctx);
 		break;
 	}
-	case BPF_ALU | BPF_LSH | BPF_X:
+	case BPF_ALU32 | BPF_LSH | BPF_X:
 		emit_alu(SLL, src, dst, ctx);
 		goto do_alu32_trunc;
 	case BPF_ALU64 | BPF_LSH | BPF_X:
 		emit_alu(SLLX, src, dst, ctx);
 		break;
-	case BPF_ALU | BPF_RSH | BPF_X:
+	case BPF_ALU32 | BPF_RSH | BPF_X:
 		emit_alu(SRL, src, dst, ctx);
 		if (insn_is_zext(&insn[1]))
 			return 1;
@@ -985,7 +985,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	case BPF_ALU64 | BPF_RSH | BPF_X:
 		emit_alu(SRLX, src, dst, ctx);
 		break;
-	case BPF_ALU | BPF_ARSH | BPF_X:
+	case BPF_ALU32 | BPF_ARSH | BPF_X:
 		emit_alu(SRA, src, dst, ctx);
 		goto do_alu32_trunc;
 	case BPF_ALU64 | BPF_ARSH | BPF_X:
@@ -993,12 +993,12 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 
 	/* dst = -dst */
-	case BPF_ALU | BPF_NEG:
+	case BPF_ALU32 | BPF_NEG:
 	case BPF_ALU64 | BPF_NEG:
 		emit(SUB | RS1(0) | RS2(dst) | RD(dst), ctx);
 		goto do_alu32_trunc;
 
-	case BPF_ALU | BPF_END | BPF_FROM_BE:
+	case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 		switch (imm) {
 		case 16:
 			emit_alu_K(SLL, dst, 16, ctx);
@@ -1018,7 +1018,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 
 	/* dst = BSWAP##imm(dst) */
-	case BPF_ALU | BPF_END | BPF_FROM_LE: {
+	case BPF_ALU32 | BPF_END | BPF_FROM_LE: {
 		const u8 tmp = bpf2sparc[TMP_REG_1];
 		const u8 tmp2 = bpf2sparc[TMP_REG_2];
 
@@ -1061,7 +1061,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 	}
 	/* dst = imm */
-	case BPF_ALU | BPF_MOV | BPF_K:
+	case BPF_ALU32 | BPF_MOV | BPF_K:
 		emit_loadimm32(imm, dst, ctx);
 		if (insn_is_zext(&insn[1]))
 			return 1;
@@ -1070,33 +1070,33 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		emit_loadimm_sext(imm, dst, ctx);
 		break;
 	/* dst = dst OP imm */
-	case BPF_ALU | BPF_ADD | BPF_K:
+	case BPF_ALU32 | BPF_ADD | BPF_K:
 	case BPF_ALU64 | BPF_ADD | BPF_K:
 		emit_alu_K(ADD, dst, imm, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_SUB | BPF_K:
+	case BPF_ALU32 | BPF_SUB | BPF_K:
 	case BPF_ALU64 | BPF_SUB | BPF_K:
 		emit_alu_K(SUB, dst, imm, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_AND | BPF_K:
+	case BPF_ALU32 | BPF_AND | BPF_K:
 	case BPF_ALU64 | BPF_AND | BPF_K:
 		emit_alu_K(AND, dst, imm, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_OR | BPF_K:
+	case BPF_ALU32 | BPF_OR | BPF_K:
 	case BPF_ALU64 | BPF_OR | BPF_K:
 		emit_alu_K(OR, dst, imm, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_XOR | BPF_K:
+	case BPF_ALU32 | BPF_XOR | BPF_K:
 	case BPF_ALU64 | BPF_XOR | BPF_K:
 		emit_alu_K(XOR, dst, imm, ctx);
 		goto do_alu32_trunc;
-	case BPF_ALU | BPF_MUL | BPF_K:
+	case BPF_ALU32 | BPF_MUL | BPF_K:
 		emit_alu_K(MUL, dst, imm, ctx);
 		goto do_alu32_trunc;
 	case BPF_ALU64 | BPF_MUL | BPF_K:
 		emit_alu_K(MULX, dst, imm, ctx);
 		break;
-	case BPF_ALU | BPF_DIV | BPF_K:
+	case BPF_ALU32 | BPF_DIV | BPF_K:
 		if (imm == 0)
 			return -EINVAL;
 
@@ -1110,7 +1110,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		emit_alu_K(UDIVX, dst, imm, ctx);
 		break;
 	case BPF_ALU64 | BPF_MOD | BPF_K:
-	case BPF_ALU | BPF_MOD | BPF_K: {
+	case BPF_ALU32 | BPF_MOD | BPF_K: {
 		const u8 tmp = bpf2sparc[TMP_REG_2];
 		unsigned int div;
 
@@ -1139,13 +1139,13 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		}
 		goto do_alu32_trunc;
 	}
-	case BPF_ALU | BPF_LSH | BPF_K:
+	case BPF_ALU32 | BPF_LSH | BPF_K:
 		emit_alu_K(SLL, dst, imm, ctx);
 		goto do_alu32_trunc;
 	case BPF_ALU64 | BPF_LSH | BPF_K:
 		emit_alu_K(SLLX, dst, imm, ctx);
 		break;
-	case BPF_ALU | BPF_RSH | BPF_K:
+	case BPF_ALU32 | BPF_RSH | BPF_K:
 		emit_alu_K(SRL, dst, imm, ctx);
 		if (insn_is_zext(&insn[1]))
 			return 1;
@@ -1153,7 +1153,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	case BPF_ALU64 | BPF_RSH | BPF_K:
 		emit_alu_K(SRLX, dst, imm, ctx);
 		break;
-	case BPF_ALU | BPF_ARSH | BPF_K:
+	case BPF_ALU32 | BPF_ARSH | BPF_K:
 		emit_alu_K(SRA, dst, imm, ctx);
 		goto do_alu32_trunc;
 	case BPF_ALU64 | BPF_ARSH | BPF_K:
@@ -1161,28 +1161,28 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 
 	do_alu32_trunc:
-		if (BPF_CLASS(code) == BPF_ALU &&
+		if (BPF_CLASS(code) == BPF_ALU32 &&
 		    !ctx->prog->aux->verifier_zext)
 			emit_alu_K(SRL, dst, 0, ctx);
 		break;
 
 	/* JUMP off */
-	case BPF_JMP | BPF_JA:
+	case BPF_JMP64 | BPF_JA:
 		emit_branch(BA, ctx->idx, ctx->offset[i + off], ctx);
 		emit_nop(ctx);
 		break;
 	/* IF (dst COND src) JUMP off */
-	case BPF_JMP | BPF_JEQ | BPF_X:
-	case BPF_JMP | BPF_JGT | BPF_X:
-	case BPF_JMP | BPF_JLT | BPF_X:
-	case BPF_JMP | BPF_JGE | BPF_X:
-	case BPF_JMP | BPF_JLE | BPF_X:
-	case BPF_JMP | BPF_JNE | BPF_X:
-	case BPF_JMP | BPF_JSGT | BPF_X:
-	case BPF_JMP | BPF_JSLT | BPF_X:
-	case BPF_JMP | BPF_JSGE | BPF_X:
-	case BPF_JMP | BPF_JSLE | BPF_X:
-	case BPF_JMP | BPF_JSET | BPF_X: {
+	case BPF_JMP64 | BPF_JEQ | BPF_X:
+	case BPF_JMP64 | BPF_JGT | BPF_X:
+	case BPF_JMP64 | BPF_JLT | BPF_X:
+	case BPF_JMP64 | BPF_JGE | BPF_X:
+	case BPF_JMP64 | BPF_JLE | BPF_X:
+	case BPF_JMP64 | BPF_JNE | BPF_X:
+	case BPF_JMP64 | BPF_JSGT | BPF_X:
+	case BPF_JMP64 | BPF_JSLT | BPF_X:
+	case BPF_JMP64 | BPF_JSGE | BPF_X:
+	case BPF_JMP64 | BPF_JSLE | BPF_X:
+	case BPF_JMP64 | BPF_JSET | BPF_X: {
 		int err;
 
 		err = emit_compare_and_branch(code, dst, src, 0, false, i + off, ctx);
@@ -1191,17 +1191,17 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 		break;
 	}
 	/* IF (dst COND imm) JUMP off */
-	case BPF_JMP | BPF_JEQ | BPF_K:
-	case BPF_JMP | BPF_JGT | BPF_K:
-	case BPF_JMP | BPF_JLT | BPF_K:
-	case BPF_JMP | BPF_JGE | BPF_K:
-	case BPF_JMP | BPF_JLE | BPF_K:
-	case BPF_JMP | BPF_JNE | BPF_K:
-	case BPF_JMP | BPF_JSGT | BPF_K:
-	case BPF_JMP | BPF_JSLT | BPF_K:
-	case BPF_JMP | BPF_JSGE | BPF_K:
-	case BPF_JMP | BPF_JSLE | BPF_K:
-	case BPF_JMP | BPF_JSET | BPF_K: {
+	case BPF_JMP64 | BPF_JEQ | BPF_K:
+	case BPF_JMP64 | BPF_JGT | BPF_K:
+	case BPF_JMP64 | BPF_JLT | BPF_K:
+	case BPF_JMP64 | BPF_JGE | BPF_K:
+	case BPF_JMP64 | BPF_JLE | BPF_K:
+	case BPF_JMP64 | BPF_JNE | BPF_K:
+	case BPF_JMP64 | BPF_JSGT | BPF_K:
+	case BPF_JMP64 | BPF_JSLT | BPF_K:
+	case BPF_JMP64 | BPF_JSGE | BPF_K:
+	case BPF_JMP64 | BPF_JSLE | BPF_K:
+	case BPF_JMP64 | BPF_JSET | BPF_K: {
 		int err;
 
 		err = emit_compare_and_branch(code, dst, 0, imm, true, i + off, ctx);
@@ -1211,7 +1211,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	}
 
 	/* function call */
-	case BPF_JMP | BPF_CALL:
+	case BPF_JMP64 | BPF_CALL:
 	{
 		u8 *func = ((u8 *)__bpf_call_base) + imm;
 
@@ -1225,12 +1225,12 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
 	}
 
 	/* tail call */
-	case BPF_JMP | BPF_TAIL_CALL:
+	case BPF_JMP64 | BPF_TAIL_CALL:
 		emit_tail_call(ctx);
 		break;
 
 	/* function return */
-	case BPF_JMP | BPF_EXIT:
+	case BPF_JMP64 | BPF_EXIT:
 		/* Optimization: when last instruction is EXIT,
 		   simply fallthrough to epilogue. */
 		if (i == ctx->prog->len - 1)
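
In the x86-64 JIT below, jump width is still selected by the instruction
class, so BPF_CLASS(insn->code) == BPF_JMP64 picks the 64-bit compare
exactly as the old BPF_JMP test did, while BPF_JMP32 keeps selecting the
32-bit form. As a reminder of how an opcode splits into its fields, the
existing helpers from include/uapi/linux/bpf_common.h are shown here
(BPF_JMP64 is assumed to carry the same 0x05 class value as BPF_JMP):

	#define BPF_CLASS(code) ((code) & 0x07)	/* instruction class */
	#define BPF_OP(code)    ((code) & 0xf0)	/* ALU/branch operation */
	#define BPF_SRC(code)   ((code) & 0x08)	/* BPF_K (imm) or BPF_X (reg) */

	/* Example: BPF_JMP64 | BPF_JGE | BPF_X is 0x05 | 0x30 | 0x08 = 0x3d,
	 * i.e. class 0x05 (64-bit jump), op 0x30 (JGE), src 0x08 (register).
	 */
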
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 1056bbf..273c9e9 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -870,7 +870,7 @@ static void detect_reg_usage(struct bpf_insn *insn, int insn_cnt,
 	int i;
 
 	for (i = 1; i <= insn_cnt; i++, insn++) {
-		if (insn->code == (BPF_JMP | BPF_TAIL_CALL))
+		if (insn->code == (BPF_JMP64 | BPF_TAIL_CALL))
 			*tail_call_seen = true;
 		if (insn->dst_reg == BPF_REG_6 || insn->src_reg == BPF_REG_6)
 			regs_used[0] = true;
@@ -1010,11 +1010,11 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 
 		switch (insn->code) {
 			/* ALU */
-		case BPF_ALU | BPF_ADD | BPF_X:
-		case BPF_ALU | BPF_SUB | BPF_X:
-		case BPF_ALU | BPF_AND | BPF_X:
-		case BPF_ALU | BPF_OR | BPF_X:
-		case BPF_ALU | BPF_XOR | BPF_X:
+		case BPF_ALU32 | BPF_ADD | BPF_X:
+		case BPF_ALU32 | BPF_SUB | BPF_X:
+		case BPF_ALU32 | BPF_AND | BPF_X:
+		case BPF_ALU32 | BPF_OR | BPF_X:
+		case BPF_ALU32 | BPF_XOR | BPF_X:
 		case BPF_ALU64 | BPF_ADD | BPF_X:
 		case BPF_ALU64 | BPF_SUB | BPF_X:
 		case BPF_ALU64 | BPF_AND | BPF_X:
@@ -1027,25 +1027,25 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			break;
 
 		case BPF_ALU64 | BPF_MOV | BPF_X:
-		case BPF_ALU | BPF_MOV | BPF_X:
+		case BPF_ALU32 | BPF_MOV | BPF_X:
 			emit_mov_reg(&prog,
 				     BPF_CLASS(insn->code) == BPF_ALU64,
 				     dst_reg, src_reg);
 			break;
 
 			/* neg dst */
-		case BPF_ALU | BPF_NEG:
+		case BPF_ALU32 | BPF_NEG:
 		case BPF_ALU64 | BPF_NEG:
 			maybe_emit_1mod(&prog, dst_reg,
 					BPF_CLASS(insn->code) == BPF_ALU64);
 			EMIT2(0xF7, add_1reg(0xD8, dst_reg));
 			break;
 
-		case BPF_ALU | BPF_ADD | BPF_K:
-		case BPF_ALU | BPF_SUB | BPF_K:
-		case BPF_ALU | BPF_AND | BPF_K:
-		case BPF_ALU | BPF_OR | BPF_K:
-		case BPF_ALU | BPF_XOR | BPF_K:
+		case BPF_ALU32 | BPF_ADD | BPF_K:
+		case BPF_ALU32 | BPF_SUB | BPF_K:
+		case BPF_ALU32 | BPF_AND | BPF_K:
+		case BPF_ALU32 | BPF_OR | BPF_K:
+		case BPF_ALU32 | BPF_XOR | BPF_K:
 		case BPF_ALU64 | BPF_ADD | BPF_K:
 		case BPF_ALU64 | BPF_SUB | BPF_K:
 		case BPF_ALU64 | BPF_AND | BPF_K:
@@ -1090,7 +1090,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			break;
 
 		case BPF_ALU64 | BPF_MOV | BPF_K:
-		case BPF_ALU | BPF_MOV | BPF_K:
+		case BPF_ALU32 | BPF_MOV | BPF_K:
 			emit_mov_imm32(&prog, BPF_CLASS(insn->code) == BPF_ALU64,
 				       dst_reg, imm32);
 			break;
@@ -1102,10 +1102,10 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			break;
 
 			/* dst %= src, dst /= src, dst %= imm32, dst /= imm32 */
-		case BPF_ALU | BPF_MOD | BPF_X:
-		case BPF_ALU | BPF_DIV | BPF_X:
-		case BPF_ALU | BPF_MOD | BPF_K:
-		case BPF_ALU | BPF_DIV | BPF_K:
+		case BPF_ALU32 | BPF_MOD | BPF_X:
+		case BPF_ALU32 | BPF_DIV | BPF_X:
+		case BPF_ALU32 | BPF_MOD | BPF_K:
+		case BPF_ALU32 | BPF_DIV | BPF_K:
 		case BPF_ALU64 | BPF_MOD | BPF_X:
 		case BPF_ALU64 | BPF_DIV | BPF_X:
 		case BPF_ALU64 | BPF_MOD | BPF_K:
@@ -1160,7 +1160,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			break;
 		}
 
-		case BPF_ALU | BPF_MUL | BPF_K:
+		case BPF_ALU32 | BPF_MUL | BPF_K:
 		case BPF_ALU64 | BPF_MUL | BPF_K:
 			maybe_emit_mod(&prog, dst_reg, dst_reg,
 				       BPF_CLASS(insn->code) == BPF_ALU64);
@@ -1176,7 +1176,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 					    imm32);
 			break;
 
-		case BPF_ALU | BPF_MUL | BPF_X:
+		case BPF_ALU32 | BPF_MUL | BPF_X:
 		case BPF_ALU64 | BPF_MUL | BPF_X:
 			maybe_emit_mod(&prog, src_reg, dst_reg,
 				       BPF_CLASS(insn->code) == BPF_ALU64);
@@ -1186,9 +1186,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			break;
 
 			/* Shifts */
-		case BPF_ALU | BPF_LSH | BPF_K:
-		case BPF_ALU | BPF_RSH | BPF_K:
-		case BPF_ALU | BPF_ARSH | BPF_K:
+		case BPF_ALU32 | BPF_LSH | BPF_K:
+		case BPF_ALU32 | BPF_RSH | BPF_K:
+		case BPF_ALU32 | BPF_ARSH | BPF_K:
 		case BPF_ALU64 | BPF_LSH | BPF_K:
 		case BPF_ALU64 | BPF_RSH | BPF_K:
 		case BPF_ALU64 | BPF_ARSH | BPF_K:
@@ -1202,9 +1202,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 				EMIT3(0xC1, add_1reg(b3, dst_reg), imm32);
 			break;
 
-		case BPF_ALU | BPF_LSH | BPF_X:
-		case BPF_ALU | BPF_RSH | BPF_X:
-		case BPF_ALU | BPF_ARSH | BPF_X:
+		case BPF_ALU32 | BPF_LSH | BPF_X:
+		case BPF_ALU32 | BPF_RSH | BPF_X:
+		case BPF_ALU32 | BPF_ARSH | BPF_X:
 		case BPF_ALU64 | BPF_LSH | BPF_X:
 		case BPF_ALU64 | BPF_RSH | BPF_X:
 		case BPF_ALU64 | BPF_ARSH | BPF_X:
@@ -1261,7 +1261,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 
 			break;
 
-		case BPF_ALU | BPF_END | BPF_FROM_BE:
+		case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 			switch (imm32) {
 			case 16:
 				/* Emit 'ror %ax, 8' to swap lower 2 bytes */
@@ -1293,7 +1293,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
 			}
 			break;
 
-		case BPF_ALU | BPF_END | BPF_FROM_LE:
+		case BPF_ALU32 | BPF_END | BPF_FROM_LE:
 			switch (imm32) {
 			case 16:
 				/*
@@ -1533,7 +1533,7 @@ st:			if (is_imm8(insn->off))
 			break;
 
 			/* call */
-		case BPF_JMP | BPF_CALL: {
+		case BPF_JMP64 | BPF_CALL: {
 			int offs;
 
 			func = (u8 *) __bpf_call_base + imm32;
@@ -1554,7 +1554,7 @@ st:			if (is_imm8(insn->off))
 			break;
 		}
 
-		case BPF_JMP | BPF_TAIL_CALL:
+		case BPF_JMP64 | BPF_TAIL_CALL:
 			if (imm32)
 				emit_bpf_tail_call_direct(&bpf_prog->aux->poke_tab[imm32 - 1],
 							  &prog, image + addrs[i - 1],
@@ -1570,16 +1570,16 @@ st:			if (is_imm8(insn->off))
 			break;
 
 			/* cond jump */
-		case BPF_JMP | BPF_JEQ | BPF_X:
-		case BPF_JMP | BPF_JNE | BPF_X:
-		case BPF_JMP | BPF_JGT | BPF_X:
-		case BPF_JMP | BPF_JLT | BPF_X:
-		case BPF_JMP | BPF_JGE | BPF_X:
-		case BPF_JMP | BPF_JLE | BPF_X:
-		case BPF_JMP | BPF_JSGT | BPF_X:
-		case BPF_JMP | BPF_JSLT | BPF_X:
-		case BPF_JMP | BPF_JSGE | BPF_X:
-		case BPF_JMP | BPF_JSLE | BPF_X:
+		case BPF_JMP64 | BPF_JEQ | BPF_X:
+		case BPF_JMP64 | BPF_JNE | BPF_X:
+		case BPF_JMP64 | BPF_JGT | BPF_X:
+		case BPF_JMP64 | BPF_JLT | BPF_X:
+		case BPF_JMP64 | BPF_JGE | BPF_X:
+		case BPF_JMP64 | BPF_JLE | BPF_X:
+		case BPF_JMP64 | BPF_JSGT | BPF_X:
+		case BPF_JMP64 | BPF_JSLT | BPF_X:
+		case BPF_JMP64 | BPF_JSGE | BPF_X:
+		case BPF_JMP64 | BPF_JSLE | BPF_X:
 		case BPF_JMP32 | BPF_JEQ | BPF_X:
 		case BPF_JMP32 | BPF_JNE | BPF_X:
 		case BPF_JMP32 | BPF_JGT | BPF_X:
@@ -1592,36 +1592,36 @@ st:			if (is_imm8(insn->off))
 		case BPF_JMP32 | BPF_JSLE | BPF_X:
 			/* cmp dst_reg, src_reg */
 			maybe_emit_mod(&prog, dst_reg, src_reg,
-				       BPF_CLASS(insn->code) == BPF_JMP);
+				       BPF_CLASS(insn->code) == BPF_JMP64);
 			EMIT2(0x39, add_2reg(0xC0, dst_reg, src_reg));
 			goto emit_cond_jmp;
 
-		case BPF_JMP | BPF_JSET | BPF_X:
+		case BPF_JMP64 | BPF_JSET | BPF_X:
 		case BPF_JMP32 | BPF_JSET | BPF_X:
 			/* test dst_reg, src_reg */
 			maybe_emit_mod(&prog, dst_reg, src_reg,
-				       BPF_CLASS(insn->code) == BPF_JMP);
+				       BPF_CLASS(insn->code) == BPF_JMP64);
 			EMIT2(0x85, add_2reg(0xC0, dst_reg, src_reg));
 			goto emit_cond_jmp;
 
-		case BPF_JMP | BPF_JSET | BPF_K:
+		case BPF_JMP64 | BPF_JSET | BPF_K:
 		case BPF_JMP32 | BPF_JSET | BPF_K:
 			/* test dst_reg, imm32 */
 			maybe_emit_1mod(&prog, dst_reg,
-					BPF_CLASS(insn->code) == BPF_JMP);
+					BPF_CLASS(insn->code) == BPF_JMP64);
 			EMIT2_off32(0xF7, add_1reg(0xC0, dst_reg), imm32);
 			goto emit_cond_jmp;
 
-		case BPF_JMP | BPF_JEQ | BPF_K:
-		case BPF_JMP | BPF_JNE | BPF_K:
-		case BPF_JMP | BPF_JGT | BPF_K:
-		case BPF_JMP | BPF_JLT | BPF_K:
-		case BPF_JMP | BPF_JGE | BPF_K:
-		case BPF_JMP | BPF_JLE | BPF_K:
-		case BPF_JMP | BPF_JSGT | BPF_K:
-		case BPF_JMP | BPF_JSLT | BPF_K:
-		case BPF_JMP | BPF_JSGE | BPF_K:
-		case BPF_JMP | BPF_JSLE | BPF_K:
+		case BPF_JMP64 | BPF_JEQ | BPF_K:
+		case BPF_JMP64 | BPF_JNE | BPF_K:
+		case BPF_JMP64 | BPF_JGT | BPF_K:
+		case BPF_JMP64 | BPF_JLT | BPF_K:
+		case BPF_JMP64 | BPF_JGE | BPF_K:
+		case BPF_JMP64 | BPF_JLE | BPF_K:
+		case BPF_JMP64 | BPF_JSGT | BPF_K:
+		case BPF_JMP64 | BPF_JSLT | BPF_K:
+		case BPF_JMP64 | BPF_JSGE | BPF_K:
+		case BPF_JMP64 | BPF_JSLE | BPF_K:
 		case BPF_JMP32 | BPF_JEQ | BPF_K:
 		case BPF_JMP32 | BPF_JNE | BPF_K:
 		case BPF_JMP32 | BPF_JGT | BPF_K:
@@ -1635,14 +1635,14 @@ st:			if (is_imm8(insn->off))
 			/* test dst_reg, dst_reg to save one extra byte */
 			if (imm32 == 0) {
 				maybe_emit_mod(&prog, dst_reg, dst_reg,
-					       BPF_CLASS(insn->code) == BPF_JMP);
+					       BPF_CLASS(insn->code) == BPF_JMP64);
 				EMIT2(0x85, add_2reg(0xC0, dst_reg, dst_reg));
 				goto emit_cond_jmp;
 			}
 
 			/* cmp dst_reg, imm8/32 */
 			maybe_emit_1mod(&prog, dst_reg,
-					BPF_CLASS(insn->code) == BPF_JMP);
+					BPF_CLASS(insn->code) == BPF_JMP64);
 
 			if (is_imm8(imm32))
 				EMIT3(0x83, add_1reg(0xF8, dst_reg), imm32);
@@ -1729,7 +1729,7 @@ st:			if (is_imm8(insn->off))
 
 			break;
 
-		case BPF_JMP | BPF_JA:
+		case BPF_JMP64 | BPF_JA:
 			if (insn->off == -1)
 				/* -1 jmp instructions will always jump
 				 * backwards two bytes. Explicitly handling
@@ -1799,7 +1799,7 @@ st:			if (is_imm8(insn->off))
 			}
 			break;
 
-		case BPF_JMP | BPF_EXIT:
+		case BPF_JMP64 | BPF_EXIT:
 			if (seen_exit) {
 				jmp_offset = ctx->cleanup_addr - addrs[i];
 				goto emit_jmp;
diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
index 429a89c..580ecd7 100644
--- a/arch/x86/net/bpf_jit_comp32.c
+++ b/arch/x86/net/bpf_jit_comp32.c
@@ -1558,7 +1558,7 @@ static u8 get_cond_jmp_opcode(const u8 op, bool is_cmp_lo)
  * the jmp_offset relative to the jit-insn address immediately
  * following the call (0xE8) instruction.  At this point, it knows
  * the end of the jit-insn address after completely translated the
- * current (BPF_JMP | BPF_CALL) bpf-insn.  It is passed as "end_addr"
+ * current (BPF_JMP64 | BPF_CALL) bpf-insn.  It is passed as "end_addr"
  * to the emit_kfunc_call().  Thus, it can learn the "immediate-follow-call"
  * address by figuring out how many jit-insn is generated between
  * the call (0xE8) and the end_addr:
@@ -1686,8 +1686,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		switch (code) {
 		/* ALU operations */
 		/* dst = src */
-		case BPF_ALU | BPF_MOV | BPF_K:
-		case BPF_ALU | BPF_MOV | BPF_X:
+		case BPF_ALU32 | BPF_MOV | BPF_K:
+		case BPF_ALU32 | BPF_MOV | BPF_X:
 		case BPF_ALU64 | BPF_MOV | BPF_K:
 		case BPF_ALU64 | BPF_MOV | BPF_X:
 			switch (BPF_SRC(code)) {
@@ -1715,16 +1715,16 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		/* dst = dst * src/imm */
 		/* dst = dst << src */
 		/* dst = dst >> src */
-		case BPF_ALU | BPF_ADD | BPF_K:
-		case BPF_ALU | BPF_ADD | BPF_X:
-		case BPF_ALU | BPF_SUB | BPF_K:
-		case BPF_ALU | BPF_SUB | BPF_X:
-		case BPF_ALU | BPF_OR | BPF_K:
-		case BPF_ALU | BPF_OR | BPF_X:
-		case BPF_ALU | BPF_AND | BPF_K:
-		case BPF_ALU | BPF_AND | BPF_X:
-		case BPF_ALU | BPF_XOR | BPF_K:
-		case BPF_ALU | BPF_XOR | BPF_X:
+		case BPF_ALU32 | BPF_ADD | BPF_K:
+		case BPF_ALU32 | BPF_ADD | BPF_X:
+		case BPF_ALU32 | BPF_SUB | BPF_K:
+		case BPF_ALU32 | BPF_SUB | BPF_X:
+		case BPF_ALU32 | BPF_OR | BPF_K:
+		case BPF_ALU32 | BPF_OR | BPF_X:
+		case BPF_ALU32 | BPF_AND | BPF_K:
+		case BPF_ALU32 | BPF_AND | BPF_X:
+		case BPF_ALU32 | BPF_XOR | BPF_K:
+		case BPF_ALU32 | BPF_XOR | BPF_X:
 		case BPF_ALU64 | BPF_ADD | BPF_K:
 		case BPF_ALU64 | BPF_ADD | BPF_X:
 		case BPF_ALU64 | BPF_SUB | BPF_K:
@@ -1748,8 +1748,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 				break;
 			}
 			break;
-		case BPF_ALU | BPF_MUL | BPF_K:
-		case BPF_ALU | BPF_MUL | BPF_X:
+		case BPF_ALU32 | BPF_MUL | BPF_K:
+		case BPF_ALU32 | BPF_MUL | BPF_X:
 			switch (BPF_SRC(code)) {
 			case BPF_X:
 				emit_ia32_mul_r(dst_lo, src_lo, dstk,
@@ -1766,10 +1766,10 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			if (!bpf_prog->aux->verifier_zext)
 				emit_ia32_mov_i(dst_hi, 0, dstk, &prog);
 			break;
-		case BPF_ALU | BPF_LSH | BPF_X:
-		case BPF_ALU | BPF_RSH | BPF_X:
-		case BPF_ALU | BPF_ARSH | BPF_K:
-		case BPF_ALU | BPF_ARSH | BPF_X:
+		case BPF_ALU32 | BPF_LSH | BPF_X:
+		case BPF_ALU32 | BPF_RSH | BPF_X:
+		case BPF_ALU32 | BPF_ARSH | BPF_K:
+		case BPF_ALU32 | BPF_ARSH | BPF_X:
 			switch (BPF_SRC(code)) {
 			case BPF_X:
 				emit_ia32_shift_r(BPF_OP(code), dst_lo, src_lo,
@@ -1789,10 +1789,10 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			break;
 		/* dst = dst / src(imm) */
 		/* dst = dst % src(imm) */
-		case BPF_ALU | BPF_DIV | BPF_K:
-		case BPF_ALU | BPF_DIV | BPF_X:
-		case BPF_ALU | BPF_MOD | BPF_K:
-		case BPF_ALU | BPF_MOD | BPF_X:
+		case BPF_ALU32 | BPF_DIV | BPF_K:
+		case BPF_ALU32 | BPF_DIV | BPF_X:
+		case BPF_ALU32 | BPF_MOD | BPF_K:
+		case BPF_ALU32 | BPF_MOD | BPF_X:
 			switch (BPF_SRC(code)) {
 			case BPF_X:
 				emit_ia32_div_mod_r(BPF_OP(code), dst_lo,
@@ -1817,8 +1817,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			goto notyet;
 		/* dst = dst >> imm */
 		/* dst = dst << imm */
-		case BPF_ALU | BPF_RSH | BPF_K:
-		case BPF_ALU | BPF_LSH | BPF_K:
+		case BPF_ALU32 | BPF_RSH | BPF_K:
+		case BPF_ALU32 | BPF_LSH | BPF_K:
 			if (unlikely(imm32 > 31))
 				return -EINVAL;
 			/* mov ecx,imm32*/
@@ -1859,7 +1859,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			emit_ia32_arsh_i64(dst, imm32, dstk, &prog);
 			break;
 		/* dst = ~dst */
-		case BPF_ALU | BPF_NEG:
+		case BPF_ALU32 | BPF_NEG:
 			emit_ia32_alu_i(is64, false, BPF_OP(code),
 					dst_lo, 0, dstk, &prog);
 			if (!bpf_prog->aux->verifier_zext)
@@ -1882,12 +1882,12 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			}
 			break;
 		/* dst = htole(dst) */
-		case BPF_ALU | BPF_END | BPF_FROM_LE:
+		case BPF_ALU32 | BPF_END | BPF_FROM_LE:
 			emit_ia32_to_le_r64(dst, imm32, dstk, &prog,
 					    bpf_prog->aux);
 			break;
 		/* dst = htobe(dst) */
-		case BPF_ALU | BPF_END | BPF_FROM_BE:
+		case BPF_ALU32 | BPF_END | BPF_FROM_BE:
 			emit_ia32_to_be_r64(dst, imm32, dstk, &prog,
 					    bpf_prog->aux);
 			break;
@@ -2080,7 +2080,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			}
 			break;
 		/* call */
-		case BPF_JMP | BPF_CALL:
+		case BPF_JMP64 | BPF_CALL:
 		{
 			const u8 *r1 = bpf2ia32[BPF_REG_1];
 			const u8 *r2 = bpf2ia32[BPF_REG_2];
@@ -2137,17 +2137,17 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			EMIT3(0x83, add_1reg(0xC0, IA32_ESP), 32);
 			break;
 		}
-		case BPF_JMP | BPF_TAIL_CALL:
+		case BPF_JMP64 | BPF_TAIL_CALL:
 			emit_bpf_tail_call(&prog, image + addrs[i - 1]);
 			break;
 
 		/* cond jump */
-		case BPF_JMP | BPF_JEQ | BPF_X:
-		case BPF_JMP | BPF_JNE | BPF_X:
-		case BPF_JMP | BPF_JGT | BPF_X:
-		case BPF_JMP | BPF_JLT | BPF_X:
-		case BPF_JMP | BPF_JGE | BPF_X:
-		case BPF_JMP | BPF_JLE | BPF_X:
+		case BPF_JMP64 | BPF_JEQ | BPF_X:
+		case BPF_JMP64 | BPF_JNE | BPF_X:
+		case BPF_JMP64 | BPF_JGT | BPF_X:
+		case BPF_JMP64 | BPF_JLT | BPF_X:
+		case BPF_JMP64 | BPF_JGE | BPF_X:
+		case BPF_JMP64 | BPF_JLE | BPF_X:
 		case BPF_JMP32 | BPF_JEQ | BPF_X:
 		case BPF_JMP32 | BPF_JNE | BPF_X:
 		case BPF_JMP32 | BPF_JGT | BPF_X:
@@ -2158,7 +2158,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		case BPF_JMP32 | BPF_JSLE | BPF_X:
 		case BPF_JMP32 | BPF_JSLT | BPF_X:
 		case BPF_JMP32 | BPF_JSGE | BPF_X: {
-			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
+			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP64;
 			u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
 			u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
 			u8 sreg_lo = sstk ? IA32_ECX : src_lo;
@@ -2193,10 +2193,10 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			EMIT2(0x39, add_2reg(0xC0, dreg_lo, sreg_lo));
 			goto emit_cond_jmp;
 		}
-		case BPF_JMP | BPF_JSGT | BPF_X:
-		case BPF_JMP | BPF_JSLE | BPF_X:
-		case BPF_JMP | BPF_JSLT | BPF_X:
-		case BPF_JMP | BPF_JSGE | BPF_X: {
+		case BPF_JMP64 | BPF_JSGT | BPF_X:
+		case BPF_JMP64 | BPF_JSLE | BPF_X:
+		case BPF_JMP64 | BPF_JSLT | BPF_X:
+		case BPF_JMP64 | BPF_JSGE | BPF_X: {
 			u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
 			u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
 			u8 sreg_lo = sstk ? IA32_ECX : src_lo;
@@ -2227,9 +2227,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			EMIT2(0x39, add_2reg(0xC0, dreg_lo, sreg_lo));
 			goto emit_cond_jmp_signed;
 		}
-		case BPF_JMP | BPF_JSET | BPF_X:
+		case BPF_JMP64 | BPF_JSET | BPF_X:
 		case BPF_JMP32 | BPF_JSET | BPF_X: {
-			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
+			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP64;
 			u8 dreg_lo = IA32_EAX;
 			u8 dreg_hi = IA32_EDX;
 			u8 sreg_lo = sstk ? IA32_ECX : src_lo;
@@ -2271,9 +2271,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			}
 			goto emit_cond_jmp;
 		}
-		case BPF_JMP | BPF_JSET | BPF_K:
+		case BPF_JMP64 | BPF_JSET | BPF_K:
 		case BPF_JMP32 | BPF_JSET | BPF_K: {
-			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
+			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP64;
 			u8 dreg_lo = IA32_EAX;
 			u8 dreg_hi = IA32_EDX;
 			u8 sreg_lo = IA32_ECX;
@@ -2313,12 +2313,12 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 			}
 			goto emit_cond_jmp;
 		}
-		case BPF_JMP | BPF_JEQ | BPF_K:
-		case BPF_JMP | BPF_JNE | BPF_K:
-		case BPF_JMP | BPF_JGT | BPF_K:
-		case BPF_JMP | BPF_JLT | BPF_K:
-		case BPF_JMP | BPF_JGE | BPF_K:
-		case BPF_JMP | BPF_JLE | BPF_K:
+		case BPF_JMP64 | BPF_JEQ | BPF_K:
+		case BPF_JMP64 | BPF_JNE | BPF_K:
+		case BPF_JMP64 | BPF_JGT | BPF_K:
+		case BPF_JMP64 | BPF_JLT | BPF_K:
+		case BPF_JMP64 | BPF_JGE | BPF_K:
+		case BPF_JMP64 | BPF_JLE | BPF_K:
 		case BPF_JMP32 | BPF_JEQ | BPF_K:
 		case BPF_JMP32 | BPF_JNE | BPF_K:
 		case BPF_JMP32 | BPF_JGT | BPF_K:
@@ -2329,7 +2329,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
 		case BPF_JMP32 | BPF_JSLE | BPF_K:
 		case BPF_JMP32 | BPF_JSLT | BPF_K:
 		case BPF_JMP32 | BPF_JSGE | BPF_K: {
-			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
+			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP64;
 			u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
 			u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
 			u8 sreg_lo = IA32_ECX;
@@ -2373,10 +2373,10 @@ emit_cond_jmp:		jmp_cond = get_cond_jmp_opcode(BPF_OP(code), false);
 			}
 			break;
 		}
-		case BPF_JMP | BPF_JSGT | BPF_K:
-		case BPF_JMP | BPF_JSLE | BPF_K:
-		case BPF_JMP | BPF_JSLT | BPF_K:
-		case BPF_JMP | BPF_JSGE | BPF_K: {
+		case BPF_JMP64 | BPF_JSGT | BPF_K:
+		case BPF_JMP64 | BPF_JSLE | BPF_K:
+		case BPF_JMP64 | BPF_JSLT | BPF_K:
+		case BPF_JMP64 | BPF_JSGE | BPF_K: {
 			u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
 			u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
 			u8 sreg_lo = IA32_ECX;
@@ -2433,7 +2433,7 @@ emit_cond_jmp:		jmp_cond = get_cond_jmp_opcode(BPF_OP(code), false);
 			}
 			break;
 		}
-		case BPF_JMP | BPF_JA:
+		case BPF_JMP64 | BPF_JA:
 			if (insn->off == -1)
 				/* -1 jmp instructions will always jump
 				 * backwards two bytes. Explicitly handling
@@ -2461,7 +2461,7 @@ emit_cond_jmp:		jmp_cond = get_cond_jmp_opcode(BPF_OP(code), false);
 		case BPF_STX | BPF_ATOMIC | BPF_W:
 		case BPF_STX | BPF_ATOMIC | BPF_DW:
 			goto notyet;
-		case BPF_JMP | BPF_EXIT:
+		case BPF_JMP64 | BPF_EXIT:
 			if (seen_exit) {
 				jmp_offset = ctx->cleanup_addr - addrs[i];
 				goto emit_jmp;
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/jit.c b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
index df2ab5c..82345a4 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/jit.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/jit.c
@@ -3442,30 +3442,30 @@ static const instr_cb_t instr_cb[256] = {
 	[BPF_ALU64 | BPF_RSH | BPF_K] =	shr_imm64,
 	[BPF_ALU64 | BPF_ARSH | BPF_X] = ashr_reg64,
 	[BPF_ALU64 | BPF_ARSH | BPF_K] = ashr_imm64,
-	[BPF_ALU | BPF_MOV | BPF_X] =	mov_reg,
-	[BPF_ALU | BPF_MOV | BPF_K] =	mov_imm,
-	[BPF_ALU | BPF_XOR | BPF_X] =	xor_reg,
-	[BPF_ALU | BPF_XOR | BPF_K] =	xor_imm,
-	[BPF_ALU | BPF_AND | BPF_X] =	and_reg,
-	[BPF_ALU | BPF_AND | BPF_K] =	and_imm,
-	[BPF_ALU | BPF_OR | BPF_X] =	or_reg,
-	[BPF_ALU | BPF_OR | BPF_K] =	or_imm,
-	[BPF_ALU | BPF_ADD | BPF_X] =	add_reg,
-	[BPF_ALU | BPF_ADD | BPF_K] =	add_imm,
-	[BPF_ALU | BPF_SUB | BPF_X] =	sub_reg,
-	[BPF_ALU | BPF_SUB | BPF_K] =	sub_imm,
-	[BPF_ALU | BPF_MUL | BPF_X] =	mul_reg,
-	[BPF_ALU | BPF_MUL | BPF_K] =	mul_imm,
-	[BPF_ALU | BPF_DIV | BPF_X] =	div_reg,
-	[BPF_ALU | BPF_DIV | BPF_K] =	div_imm,
-	[BPF_ALU | BPF_NEG] =		neg_reg,
-	[BPF_ALU | BPF_LSH | BPF_X] =	shl_reg,
-	[BPF_ALU | BPF_LSH | BPF_K] =	shl_imm,
-	[BPF_ALU | BPF_RSH | BPF_X] =	shr_reg,
-	[BPF_ALU | BPF_RSH | BPF_K] =	shr_imm,
-	[BPF_ALU | BPF_ARSH | BPF_X] =	ashr_reg,
-	[BPF_ALU | BPF_ARSH | BPF_K] =	ashr_imm,
-	[BPF_ALU | BPF_END | BPF_X] =	end_reg32,
+	[BPF_ALU32 | BPF_MOV | BPF_X] =	mov_reg,
+	[BPF_ALU32 | BPF_MOV | BPF_K] =	mov_imm,
+	[BPF_ALU32 | BPF_XOR | BPF_X] =	xor_reg,
+	[BPF_ALU32 | BPF_XOR | BPF_K] =	xor_imm,
+	[BPF_ALU32 | BPF_AND | BPF_X] =	and_reg,
+	[BPF_ALU32 | BPF_AND | BPF_K] =	and_imm,
+	[BPF_ALU32 | BPF_OR | BPF_X] =	or_reg,
+	[BPF_ALU32 | BPF_OR | BPF_K] =	or_imm,
+	[BPF_ALU32 | BPF_ADD | BPF_X] =	add_reg,
+	[BPF_ALU32 | BPF_ADD | BPF_K] =	add_imm,
+	[BPF_ALU32 | BPF_SUB | BPF_X] =	sub_reg,
+	[BPF_ALU32 | BPF_SUB | BPF_K] =	sub_imm,
+	[BPF_ALU32 | BPF_MUL | BPF_X] =	mul_reg,
+	[BPF_ALU32 | BPF_MUL | BPF_K] =	mul_imm,
+	[BPF_ALU32 | BPF_DIV | BPF_X] =	div_reg,
+	[BPF_ALU32 | BPF_DIV | BPF_K] =	div_imm,
+	[BPF_ALU32 | BPF_NEG] =		neg_reg,
+	[BPF_ALU32 | BPF_LSH | BPF_X] =	shl_reg,
+	[BPF_ALU32 | BPF_LSH | BPF_K] =	shl_imm,
+	[BPF_ALU32 | BPF_RSH | BPF_X] =	shr_reg,
+	[BPF_ALU32 | BPF_RSH | BPF_K] =	shr_imm,
+	[BPF_ALU32 | BPF_ARSH | BPF_X] = ashr_reg,
+	[BPF_ALU32 | BPF_ARSH | BPF_K] = ashr_imm,
+	[BPF_ALU32 | BPF_END | BPF_X] =	end_reg32,
 	[BPF_LD | BPF_IMM | BPF_DW] =	imm_ld8,
 	[BPF_LD | BPF_ABS | BPF_B] =	data_ld1,
 	[BPF_LD | BPF_ABS | BPF_H] =	data_ld2,
@@ -3487,53 +3487,53 @@ static const instr_cb_t instr_cb[256] = {
 	[BPF_ST | BPF_MEM | BPF_H] =	mem_st2,
 	[BPF_ST | BPF_MEM | BPF_W] =	mem_st4,
 	[BPF_ST | BPF_MEM | BPF_DW] =	mem_st8,
-	[BPF_JMP | BPF_JA | BPF_K] =	jump,
-	[BPF_JMP | BPF_JEQ | BPF_K] =	jeq_imm,
-	[BPF_JMP | BPF_JGT | BPF_K] =	cmp_imm,
-	[BPF_JMP | BPF_JGE | BPF_K] =	cmp_imm,
-	[BPF_JMP | BPF_JLT | BPF_K] =	cmp_imm,
-	[BPF_JMP | BPF_JLE | BPF_K] =	cmp_imm,
-	[BPF_JMP | BPF_JSGT | BPF_K] =  cmp_imm,
-	[BPF_JMP | BPF_JSGE | BPF_K] =  cmp_imm,
-	[BPF_JMP | BPF_JSLT | BPF_K] =  cmp_imm,
-	[BPF_JMP | BPF_JSLE | BPF_K] =  cmp_imm,
-	[BPF_JMP | BPF_JSET | BPF_K] =	jset_imm,
-	[BPF_JMP | BPF_JNE | BPF_K] =	jne_imm,
-	[BPF_JMP | BPF_JEQ | BPF_X] =	jeq_reg,
-	[BPF_JMP | BPF_JGT | BPF_X] =	cmp_reg,
-	[BPF_JMP | BPF_JGE | BPF_X] =	cmp_reg,
-	[BPF_JMP | BPF_JLT | BPF_X] =	cmp_reg,
-	[BPF_JMP | BPF_JLE | BPF_X] =	cmp_reg,
-	[BPF_JMP | BPF_JSGT | BPF_X] =  cmp_reg,
-	[BPF_JMP | BPF_JSGE | BPF_X] =  cmp_reg,
-	[BPF_JMP | BPF_JSLT | BPF_X] =  cmp_reg,
-	[BPF_JMP | BPF_JSLE | BPF_X] =  cmp_reg,
-	[BPF_JMP | BPF_JSET | BPF_X] =	jset_reg,
-	[BPF_JMP | BPF_JNE | BPF_X] =	jne_reg,
+	[BPF_JMP64 | BPF_JA | BPF_K] =	jump,
+	[BPF_JMP64 | BPF_JEQ | BPF_K] =	jeq_imm,
+	[BPF_JMP64 | BPF_JGT | BPF_K] =	cmp_imm,
+	[BPF_JMP64 | BPF_JGE | BPF_K] =	cmp_imm,
+	[BPF_JMP64 | BPF_JLT | BPF_K] =	cmp_imm,
+	[BPF_JMP64 | BPF_JLE | BPF_K] =	cmp_imm,
+	[BPF_JMP64 | BPF_JSGT | BPF_K] = cmp_imm,
+	[BPF_JMP64 | BPF_JSGE | BPF_K] = cmp_imm,
+	[BPF_JMP64 | BPF_JSLT | BPF_K] = cmp_imm,
+	[BPF_JMP64 | BPF_JSLE | BPF_K] = cmp_imm,
+	[BPF_JMP64 | BPF_JSET | BPF_K] = jset_imm,
+	[BPF_JMP64 | BPF_JNE | BPF_K] =	jne_imm,
+	[BPF_JMP64 | BPF_JEQ | BPF_X] =	jeq_reg,
+	[BPF_JMP64 | BPF_JGT | BPF_X] =	cmp_reg,
+	[BPF_JMP64 | BPF_JGE | BPF_X] =	cmp_reg,
+	[BPF_JMP64 | BPF_JLT | BPF_X] =	cmp_reg,
+	[BPF_JMP64 | BPF_JLE | BPF_X] =	cmp_reg,
+	[BPF_JMP64 | BPF_JSGT | BPF_X] = cmp_reg,
+	[BPF_JMP64 | BPF_JSGE | BPF_X] = cmp_reg,
+	[BPF_JMP64 | BPF_JSLT | BPF_X] = cmp_reg,
+	[BPF_JMP64 | BPF_JSLE | BPF_X] = cmp_reg,
+	[BPF_JMP64 | BPF_JSET | BPF_X] = jset_reg,
+	[BPF_JMP64 | BPF_JNE | BPF_X] =	jne_reg,
 	[BPF_JMP32 | BPF_JEQ | BPF_K] =	jeq32_imm,
 	[BPF_JMP32 | BPF_JGT | BPF_K] =	cmp_imm,
 	[BPF_JMP32 | BPF_JGE | BPF_K] =	cmp_imm,
 	[BPF_JMP32 | BPF_JLT | BPF_K] =	cmp_imm,
 	[BPF_JMP32 | BPF_JLE | BPF_K] =	cmp_imm,
-	[BPF_JMP32 | BPF_JSGT | BPF_K] =cmp_imm,
-	[BPF_JMP32 | BPF_JSGE | BPF_K] =cmp_imm,
-	[BPF_JMP32 | BPF_JSLT | BPF_K] =cmp_imm,
-	[BPF_JMP32 | BPF_JSLE | BPF_K] =cmp_imm,
-	[BPF_JMP32 | BPF_JSET | BPF_K] =jset_imm,
+	[BPF_JMP32 | BPF_JSGT | BPF_K] = cmp_imm,
+	[BPF_JMP32 | BPF_JSGE | BPF_K] = cmp_imm,
+	[BPF_JMP32 | BPF_JSLT | BPF_K] = cmp_imm,
+	[BPF_JMP32 | BPF_JSLE | BPF_K] = cmp_imm,
+	[BPF_JMP32 | BPF_JSET | BPF_K] = jset_imm,
 	[BPF_JMP32 | BPF_JNE | BPF_K] =	jne_imm,
 	[BPF_JMP32 | BPF_JEQ | BPF_X] =	jeq_reg,
 	[BPF_JMP32 | BPF_JGT | BPF_X] =	cmp_reg,
 	[BPF_JMP32 | BPF_JGE | BPF_X] =	cmp_reg,
 	[BPF_JMP32 | BPF_JLT | BPF_X] =	cmp_reg,
 	[BPF_JMP32 | BPF_JLE | BPF_X] =	cmp_reg,
-	[BPF_JMP32 | BPF_JSGT | BPF_X] =cmp_reg,
-	[BPF_JMP32 | BPF_JSGE | BPF_X] =cmp_reg,
-	[BPF_JMP32 | BPF_JSLT | BPF_X] =cmp_reg,
-	[BPF_JMP32 | BPF_JSLE | BPF_X] =cmp_reg,
-	[BPF_JMP32 | BPF_JSET | BPF_X] =jset_reg,
+	[BPF_JMP32 | BPF_JSGT | BPF_X] = cmp_reg,
+	[BPF_JMP32 | BPF_JSGE | BPF_X] = cmp_reg,
+	[BPF_JMP32 | BPF_JSLT | BPF_X] = cmp_reg,
+	[BPF_JMP32 | BPF_JSLE | BPF_X] = cmp_reg,
+	[BPF_JMP32 | BPF_JSET | BPF_X] = jset_reg,
 	[BPF_JMP32 | BPF_JNE | BPF_X] =	jne_reg,
-	[BPF_JMP | BPF_CALL] =		call,
-	[BPF_JMP | BPF_EXIT] =		jmp_exit,
+	[BPF_JMP64 | BPF_CALL] =	call,
+	[BPF_JMP64 | BPF_EXIT] =	jmp_exit,
 };
 
 /* --- Assembler logic --- */
@@ -3562,7 +3562,7 @@ static int nfp_fixup_branches(struct nfp_prog *nfp_prog)
 			continue;
 		if (!is_mbpf_jmp(meta))
 			continue;
-		if (meta->insn.code == (BPF_JMP | BPF_EXIT) &&
+		if (meta->insn.code == (BPF_JMP64 | BPF_EXIT) &&
 		    !nfp_is_main_function(meta))
 			continue;
 		if (is_mbpf_helper_call(meta))
@@ -3587,7 +3587,7 @@ static int nfp_fixup_branches(struct nfp_prog *nfp_prog)
 			return -ELOOP;
 		}
 
-		if (meta->insn.code == (BPF_JMP | BPF_EXIT))
+		if (meta->insn.code == (BPF_JMP64 | BPF_EXIT))
 			continue;
 
 		/* Leave special branches for later */
@@ -4292,7 +4292,7 @@ static void nfp_bpf_opt_pkt_cache(struct nfp_prog *nfp_prog)
 		insn = &meta->insn;
 
 		if (is_mbpf_store_pkt(meta) ||
-		    insn->code == (BPF_JMP | BPF_CALL) ||
+		    insn->code == (BPF_JMP64 | BPF_CALL) ||
 		    is_mbpf_classic_store_pkt(meta) ||
 		    is_mbpf_classic_load(meta)) {
 			cache_avail = false;
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/main.h b/drivers/net/ethernet/netronome/nfp/bpf/main.h
index 16841bb7..e23a588 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/main.h
+++ b/drivers/net/ethernet/netronome/nfp/bpf/main.h
@@ -370,7 +370,7 @@ static inline u8 mbpf_mode(const struct nfp_insn_meta *meta)
 
 static inline bool is_mbpf_alu(const struct nfp_insn_meta *meta)
 {
-	return mbpf_class(meta) == BPF_ALU64 || mbpf_class(meta) == BPF_ALU;
+	return mbpf_class(meta) == BPF_ALU64 || mbpf_class(meta) == BPF_ALU32;
 }
 
 static inline bool is_mbpf_load(const struct nfp_insn_meta *meta)
@@ -385,7 +385,7 @@ static inline bool is_mbpf_jmp32(const struct nfp_insn_meta *meta)
 
 static inline bool is_mbpf_jmp64(const struct nfp_insn_meta *meta)
 {
-	return mbpf_class(meta) == BPF_JMP;
+	return mbpf_class(meta) == BPF_JMP64;
 }
 
 static inline bool is_mbpf_jmp(const struct nfp_insn_meta *meta)
@@ -461,7 +461,7 @@ static inline bool is_mbpf_helper_call(const struct nfp_insn_meta *meta)
 {
 	struct bpf_insn insn = meta->insn;
 
-	return insn.code == (BPF_JMP | BPF_CALL) &&
+	return insn.code == (BPF_JMP64 | BPF_CALL) &&
 		insn.src_reg != BPF_PSEUDO_CALL;
 }
 
@@ -469,7 +469,7 @@ static inline bool is_mbpf_pseudo_call(const struct nfp_insn_meta *meta)
 {
 	struct bpf_insn insn = meta->insn;
 
-	return insn.code == (BPF_JMP | BPF_CALL) &&
+	return insn.code == (BPF_JMP64 | BPF_CALL) &&
 		insn.src_reg == BPF_PSEUDO_CALL;
 }
 
diff --git a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
index 9d235c0..e4b3e44 100644
--- a/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
+++ b/drivers/net/ethernet/netronome/nfp/bpf/verifier.c
@@ -651,7 +651,7 @@ int nfp_verify_insn(struct bpf_verifier_env *env, int insn_idx,
 
 	if (is_mbpf_helper_call(meta))
 		return nfp_bpf_check_helper_call(nfp_prog, env, meta);
-	if (meta->insn.code == (BPF_JMP | BPF_EXIT))
+	if (meta->insn.code == (BPF_JMP64 | BPF_EXIT))
 		return nfp_bpf_check_exit(nfp_prog, env);
 
 	if (is_mbpf_load(meta))
@@ -816,7 +816,7 @@ int nfp_bpf_opt_replace_insn(struct bpf_verifier_env *env, u32 off,
 
 	/* conditional jump to jump conversion */
 	if (is_mbpf_cond_jump(meta) &&
-	    insn->code == (BPF_JMP | BPF_JA | BPF_K)) {
+	    insn->code == (BPF_JMP64 | BPF_JA | BPF_K)) {
 		unsigned int tgt_off;
 
 		tgt_off = off + insn->off + 1;
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 1727898..347fcfa 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -100,7 +100,7 @@ struct ctl_table_header;
 
 #define BPF_ALU32_REG(OP, DST, SRC)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_OP(OP) | BPF_X,		\
+		.code  = BPF_ALU32 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = 0,					\
@@ -118,7 +118,7 @@ struct ctl_table_header;
 
 #define BPF_ALU32_IMM(OP, DST, IMM)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_OP(OP) | BPF_K,		\
+		.code  = BPF_ALU32 | BPF_OP(OP) | BPF_K,	\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -128,7 +128,7 @@ struct ctl_table_header;
 
 #define BPF_ENDIAN(TYPE, DST, LEN)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_END | BPF_SRC(TYPE),	\
+		.code  = BPF_ALU32 | BPF_END | BPF_SRC(TYPE),	\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -146,7 +146,7 @@ struct ctl_table_header;
 
 #define BPF_MOV32_REG(DST, SRC)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_MOV | BPF_X,		\
+		.code  = BPF_ALU32 | BPF_MOV | BPF_X,		\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = 0,					\
@@ -164,7 +164,7 @@ struct ctl_table_header;
 
 #define BPF_MOV32_IMM(DST, IMM)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_MOV | BPF_K,		\
+		.code  = BPF_ALU32 | BPF_MOV | BPF_K,		\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -173,7 +173,7 @@ struct ctl_table_header;
 /* Special form of mov32, used for doing explicit zero extension on dst. */
 #define BPF_ZEXT_REG(DST)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_MOV | BPF_X,		\
+		.code  = BPF_ALU32 | BPF_MOV | BPF_X,		\
 		.dst_reg = DST,					\
 		.src_reg = DST,					\
 		.off   = 0,					\
@@ -181,7 +181,7 @@ struct ctl_table_header;
 
 static inline bool insn_is_zext(const struct bpf_insn *insn)
 {
-	return insn->code == (BPF_ALU | BPF_MOV | BPF_X) && insn->imm == 1;
+	return insn->code == (BPF_ALU32 | BPF_MOV | BPF_X) && insn->imm == 1;
 }
 
 /* BPF_LD_IMM64 macro encodes single 'load 64-bit immediate' insn */
@@ -218,7 +218,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 #define BPF_MOV32_RAW(TYPE, DST, SRC, IMM)			\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_MOV | BPF_SRC(TYPE),	\
+		.code  = BPF_ALU32 | BPF_MOV | BPF_SRC(TYPE),	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = 0,					\
@@ -305,7 +305,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 #define BPF_JMP_REG(OP, DST, SRC, OFF)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_OP(OP) | BPF_X,		\
+		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = OFF,					\
@@ -315,7 +315,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 #define BPF_JMP_IMM(OP, DST, IMM, OFF)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_OP(OP) | BPF_K,		\
+		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_K,	\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = OFF,					\
@@ -345,7 +345,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 #define BPF_JMP_A(OFF)						\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_JA,			\
+		.code  = BPF_JMP64 | BPF_JA,			\
 		.dst_reg = 0,					\
 		.src_reg = 0,					\
 		.off   = OFF,					\
@@ -355,7 +355,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 #define BPF_CALL_REL(TGT)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_CALL,			\
+		.code  = BPF_JMP64 | BPF_CALL,			\
 		.dst_reg = 0,					\
 		.src_reg = BPF_PSEUDO_CALL,			\
 		.off   = 0,					\
@@ -367,7 +367,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 #define BPF_EMIT_CALL(FUNC)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_CALL,			\
+		.code  = BPF_JMP64 | BPF_CALL,			\
 		.dst_reg = 0,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -387,7 +387,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 #define BPF_EXIT_INSN()						\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_EXIT,			\
+		.code  = BPF_JMP64 | BPF_EXIT,			\
 		.dst_reg = 0,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 16da510..d31b38c 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -409,7 +409,7 @@ static int bpf_adj_branches(struct bpf_prog *prog, u32 pos, s32 end_old,
 			continue;
 		}
 		code = insn->code;
-		if ((BPF_CLASS(code) != BPF_JMP &&
+		if ((BPF_CLASS(code) != BPF_JMP64 &&
 		     BPF_CLASS(code) != BPF_JMP32) ||
 		    BPF_OP(code) == BPF_EXIT)
 			continue;
@@ -1245,22 +1245,22 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
 		goto out;
 
 	if (from->imm == 0 &&
-	    (from->code == (BPF_ALU   | BPF_MOV | BPF_K) ||
+	    (from->code == (BPF_ALU32 | BPF_MOV | BPF_K) ||
 	     from->code == (BPF_ALU64 | BPF_MOV | BPF_K))) {
 		*to++ = BPF_ALU64_REG(BPF_XOR, from->dst_reg, from->dst_reg);
 		goto out;
 	}
 
 	switch (from->code) {
-	case BPF_ALU | BPF_ADD | BPF_K:
-	case BPF_ALU | BPF_SUB | BPF_K:
-	case BPF_ALU | BPF_AND | BPF_K:
-	case BPF_ALU | BPF_OR  | BPF_K:
-	case BPF_ALU | BPF_XOR | BPF_K:
-	case BPF_ALU | BPF_MUL | BPF_K:
-	case BPF_ALU | BPF_MOV | BPF_K:
-	case BPF_ALU | BPF_DIV | BPF_K:
-	case BPF_ALU | BPF_MOD | BPF_K:
+	case BPF_ALU32 | BPF_ADD | BPF_K:
+	case BPF_ALU32 | BPF_SUB | BPF_K:
+	case BPF_ALU32 | BPF_AND | BPF_K:
+	case BPF_ALU32 | BPF_OR  | BPF_K:
+	case BPF_ALU32 | BPF_XOR | BPF_K:
+	case BPF_ALU32 | BPF_MUL | BPF_K:
+	case BPF_ALU32 | BPF_MOV | BPF_K:
+	case BPF_ALU32 | BPF_DIV | BPF_K:
+	case BPF_ALU32 | BPF_MOD | BPF_K:
 		*to++ = BPF_ALU32_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
 		*to++ = BPF_ALU32_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
 		*to++ = BPF_ALU32_REG(from->code, from->dst_reg, BPF_REG_AX);
@@ -1280,17 +1280,17 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
 		*to++ = BPF_ALU64_REG(from->code, from->dst_reg, BPF_REG_AX);
 		break;
 
-	case BPF_JMP | BPF_JEQ  | BPF_K:
-	case BPF_JMP | BPF_JNE  | BPF_K:
-	case BPF_JMP | BPF_JGT  | BPF_K:
-	case BPF_JMP | BPF_JLT  | BPF_K:
-	case BPF_JMP | BPF_JGE  | BPF_K:
-	case BPF_JMP | BPF_JLE  | BPF_K:
-	case BPF_JMP | BPF_JSGT | BPF_K:
-	case BPF_JMP | BPF_JSLT | BPF_K:
-	case BPF_JMP | BPF_JSGE | BPF_K:
-	case BPF_JMP | BPF_JSLE | BPF_K:
-	case BPF_JMP | BPF_JSET | BPF_K:
+	case BPF_JMP64 | BPF_JEQ  | BPF_K:
+	case BPF_JMP64 | BPF_JNE  | BPF_K:
+	case BPF_JMP64 | BPF_JGT  | BPF_K:
+	case BPF_JMP64 | BPF_JLT  | BPF_K:
+	case BPF_JMP64 | BPF_JGE  | BPF_K:
+	case BPF_JMP64 | BPF_JLE  | BPF_K:
+	case BPF_JMP64 | BPF_JSGT | BPF_K:
+	case BPF_JMP64 | BPF_JSLT | BPF_K:
+	case BPF_JMP64 | BPF_JSGE | BPF_K:
+	case BPF_JMP64 | BPF_JSLE | BPF_K:
+	case BPF_JMP64 | BPF_JSET | BPF_K:
 		/* Accommodate for extra offset in case of a backjump. */
 		off = from->off;
 		if (off < 0)
@@ -1651,8 +1651,8 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
 		/* Now overwrite non-defaults ... */
 		BPF_INSN_MAP(BPF_INSN_2_LBL, BPF_INSN_3_LBL),
 		/* Non-UAPI available opcodes. */
-		[BPF_JMP | BPF_CALL_ARGS] = &&JMP_CALL_ARGS,
-		[BPF_JMP | BPF_TAIL_CALL] = &&JMP_TAIL_CALL,
+		[BPF_JMP64 | BPF_CALL_ARGS] = &&JMP_CALL_ARGS,
+		[BPF_JMP64 | BPF_TAIL_CALL] = &&JMP_TAIL_CALL,
 		[BPF_ST  | BPF_NOSPEC] = &&ST_NOSPEC,
 		[BPF_LDX | BPF_PROBE_MEM | BPF_B] = &&LDX_PROBE_MEM_B,
 		[BPF_LDX | BPF_PROBE_MEM | BPF_H] = &&LDX_PROBE_MEM_H,
@@ -2072,7 +2072,7 @@ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
 	insn->off = (s16) insn->imm;
 	insn->imm = interpreters_args[(round_up(stack_depth, 32) / 32) - 1] -
 		__bpf_call_base_args;
-	insn->code = BPF_JMP | BPF_CALL_ARGS;
+	insn->code = BPF_JMP64 | BPF_CALL_ARGS;
 }
 
 #else
diff --git a/kernel/bpf/disasm.c b/kernel/bpf/disasm.c
index 7b4afb7..53ce1b4 100644
--- a/kernel/bpf/disasm.c
+++ b/kernel/bpf/disasm.c
@@ -64,8 +64,8 @@ const char *const bpf_class_string[8] = {
 	[BPF_LDX]   = "ldx",
 	[BPF_ST]    = "st",
 	[BPF_STX]   = "stx",
-	[BPF_ALU]   = "alu",
-	[BPF_JMP]   = "jmp",
+	[BPF_ALU32] = "alu",
+	[BPF_JMP64] = "jmp",
 	[BPF_JMP32] = "jmp32",
 	[BPF_ALU64] = "alu64",
 };
@@ -135,7 +135,7 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 	const bpf_insn_print_t verbose = cbs->cb_print;
 	u8 class = BPF_CLASS(insn->code);
 
-	if (class == BPF_ALU || class == BPF_ALU64) {
+	if (class == BPF_ALU32 || class == BPF_ALU64) {
 		if (BPF_OP(insn->code) == BPF_END) {
 			if (class == BPF_ALU64)
 				verbose(cbs->private_data, "BUG_alu64_%02x\n", insn->code);
@@ -143,19 +143,19 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 				print_bpf_end_insn(verbose, cbs->private_data, insn);
 		} else if (BPF_OP(insn->code) == BPF_NEG) {
 			verbose(cbs->private_data, "(%02x) %c%d = -%c%d\n",
-				insn->code, class == BPF_ALU ? 'w' : 'r',
-				insn->dst_reg, class == BPF_ALU ? 'w' : 'r',
+				insn->code, class == BPF_ALU32 ? 'w' : 'r',
+				insn->dst_reg, class == BPF_ALU32 ? 'w' : 'r',
 				insn->dst_reg);
 		} else if (BPF_SRC(insn->code) == BPF_X) {
 			verbose(cbs->private_data, "(%02x) %c%d %s %c%d\n",
-				insn->code, class == BPF_ALU ? 'w' : 'r',
+				insn->code, class == BPF_ALU32 ? 'w' : 'r',
 				insn->dst_reg,
 				bpf_alu_string[BPF_OP(insn->code) >> 4],
-				class == BPF_ALU ? 'w' : 'r',
+				class == BPF_ALU32 ? 'w' : 'r',
 				insn->src_reg);
 		} else {
 			verbose(cbs->private_data, "(%02x) %c%d %s %d\n",
-				insn->code, class == BPF_ALU ? 'w' : 'r',
+				insn->code, class == BPF_ALU32 ? 'w' : 'r',
 				insn->dst_reg,
 				bpf_alu_string[BPF_OP(insn->code) >> 4],
 				insn->imm);
@@ -258,7 +258,7 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 			verbose(cbs->private_data, "BUG_ld_%02x\n", insn->code);
 			return;
 		}
-	} else if (class == BPF_JMP32 || class == BPF_JMP) {
+	} else if (class == BPF_JMP32 || class == BPF_JMP64) {
 		u8 opcode = BPF_OP(insn->code);
 
 		if (opcode == BPF_CALL) {
@@ -276,10 +276,10 @@ void print_bpf_insn(const struct bpf_insn_cbs *cbs,
 							tmp, sizeof(tmp)),
 					insn->imm);
 			}
-		} else if (insn->code == (BPF_JMP | BPF_JA)) {
+		} else if (insn->code == (BPF_JMP64 | BPF_JA)) {
 			verbose(cbs->private_data, "(%02x) goto pc%+d\n",
 				insn->code, insn->off);
-		} else if (insn->code == (BPF_JMP | BPF_EXIT)) {
+		} else if (insn->code == (BPF_JMP64 | BPF_EXIT)) {
 			verbose(cbs->private_data, "(%02x) exit\n", insn->code);
 		} else if (BPF_SRC(insn->code) == BPF_X) {
 			verbose(cbs->private_data,
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 99417b3..a96e4d2 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -3837,15 +3837,15 @@ static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog,
 	for (i = 0; i < prog->len; i++) {
 		code = insns[i].code;
 
-		if (code == (BPF_JMP | BPF_TAIL_CALL)) {
-			insns[i].code = BPF_JMP | BPF_CALL;
+		if (code == (BPF_JMP64 | BPF_TAIL_CALL)) {
+			insns[i].code = BPF_JMP64 | BPF_CALL;
 			insns[i].imm = BPF_FUNC_tail_call;
 			/* fall-through */
 		}
-		if (code == (BPF_JMP | BPF_CALL) ||
-		    code == (BPF_JMP | BPF_CALL_ARGS)) {
-			if (code == (BPF_JMP | BPF_CALL_ARGS))
-				insns[i].code = BPF_JMP | BPF_CALL;
+		if (code == (BPF_JMP64 | BPF_CALL) ||
+		    code == (BPF_JMP64 | BPF_CALL_ARGS)) {
+			if (code == (BPF_JMP64 | BPF_CALL_ARGS))
+				insns[i].code = BPF_JMP64 | BPF_CALL;
 			if (!bpf_dump_raw_ok(f_cred))
 				insns[i].imm = 0;
 			continue;
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 4cc0e70..0869c50 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -123,7 +123,7 @@ static const struct bpf_verifier_ops * const bpf_verifier_ops[] = {
  *    BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),  // after this insn R2 type is FRAME_PTR
  *    BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), // after this insn R2 type is PTR_TO_STACK
  *    BPF_LD_MAP_FD(BPF_REG_1, map_fd),      // after this insn R1 type is CONST_PTR_TO_MAP
- *    BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+ *    BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
  * here verifier looks at prototype of map_lookup_elem() and sees:
  * .arg1_type == ARG_CONST_MAP_PTR and R1->type == CONST_PTR_TO_MAP, which is ok,
  * Now verifier knows that this map has key of R1->map_ptr->key_size bytes
@@ -235,13 +235,13 @@ static void bpf_map_key_store(struct bpf_insn_aux_data *aux, u64 state)
 
 static bool bpf_pseudo_call(const struct bpf_insn *insn)
 {
-	return insn->code == (BPF_JMP | BPF_CALL) &&
+	return insn->code == (BPF_JMP64 | BPF_CALL) &&
 	       insn->src_reg == BPF_PSEUDO_CALL;
 }
 
 static bool bpf_pseudo_kfunc_call(const struct bpf_insn *insn)
 {
-	return insn->code == (BPF_JMP | BPF_CALL) &&
+	return insn->code == (BPF_JMP64 | BPF_CALL) &&
 	       insn->src_reg == BPF_PSEUDO_KFUNC_CALL;
 }
 
@@ -2453,14 +2453,14 @@ static int check_subprogs(struct bpf_verifier_env *env)
 	for (i = 0; i < insn_cnt; i++) {
 		u8 code = insn[i].code;
 
-		if (code == (BPF_JMP | BPF_CALL) &&
+		if (code == (BPF_JMP64 | BPF_CALL) &&
 		    insn[i].imm == BPF_FUNC_tail_call &&
 		    insn[i].src_reg != BPF_PSEUDO_CALL)
 			subprog[cur_subprog].has_tail_call = true;
 		if (BPF_CLASS(code) == BPF_LD &&
 		    (BPF_MODE(code) == BPF_ABS || BPF_MODE(code) == BPF_IND))
 			subprog[cur_subprog].has_ld_abs = true;
-		if (BPF_CLASS(code) != BPF_JMP && BPF_CLASS(code) != BPF_JMP32)
+		if (BPF_CLASS(code) != BPF_JMP64 && BPF_CLASS(code) != BPF_JMP32)
 			goto next;
 		if (BPF_OP(code) == BPF_EXIT || BPF_OP(code) == BPF_CALL)
 			goto next;
@@ -2475,8 +2475,8 @@ static int check_subprogs(struct bpf_verifier_env *env)
 			 * the last insn of the subprog should be either exit
 			 * or unconditional jump back
 			 */
-			if (code != (BPF_JMP | BPF_EXIT) &&
-			    code != (BPF_JMP | BPF_JA)) {
+			if (code != (BPF_JMP64 | BPF_EXIT) &&
+			    code != (BPF_JMP64 | BPF_JA)) {
 				verbose(env, "last insn is not an exit or jmp\n");
 				return -EINVAL;
 			}
@@ -2578,7 +2578,7 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
 	code = insn->code;
 	class = BPF_CLASS(code);
 	op = BPF_OP(code);
-	if (class == BPF_JMP) {
+	if (class == BPF_JMP64) {
 		/* BPF_EXIT for "main" will reach here. Return TRUE
 		 * conservatively.
 		 */
@@ -2602,12 +2602,12 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
 		}
 	}
 
-	if (class == BPF_ALU64 || class == BPF_JMP ||
-	    /* BPF_END always use BPF_ALU class. */
-	    (class == BPF_ALU && op == BPF_END && insn->imm == 64))
+	if (class == BPF_ALU64 || class == BPF_JMP64 ||
+	    /* BPF_END always uses BPF_ALU32 class. */
+	    (class == BPF_ALU32 && op == BPF_END && insn->imm == 64))
 		return true;
 
-	if (class == BPF_ALU || class == BPF_JMP32)
+	if (class == BPF_ALU32 || class == BPF_JMP32)
 		return false;
 
 	if (class == BPF_LDX) {
@@ -2658,7 +2658,7 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
 static int insn_def_regno(const struct bpf_insn *insn)
 {
 	switch (BPF_CLASS(insn->code)) {
-	case BPF_JMP:
+	case BPF_JMP64:
 	case BPF_JMP32:
 	case BPF_ST:
 		return -1;
@@ -2842,7 +2842,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
 		print_bpf_insn(&cbs, insn, env->allow_ptr_leaks);
 	}
 
-	if (class == BPF_ALU || class == BPF_ALU64) {
+	if (class == BPF_ALU32 || class == BPF_ALU64) {
 		if (!(*reg_mask & dreg))
 			return 0;
 		if (opcode == BPF_MOV) {
@@ -2919,7 +2919,7 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx,
 		*stack_mask &= ~(1ull << spi);
 		if (class == BPF_STX)
 			*reg_mask |= sreg;
-	} else if (class == BPF_JMP || class == BPF_JMP32) {
+	} else if (class == BPF_JMP64 || class == BPF_JMP32) {
 		if (opcode == BPF_CALL) {
 			if (insn->src_reg == BPF_PSEUDO_CALL)
 				return -ENOTSUPP;
@@ -7433,7 +7433,7 @@ static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn
 		return -EFAULT;
 	}
 
-	if (insn->code == (BPF_JMP | BPF_CALL) &&
+	if (insn->code == (BPF_JMP64 | BPF_CALL) &&
 	    insn->src_reg == 0 &&
 	    insn->imm == BPF_FUNC_timer_set_callback) {
 		struct bpf_verifier_state *async_cb;
@@ -11211,14 +11211,14 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 		}
 
 	} else if (opcode > BPF_END) {
-		verbose(env, "invalid BPF_ALU opcode %x\n", opcode);
+		verbose(env, "invalid BPF_ALU32 opcode %x\n", opcode);
 		return -EINVAL;
 
 	} else {	/* all other ALU ops: and, sub, xor, add, ... */
 
 		if (BPF_SRC(insn->code) == BPF_X) {
 			if (insn->imm != 0 || insn->off != 0) {
-				verbose(env, "BPF_ALU uses reserved fields\n");
+				verbose(env, "BPF_ALU32 uses reserved fields\n");
 				return -EINVAL;
 			}
 			/* check src1 operand */
@@ -11227,7 +11227,7 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
 				return err;
 		} else {
 			if (insn->src_reg != BPF_REG_0 || insn->off != 0) {
-				verbose(env, "BPF_ALU uses reserved fields\n");
+				verbose(env, "BPF_ALU32 uses reserved fields\n");
 				return -EINVAL;
 			}
 		}
@@ -11997,13 +11997,13 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 
 	/* Only conditional jumps are expected to reach here. */
 	if (opcode == BPF_JA || opcode > BPF_JSLE) {
-		verbose(env, "invalid BPF_JMP/JMP32 opcode %x\n", opcode);
+		verbose(env, "invalid BPF_JMP64/JMP32 opcode %x\n", opcode);
 		return -EINVAL;
 	}
 
 	if (BPF_SRC(insn->code) == BPF_X) {
 		if (insn->imm != 0) {
-			verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
+			verbose(env, "BPF_JMP64/JMP32 uses reserved fields\n");
 			return -EINVAL;
 		}
 
@@ -12020,7 +12020,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
 		src_reg = &regs[insn->src_reg];
 	} else {
 		if (insn->src_reg != BPF_REG_0) {
-			verbose(env, "BPF_JMP/JMP32 uses reserved fields\n");
+			verbose(env, "BPF_JMP64/JMP32 uses reserved fields\n");
 			return -EINVAL;
 		}
 	}
@@ -12740,7 +12740,7 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
 		return visit_func_call_insn(t, insns, env, true);
 
 	/* All non-branch instructions have a single fall-through edge. */
-	if (BPF_CLASS(insns[t].code) != BPF_JMP &&
+	if (BPF_CLASS(insns[t].code) != BPF_JMP64 &&
 	    BPF_CLASS(insns[t].code) != BPF_JMP32)
 		return push_insn(t, t + 1, FALLTHROUGH, env, false);
 
@@ -14184,7 +14184,7 @@ static int do_check(struct bpf_verifier_env *env)
 		sanitize_mark_insn_seen(env);
 		prev_insn_idx = env->insn_idx;
 
-		if (class == BPF_ALU || class == BPF_ALU64) {
+		if (class == BPF_ALU32 || class == BPF_ALU64) {
 			err = check_alu_op(env, insn);
 			if (err)
 				return err;
@@ -14303,7 +14303,7 @@ static int do_check(struct bpf_verifier_env *env)
 			if (err)
 				return err;
 
-		} else if (class == BPF_JMP || class == BPF_JMP32) {
+		} else if (class == BPF_JMP64 || class == BPF_JMP32) {
 			u8 opcode = BPF_OP(insn->code);
 
 			env->jmps_processed++;
@@ -15189,7 +15189,7 @@ static bool insn_is_cond_jump(u8 code)
 	if (BPF_CLASS(code) == BPF_JMP32)
 		return true;
 
-	if (BPF_CLASS(code) != BPF_JMP)
+	if (BPF_CLASS(code) != BPF_JMP64)
 		return false;
 
 	op = BPF_OP(code);
@@ -15926,14 +15926,14 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 		/* Make divide-by-zero exceptions impossible. */
 		if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
 		    insn->code == (BPF_ALU64 | BPF_DIV | BPF_X) ||
-		    insn->code == (BPF_ALU | BPF_MOD | BPF_X) ||
-		    insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
+		    insn->code == (BPF_ALU32 | BPF_MOD | BPF_X) ||
+		    insn->code == (BPF_ALU32 | BPF_DIV | BPF_X)) {
 			bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
 			bool isdiv = BPF_OP(insn->code) == BPF_DIV;
 			struct bpf_insn *patchlet;
 			struct bpf_insn chk_and_div[] = {
 				/* [R,W]x div 0 -> 0 */
-				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
+				BPF_RAW_INSN((is64 ? BPF_JMP64 : BPF_JMP32) |
 					     BPF_JNE | BPF_K, insn->src_reg,
 					     0, 2, 0),
 				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
@@ -15942,7 +15942,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			};
 			struct bpf_insn chk_and_mod[] = {
 				/* [R,W]x mod 0 -> [R,W]x */
-				BPF_RAW_INSN((is64 ? BPF_JMP : BPF_JMP32) |
+				BPF_RAW_INSN((is64 ? BPF_JMP64 : BPF_JMP32) |
 					     BPF_JEQ | BPF_K, insn->src_reg,
 					     0, 1 + (is64 ? 0 : 1), 0),
 				*insn,
@@ -16037,7 +16037,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			continue;
 		}
 
-		if (insn->code != (BPF_JMP | BPF_CALL))
+		if (insn->code != (BPF_JMP64 | BPF_CALL))
 			continue;
 		if (insn->src_reg == BPF_PSEUDO_CALL)
 			continue;
@@ -16081,7 +16081,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			 * that doesn't support bpf_tail_call yet
 			 */
 			insn->imm = 0;
-			insn->code = BPF_JMP | BPF_TAIL_CALL;
+			insn->code = BPF_JMP64 | BPF_TAIL_CALL;
 
 			aux = &env->insn_aux_data[i + delta];
 			if (env->bpf_capable && !prog->blinding_requested &&
@@ -16511,7 +16511,7 @@ static struct bpf_prog *inline_bpf_loop(struct bpf_verifier_env *env,
 
 static bool is_bpf_loop_call(struct bpf_insn *insn)
 {
-	return insn->code == (BPF_JMP | BPF_CALL) &&
+	return insn->code == (BPF_JMP64 | BPF_CALL) &&
 		insn->src_reg == 0 &&
 		insn->imm == BPF_FUNC_loop;
 }
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index e9852d1..7ba3252 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -298,25 +298,25 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen)
 		/* Explicitly include allowed calls. */
 		case BPF_RET | BPF_K:
 		case BPF_RET | BPF_A:
-		case BPF_ALU | BPF_ADD | BPF_K:
-		case BPF_ALU | BPF_ADD | BPF_X:
-		case BPF_ALU | BPF_SUB | BPF_K:
-		case BPF_ALU | BPF_SUB | BPF_X:
-		case BPF_ALU | BPF_MUL | BPF_K:
-		case BPF_ALU | BPF_MUL | BPF_X:
-		case BPF_ALU | BPF_DIV | BPF_K:
-		case BPF_ALU | BPF_DIV | BPF_X:
-		case BPF_ALU | BPF_AND | BPF_K:
-		case BPF_ALU | BPF_AND | BPF_X:
-		case BPF_ALU | BPF_OR | BPF_K:
-		case BPF_ALU | BPF_OR | BPF_X:
-		case BPF_ALU | BPF_XOR | BPF_K:
-		case BPF_ALU | BPF_XOR | BPF_X:
-		case BPF_ALU | BPF_LSH | BPF_K:
-		case BPF_ALU | BPF_LSH | BPF_X:
-		case BPF_ALU | BPF_RSH | BPF_K:
-		case BPF_ALU | BPF_RSH | BPF_X:
-		case BPF_ALU | BPF_NEG:
+		case BPF_ALU32 | BPF_ADD | BPF_K:
+		case BPF_ALU32 | BPF_ADD | BPF_X:
+		case BPF_ALU32 | BPF_SUB | BPF_K:
+		case BPF_ALU32 | BPF_SUB | BPF_X:
+		case BPF_ALU32 | BPF_MUL | BPF_K:
+		case BPF_ALU32 | BPF_MUL | BPF_X:
+		case BPF_ALU32 | BPF_DIV | BPF_K:
+		case BPF_ALU32 | BPF_DIV | BPF_X:
+		case BPF_ALU32 | BPF_AND | BPF_K:
+		case BPF_ALU32 | BPF_AND | BPF_X:
+		case BPF_ALU32 | BPF_OR | BPF_K:
+		case BPF_ALU32 | BPF_OR | BPF_X:
+		case BPF_ALU32 | BPF_XOR | BPF_K:
+		case BPF_ALU32 | BPF_XOR | BPF_X:
+		case BPF_ALU32 | BPF_LSH | BPF_K:
+		case BPF_ALU32 | BPF_LSH | BPF_X:
+		case BPF_ALU32 | BPF_RSH | BPF_K:
+		case BPF_ALU32 | BPF_RSH | BPF_X:
+		case BPF_ALU32 | BPF_NEG:
 		case BPF_LD | BPF_IMM:
 		case BPF_LDX | BPF_IMM:
 		case BPF_MISC | BPF_TAX:
@@ -325,15 +325,15 @@ static int seccomp_check_filter(struct sock_filter *filter, unsigned int flen)
 		case BPF_LDX | BPF_MEM:
 		case BPF_ST:
 		case BPF_STX:
-		case BPF_JMP | BPF_JA:
-		case BPF_JMP | BPF_JEQ | BPF_K:
-		case BPF_JMP | BPF_JEQ | BPF_X:
-		case BPF_JMP | BPF_JGE | BPF_K:
-		case BPF_JMP | BPF_JGE | BPF_X:
-		case BPF_JMP | BPF_JGT | BPF_K:
-		case BPF_JMP | BPF_JGT | BPF_X:
-		case BPF_JMP | BPF_JSET | BPF_K:
-		case BPF_JMP | BPF_JSET | BPF_X:
+		case BPF_JMP64 | BPF_JA:
+		case BPF_JMP64 | BPF_JEQ | BPF_K:
+		case BPF_JMP64 | BPF_JEQ | BPF_X:
+		case BPF_JMP64 | BPF_JGE | BPF_K:
+		case BPF_JMP64 | BPF_JGE | BPF_X:
+		case BPF_JMP64 | BPF_JGT | BPF_K:
+		case BPF_JMP64 | BPF_JGT | BPF_X:
+		case BPF_JMP64 | BPF_JSET | BPF_K:
+		case BPF_JMP64 | BPF_JSET | BPF_X:
 			continue;
 		default:
 			return -EINVAL;
@@ -749,13 +749,13 @@ static bool seccomp_is_const_allow(struct sock_fprog_kern *fprog,
 		case BPF_RET | BPF_K:
 			/* reached return with constant values only, check allow */
 			return k == SECCOMP_RET_ALLOW;
-		case BPF_JMP | BPF_JA:
+		case BPF_JMP64 | BPF_JA:
 			pc += insn->k;
 			break;
-		case BPF_JMP | BPF_JEQ | BPF_K:
-		case BPF_JMP | BPF_JGE | BPF_K:
-		case BPF_JMP | BPF_JGT | BPF_K:
-		case BPF_JMP | BPF_JSET | BPF_K:
+		case BPF_JMP64 | BPF_JEQ | BPF_K:
+		case BPF_JMP64 | BPF_JGE | BPF_K:
+		case BPF_JMP64 | BPF_JGT | BPF_K:
+		case BPF_JMP64 | BPF_JSET | BPF_K:
 			switch (BPF_OP(code)) {
 			case BPF_JEQ:
 				op_res = reg_value == k;
@@ -776,7 +776,7 @@ static bool seccomp_is_const_allow(struct sock_fprog_kern *fprog,
 
 			pc += op_res ? insn->jt : insn->jf;
 			break;
-		case BPF_ALU | BPF_AND | BPF_K:
+		case BPF_ALU32 | BPF_AND | BPF_K:
 			reg_value &= k;
 			break;
 		default:
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ade9ac6..be5f161 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -142,7 +142,7 @@ static int bpf_fill_maxinsns3(struct bpf_test *self)
 	for (i = 0; i < len - 1; i++) {
 		__u32 k = prandom_u32_state(&rnd);
 
-		insn[i] = __BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, k);
+		insn[i] = __BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, k);
 	}
 
 	insn[len - 1] = __BPF_STMT(BPF_RET | BPF_A, 0);
@@ -182,7 +182,7 @@ static int bpf_fill_maxinsns5(struct bpf_test *self)
 	if (!insn)
 		return -ENOMEM;
 
-	insn[0] = __BPF_JUMP(BPF_JMP | BPF_JA, len - 2, 0, 0);
+	insn[0] = __BPF_JUMP(BPF_JMP64 | BPF_JA, len - 2, 0, 0);
 
 	for (i = 1; i < len - 1; i++)
 		insn[i] = __BPF_STMT(BPF_RET | BPF_K, 0xfefefefe);
@@ -234,7 +234,7 @@ static int bpf_fill_maxinsns7(struct bpf_test *self)
 	insn[len - 4] = __BPF_STMT(BPF_MISC | BPF_TAX, 0);
 	insn[len - 3] = __BPF_STMT(BPF_LD | BPF_W | BPF_ABS, SKF_AD_OFF +
 				   SKF_AD_CPU);
-	insn[len - 2] = __BPF_STMT(BPF_ALU | BPF_SUB | BPF_X, 0);
+	insn[len - 2] = __BPF_STMT(BPF_ALU32 | BPF_SUB | BPF_X, 0);
 	insn[len - 1] = __BPF_STMT(BPF_RET | BPF_A, 0);
 
 	self->u.ptr.insns = insn;
@@ -256,7 +256,7 @@ static int bpf_fill_maxinsns8(struct bpf_test *self)
 	insn[0] = __BPF_STMT(BPF_LD | BPF_IMM, 0xffffffff);
 
 	for (i = 1; i < len - 1; i++)
-		insn[i] = __BPF_JUMP(BPF_JMP | BPF_JGT, 0xffffffff, jmp_off--, 0);
+		insn[i] = __BPF_JUMP(BPF_JMP64 | BPF_JGT, 0xffffffff, jmp_off--, 0);
 
 	insn[len - 1] = __BPF_STMT(BPF_RET | BPF_A, 0);
 
@@ -332,10 +332,10 @@ static int __bpf_fill_ja(struct bpf_test *self, unsigned int len,
 
 	for (i = 0; i + plen < len; i += plen)
 		for (j = 0; j < plen; j++)
-			insn[i + j] = __BPF_JUMP(BPF_JMP | BPF_JA,
+			insn[i + j] = __BPF_JUMP(BPF_JMP64 | BPF_JA,
 						 plen - 1 - j, 0, 0);
 	for (j = 0; j < rlen; j++)
-		insn[i + j] = __BPF_JUMP(BPF_JMP | BPF_JA, rlen - 1 - j,
+		insn[i + j] = __BPF_JUMP(BPF_JMP64 | BPF_JA, rlen - 1 - j,
 					 0, 0);
 
 	insn[len - 1] = __BPF_STMT(BPF_RET | BPF_K, 0xababcbac);
@@ -362,7 +362,7 @@ static int bpf_fill_maxinsns12(struct bpf_test *self)
 	if (!insn)
 		return -ENOMEM;
 
-	insn[0] = __BPF_JUMP(BPF_JMP | BPF_JA, len - 2, 0, 0);
+	insn[0] = __BPF_JUMP(BPF_JMP64 | BPF_JA, len - 2, 0, 0);
 
 	for (i = 1; i < len - 1; i++)
 		insn[i] = __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0);
@@ -389,7 +389,7 @@ static int bpf_fill_maxinsns13(struct bpf_test *self)
 		insn[i] = __BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 0);
 
 	insn[len - 3] = __BPF_STMT(BPF_LD | BPF_IMM, 0xabababab);
-	insn[len - 2] = __BPF_STMT(BPF_ALU | BPF_XOR | BPF_X, 0);
+	insn[len - 2] = __BPF_STMT(BPF_ALU32 | BPF_XOR | BPF_X, 0);
 	insn[len - 1] = __BPF_STMT(BPF_RET | BPF_A, 0);
 
 	self->u.ptr.insns = insn;
@@ -3067,11 +3067,11 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_LD | BPF_IMM, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_IMM, 2),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
-			BPF_STMT(BPF_ALU | BPF_NEG, 0), /* A == -3 */
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_NEG, 0), /* A == -3 */
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_LEN, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0), /* X == len - 3 */
 			BPF_STMT(BPF_LD | BPF_B | BPF_IND, 1),
 			BPF_STMT(BPF_RET | BPF_A, 0)
@@ -3085,7 +3085,7 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0) /* A == len * 2 */
 		},
 		CLASSIC,
@@ -3096,11 +3096,11 @@ static struct bpf_test tests[] = {
 		"ADD_SUB_MUL_K",
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_IMM, 1),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 2),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 2),
 			BPF_STMT(BPF_LDX | BPF_IMM, 3),
-			BPF_STMT(BPF_ALU | BPF_SUB | BPF_X, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 0xffffffff),
-			BPF_STMT(BPF_ALU | BPF_MUL | BPF_K, 3),
+			BPF_STMT(BPF_ALU32 | BPF_SUB | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 0xffffffff),
+			BPF_STMT(BPF_ALU32 | BPF_MUL | BPF_K, 3),
 			BPF_STMT(BPF_RET | BPF_A, 0)
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -3111,20 +3111,20 @@ static struct bpf_test tests[] = {
 		"DIV_MOD_KX",
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_IMM, 8),
-			BPF_STMT(BPF_ALU | BPF_DIV | BPF_K, 2),
+			BPF_STMT(BPF_ALU32 | BPF_DIV | BPF_K, 2),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_IMM, 0xffffffff),
-			BPF_STMT(BPF_ALU | BPF_DIV | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_DIV | BPF_X, 0),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_IMM, 0xffffffff),
-			BPF_STMT(BPF_ALU | BPF_DIV | BPF_K, 0x70000000),
+			BPF_STMT(BPF_ALU32 | BPF_DIV | BPF_K, 0x70000000),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_IMM, 0xffffffff),
-			BPF_STMT(BPF_ALU | BPF_MOD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_MOD | BPF_X, 0),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_IMM, 0xffffffff),
-			BPF_STMT(BPF_ALU | BPF_MOD | BPF_K, 0x70000000),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_MOD | BPF_K, 0x70000000),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0)
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -3135,12 +3135,12 @@ static struct bpf_test tests[] = {
 		"AND_OR_LSH_K",
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_IMM, 0xff),
-			BPF_STMT(BPF_ALU | BPF_AND | BPF_K, 0xf0),
-			BPF_STMT(BPF_ALU | BPF_LSH | BPF_K, 27),
+			BPF_STMT(BPF_ALU32 | BPF_AND | BPF_K, 0xf0),
+			BPF_STMT(BPF_ALU32 | BPF_LSH | BPF_K, 27),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_IMM, 0xf),
-			BPF_STMT(BPF_ALU | BPF_OR | BPF_K, 0xf0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_OR | BPF_K, 0xf0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0)
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -3151,7 +3151,7 @@ static struct bpf_test tests[] = {
 		"LD_IMM_0",
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_IMM, 0), /* ld #0 */
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 0),
 			BPF_STMT(BPF_RET | BPF_K, 1),
 		},
@@ -3186,7 +3186,7 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, SKF_LL_OFF),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, SKF_LL_OFF + 1),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0)
 		},
 		CLASSIC,
@@ -3198,7 +3198,7 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_IMM, SKF_LL_OFF - 1),
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_IND, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0)
@@ -3213,7 +3213,7 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, SKF_NET_OFF),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, SKF_NET_OFF + 1),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0)
 		},
 		CLASSIC,
@@ -3225,7 +3225,7 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_IMM, SKF_NET_OFF - 15),
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_IND, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0)
@@ -3239,15 +3239,15 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
 				 SKF_AD_OFF + SKF_AD_PKTTYPE),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, SKB_TYPE, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, SKB_TYPE, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 1),
 			BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
 				 SKF_AD_OFF + SKF_AD_PKTTYPE),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, SKB_TYPE, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, SKB_TYPE, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 1),
 			BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
 				 SKF_AD_OFF + SKF_AD_PKTTYPE),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, SKB_TYPE, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, SKB_TYPE, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 1),
 			BPF_STMT(BPF_RET | BPF_A, 0)
 		},
@@ -3292,13 +3292,13 @@ static struct bpf_test tests[] = {
 		"LD_PROTOCOL",
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 1),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 20, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 20, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 0),
 			BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
 				 SKF_AD_OFF + SKF_AD_PROTOCOL),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 2),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 30, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 30, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 0),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0)
@@ -3365,7 +3365,7 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
 				 SKF_AD_OFF + SKF_AD_CPU),
-			BPF_STMT(BPF_ALU | BPF_SUB | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_SUB | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0)
 		},
 		CLASSIC,
@@ -3473,17 +3473,17 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
 			BPF_STMT(BPF_LD | BPF_IMM, 2),
-			BPF_STMT(BPF_ALU | BPF_RSH, 1),
-			BPF_STMT(BPF_ALU | BPF_XOR | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_RSH, 1),
+			BPF_STMT(BPF_ALU32 | BPF_XOR | BPF_X, 0),
 			BPF_STMT(BPF_ST, 1), /* M1 = 1 ^ len */
-			BPF_STMT(BPF_ALU | BPF_XOR | BPF_K, 0x80000000),
+			BPF_STMT(BPF_ALU32 | BPF_XOR | BPF_K, 0x80000000),
 			BPF_STMT(BPF_ST, 2), /* M2 = 1 ^ len ^ 0x80000000 */
 			BPF_STMT(BPF_STX, 15), /* M3 = len */
 			BPF_STMT(BPF_LDX | BPF_MEM, 1),
 			BPF_STMT(BPF_LD | BPF_MEM, 2),
-			BPF_STMT(BPF_ALU | BPF_XOR | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_XOR | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 15),
-			BPF_STMT(BPF_ALU | BPF_XOR | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_XOR | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0)
 		},
 		CLASSIC,
@@ -3495,7 +3495,7 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 2),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_X, 0, 0, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_X, 0, 0, 1),
 			BPF_STMT(BPF_RET | BPF_K, 1),
 			BPF_STMT(BPF_RET | BPF_K, MAX_K)
 		},
@@ -3508,7 +3508,7 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 2),
-			BPF_JUMP(BPF_JMP | BPF_JGT | BPF_X, 0, 0, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JGT | BPF_X, 0, 0, 1),
 			BPF_STMT(BPF_RET | BPF_K, 1),
 			BPF_STMT(BPF_RET | BPF_K, MAX_K)
 		},
@@ -3521,7 +3521,7 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 2),
-			BPF_JUMP(BPF_JMP | BPF_JGE | BPF_X, 0, 0, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE | BPF_X, 0, 0, 1),
 			BPF_STMT(BPF_RET | BPF_K, 1),
 			BPF_STMT(BPF_RET | BPF_K, MAX_K)
 		},
@@ -3534,7 +3534,7 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 2),
-			BPF_JUMP(BPF_JMP | BPF_JGE | BPF_X, 0, 0, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE | BPF_X, 0, 0, 1),
 			BPF_STMT(BPF_RET | BPF_K, 1),
 			BPF_STMT(BPF_RET | BPF_K, MAX_K)
 		},
@@ -3547,13 +3547,13 @@ static struct bpf_test tests[] = {
 		.u.insns = {
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
 			BPF_STMT(BPF_LD | BPF_B | BPF_IND, MAX_K),
-			BPF_JUMP(BPF_JMP | BPF_JGE | BPF_K, 1, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE | BPF_K, 1, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 10),
-			BPF_JUMP(BPF_JMP | BPF_JGE | BPF_K, 2, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE | BPF_K, 2, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 20),
-			BPF_JUMP(BPF_JMP | BPF_JGE | BPF_K, 3, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE | BPF_K, 3, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 30),
-			BPF_JUMP(BPF_JMP | BPF_JGE | BPF_K, 4, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE | BPF_K, 4, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 40),
 			BPF_STMT(BPF_RET | BPF_K, MAX_K)
 		},
@@ -3564,28 +3564,28 @@ static struct bpf_test tests[] = {
 	{
 		"JSET",
 		.u.insns = {
-			BPF_JUMP(BPF_JMP | BPF_JA, 0, 0, 0),
-			BPF_JUMP(BPF_JMP | BPF_JA, 1, 1, 1),
-			BPF_JUMP(BPF_JMP | BPF_JA, 0, 0, 0),
-			BPF_JUMP(BPF_JMP | BPF_JA, 0, 0, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JA, 0, 0, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JA, 1, 1, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JA, 0, 0, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JA, 0, 0, 0),
 			BPF_STMT(BPF_LDX | BPF_LEN, 0),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_SUB | BPF_K, 4),
+			BPF_STMT(BPF_ALU32 | BPF_SUB | BPF_K, 4),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_W | BPF_IND, 0),
-			BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 1, 0, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JSET | BPF_K, 1, 0, 1),
 			BPF_STMT(BPF_RET | BPF_K, 10),
-			BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 0x80000000, 0, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JSET | BPF_K, 0x80000000, 0, 1),
 			BPF_STMT(BPF_RET | BPF_K, 20),
-			BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 0xffffff, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JSET | BPF_K, 0xffffff, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 30),
-			BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 0xffffff, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JSET | BPF_K, 0xffffff, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 30),
-			BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 0xffffff, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JSET | BPF_K, 0xffffff, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 30),
-			BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 0xffffff, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JSET | BPF_K, 0xffffff, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 30),
-			BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 0xffffff, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JSET | BPF_K, 0xffffff, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 30),
 			BPF_STMT(BPF_RET | BPF_K, MAX_K)
 		},
@@ -3597,27 +3597,27 @@ static struct bpf_test tests[] = {
 		"tcpdump port 22",
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 12),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x86dd, 0, 8), /* IPv6 */
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x86dd, 0, 8), /* IPv6 */
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 20),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x84, 2, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x6, 1, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x11, 0, 17),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x84, 2, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x6, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x11, 0, 17),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 54),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 22, 14, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 22, 14, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 56),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 22, 12, 13),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x0800, 0, 12), /* IPv4 */
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 22, 12, 13),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x0800, 0, 12), /* IPv4 */
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 23),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x84, 2, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x6, 1, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x11, 0, 8),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x84, 2, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x6, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x11, 0, 8),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 20),
-			BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 0x1fff, 6, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JSET | BPF_K, 0x1fff, 6, 0),
 			BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 14),
 			BPF_STMT(BPF_LD | BPF_H | BPF_IND, 14),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 22, 2, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 22, 2, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_IND, 16),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 22, 0, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 22, 0, 1),
 			BPF_STMT(BPF_RET | BPF_K, 0xffff),
 			BPF_STMT(BPF_RET | BPF_K, 0),
 		},
@@ -3646,36 +3646,36 @@ static struct bpf_test tests[] = {
 			 * (len > 115 or len < 30000000000)' -d
 			 */
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 12),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x86dd, 30, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x800, 0, 29),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x86dd, 30, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x800, 0, 29),
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 23),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x6, 0, 27),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x6, 0, 27),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 20),
-			BPF_JUMP(BPF_JMP | BPF_JSET | BPF_K, 0x1fff, 25, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JSET | BPF_K, 0x1fff, 25, 0),
 			BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 14),
 			BPF_STMT(BPF_LD | BPF_H | BPF_IND, 14),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 22, 2, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 22, 2, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_IND, 16),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 22, 0, 20),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 22, 0, 20),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 16),
 			BPF_STMT(BPF_ST, 1),
 			BPF_STMT(BPF_LD | BPF_B | BPF_ABS, 14),
-			BPF_STMT(BPF_ALU | BPF_AND | BPF_K, 0xf),
-			BPF_STMT(BPF_ALU | BPF_LSH | BPF_K, 2),
+			BPF_STMT(BPF_ALU32 | BPF_AND | BPF_K, 0xf),
+			BPF_STMT(BPF_ALU32 | BPF_LSH | BPF_K, 2),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0x5), /* libpcap emits K on TAX */
 			BPF_STMT(BPF_LD | BPF_MEM, 1),
-			BPF_STMT(BPF_ALU | BPF_SUB | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_SUB | BPF_X, 0),
 			BPF_STMT(BPF_ST, 5),
 			BPF_STMT(BPF_LDX | BPF_B | BPF_MSH, 14),
 			BPF_STMT(BPF_LD | BPF_B | BPF_IND, 26),
-			BPF_STMT(BPF_ALU | BPF_AND | BPF_K, 0xf0),
-			BPF_STMT(BPF_ALU | BPF_RSH | BPF_K, 2),
+			BPF_STMT(BPF_ALU32 | BPF_AND | BPF_K, 0xf0),
+			BPF_STMT(BPF_ALU32 | BPF_RSH | BPF_K, 2),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0x9), /* libpcap emits K on TAX */
 			BPF_STMT(BPF_LD | BPF_MEM, 5),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_X, 0, 4, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_X, 0, 4, 0),
 			BPF_STMT(BPF_LD | BPF_LEN, 0),
-			BPF_JUMP(BPF_JMP | BPF_JGT | BPF_K, 0x73, 1, 0),
-			BPF_JUMP(BPF_JMP | BPF_JGE | BPF_K, 0xfc23ac00, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JGT | BPF_K, 0x73, 1, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE | BPF_K, 0xfc23ac00, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 0xffff),
 			BPF_STMT(BPF_RET | BPF_K, 0),
 		},
@@ -4545,7 +4545,7 @@ static struct bpf_test tests[] = {
 	{
 		"check: div_k_0",
 		.u.insns = {
-			BPF_STMT(BPF_ALU | BPF_DIV | BPF_K, 0),
+			BPF_STMT(BPF_ALU32 | BPF_DIV | BPF_K, 0),
 			BPF_STMT(BPF_RET | BPF_K, 0)
 		},
 		CLASSIC | FLAG_NO_DATA | FLAG_EXPECTED_FAIL,
@@ -4583,7 +4583,7 @@ static struct bpf_test tests[] = {
 		"JUMPS + HOLES",
 		.u.insns = {
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
-			BPF_JUMP(BPF_JMP | BPF_JGE, 0, 13, 15),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE, 0, 13, 15),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
@@ -4597,12 +4597,12 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ, 0x90c2894d, 3, 4),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ, 0x90c2894d, 3, 4),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ, 0x90c2894d, 1, 2),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ, 0x90c2894d, 1, 2),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
-			BPF_JUMP(BPF_JMP | BPF_JGE, 0, 14, 15),
-			BPF_JUMP(BPF_JMP | BPF_JGE, 0, 13, 14),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE, 0, 14, 15),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE, 0, 13, 14),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
@@ -4616,11 +4616,11 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ, 0x2ac28349, 2, 3),
-			BPF_JUMP(BPF_JMP | BPF_JEQ, 0x2ac28349, 1, 2),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ, 0x2ac28349, 2, 3),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ, 0x2ac28349, 1, 2),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
-			BPF_JUMP(BPF_JMP | BPF_JGE, 0, 14, 15),
-			BPF_JUMP(BPF_JMP | BPF_JGE, 0, 13, 14),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE, 0, 14, 15),
+			BPF_JUMP(BPF_JMP64 | BPF_JGE, 0, 13, 14),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
@@ -4634,8 +4634,8 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ, 0x90d2ff41, 2, 3),
-			BPF_JUMP(BPF_JMP | BPF_JEQ, 0x90d2ff41, 1, 2),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ, 0x90d2ff41, 2, 3),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ, 0x90d2ff41, 1, 2),
 			BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0),
@@ -4691,82 +4691,82 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_STX, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 0),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 1),
 			BPF_STMT(BPF_LDX | BPF_MEM, 1),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 2),
 			BPF_STMT(BPF_LDX | BPF_MEM, 2),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 3),
 			BPF_STMT(BPF_LDX | BPF_MEM, 3),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 4),
 			BPF_STMT(BPF_LDX | BPF_MEM, 4),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 5),
 			BPF_STMT(BPF_LDX | BPF_MEM, 5),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 6),
 			BPF_STMT(BPF_LDX | BPF_MEM, 6),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 7),
 			BPF_STMT(BPF_LDX | BPF_MEM, 7),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 8),
 			BPF_STMT(BPF_LDX | BPF_MEM, 8),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 9),
 			BPF_STMT(BPF_LDX | BPF_MEM, 9),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 10),
 			BPF_STMT(BPF_LDX | BPF_MEM, 10),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 11),
 			BPF_STMT(BPF_LDX | BPF_MEM, 11),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 12),
 			BPF_STMT(BPF_LDX | BPF_MEM, 12),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 13),
 			BPF_STMT(BPF_LDX | BPF_MEM, 13),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 14),
 			BPF_STMT(BPF_LDX | BPF_MEM, 14),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_STX, 15),
 			BPF_STMT(BPF_LDX | BPF_MEM, 15),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 1),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 1),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0),
 		},
@@ -4812,35 +4812,35 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_LDX | BPF_MEM, 0),
 			BPF_STMT(BPF_MISC | BPF_TXA, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 1),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 2),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 3),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 4),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 5),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 6),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 7),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 8),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 9),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 10),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 11),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 12),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 13),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 14),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_LDX | BPF_MEM, 15),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -4893,7 +4893,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } }
 	},
-	/* BPF_ALU | BPF_MOV | BPF_X */
+	/* BPF_ALU32 | BPF_MOV | BPF_X */
 	{
 		"ALU_MOV_X: dst = 2",
 		.u.insns_int = {
@@ -4938,7 +4938,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 4294967295U } },
 	},
-	/* BPF_ALU | BPF_MOV | BPF_K */
+	/* BPF_ALU32 | BPF_MOV | BPF_K */
 	{
 		"ALU_MOV_K: dst = 2",
 		.u.insns_int = {
@@ -5111,7 +5111,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffffffff } }
 	},
-	/* BPF_ALU | BPF_ADD | BPF_X */
+	/* BPF_ALU32 | BPF_ADD | BPF_X */
 	{
 		"ALU_ADD_X: 1 + 2 = 3",
 		.u.insns_int = {
@@ -5193,7 +5193,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_ALU | BPF_ADD | BPF_K */
+	/* BPF_ALU32 | BPF_ADD | BPF_K */
 	{
 		"ALU_ADD_K: 1 + 2 = 3",
 		.u.insns_int = {
@@ -5478,7 +5478,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
-	/* BPF_ALU | BPF_SUB | BPF_X */
+	/* BPF_ALU32 | BPF_SUB | BPF_X */
 	{
 		"ALU_SUB_X: 3 - 1 = 2",
 		.u.insns_int = {
@@ -5527,7 +5527,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_ALU | BPF_SUB | BPF_K */
+	/* BPF_ALU32 | BPF_SUB | BPF_K */
 	{
 		"ALU_SUB_K: 3 - 1 = 2",
 		.u.insns_int = {
@@ -5605,7 +5605,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, -1 } },
 	},
-	/* BPF_ALU | BPF_MUL | BPF_X */
+	/* BPF_ALU32 | BPF_MUL | BPF_X */
 	{
 		"ALU_MUL_X: 2 * 3 = 6",
 		.u.insns_int = {
@@ -5691,7 +5691,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x2236d88f } }
 	},
-	/* BPF_ALU | BPF_MUL | BPF_K */
+	/* BPF_ALU32 | BPF_MUL | BPF_K */
 	{
 		"ALU_MUL_K: 2 * 3 = 6",
 		.u.insns_int = {
@@ -5824,7 +5824,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xc28f5c28 } }
 	},
-	/* BPF_ALU | BPF_DIV | BPF_X */
+	/* BPF_ALU32 | BPF_DIV | BPF_X */
 	{
 		"ALU_DIV_X: 6 / 2 = 3",
 		.u.insns_int = {
@@ -5890,7 +5890,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
-	/* BPF_ALU | BPF_DIV | BPF_K */
+	/* BPF_ALU32 | BPF_DIV | BPF_K */
 	{
 		"ALU_DIV_K: 6 / 2 = 3",
 		.u.insns_int = {
@@ -5989,7 +5989,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x1 } },
 	},
-	/* BPF_ALU | BPF_MOD | BPF_X */
+	/* BPF_ALU32 | BPF_MOD | BPF_X */
 	{
 		"ALU_MOD_X: 3 % 2 = 1",
 		.u.insns_int = {
@@ -6038,7 +6038,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 2 } },
 	},
-	/* BPF_ALU | BPF_MOD | BPF_K */
+	/* BPF_ALU32 | BPF_MOD | BPF_K */
 	{
 		"ALU_MOD_K: 3 % 2 = 1",
 		.u.insns_int = {
@@ -6105,7 +6105,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 2 } },
 	},
-	/* BPF_ALU | BPF_AND | BPF_X */
+	/* BPF_ALU32 | BPF_AND | BPF_X */
 	{
 		"ALU_AND_X: 3 & 2 = 2",
 		.u.insns_int = {
@@ -6154,7 +6154,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffffffff } },
 	},
-	/* BPF_ALU | BPF_AND | BPF_K */
+	/* BPF_ALU32 | BPF_AND | BPF_K */
 	{
 		"ALU_AND_K: 3 & 2 = 2",
 		.u.insns_int = {
@@ -6317,7 +6317,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } }
 	},
-	/* BPF_ALU | BPF_OR | BPF_X */
+	/* BPF_ALU32 | BPF_OR | BPF_X */
 	{
 		"ALU_OR_X: 1 | 2 = 3",
 		.u.insns_int = {
@@ -6366,7 +6366,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xffffffff } },
 	},
-	/* BPF_ALU | BPF_OR | BPF_K */
+	/* BPF_ALU32 | BPF_OR | BPF_K */
 	{
 		"ALU_OR_K: 1 | 2 = 3",
 		.u.insns_int = {
@@ -6529,7 +6529,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } }
 	},
-	/* BPF_ALU | BPF_XOR | BPF_X */
+	/* BPF_ALU32 | BPF_XOR | BPF_X */
 	{
 		"ALU_XOR_X: 5 ^ 6 = 3",
 		.u.insns_int = {
@@ -6578,7 +6578,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0xfffffffe } },
 	},
-	/* BPF_ALU | BPF_XOR | BPF_K */
+	/* BPF_ALU32 | BPF_XOR | BPF_K */
 	{
 		"ALU_XOR_K: 5 ^ 6 = 3",
 		.u.insns_int = {
@@ -6741,7 +6741,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } }
 	},
-	/* BPF_ALU | BPF_LSH | BPF_X */
+	/* BPF_ALU32 | BPF_LSH | BPF_X */
 	{
 		"ALU_LSH_X: 1 << 1 = 2",
 		.u.insns_int = {
@@ -6902,7 +6902,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x01234567 } }
 	},
-	/* BPF_ALU | BPF_LSH | BPF_K */
+	/* BPF_ALU32 | BPF_LSH | BPF_K */
 	{
 		"ALU_LSH_K: 1 << 1 = 2",
 		.u.insns_int = {
@@ -7049,7 +7049,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x89abcdef } }
 	},
-	/* BPF_ALU | BPF_RSH | BPF_X */
+	/* BPF_ALU32 | BPF_RSH | BPF_X */
 	{
 		"ALU_RSH_X: 2 >> 1 = 1",
 		.u.insns_int = {
@@ -7210,7 +7210,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x81234567 } }
 	},
-	/* BPF_ALU | BPF_RSH | BPF_K */
+	/* BPF_ALU32 | BPF_RSH | BPF_K */
 	{
 		"ALU_RSH_K: 2 >> 1 = 1",
 		.u.insns_int = {
@@ -7357,7 +7357,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x89abcdef } }
 	},
-	/* BPF_ALU | BPF_ARSH | BPF_X */
+	/* BPF_ALU32 | BPF_ARSH | BPF_X */
 	{
 		"ALU32_ARSH_X: -1234 >> 7 = -10",
 		.u.insns_int = {
@@ -7482,7 +7482,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x81234567 } }
 	},
-	/* BPF_ALU | BPF_ARSH | BPF_K */
+	/* BPF_ALU32 | BPF_ARSH | BPF_K */
 	{
 		"ALU32_ARSH_K: -1234 >> 7 = -10",
 		.u.insns_int = {
@@ -7596,7 +7596,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x89abcdef } }
 	},
-	/* BPF_ALU | BPF_NEG */
+	/* BPF_ALU32 | BPF_NEG */
 	{
 		"ALU_NEG: -(3) = -3",
 		.u.insns_int = {
@@ -7641,7 +7641,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 3 } },
 	},
-	/* BPF_ALU | BPF_END | BPF_FROM_BE */
+	/* BPF_ALU32 | BPF_END | BPF_FROM_BE */
 	{
 		"ALU_END_FROM_BE 16: 0x0123456789abcdef -> 0xcdef",
 		.u.insns_int = {
@@ -7690,7 +7690,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, (u32) (cpu_to_be64(0x0123456789abcdefLL) >> 32) } },
 	},
-	/* BPF_ALU | BPF_END | BPF_FROM_BE, reversed */
+	/* BPF_ALU32 | BPF_END | BPF_FROM_BE, reversed */
 	{
 		"ALU_END_FROM_BE 16: 0xfedcba9876543210 -> 0x3210",
 		.u.insns_int = {
@@ -7739,7 +7739,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, (u32) (cpu_to_be64(0xfedcba9876543210ULL) >> 32) } },
 	},
-	/* BPF_ALU | BPF_END | BPF_FROM_LE */
+	/* BPF_ALU32 | BPF_END | BPF_FROM_LE */
 	{
 		"ALU_END_FROM_LE 16: 0x0123456789abcdef -> 0xefcd",
 		.u.insns_int = {
@@ -7788,7 +7788,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, (u32) (cpu_to_le64(0x0123456789abcdefLL) >> 32) } },
 	},
-	/* BPF_ALU | BPF_END | BPF_FROM_LE, reversed */
+	/* BPF_ALU32 | BPF_END | BPF_FROM_LE, reversed */
 	{
 		"ALU_END_FROM_LE 16: 0xfedcba9876543210 -> 0x1032",
 		.u.insns_int = {
@@ -9448,7 +9448,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, -12345678 } }
 	},
-	/* BPF_JMP | BPF_EXIT */
+	/* BPF_JMP64 | BPF_EXIT */
 	{
 		"JMP_EXIT",
 		.u.insns_int = {
@@ -9460,7 +9460,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 0x4711 } },
 	},
-	/* BPF_JMP | BPF_JA */
+	/* BPF_JMP64 | BPF_JA */
 	{
 		"JMP_JA: Unconditional jump: if (true) return 1",
 		.u.insns_int = {
@@ -9474,7 +9474,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSLT | BPF_K */
+	/* BPF_JMP64 | BPF_JSLT | BPF_K */
 	{
 		"JMP_JSLT_K: Signed jump: if (-2 < -1) return 1",
 		.u.insns_int = {
@@ -9503,7 +9503,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSGT | BPF_K */
+	/* BPF_JMP64 | BPF_JSGT | BPF_K */
 	{
 		"JMP_JSGT_K: Signed jump: if (-1 > -2) return 1",
 		.u.insns_int = {
@@ -9532,7 +9532,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSLE | BPF_K */
+	/* BPF_JMP64 | BPF_JSLE | BPF_K */
 	{
 		"JMP_JSLE_K: Signed jump: if (-2 <= -1) return 1",
 		.u.insns_int = {
@@ -9599,7 +9599,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSGE | BPF_K */
+	/* BPF_JMP64 | BPF_JSGE | BPF_K */
 	{
 		"JMP_JSGE_K: Signed jump: if (-1 >= -2) return 1",
 		.u.insns_int = {
@@ -9666,7 +9666,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JGT | BPF_K */
+	/* BPF_JMP64 | BPF_JGT | BPF_K */
 	{
 		"JMP_JGT_K: if (3 > 2) return 1",
 		.u.insns_int = {
@@ -9695,7 +9695,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JLT | BPF_K */
+	/* BPF_JMP64 | BPF_JLT | BPF_K */
 	{
 		"JMP_JLT_K: if (2 < 3) return 1",
 		.u.insns_int = {
@@ -9724,7 +9724,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JGE | BPF_K */
+	/* BPF_JMP64 | BPF_JGE | BPF_K */
 	{
 		"JMP_JGE_K: if (3 >= 2) return 1",
 		.u.insns_int = {
@@ -9739,7 +9739,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JLE | BPF_K */
+	/* BPF_JMP64 | BPF_JLE | BPF_K */
 	{
 		"JMP_JLE_K: if (2 <= 3) return 1",
 		.u.insns_int = {
@@ -9754,7 +9754,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JGT | BPF_K jump backwards */
+	/* BPF_JMP64 | BPF_JGT | BPF_K jump backwards */
 	{
 		"JMP_JGT_K: if (3 > 2) return 1 (jump backwards)",
 		.u.insns_int = {
@@ -9784,7 +9784,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JLT | BPF_K jump backwards */
+	/* BPF_JMP64 | BPF_JLT | BPF_K jump backwards */
 	{
 		"JMP_JGT_K: if (2 < 3) return 1 (jump backwards)",
 		.u.insns_int = {
@@ -9814,7 +9814,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JNE | BPF_K */
+	/* BPF_JMP64 | BPF_JNE | BPF_K */
 	{
 		"JMP_JNE_K: if (3 != 2) return 1",
 		.u.insns_int = {
@@ -9829,7 +9829,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JEQ | BPF_K */
+	/* BPF_JMP64 | BPF_JEQ | BPF_K */
 	{
 		"JMP_JEQ_K: if (3 == 3) return 1",
 		.u.insns_int = {
@@ -9844,7 +9844,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSET | BPF_K */
+	/* BPF_JMP64 | BPF_JSET | BPF_K */
 	{
 		"JMP_JSET_K: if (0x3 & 0x2) return 1",
 		.u.insns_int = {
@@ -9873,7 +9873,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSGT | BPF_X */
+	/* BPF_JMP64 | BPF_JSGT | BPF_X */
 	{
 		"JMP_JSGT_X: Signed jump: if (-1 > -2) return 1",
 		.u.insns_int = {
@@ -9904,7 +9904,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSLT | BPF_X */
+	/* BPF_JMP64 | BPF_JSLT | BPF_X */
 	{
 		"JMP_JSLT_X: Signed jump: if (-2 < -1) return 1",
 		.u.insns_int = {
@@ -9935,7 +9935,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSGE | BPF_X */
+	/* BPF_JMP64 | BPF_JSGE | BPF_X */
 	{
 		"JMP_JSGE_X: Signed jump: if (-1 >= -2) return 1",
 		.u.insns_int = {
@@ -9966,7 +9966,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSLE | BPF_X */
+	/* BPF_JMP64 | BPF_JSLE | BPF_X */
 	{
 		"JMP_JSLE_X: Signed jump: if (-2 <= -1) return 1",
 		.u.insns_int = {
@@ -9997,7 +9997,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JGT | BPF_X */
+	/* BPF_JMP64 | BPF_JGT | BPF_X */
 	{
 		"JMP_JGT_X: if (3 > 2) return 1",
 		.u.insns_int = {
@@ -10028,7 +10028,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JLT | BPF_X */
+	/* BPF_JMP64 | BPF_JLT | BPF_X */
 	{
 		"JMP_JLT_X: if (2 < 3) return 1",
 		.u.insns_int = {
@@ -10059,7 +10059,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JGE | BPF_X */
+	/* BPF_JMP64 | BPF_JGE | BPF_X */
 	{
 		"JMP_JGE_X: if (3 >= 2) return 1",
 		.u.insns_int = {
@@ -10090,7 +10090,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JLE | BPF_X */
+	/* BPF_JMP64 | BPF_JLE | BPF_X */
 	{
 		"JMP_JLE_X: if (2 <= 3) return 1",
 		.u.insns_int = {
@@ -10210,7 +10210,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JNE | BPF_X */
+	/* BPF_JMP64 | BPF_JNE | BPF_X */
 	{
 		"JMP_JNE_X: if (3 != 2) return 1",
 		.u.insns_int = {
@@ -10226,7 +10226,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JEQ | BPF_X */
+	/* BPF_JMP64 | BPF_JEQ | BPF_X */
 	{
 		"JMP_JEQ_X: if (3 == 3) return 1",
 		.u.insns_int = {
@@ -10242,7 +10242,7 @@ static struct bpf_test tests[] = {
 		{ },
 		{ { 0, 1 } },
 	},
-	/* BPF_JMP | BPF_JSET | BPF_X */
+	/* BPF_JMP64 | BPF_JSET | BPF_X */
 	{
 		"JMP_JSET_X: if (0x3 & 0x2) return 1",
 		.u.insns_int = {
@@ -11207,7 +11207,7 @@ static struct bpf_test tests[] = {
 			 * ret A
 			 */
 			BPF_STMT(BPF_LD | BPF_IMM, 0x42),
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0x0),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11221,7 +11221,7 @@ static struct bpf_test tests[] = {
 			 * A = A + 0x42
 			 * ret A
 			 */
-			BPF_STMT(BPF_ALU | BPF_ADD | BPF_K, 0x42),
+			BPF_STMT(BPF_ALU32 | BPF_ADD | BPF_K, 0x42),
 			BPF_STMT(BPF_RET | BPF_A, 0x0),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11237,7 +11237,7 @@ static struct bpf_test tests[] = {
 			 * ret A
 			 */
 			BPF_STMT(BPF_LD | BPF_IMM, 0x66),
-			BPF_STMT(BPF_ALU | BPF_SUB | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_SUB | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0x0),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11251,7 +11251,7 @@ static struct bpf_test tests[] = {
 			 * A = A - -0x66
 			 * ret A
 			 */
-			BPF_STMT(BPF_ALU | BPF_SUB | BPF_K, -0x66),
+			BPF_STMT(BPF_ALU32 | BPF_SUB | BPF_K, -0x66),
 			BPF_STMT(BPF_RET | BPF_A, 0x0),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11267,7 +11267,7 @@ static struct bpf_test tests[] = {
 			 * ret A
 			 */
 			BPF_STMT(BPF_LD | BPF_IMM, 0x42),
-			BPF_STMT(BPF_ALU | BPF_MUL | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_MUL | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_A, 0x0),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11281,7 +11281,7 @@ static struct bpf_test tests[] = {
 			 * A = A * 0x66
 			 * ret A
 			 */
-			BPF_STMT(BPF_ALU | BPF_MUL | BPF_K, 0x66),
+			BPF_STMT(BPF_ALU32 | BPF_MUL | BPF_K, 0x66),
 			BPF_STMT(BPF_RET | BPF_A, 0x0),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11297,7 +11297,7 @@ static struct bpf_test tests[] = {
 			 * ret 0x42
 			 */
 			BPF_STMT(BPF_LD | BPF_IMM, 0x42),
-			BPF_STMT(BPF_ALU | BPF_DIV | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_DIV | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_K, 0x42),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11311,7 +11311,7 @@ static struct bpf_test tests[] = {
 			 * A = A / 1
 			 * ret A
 			 */
-			BPF_STMT(BPF_ALU | BPF_DIV | BPF_K, 0x1),
+			BPF_STMT(BPF_ALU32 | BPF_DIV | BPF_K, 0x1),
 			BPF_STMT(BPF_RET | BPF_A, 0x0),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11327,7 +11327,7 @@ static struct bpf_test tests[] = {
 			 * ret 0x42
 			 */
 			BPF_STMT(BPF_LD | BPF_IMM, 0x42),
-			BPF_STMT(BPF_ALU | BPF_MOD | BPF_X, 0),
+			BPF_STMT(BPF_ALU32 | BPF_MOD | BPF_X, 0),
 			BPF_STMT(BPF_RET | BPF_K, 0x42),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11341,7 +11341,7 @@ static struct bpf_test tests[] = {
 			 * A = A mod 1
 			 * ret A
 			 */
-			BPF_STMT(BPF_ALU | BPF_MOD | BPF_K, 0x1),
+			BPF_STMT(BPF_ALU32 | BPF_MOD | BPF_K, 0x1),
 			BPF_STMT(BPF_RET | BPF_A, 0x0),
 		},
 		CLASSIC | FLAG_NO_DATA,
@@ -11356,7 +11356,7 @@ static struct bpf_test tests[] = {
 			 * ret 0x42
 			 * ret 0x66
 			 */
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0x0, 0, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0x0, 0, 1),
 			BPF_STMT(BPF_RET | BPF_K, 0x42),
 			BPF_STMT(BPF_RET | BPF_K, 0x66),
 		},
@@ -11374,7 +11374,7 @@ static struct bpf_test tests[] = {
 			 * ret 0x66
 			 */
 			BPF_STMT(BPF_LD | BPF_IMM, 0x0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_X, 0x0, 0, 1),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_X, 0x0, 0, 1),
 			BPF_STMT(BPF_RET | BPF_K, 0x42),
 			BPF_STMT(BPF_RET | BPF_K, 0x66),
 		},
@@ -11477,8 +11477,8 @@ static struct bpf_test tests[] = {
 			BPF_STMT(BPF_LD | BPF_IMM, 0xffff0000),
 			BPF_STMT(BPF_MISC | BPF_TAX, 0),
 			BPF_STMT(BPF_LD | BPF_IMM, 0xfefbbc12),
-			BPF_STMT(BPF_ALU | BPF_AND | BPF_X, 0),
-			BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, 0xfefb0000, 1, 0),
+			BPF_STMT(BPF_ALU32 | BPF_AND | BPF_X, 0),
+			BPF_JUMP(BPF_JMP64 | BPF_JEQ | BPF_K, 0xfefb0000, 1, 0),
 			BPF_STMT(BPF_RET | BPF_K, 1),
 			BPF_STMT(BPF_RET | BPF_K, 2),
 		},
@@ -14874,7 +14874,7 @@ struct tail_call_test {
 
 #define TAIL_CALL(offset)			       \
 	BPF_LD_IMM64(R2, TAIL_CALL_MARKER),	       \
-	BPF_RAW_INSN(BPF_ALU | BPF_MOV | BPF_K, R3, 0, \
+	BPF_RAW_INSN(BPF_ALU32 | BPF_MOV | BPF_K, R3, 0, \
 		     offset, TAIL_CALL_MARKER),	       \
 	BPF_JMP_IMM(BPF_TAIL_CALL, 0, 0, 0)
 
@@ -15101,7 +15101,7 @@ static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
 				insn[1].imm = ((u64)(long)progs) >> 32;
 				break;
 
-			case BPF_ALU | BPF_MOV | BPF_K:
+			case BPF_ALU32 | BPF_MOV | BPF_K:
 				if (insn->imm != TAIL_CALL_MARKER)
 					break;
 				if (insn->off == TAIL_CALL_NULL)
@@ -15113,7 +15113,7 @@ static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
 				insn->off = 0;
 				break;
 
-			case BPF_JMP | BPF_CALL:
+			case BPF_JMP64 | BPF_CALL:
 				if (insn->src_reg != BPF_PSEUDO_CALL)
 					break;
 				switch (insn->imm) {
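
Note on the lib/test_bpf.c churn above: it is purely a spelling change. BPF_ALU32 and BPF_JMP64 are meant as aliases for the existing class values introduced by patch 1/4, so every test vector still encodes to exactly the same opcode bytes. A minimal sketch of what such aliases amount to (placement and comments are illustrative, not copied from the patch; 0x04/0x05 are the long-standing class encodings from include/uapi/linux/bpf_common.h):

	/* illustrative only -- aliases, no new instruction encodings */
	#define BPF_ALU32	BPF_ALU		/* class 0x04 */
	#define BPF_JMP64	BPF_JMP		/* class 0x05 */

Because the numeric values are unchanged, the interpreter and JITs keep matching the same codes; only the source-level spelling differs.
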
diff --git a/net/core/filter.c b/net/core/filter.c
index 0039cf1..1cd5897 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -625,27 +625,27 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 
 		switch (fp->code) {
 		/* All arithmetic insns and skb loads map as-is. */
-		case BPF_ALU | BPF_ADD | BPF_X:
-		case BPF_ALU | BPF_ADD | BPF_K:
-		case BPF_ALU | BPF_SUB | BPF_X:
-		case BPF_ALU | BPF_SUB | BPF_K:
-		case BPF_ALU | BPF_AND | BPF_X:
-		case BPF_ALU | BPF_AND | BPF_K:
-		case BPF_ALU | BPF_OR | BPF_X:
-		case BPF_ALU | BPF_OR | BPF_K:
-		case BPF_ALU | BPF_LSH | BPF_X:
-		case BPF_ALU | BPF_LSH | BPF_K:
-		case BPF_ALU | BPF_RSH | BPF_X:
-		case BPF_ALU | BPF_RSH | BPF_K:
-		case BPF_ALU | BPF_XOR | BPF_X:
-		case BPF_ALU | BPF_XOR | BPF_K:
-		case BPF_ALU | BPF_MUL | BPF_X:
-		case BPF_ALU | BPF_MUL | BPF_K:
-		case BPF_ALU | BPF_DIV | BPF_X:
-		case BPF_ALU | BPF_DIV | BPF_K:
-		case BPF_ALU | BPF_MOD | BPF_X:
-		case BPF_ALU | BPF_MOD | BPF_K:
-		case BPF_ALU | BPF_NEG:
+		case BPF_ALU32 | BPF_ADD | BPF_X:
+		case BPF_ALU32 | BPF_ADD | BPF_K:
+		case BPF_ALU32 | BPF_SUB | BPF_X:
+		case BPF_ALU32 | BPF_SUB | BPF_K:
+		case BPF_ALU32 | BPF_AND | BPF_X:
+		case BPF_ALU32 | BPF_AND | BPF_K:
+		case BPF_ALU32 | BPF_OR | BPF_X:
+		case BPF_ALU32 | BPF_OR | BPF_K:
+		case BPF_ALU32 | BPF_LSH | BPF_X:
+		case BPF_ALU32 | BPF_LSH | BPF_K:
+		case BPF_ALU32 | BPF_RSH | BPF_X:
+		case BPF_ALU32 | BPF_RSH | BPF_K:
+		case BPF_ALU32 | BPF_XOR | BPF_X:
+		case BPF_ALU32 | BPF_XOR | BPF_K:
+		case BPF_ALU32 | BPF_MUL | BPF_X:
+		case BPF_ALU32 | BPF_MUL | BPF_K:
+		case BPF_ALU32 | BPF_DIV | BPF_X:
+		case BPF_ALU32 | BPF_DIV | BPF_K:
+		case BPF_ALU32 | BPF_MOD | BPF_X:
+		case BPF_ALU32 | BPF_MOD | BPF_K:
+		case BPF_ALU32 | BPF_NEG:
 		case BPF_LD | BPF_ABS | BPF_W:
 		case BPF_LD | BPF_ABS | BPF_H:
 		case BPF_LD | BPF_ABS | BPF_B:
@@ -666,8 +666,8 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 				break;
 			}
 
-			if (fp->code == (BPF_ALU | BPF_DIV | BPF_X) ||
-			    fp->code == (BPF_ALU | BPF_MOD | BPF_X)) {
+			if (fp->code == (BPF_ALU32 | BPF_DIV | BPF_X) ||
+			    fp->code == (BPF_ALU32 | BPF_MOD | BPF_X)) {
 				*insn++ = BPF_MOV32_REG(BPF_REG_X, BPF_REG_X);
 				/* Error with exception code on div/mod by 0.
 				 * For cBPF programs, this was always return 0.
@@ -702,20 +702,20 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 		insn->off = off;					\
 	} while (0)
 
-		case BPF_JMP | BPF_JA:
+		case BPF_JMP64 | BPF_JA:
 			target = i + fp->k + 1;
 			insn->code = fp->code;
 			BPF_EMIT_JMP;
 			break;
 
-		case BPF_JMP | BPF_JEQ | BPF_K:
-		case BPF_JMP | BPF_JEQ | BPF_X:
-		case BPF_JMP | BPF_JSET | BPF_K:
-		case BPF_JMP | BPF_JSET | BPF_X:
-		case BPF_JMP | BPF_JGT | BPF_K:
-		case BPF_JMP | BPF_JGT | BPF_X:
-		case BPF_JMP | BPF_JGE | BPF_K:
-		case BPF_JMP | BPF_JGE | BPF_X:
+		case BPF_JMP64 | BPF_JEQ | BPF_K:
+		case BPF_JMP64 | BPF_JEQ | BPF_X:
+		case BPF_JMP64 | BPF_JSET | BPF_K:
+		case BPF_JMP64 | BPF_JSET | BPF_X:
+		case BPF_JMP64 | BPF_JGT | BPF_K:
+		case BPF_JMP64 | BPF_JGT | BPF_X:
+		case BPF_JMP64 | BPF_JGE | BPF_K:
+		case BPF_JMP64 | BPF_JGE | BPF_X:
 			if (BPF_SRC(fp->code) == BPF_K && (int) fp->k < 0) {
 				/* BPF immediates are signed, zero extend
 				 * immediate into tmp register and use it
@@ -735,7 +735,7 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 
 			/* Common case where 'jump_false' is next insn. */
 			if (fp->jf == 0) {
-				insn->code = BPF_JMP | BPF_OP(fp->code) | bpf_src;
+				insn->code = BPF_JMP64 | BPF_OP(fp->code) | bpf_src;
 				target = i + fp->jt + 1;
 				BPF_EMIT_JMP;
 				break;
@@ -745,13 +745,13 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 			if (fp->jt == 0) {
 				switch (BPF_OP(fp->code)) {
 				case BPF_JEQ:
-					insn->code = BPF_JMP | BPF_JNE | bpf_src;
+					insn->code = BPF_JMP64 | BPF_JNE | bpf_src;
 					break;
 				case BPF_JGT:
-					insn->code = BPF_JMP | BPF_JLE | bpf_src;
+					insn->code = BPF_JMP64 | BPF_JLE | bpf_src;
 					break;
 				case BPF_JGE:
-					insn->code = BPF_JMP | BPF_JLT | bpf_src;
+					insn->code = BPF_JMP64 | BPF_JLT | bpf_src;
 					break;
 				default:
 					goto jmp_rest;
@@ -764,11 +764,11 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 jmp_rest:
 			/* Other jumps are mapped into two insns: Jxx and JA. */
 			target = i + fp->jt + 1;
-			insn->code = BPF_JMP | BPF_OP(fp->code) | bpf_src;
+			insn->code = BPF_JMP64 | BPF_OP(fp->code) | bpf_src;
 			BPF_EMIT_JMP;
 			insn++;
 
-			insn->code = BPF_JMP | BPF_JA;
+			insn->code = BPF_JMP64 | BPF_JA;
 			target = i + fp->jf + 1;
 			BPF_EMIT_JMP;
 			break;
@@ -936,19 +936,19 @@ static int check_load_and_stores(const struct sock_filter *filter, int flen)
 				goto error;
 			}
 			break;
-		case BPF_JMP | BPF_JA:
+		case BPF_JMP64 | BPF_JA:
 			/* A jump must set masks on target */
 			masks[pc + 1 + filter[pc].k] &= memvalid;
 			memvalid = ~0;
 			break;
-		case BPF_JMP | BPF_JEQ | BPF_K:
-		case BPF_JMP | BPF_JEQ | BPF_X:
-		case BPF_JMP | BPF_JGE | BPF_K:
-		case BPF_JMP | BPF_JGE | BPF_X:
-		case BPF_JMP | BPF_JGT | BPF_K:
-		case BPF_JMP | BPF_JGT | BPF_X:
-		case BPF_JMP | BPF_JSET | BPF_K:
-		case BPF_JMP | BPF_JSET | BPF_X:
+		case BPF_JMP64 | BPF_JEQ | BPF_K:
+		case BPF_JMP64 | BPF_JEQ | BPF_X:
+		case BPF_JMP64 | BPF_JGE | BPF_K:
+		case BPF_JMP64 | BPF_JGE | BPF_X:
+		case BPF_JMP64 | BPF_JGT | BPF_K:
+		case BPF_JMP64 | BPF_JGT | BPF_X:
+		case BPF_JMP64 | BPF_JSET | BPF_K:
+		case BPF_JMP64 | BPF_JSET | BPF_X:
 			/* A jump must set masks on targets */
 			masks[pc + 1 + filter[pc].jt] &= memvalid;
 			masks[pc + 1 + filter[pc].jf] &= memvalid;
@@ -965,27 +965,27 @@ static bool chk_code_allowed(u16 code_to_probe)
 {
 	static const bool codes[] = {
 		/* 32 bit ALU operations */
-		[BPF_ALU | BPF_ADD | BPF_K] = true,
-		[BPF_ALU | BPF_ADD | BPF_X] = true,
-		[BPF_ALU | BPF_SUB | BPF_K] = true,
-		[BPF_ALU | BPF_SUB | BPF_X] = true,
-		[BPF_ALU | BPF_MUL | BPF_K] = true,
-		[BPF_ALU | BPF_MUL | BPF_X] = true,
-		[BPF_ALU | BPF_DIV | BPF_K] = true,
-		[BPF_ALU | BPF_DIV | BPF_X] = true,
-		[BPF_ALU | BPF_MOD | BPF_K] = true,
-		[BPF_ALU | BPF_MOD | BPF_X] = true,
-		[BPF_ALU | BPF_AND | BPF_K] = true,
-		[BPF_ALU | BPF_AND | BPF_X] = true,
-		[BPF_ALU | BPF_OR | BPF_K] = true,
-		[BPF_ALU | BPF_OR | BPF_X] = true,
-		[BPF_ALU | BPF_XOR | BPF_K] = true,
-		[BPF_ALU | BPF_XOR | BPF_X] = true,
-		[BPF_ALU | BPF_LSH | BPF_K] = true,
-		[BPF_ALU | BPF_LSH | BPF_X] = true,
-		[BPF_ALU | BPF_RSH | BPF_K] = true,
-		[BPF_ALU | BPF_RSH | BPF_X] = true,
-		[BPF_ALU | BPF_NEG] = true,
+		[BPF_ALU32 | BPF_ADD | BPF_K] = true,
+		[BPF_ALU32 | BPF_ADD | BPF_X] = true,
+		[BPF_ALU32 | BPF_SUB | BPF_K] = true,
+		[BPF_ALU32 | BPF_SUB | BPF_X] = true,
+		[BPF_ALU32 | BPF_MUL | BPF_K] = true,
+		[BPF_ALU32 | BPF_MUL | BPF_X] = true,
+		[BPF_ALU32 | BPF_DIV | BPF_K] = true,
+		[BPF_ALU32 | BPF_DIV | BPF_X] = true,
+		[BPF_ALU32 | BPF_MOD | BPF_K] = true,
+		[BPF_ALU32 | BPF_MOD | BPF_X] = true,
+		[BPF_ALU32 | BPF_AND | BPF_K] = true,
+		[BPF_ALU32 | BPF_AND | BPF_X] = true,
+		[BPF_ALU32 | BPF_OR | BPF_K] = true,
+		[BPF_ALU32 | BPF_OR | BPF_X] = true,
+		[BPF_ALU32 | BPF_XOR | BPF_K] = true,
+		[BPF_ALU32 | BPF_XOR | BPF_X] = true,
+		[BPF_ALU32 | BPF_LSH | BPF_K] = true,
+		[BPF_ALU32 | BPF_LSH | BPF_X] = true,
+		[BPF_ALU32 | BPF_RSH | BPF_K] = true,
+		[BPF_ALU32 | BPF_RSH | BPF_X] = true,
+		[BPF_ALU32 | BPF_NEG] = true,
 		/* Load instructions */
 		[BPF_LD | BPF_W | BPF_ABS] = true,
 		[BPF_LD | BPF_H | BPF_ABS] = true,
@@ -1010,15 +1010,15 @@ static bool chk_code_allowed(u16 code_to_probe)
 		[BPF_RET | BPF_K] = true,
 		[BPF_RET | BPF_A] = true,
 		/* Jump instructions */
-		[BPF_JMP | BPF_JA] = true,
-		[BPF_JMP | BPF_JEQ | BPF_K] = true,
-		[BPF_JMP | BPF_JEQ | BPF_X] = true,
-		[BPF_JMP | BPF_JGE | BPF_K] = true,
-		[BPF_JMP | BPF_JGE | BPF_X] = true,
-		[BPF_JMP | BPF_JGT | BPF_K] = true,
-		[BPF_JMP | BPF_JGT | BPF_X] = true,
-		[BPF_JMP | BPF_JSET | BPF_K] = true,
-		[BPF_JMP | BPF_JSET | BPF_X] = true,
+		[BPF_JMP64 | BPF_JA] = true,
+		[BPF_JMP64 | BPF_JEQ | BPF_K] = true,
+		[BPF_JMP64 | BPF_JEQ | BPF_X] = true,
+		[BPF_JMP64 | BPF_JGE | BPF_K] = true,
+		[BPF_JMP64 | BPF_JGE | BPF_X] = true,
+		[BPF_JMP64 | BPF_JGT | BPF_K] = true,
+		[BPF_JMP64 | BPF_JGT | BPF_X] = true,
+		[BPF_JMP64 | BPF_JSET | BPF_K] = true,
+		[BPF_JMP64 | BPF_JSET | BPF_X] = true,
 	};
 
 	if (code_to_probe >= ARRAY_SIZE(codes))
@@ -1068,14 +1068,14 @@ static int bpf_check_classic(const struct sock_filter *filter,
 
 		/* Some instructions need special checks */
 		switch (ftest->code) {
-		case BPF_ALU | BPF_DIV | BPF_K:
-		case BPF_ALU | BPF_MOD | BPF_K:
+		case BPF_ALU32 | BPF_DIV | BPF_K:
+		case BPF_ALU32 | BPF_MOD | BPF_K:
 			/* Check for division by zero */
 			if (ftest->k == 0)
 				return -EINVAL;
 			break;
-		case BPF_ALU | BPF_LSH | BPF_K:
-		case BPF_ALU | BPF_RSH | BPF_K:
+		case BPF_ALU32 | BPF_LSH | BPF_K:
+		case BPF_ALU32 | BPF_RSH | BPF_K:
 			if (ftest->k >= 32)
 				return -EINVAL;
 			break;
@@ -1087,7 +1087,7 @@ static int bpf_check_classic(const struct sock_filter *filter,
 			if (ftest->k >= BPF_MEMWORDS)
 				return -EINVAL;
 			break;
-		case BPF_JMP | BPF_JA:
+		case BPF_JMP64 | BPF_JA:
 			/* Note, the large ftest->k might cause loops.
 			 * Compare this with conditional jumps below,
 			 * where offsets are limited. --ANK (981016)
@@ -1095,14 +1095,14 @@ static int bpf_check_classic(const struct sock_filter *filter,
 			if (ftest->k >= (unsigned int)(flen - pc - 1))
 				return -EINVAL;
 			break;
-		case BPF_JMP | BPF_JEQ | BPF_K:
-		case BPF_JMP | BPF_JEQ | BPF_X:
-		case BPF_JMP | BPF_JGE | BPF_K:
-		case BPF_JMP | BPF_JGE | BPF_X:
-		case BPF_JMP | BPF_JGT | BPF_K:
-		case BPF_JMP | BPF_JGT | BPF_X:
-		case BPF_JMP | BPF_JSET | BPF_K:
-		case BPF_JMP | BPF_JSET | BPF_X:
+		case BPF_JMP64 | BPF_JEQ | BPF_K:
+		case BPF_JMP64 | BPF_JEQ | BPF_X:
+		case BPF_JMP64 | BPF_JGE | BPF_K:
+		case BPF_JMP64 | BPF_JGE | BPF_X:
+		case BPF_JMP64 | BPF_JGT | BPF_K:
+		case BPF_JMP64 | BPF_JGT | BPF_X:
+		case BPF_JMP64 | BPF_JSET | BPF_K:
+		case BPF_JMP64 | BPF_JSET | BPF_X:
 			/* Both conditionals must be safe */
 			if (pc + ftest->jt + 1 >= flen ||
 			    pc + ftest->jf + 1 >= flen)
@@ -8607,7 +8607,7 @@ static int bpf_unclone_prologue(struct bpf_insn *insn_buf, bool direct_write,
 	/* ret = bpf_skb_pull_data(skb, 0); */
 	*insn++ = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
 	*insn++ = BPF_ALU64_REG(BPF_XOR, BPF_REG_2, BPF_REG_2);
-	*insn++ = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+	*insn++ = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			       BPF_FUNC_skb_pull_data);
 	/* if (!ret)
 	 *      goto restore;
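
For the bpf_convert_filter() hunks, the conversion logic itself is untouched: a classic conditional still becomes either a single eBPF jump (when one branch falls through, possibly with the condition inverted) or a Jxx plus an unconditional JA. A rough sketch of the single-insn case; the register mapping of classic A and the final offset are internal to the converter, so the values below are placeholders, not taken from this patch:

	/* cBPF "jgt #64, jt=0, jf=3": the true branch falls through, so
	 * the converter emits the inverted test as one jump to jf.
	 */
	struct bpf_insn insn = {
		.code    = BPF_JMP64 | BPF_JLE | BPF_K,	/* JGT inverted */
		.dst_reg = BPF_REG_0,	/* assumed mapping of classic A */
		.imm     = 64,
		.off     = 3,		/* placeholder; the real offset is
					 * derived from recorded addresses */
	};
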
diff --git a/samples/bpf/bpf_insn.h b/samples/bpf/bpf_insn.h
index 29c3bb6..1c55a77 100644
--- a/samples/bpf/bpf_insn.h
+++ b/samples/bpf/bpf_insn.h
@@ -17,7 +17,7 @@ struct bpf_insn;
 
 #define BPF_ALU32_REG(OP, DST, SRC)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_OP(OP) | BPF_X,		\
+		.code  = BPF_ALU32 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = 0,					\
@@ -35,7 +35,7 @@ struct bpf_insn;
 
 #define BPF_ALU32_IMM(OP, DST, IMM)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_OP(OP) | BPF_K,		\
+		.code  = BPF_ALU32 | BPF_OP(OP) | BPF_K,	\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -53,7 +53,7 @@ struct bpf_insn;
 
 #define BPF_MOV32_REG(DST, SRC)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_MOV | BPF_X,		\
+		.code  = BPF_ALU32 | BPF_MOV | BPF_X,		\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = 0,					\
@@ -71,7 +71,7 @@ struct bpf_insn;
 
 #define BPF_MOV32_IMM(DST, IMM)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_MOV | BPF_K,		\
+		.code  = BPF_ALU32 | BPF_MOV | BPF_K,		\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -174,7 +174,7 @@ struct bpf_insn;
 
 #define BPF_JMP_REG(OP, DST, SRC, OFF)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_OP(OP) | BPF_X,		\
+		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = OFF,					\
@@ -194,7 +194,7 @@ struct bpf_insn;
 
 #define BPF_JMP_IMM(OP, DST, IMM, OFF)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_OP(OP) | BPF_K,		\
+		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_K,	\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = OFF,					\
@@ -224,7 +224,7 @@ struct bpf_insn;
 
 #define BPF_EXIT_INSN()						\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_EXIT,			\
+		.code  = BPF_JMP64 | BPF_EXIT,			\
 		.dst_reg = 0,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
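
Since the samples' insn macros only swap the class spelling, a compile-time check makes the no-op nature explicit. This is a throwaway sketch, not part of the patch:

	#include <linux/bpf.h>
	#include "bpf_insn.h"

	_Static_assert(BPF_ALU32 == BPF_ALU, "BPF_ALU32 must alias BPF_ALU");
	_Static_assert(BPF_JMP64 == BPF_JMP, "BPF_JMP64 must alias BPF_JMP");

	/* e.g. BPF_MOV32_IMM(BPF_REG_0, 1).code stays 0xb4 either way */
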
diff --git a/samples/bpf/cookie_uid_helper_example.c b/samples/bpf/cookie_uid_helper_example.c
index f0df3dd..ddc6223 100644
--- a/samples/bpf/cookie_uid_helper_example.c
+++ b/samples/bpf/cookie_uid_helper_example.c
@@ -87,7 +87,7 @@ static void prog_load(void)
 		 * pc1: BPF_FUNC_get_socket_cookie takes one parameter,
 		 * R1: sk_buff
 		 */
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 				BPF_FUNC_get_socket_cookie),
 		/* pc2-4: save &socketCookie to r7 for future usage*/
 		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
@@ -99,7 +99,7 @@ static void prog_load(void)
 		 */
 		BPF_LD_MAP_FD(BPF_REG_1, map_fd),
 		BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 				BPF_FUNC_map_lookup_elem),
 		/*
 		 * pc9. if r0 != 0x0, go to pc+14, since we have the cookie
@@ -108,7 +108,7 @@ static void prog_load(void)
 		 */
 		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 14),
 		BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 				BPF_FUNC_get_socket_uid),
 		/*
 		 * Place a struct stats in the R10 stack and sequentially
@@ -137,7 +137,7 @@ static void prog_load(void)
 		BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -32),
 		BPF_MOV64_IMM(BPF_REG_4, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 				BPF_FUNC_map_update_elem),
 		BPF_JMP_IMM(BPF_JA, 0, 0, 5),
 		/*
diff --git a/samples/bpf/sock_example.c b/samples/bpf/sock_example.c
index 5b66f24..3e8d74d 100644
--- a/samples/bpf/sock_example.c
+++ b/samples/bpf/sock_example.c
@@ -52,7 +52,7 @@ static int test_sock(void)
 		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
 		BPF_LD_MAP_FD(BPF_REG_1, map_fd),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
diff --git a/samples/bpf/test_cgrp2_attach.c b/samples/bpf/test_cgrp2_attach.c
index 68ce694..b8331e7 100644
--- a/samples/bpf/test_cgrp2_attach.c
+++ b/samples/bpf/test_cgrp2_attach.c
@@ -51,7 +51,7 @@ static int prog_load(int map_fd, int verdict)
 		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
 		BPF_LD_MAP_FD(BPF_REG_1, map_fd), /* load map fd to r1 */
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
@@ -62,7 +62,7 @@ static int prog_load(int map_fd, int verdict)
 		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
 		BPF_LD_MAP_FD(BPF_REG_1, map_fd),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, offsetof(struct __sk_buff, len)), /* r1 = skb->len */
 
diff --git a/samples/bpf/test_cgrp2_sock.c b/samples/bpf/test_cgrp2_sock.c
index a0811df..5447bce 100644
--- a/samples/bpf/test_cgrp2_sock.c
+++ b/samples/bpf/test_cgrp2_sock.c
@@ -48,7 +48,7 @@ static int prog_load(__u32 idx, __u32 mark, __u32 prio)
 	/* set mark on socket */
 	struct bpf_insn prog_mark[] = {
 		/* get uid of process */
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_current_uid_gid),
 		BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffffffff),
 
diff --git a/samples/seccomp/bpf-direct.c b/samples/seccomp/bpf-direct.c
index c09e4a1..eda0b2a 100644
--- a/samples/seccomp/bpf-direct.c
+++ b/samples/seccomp/bpf-direct.c
@@ -114,29 +114,29 @@ static int install_filter(void)
 		/* Grab the system call number */
 		BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_nr),
 		/* Jump table for the allowed syscalls */
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_rt_sigreturn, 0, 1),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_rt_sigreturn, 0, 1),
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
 #ifdef __NR_sigreturn
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_sigreturn, 0, 1),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_sigreturn, 0, 1),
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
 #endif
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_exit_group, 0, 1),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_exit_group, 0, 1),
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_exit, 0, 1),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_exit, 0, 1),
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_read, 1, 0),
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_write, 3, 2),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_read, 1, 0),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_write, 3, 2),
 
 		/* Check that read is only using stdin. */
 		BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_arg(0)),
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDIN_FILENO, 4, 0),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, STDIN_FILENO, 4, 0),
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL),
 
 		/* Check that write is only using stdout */
 		BPF_STMT(BPF_LD+BPF_W+BPF_ABS, syscall_arg(0)),
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDOUT_FILENO, 1, 0),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, STDOUT_FILENO, 1, 0),
 		/* Trap attempts to write to stderr */
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, STDERR_FILENO, 1, 2),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, STDERR_FILENO, 1, 2),
 
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_TRAP),
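
The seccomp samples are plain classic BPF, so they only see the rename. For context, this is roughly how such a filter ends up installed (standard prctl(2) usage, matching the rest of install_filter() in this file; shown here only as a usage reminder, not as part of the patch):

	#include <sys/prctl.h>
	#include <linux/filter.h>
	#include <linux/seccomp.h>

	static int install(struct sock_filter *filter, unsigned short count)
	{
		struct sock_fprog prog = {
			.len    = count,
			.filter = filter,
		};

		if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
			return -1;
		return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
	}
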
diff --git a/samples/seccomp/bpf-helper.c b/samples/seccomp/bpf-helper.c
index ae260d7..5187790 100644
--- a/samples/seccomp/bpf-helper.c
+++ b/samples/seccomp/bpf-helper.c
@@ -30,7 +30,7 @@ int bpf_resolve_jumps(struct bpf_labels *labels,
 	for (i = 0; i < count; ++i) {
 		size_t offset = count - i - 1;
 		struct sock_filter *instr = &filter[offset];
-		if (instr->code != (BPF_JMP+BPF_JA))
+		if (instr->code != (BPF_JMP64+BPF_JA))
 			continue;
 		switch ((instr->jt<<8)|instr->jf) {
 		case (JUMP_JT<<8)|JUMP_JF:
diff --git a/samples/seccomp/bpf-helper.h b/samples/seccomp/bpf-helper.h
index 417e48a..b19812b 100644
--- a/samples/seccomp/bpf-helper.h
+++ b/samples/seccomp/bpf-helper.h
@@ -47,13 +47,13 @@ void seccomp_bpf_print(struct sock_filter *filter, size_t count);
 #define DENY \
 	BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_KILL)
 #define JUMP(labels, label) \
-	BPF_JUMP(BPF_JMP+BPF_JA, FIND_LABEL((labels), (label)), \
+	BPF_JUMP(BPF_JMP64+BPF_JA, FIND_LABEL((labels), (label)), \
 		 JUMP_JT, JUMP_JF)
 #define LABEL(labels, label) \
-	BPF_JUMP(BPF_JMP+BPF_JA, FIND_LABEL((labels), (label)), \
+	BPF_JUMP(BPF_JMP64+BPF_JA, FIND_LABEL((labels), (label)), \
 		 LABEL_JT, LABEL_JF)
 #define SYSCALL(nr, jt) \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (nr), 0, 1), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (nr), 0, 1), \
 	jt
 
 /* Lame, but just an example */
@@ -147,31 +147,31 @@ union arg64 {
 	BPF_STMT(BPF_ST, 1) /* hi -> M[1] */
 
 #define JEQ32(value, jt) \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (value), 0, 1), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (value), 0, 1), \
 	jt
 
 #define JNE32(value, jt) \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (value), 1, 0), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (value), 1, 0), \
 	jt
 
 #define JA32(value, jt) \
-	BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (value), 0, 1), \
+	BPF_JUMP(BPF_JMP64+BPF_JSET+BPF_K, (value), 0, 1), \
 	jt
 
 #define JGE32(value, jt) \
-	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 0, 1), \
+	BPF_JUMP(BPF_JMP64+BPF_JGE+BPF_K, (value), 0, 1), \
 	jt
 
 #define JGT32(value, jt) \
-	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 0, 1), \
+	BPF_JUMP(BPF_JMP64+BPF_JGT+BPF_K, (value), 0, 1), \
 	jt
 
 #define JLE32(value, jt) \
-	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 1, 0), \
+	BPF_JUMP(BPF_JMP64+BPF_JGT+BPF_K, (value), 1, 0), \
 	jt
 
 #define JLT32(value, jt) \
-	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 1, 0), \
+	BPF_JUMP(BPF_JMP64+BPF_JGE+BPF_K, (value), 1, 0), \
 	jt
 
 /*
@@ -180,78 +180,78 @@ union arg64 {
  */
 #define JEQ64(lo, hi, jt) \
 	/* if (hi != arg.hi) goto NOMATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (hi), 0, 5), \
 	BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
 	/* if (lo != arg.lo) goto NOMATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 0, 2), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (lo), 0, 2), \
 	BPF_STMT(BPF_LD+BPF_MEM, 1), \
 	jt, \
 	BPF_STMT(BPF_LD+BPF_MEM, 1)
 
 #define JNE64(lo, hi, jt) \
 	/* if (hi != arg.hi) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 3), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (hi), 0, 3), \
 	BPF_STMT(BPF_LD+BPF_MEM, 0), \
 	/* if (lo != arg.lo) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 2, 0), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (lo), 2, 0), \
 	BPF_STMT(BPF_LD+BPF_MEM, 1), \
 	jt, \
 	BPF_STMT(BPF_LD+BPF_MEM, 1)
 
 #define JA64(lo, hi, jt) \
 	/* if (hi & arg.hi) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (hi), 3, 0), \
+	BPF_JUMP(BPF_JMP64+BPF_JSET+BPF_K, (hi), 3, 0), \
 	BPF_STMT(BPF_LD+BPF_MEM, 0), \
 	/* if (lo & arg.lo) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (lo), 0, 2), \
+	BPF_JUMP(BPF_JMP64+BPF_JSET+BPF_K, (lo), 0, 2), \
 	BPF_STMT(BPF_LD+BPF_MEM, 1), \
 	jt, \
 	BPF_STMT(BPF_LD+BPF_MEM, 1)
 
 #define JGE64(lo, hi, jt) \
 	/* if (hi > arg.hi) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \
+	BPF_JUMP(BPF_JMP64+BPF_JGT+BPF_K, (hi), 4, 0), \
 	/* if (hi != arg.hi) goto NOMATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (hi), 0, 5), \
 	BPF_STMT(BPF_LD+BPF_MEM, 0), \
 	/* if (lo >= arg.lo) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (lo), 0, 2), \
+	BPF_JUMP(BPF_JMP64+BPF_JGE+BPF_K, (lo), 0, 2), \
 	BPF_STMT(BPF_LD+BPF_MEM, 1), \
 	jt, \
 	BPF_STMT(BPF_LD+BPF_MEM, 1)
 
 #define JGT64(lo, hi, jt) \
 	/* if (hi > arg.hi) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \
+	BPF_JUMP(BPF_JMP64+BPF_JGT+BPF_K, (hi), 4, 0), \
 	/* if (hi != arg.hi) goto NOMATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (hi), 0, 5), \
 	BPF_STMT(BPF_LD+BPF_MEM, 0), \
 	/* if (lo > arg.lo) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 0, 2), \
+	BPF_JUMP(BPF_JMP64+BPF_JGT+BPF_K, (lo), 0, 2), \
 	BPF_STMT(BPF_LD+BPF_MEM, 1), \
 	jt, \
 	BPF_STMT(BPF_LD+BPF_MEM, 1)
 
 #define JLE64(lo, hi, jt) \
 	/* if (hi < arg.hi) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (hi), 0, 4), \
+	BPF_JUMP(BPF_JMP64+BPF_JGE+BPF_K, (hi), 0, 4), \
 	/* if (hi != arg.hi) goto NOMATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (hi), 0, 5), \
 	BPF_STMT(BPF_LD+BPF_MEM, 0), \
 	/* if (lo <= arg.lo) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 2, 0), \
+	BPF_JUMP(BPF_JMP64+BPF_JGT+BPF_K, (lo), 2, 0), \
 	BPF_STMT(BPF_LD+BPF_MEM, 1), \
 	jt, \
 	BPF_STMT(BPF_LD+BPF_MEM, 1)
 
 #define JLT64(lo, hi, jt) \
 	/* if (hi < arg.hi) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (hi), 0, 4), \
+	BPF_JUMP(BPF_JMP64+BPF_JGE+BPF_K, (hi), 0, 4), \
 	/* if (hi != arg.hi) goto NOMATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+	BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (hi), 0, 5), \
 	BPF_STMT(BPF_LD+BPF_MEM, 0), \
 	/* if (lo < arg.lo) goto MATCH; */ \
-	BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (lo), 2, 0), \
+	BPF_JUMP(BPF_JMP64+BPF_JGE+BPF_K, (lo), 2, 0), \
 	BPF_STMT(BPF_LD+BPF_MEM, 1), \
 	jt, \
 	BPF_STMT(BPF_LD+BPF_MEM, 1)
diff --git a/samples/seccomp/dropper.c b/samples/seccomp/dropper.c
index 4bca4b7..982c770 100644
--- a/samples/seccomp/dropper.c
+++ b/samples/seccomp/dropper.c
@@ -30,10 +30,10 @@ static int install_filter(int arch, int nr, int error)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD+BPF_W+BPF_ABS,
 			 (offsetof(struct seccomp_data, arch))),
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, arch, 0, 3),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, arch, 0, 3),
 		BPF_STMT(BPF_LD+BPF_W+BPF_ABS,
 			 (offsetof(struct seccomp_data, nr))),
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, nr, 0, 1),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, nr, 0, 1),
 		BPF_STMT(BPF_RET+BPF_K,
 			 SECCOMP_RET_ERRNO|(error & SECCOMP_RET_DATA)),
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
diff --git a/samples/seccomp/user-trap.c b/samples/seccomp/user-trap.c
index 20291ec6..108ef31 100644
--- a/samples/seccomp/user-trap.c
+++ b/samples/seccomp/user-trap.c
@@ -88,7 +88,7 @@ static int user_trap_syscall(int nr, unsigned int flags)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD+BPF_W+BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, nr, 0, 1),
+		BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, nr, 0, 1),
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_USER_NOTIF),
 		BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
 	};
diff --git a/tools/bpf/bpf_dbg.c b/tools/bpf/bpf_dbg.c
index 00e560a..6f7ed34 100644
--- a/tools/bpf/bpf_dbg.c
+++ b/tools/bpf/bpf_dbg.c
@@ -56,22 +56,22 @@
 
 #define BPF_LDX_B	(BPF_LDX | BPF_B)
 #define BPF_LDX_W	(BPF_LDX | BPF_W)
-#define BPF_JMP_JA	(BPF_JMP | BPF_JA)
-#define BPF_JMP_JEQ	(BPF_JMP | BPF_JEQ)
-#define BPF_JMP_JGT	(BPF_JMP | BPF_JGT)
-#define BPF_JMP_JGE	(BPF_JMP | BPF_JGE)
-#define BPF_JMP_JSET	(BPF_JMP | BPF_JSET)
-#define BPF_ALU_ADD	(BPF_ALU | BPF_ADD)
-#define BPF_ALU_SUB	(BPF_ALU | BPF_SUB)
-#define BPF_ALU_MUL	(BPF_ALU | BPF_MUL)
-#define BPF_ALU_DIV	(BPF_ALU | BPF_DIV)
-#define BPF_ALU_MOD	(BPF_ALU | BPF_MOD)
-#define BPF_ALU_NEG	(BPF_ALU | BPF_NEG)
-#define BPF_ALU_AND	(BPF_ALU | BPF_AND)
-#define BPF_ALU_OR	(BPF_ALU | BPF_OR)
-#define BPF_ALU_XOR	(BPF_ALU | BPF_XOR)
-#define BPF_ALU_LSH	(BPF_ALU | BPF_LSH)
-#define BPF_ALU_RSH	(BPF_ALU | BPF_RSH)
+#define BPF_JMP_JA	(BPF_JMP64 | BPF_JA)
+#define BPF_JMP_JEQ	(BPF_JMP64 | BPF_JEQ)
+#define BPF_JMP_JGT	(BPF_JMP64 | BPF_JGT)
+#define BPF_JMP_JGE	(BPF_JMP64 | BPF_JGE)
+#define BPF_JMP_JSET	(BPF_JMP64 | BPF_JSET)
+#define BPF_ALU_ADD	(BPF_ALU32 | BPF_ADD)
+#define BPF_ALU_SUB	(BPF_ALU32 | BPF_SUB)
+#define BPF_ALU_MUL	(BPF_ALU32 | BPF_MUL)
+#define BPF_ALU_DIV	(BPF_ALU32 | BPF_DIV)
+#define BPF_ALU_MOD	(BPF_ALU32 | BPF_MOD)
+#define BPF_ALU_NEG	(BPF_ALU32 | BPF_NEG)
+#define BPF_ALU_AND	(BPF_ALU32 | BPF_AND)
+#define BPF_ALU_OR	(BPF_ALU32 | BPF_OR)
+#define BPF_ALU_XOR	(BPF_ALU32 | BPF_XOR)
+#define BPF_ALU_LSH	(BPF_ALU32 | BPF_LSH)
+#define BPF_ALU_RSH	(BPF_ALU32 | BPF_RSH)
 #define BPF_MISC_TAX	(BPF_MISC | BPF_TAX)
 #define BPF_MISC_TXA	(BPF_MISC | BPF_TXA)
 #define BPF_LD_B	(BPF_LD | BPF_B)
@@ -428,7 +428,7 @@ static void bpf_disasm(const struct sock_filter f, unsigned int i)
 	snprintf(buf, sizeof(buf), fmt, val);
 	buf[sizeof(buf) - 1] = 0;
 
-	if ((BPF_CLASS(f.code) == BPF_JMP && BPF_OP(f.code) != BPF_JA))
+	if ((BPF_CLASS(f.code) == BPF_JMP64 && BPF_OP(f.code) != BPF_JA))
 		rl_printf("l%d:\t%s %s, l%d, l%d\n", i, op, buf,
 			  i + 1 + f.jt, i + 1 + f.jf);
 	else
diff --git a/tools/bpf/bpf_exp.y b/tools/bpf/bpf_exp.y
index dfb7254..3dd6b0a 100644
--- a/tools/bpf/bpf_exp.y
+++ b/tools/bpf/bpf_exp.y
@@ -209,234 +209,234 @@ stx
 jmp
 	: OP_JMP label {
 		bpf_set_jmp_label($2, JKL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JA, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JA, 0, 0, 0); }
 	;
 
 jeq
 	: OP_JEQ '#' number ',' label ',' label {
 		bpf_set_jmp_label($5, JTL);
 		bpf_set_jmp_label($7, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JEQ | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JEQ | BPF_K, 0, 0, $3); }
 	| OP_JEQ 'x' ',' label ',' label {
 		bpf_set_jmp_label($4, JTL);
 		bpf_set_jmp_label($6, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JEQ | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JEQ | BPF_X, 0, 0, 0); }
 	| OP_JEQ '%' 'x' ',' label ',' label {
 		bpf_set_jmp_label($5, JTL);
 		bpf_set_jmp_label($7, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JEQ | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JEQ | BPF_X, 0, 0, 0); }
 	| OP_JEQ '#' number ',' label {
 		bpf_set_jmp_label($5, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JEQ | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JEQ | BPF_K, 0, 0, $3); }
 	| OP_JEQ 'x' ',' label {
 		bpf_set_jmp_label($4, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JEQ | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JEQ | BPF_X, 0, 0, 0); }
 	| OP_JEQ '%' 'x' ',' label {
 		bpf_set_jmp_label($5, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JEQ | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JEQ | BPF_X, 0, 0, 0); }
 	;
 
 jneq
 	: OP_JNEQ '#' number ',' label {
 		bpf_set_jmp_label($5, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JEQ | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JEQ | BPF_K, 0, 0, $3); }
 	| OP_JNEQ 'x' ',' label {
 		bpf_set_jmp_label($4, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JEQ | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JEQ | BPF_X, 0, 0, 0); }
 	| OP_JNEQ '%' 'x' ',' label {
 		bpf_set_jmp_label($5, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JEQ | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JEQ | BPF_X, 0, 0, 0); }
 	;
 
 jlt
 	: OP_JLT '#' number ',' label {
 		bpf_set_jmp_label($5, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGE | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGE | BPF_K, 0, 0, $3); }
 	| OP_JLT 'x' ',' label {
 		bpf_set_jmp_label($4, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGE | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGE | BPF_X, 0, 0, 0); }
 	| OP_JLT '%' 'x' ',' label {
 		bpf_set_jmp_label($5, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGE | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGE | BPF_X, 0, 0, 0); }
 	;
 
 jle
 	: OP_JLE '#' number ',' label {
 		bpf_set_jmp_label($5, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGT | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGT | BPF_K, 0, 0, $3); }
 	| OP_JLE 'x' ',' label {
 		bpf_set_jmp_label($4, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGT | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGT | BPF_X, 0, 0, 0); }
 	| OP_JLE '%' 'x' ',' label {
 		bpf_set_jmp_label($5, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGT | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGT | BPF_X, 0, 0, 0); }
 	;
 
 jgt
 	: OP_JGT '#' number ',' label ',' label {
 		bpf_set_jmp_label($5, JTL);
 		bpf_set_jmp_label($7, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGT | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGT | BPF_K, 0, 0, $3); }
 	| OP_JGT 'x' ',' label ',' label {
 		bpf_set_jmp_label($4, JTL);
 		bpf_set_jmp_label($6, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGT | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGT | BPF_X, 0, 0, 0); }
 	| OP_JGT '%' 'x' ',' label ',' label {
 		bpf_set_jmp_label($5, JTL);
 		bpf_set_jmp_label($7, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGT | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGT | BPF_X, 0, 0, 0); }
 	| OP_JGT '#' number ',' label {
 		bpf_set_jmp_label($5, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGT | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGT | BPF_K, 0, 0, $3); }
 	| OP_JGT 'x' ',' label {
 		bpf_set_jmp_label($4, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGT | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGT | BPF_X, 0, 0, 0); }
 	| OP_JGT '%' 'x' ',' label {
 		bpf_set_jmp_label($5, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGT | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGT | BPF_X, 0, 0, 0); }
 	;
 
 jge
 	: OP_JGE '#' number ',' label ',' label {
 		bpf_set_jmp_label($5, JTL);
 		bpf_set_jmp_label($7, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGE | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGE | BPF_K, 0, 0, $3); }
 	| OP_JGE 'x' ',' label ',' label {
 		bpf_set_jmp_label($4, JTL);
 		bpf_set_jmp_label($6, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGE | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGE | BPF_X, 0, 0, 0); }
 	| OP_JGE '%' 'x' ',' label ',' label {
 		bpf_set_jmp_label($5, JTL);
 		bpf_set_jmp_label($7, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGE | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGE | BPF_X, 0, 0, 0); }
 	| OP_JGE '#' number ',' label {
 		bpf_set_jmp_label($5, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGE | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGE | BPF_K, 0, 0, $3); }
 	| OP_JGE 'x' ',' label {
 		bpf_set_jmp_label($4, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGE | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGE | BPF_X, 0, 0, 0); }
 	| OP_JGE '%' 'x' ',' label {
 		bpf_set_jmp_label($5, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JGE | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JGE | BPF_X, 0, 0, 0); }
 	;
 
 jset
 	: OP_JSET '#' number ',' label ',' label {
 		bpf_set_jmp_label($5, JTL);
 		bpf_set_jmp_label($7, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JSET | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JSET | BPF_K, 0, 0, $3); }
 	| OP_JSET 'x' ',' label ',' label {
 		bpf_set_jmp_label($4, JTL);
 		bpf_set_jmp_label($6, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JSET | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JSET | BPF_X, 0, 0, 0); }
 	| OP_JSET '%' 'x' ',' label ',' label {
 		bpf_set_jmp_label($5, JTL);
 		bpf_set_jmp_label($7, JFL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JSET | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JSET | BPF_X, 0, 0, 0); }
 	| OP_JSET '#' number ',' label {
 		bpf_set_jmp_label($5, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JSET | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JSET | BPF_K, 0, 0, $3); }
 	| OP_JSET 'x' ',' label {
 		bpf_set_jmp_label($4, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JSET | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JSET | BPF_X, 0, 0, 0); }
 	| OP_JSET '%' 'x' ',' label {
 		bpf_set_jmp_label($5, JTL);
-		bpf_set_curr_instr(BPF_JMP | BPF_JSET | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_JMP64 | BPF_JSET | BPF_X, 0, 0, 0); }
 	;
 
 add
 	: OP_ADD '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_ADD | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_ADD | BPF_K, 0, 0, $3); }
 	| OP_ADD 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_ADD | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_ADD | BPF_X, 0, 0, 0); }
 	| OP_ADD '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_ADD | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_ADD | BPF_X, 0, 0, 0); }
 	;
 
 sub
 	: OP_SUB '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_SUB | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_SUB | BPF_K, 0, 0, $3); }
 	| OP_SUB 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_SUB | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_SUB | BPF_X, 0, 0, 0); }
 	| OP_SUB '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_SUB | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_SUB | BPF_X, 0, 0, 0); }
 	;
 
 mul
 	: OP_MUL '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_MUL | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_MUL | BPF_K, 0, 0, $3); }
 	| OP_MUL 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_MUL | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_MUL | BPF_X, 0, 0, 0); }
 	| OP_MUL '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_MUL | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_MUL | BPF_X, 0, 0, 0); }
 	;
 
 div
 	: OP_DIV '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_DIV | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_DIV | BPF_K, 0, 0, $3); }
 	| OP_DIV 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_DIV | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_DIV | BPF_X, 0, 0, 0); }
 	| OP_DIV '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_DIV | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_DIV | BPF_X, 0, 0, 0); }
 	;
 
 mod
 	: OP_MOD '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_MOD | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_MOD | BPF_K, 0, 0, $3); }
 	| OP_MOD 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_MOD | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_MOD | BPF_X, 0, 0, 0); }
 	| OP_MOD '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_MOD | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_MOD | BPF_X, 0, 0, 0); }
 	;
 
 neg
 	: OP_NEG {
-		bpf_set_curr_instr(BPF_ALU | BPF_NEG, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_NEG, 0, 0, 0); }
 	;
 
 and
 	: OP_AND '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_AND | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_AND | BPF_K, 0, 0, $3); }
 	| OP_AND 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_AND | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_AND | BPF_X, 0, 0, 0); }
 	| OP_AND '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_AND | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_AND | BPF_X, 0, 0, 0); }
 	;
 
 or
 	: OP_OR '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_OR | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_OR | BPF_K, 0, 0, $3); }
 	| OP_OR 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_OR | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_OR | BPF_X, 0, 0, 0); }
 	| OP_OR '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_OR | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_OR | BPF_X, 0, 0, 0); }
 	;
 
 xor
 	: OP_XOR '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_XOR | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_XOR | BPF_K, 0, 0, $3); }
 	| OP_XOR 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_XOR | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_XOR | BPF_X, 0, 0, 0); }
 	| OP_XOR '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_XOR | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_XOR | BPF_X, 0, 0, 0); }
 	;
 
 lsh
 	: OP_LSH '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_LSH | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_LSH | BPF_K, 0, 0, $3); }
 	| OP_LSH 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_LSH | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_LSH | BPF_X, 0, 0, 0); }
 	| OP_LSH '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_LSH | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_LSH | BPF_X, 0, 0, 0); }
 	;
 
 rsh
 	: OP_RSH '#' number {
-		bpf_set_curr_instr(BPF_ALU | BPF_RSH | BPF_K, 0, 0, $3); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_RSH | BPF_K, 0, 0, $3); }
 	| OP_RSH 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_RSH | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_RSH | BPF_X, 0, 0, 0); }
 	| OP_RSH '%' 'x' {
-		bpf_set_curr_instr(BPF_ALU | BPF_RSH | BPF_X, 0, 0, 0); }
+		bpf_set_curr_instr(BPF_ALU32 | BPF_RSH | BPF_X, 0, 0, 0); }
 	;
 
 ret
diff --git a/tools/bpf/bpftool/cfg.c b/tools/bpf/bpftool/cfg.c
index 1951219..8f509ab 100644
--- a/tools/bpf/bpftool/cfg.c
+++ b/tools/bpf/bpftool/cfg.c
@@ -138,7 +138,7 @@ static bool cfg_partition_funcs(struct cfg *cfg, struct bpf_insn *cur,
 		return true;
 
 	for (; cur < end; cur++) {
-		if (cur->code != (BPF_JMP | BPF_CALL))
+		if (cur->code != (BPF_JMP64 | BPF_CALL))
 			continue;
 		if (cur->src_reg != BPF_PSEUDO_CALL)
 			continue;
@@ -159,7 +159,7 @@ static bool cfg_partition_funcs(struct cfg *cfg, struct bpf_insn *cur,
 
 static bool is_jmp_insn(__u8 code)
 {
-	return BPF_CLASS(code) == BPF_JMP || BPF_CLASS(code) == BPF_JMP32;
+	return BPF_CLASS(code) == BPF_JMP64 || BPF_CLASS(code) == BPF_JMP32;
 }
 
 static bool func_partition_bb_head(struct func_node *func)
diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index 736bdec..cf111f1 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -41,7 +41,7 @@
 
 #define BPF_ALU32_REG(OP, DST, SRC)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_OP(OP) | BPF_X,		\
+		.code  = BPF_ALU32 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = 0,					\
@@ -59,7 +59,7 @@
 
 #define BPF_ALU32_IMM(OP, DST, IMM)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_OP(OP) | BPF_K,		\
+		.code  = BPF_ALU32 | BPF_OP(OP) | BPF_K,	\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -69,7 +69,7 @@
 
 #define BPF_ENDIAN(TYPE, DST, LEN)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_END | BPF_SRC(TYPE),	\
+		.code  = BPF_ALU32 | BPF_END | BPF_SRC(TYPE),	\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -87,7 +87,7 @@
 
 #define BPF_MOV32_REG(DST, SRC)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_MOV | BPF_X,		\
+		.code  = BPF_ALU32 | BPF_MOV | BPF_X,		\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = 0,					\
@@ -105,7 +105,7 @@
 
 #define BPF_MOV32_IMM(DST, IMM)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_MOV | BPF_K,		\
+		.code  = BPF_ALU32 | BPF_MOV | BPF_K,		\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -123,7 +123,7 @@
 
 #define BPF_MOV32_RAW(TYPE, DST, SRC, IMM)			\
 	((struct bpf_insn) {					\
-		.code  = BPF_ALU | BPF_MOV | BPF_SRC(TYPE),	\
+		.code  = BPF_ALU32 | BPF_MOV | BPF_SRC(TYPE),	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = 0,					\
@@ -209,7 +209,7 @@
 
 #define BPF_JMP_REG(OP, DST, SRC, OFF)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_OP(OP) | BPF_X,		\
+		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
 		.src_reg = SRC,					\
 		.off   = OFF,					\
@@ -229,7 +229,7 @@
 
 #define BPF_JMP_IMM(OP, DST, IMM, OFF)				\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_OP(OP) | BPF_K,		\
+		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_K,	\
 		.dst_reg = DST,					\
 		.src_reg = 0,					\
 		.off   = OFF,					\
@@ -249,7 +249,7 @@
 
 #define BPF_JMP_A(OFF)						\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_JA,			\
+		.code  = BPF_JMP64 | BPF_JA,			\
 		.dst_reg = 0,					\
 		.src_reg = 0,					\
 		.off   = OFF,					\
@@ -259,7 +259,7 @@
 
 #define BPF_EMIT_CALL(FUNC)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_CALL,			\
+		.code  = BPF_JMP64 | BPF_CALL,			\
 		.dst_reg = 0,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
@@ -322,7 +322,7 @@
 
 #define BPF_CALL_REL(TGT)					\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_CALL,			\
+		.code  = BPF_JMP64 | BPF_CALL,			\
 		.dst_reg = 0,					\
 		.src_reg = BPF_PSEUDO_CALL,			\
 		.off   = 0,					\
@@ -332,7 +332,7 @@
 
 #define BPF_EXIT_INSN()						\
 	((struct bpf_insn) {					\
-		.code  = BPF_JMP | BPF_EXIT,			\
+		.code  = BPF_JMP64 | BPF_EXIT,			\
 		.dst_reg = 0,					\
 		.src_reg = 0,					\
 		.off   = 0,					\
diff --git a/tools/lib/bpf/bpf_endian.h b/tools/lib/bpf/bpf_endian.h
index ec9db4f..6f8db8b 100644
--- a/tools/lib/bpf/bpf_endian.h
+++ b/tools/lib/bpf/bpf_endian.h
@@ -40,7 +40,7 @@
  * requested byte order.
  *
  * Note, LLVM's BPF target has different __builtin_bswapX()
- * semantics. It does map to BPF_ALU | BPF_END | BPF_TO_BE
+ * semantics. It does map to BPF_ALU32 | BPF_END | BPF_TO_BE
  * in bpfel and bpfeb case, which means below, that we map
  * to cpu_to_be16(). We could use it unconditionally in BPF
  * case, but better not rely on it, so that this header here
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index eed5cec..1afea14 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -706,7 +706,7 @@ static void bpf_program__exit(struct bpf_program *prog)
 
 static bool insn_is_subprog_call(const struct bpf_insn *insn)
 {
-	return BPF_CLASS(insn->code) == BPF_JMP &&
+	return BPF_CLASS(insn->code) == BPF_JMP64 &&
 	       BPF_OP(insn->code) == BPF_CALL &&
 	       BPF_SRC(insn->code) == BPF_K &&
 	       insn->src_reg == BPF_PSEUDO_CALL &&
@@ -716,7 +716,7 @@ static bool insn_is_subprog_call(const struct bpf_insn *insn)
 
 static bool is_call_insn(const struct bpf_insn *insn)
 {
-	return insn->code == (BPF_JMP | BPF_CALL);
+	return insn->code == (BPF_JMP64 | BPF_CALL);
 }
 
 static bool insn_is_pseudo_func(struct bpf_insn *insn)
@@ -4045,7 +4045,7 @@ static int bpf_program__record_reloc(struct bpf_program *prog,
 		}
 		pr_debug("prog '%s': found extern #%d '%s' (sym %d) for insn #%u\n",
 			 prog->name, i, ext->name, ext->sym_idx, insn_idx);
-		if (insn->code == (BPF_JMP | BPF_CALL))
+		if (insn->code == (BPF_JMP64 | BPF_CALL))
 			reloc_desc->type = RELO_EXTERN_FUNC;
 		else
 			reloc_desc->type = RELO_EXTERN_VAR;
@@ -4701,7 +4701,7 @@ static int probe_kern_probe_read_kernel(void)
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),	/* r1 += -8 */
 		BPF_MOV64_IMM(BPF_REG_2, 8),		/* r2 = 8 */
 		BPF_MOV64_IMM(BPF_REG_3, 0),		/* r3 = 0 */
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_probe_read_kernel),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_probe_read_kernel),
 		BPF_EXIT_INSN(),
 	};
 	int fd, insn_cnt = ARRAY_SIZE(insns);
@@ -4800,7 +4800,7 @@ static int probe_perf_link(void)
 static int probe_kern_bpf_cookie(void)
 {
 	struct bpf_insn insns[] = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_attach_cookie),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_attach_cookie),
 		BPF_EXIT_INSN(),
 	};
 	int ret, insn_cnt = ARRAY_SIZE(insns);
@@ -5833,7 +5833,7 @@ static void poison_map_ldimm64(struct bpf_program *prog, int relo_idx,
 
 	/* we turn single ldimm64 into two identical invalid calls */
 	for (i = 0; i < 2; i++) {
-		insn->code = BPF_JMP | BPF_CALL;
+		insn->code = BPF_JMP64 | BPF_CALL;
 		insn->dst_reg = 0;
 		insn->src_reg = 0;
 		insn->off = 0;
@@ -6663,7 +6663,7 @@ static int bpf_object__collect_relos(struct bpf_object *obj)
 
 static bool insn_is_helper_call(struct bpf_insn *insn, enum bpf_func_id *func_id)
 {
-	if (BPF_CLASS(insn->code) == BPF_JMP &&
+	if (BPF_CLASS(insn->code) == BPF_JMP64 &&
 	    BPF_OP(insn->code) == BPF_CALL &&
 	    BPF_SRC(insn->code) == BPF_K &&
 	    insn->src_reg == 0 &&
diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
index 4ac02c2..bcad766 100644
--- a/tools/lib/bpf/linker.c
+++ b/tools/lib/bpf/linker.c
@@ -2078,7 +2078,7 @@ static int linker_append_elf_relos(struct bpf_linker *linker, struct src_obj *ob
 					 * (from two different object files)
 					 */
 					insn = dst_linked_sec->raw_data + dst_rel->r_offset;
-					if (insn->code == (BPF_JMP | BPF_CALL))
+					if (insn->code == (BPF_JMP64 | BPF_CALL))
 						insn->imm += sec->dst_off / sizeof(struct bpf_insn);
 					else
 						insn->imm += sec->dst_off;
diff --git a/tools/lib/bpf/relo_core.c b/tools/lib/bpf/relo_core.c
index c4b0e81..2faaf75 100644
--- a/tools/lib/bpf/relo_core.c
+++ b/tools/lib/bpf/relo_core.c
@@ -971,7 +971,7 @@ static void bpf_core_poison_insn(const char *prog_name, int relo_idx,
 {
 	pr_debug("prog '%s': relo #%d: substituting insn #%d w/ invalid insn\n",
 		 prog_name, relo_idx, insn_idx);
-	insn->code = BPF_JMP | BPF_CALL;
+	insn->code = BPF_JMP64 | BPF_CALL;
 	insn->dst_reg = 0;
 	insn->src_reg = 0;
 	insn->off = 0;
@@ -1045,7 +1045,7 @@ int bpf_core_patch_insn(const char *prog_name, struct bpf_insn *insn,
 	new_val = res->new_val;
 
 	switch (class) {
-	case BPF_ALU:
+	case BPF_ALU32:
 	case BPF_ALU64:
 		if (BPF_SRC(insn->code) != BPF_K)
 			return -EINVAL;
diff --git a/tools/perf/util/bpf-prologue.c b/tools/perf/util/bpf-prologue.c
index 9887ae0..16b3957 100644
--- a/tools/perf/util/bpf-prologue.c
+++ b/tools/perf/util/bpf-prologue.c
@@ -335,7 +335,7 @@ prologue_relocate(struct bpf_insn_pos *pos, struct bpf_insn *error_code,
 		u8 class = BPF_CLASS(insn->code);
 		u8 opcode;
 
-		if (class != BPF_JMP)
+		if (class != BPF_JMP64)
 			continue;
 		opcode = BPF_OP(insn->code);
 		if (opcode == BPF_CALL)
diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
index de1b5b9..e9a214b 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf.c
@@ -5571,7 +5571,7 @@ static struct prog_info_raw_test {
 	.str_sec = "\0int\0unsigned int\0a\0b\0c\0d\0funcA\0funcB",
 	.str_sec_size = sizeof("\0int\0unsigned int\0a\0b\0c\0d\0funcA\0funcB"),
 	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -5602,7 +5602,7 @@ static struct prog_info_raw_test {
 	.str_sec = "\0int\0unsigned int\0a\0b\0c\0d\0funcA\0funcB",
 	.str_sec_size = sizeof("\0int\0unsigned int\0a\0b\0c\0d\0funcA\0funcB"),
 	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -5634,7 +5634,7 @@ static struct prog_info_raw_test {
 	.str_sec = "\0int\0unsigned int\0a\0b\0c\0d\0funcA\0funcB",
 	.str_sec_size = sizeof("\0int\0unsigned int\0a\0b\0c\0d\0funcA\0funcB"),
 	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -5666,7 +5666,7 @@ static struct prog_info_raw_test {
 	.str_sec = "\0int\0unsigned int\0a\0b\0c\0d\0funcA\0funcB",
 	.str_sec_size = sizeof("\0int\0unsigned int\0a\0b\0c\0d\0funcA\0funcB"),
 	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
index db0b7ba..87c5434 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
@@ -42,20 +42,20 @@ static int prog_load_cnt(int verdict, int val)
 		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
 		BPF_LD_MAP_FD(BPF_REG_1, map_fd),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_1, val), /* r1 = 1 */
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
 
 		BPF_LD_MAP_FD(BPF_REG_1, cgroup_storage_fd),
 		BPF_MOV64_IMM(BPF_REG_2, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 		BPF_MOV64_IMM(BPF_REG_1, val),
 		BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
 
 		BPF_LD_MAP_FD(BPF_REG_1, percpu_cgroup_storage_fd),
 		BPF_MOV64_IMM(BPF_REG_2, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 		BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 0x1),
 		BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_3, 0),
diff --git a/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c b/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
index c7a47b5..ca3bcd7 100644
--- a/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
+++ b/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
@@ -15,7 +15,7 @@ void serial_test_flow_dissector_load_bytes(void)
 		// BPF_REG_4 - 4th argument: copy one byte
 		BPF_MOV64_IMM(BPF_REG_4, 1),
 		// bpf_skb_load_bytes(ctx, sizeof(pkt_v4), ptr, 1)
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_skb_load_bytes),
 		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 		// if (ret == 0) return BPF_DROP (2)
diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt.c b/tools/testing/selftests/bpf/prog_tests/sockopt.c
index aa4debf6..3656ed2 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockopt.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockopt.c
@@ -861,7 +861,7 @@ static int load_prog(const struct bpf_insn *insns,
 	int fd, insns_cnt = 0;
 
 	for (;
-	     insns[insns_cnt].code != (BPF_JMP | BPF_EXIT);
+	     insns[insns_cnt].code != (BPF_JMP64 | BPF_EXIT);
 	     insns_cnt++) {
 	}
 	insns_cnt++;
diff --git a/tools/testing/selftests/bpf/progs/pyperf600.c b/tools/testing/selftests/bpf/progs/pyperf600.c
index ce1aa51..16d2aa2 100644
--- a/tools/testing/selftests/bpf/progs/pyperf600.c
+++ b/tools/testing/selftests/bpf/progs/pyperf600.c
@@ -3,7 +3,7 @@
 #define STACK_MAX_LEN 600
 /* Full unroll of 600 iterations will have total
  * program size close to 298k insns and this may
- * cause BPF_JMP insn out of 16-bit integer range.
+ * cause BPF_JMP64 insn out of 16-bit integer range.
  * So limit the unroll size to 150 so the
  * total program size is around 80k insns but
  * the loop will still execute 600 times.
diff --git a/tools/testing/selftests/bpf/progs/syscall.c b/tools/testing/selftests/bpf/progs/syscall.c
index e550f72..85451c9 100644
--- a/tools/testing/selftests/bpf/progs/syscall.c
+++ b/tools/testing/selftests/bpf/progs/syscall.c
@@ -66,7 +66,7 @@ int bpf_prog(struct args *ctx)
 		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 		BPF_LD_MAP_FD(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
 	};
diff --git a/tools/testing/selftests/bpf/test_cgroup_storage.c b/tools/testing/selftests/bpf/test_cgroup_storage.c
index 0861ea6..2fbb30f 100644
--- a/tools/testing/selftests/bpf/test_cgroup_storage.c
+++ b/tools/testing/selftests/bpf/test_cgroup_storage.c
@@ -19,7 +19,7 @@ int main(int argc, char **argv)
 	struct bpf_insn prog[] = {
 		BPF_LD_MAP_FD(BPF_REG_1, 0), /* percpu map fd */
 		BPF_MOV64_IMM(BPF_REG_2, 0), /* flags, not used */
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_local_storage),
 		BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 0x1),
@@ -27,7 +27,7 @@ int main(int argc, char **argv)
 
 		BPF_LD_MAP_FD(BPF_REG_1, 0), /* map fd */
 		BPF_MOV64_IMM(BPF_REG_2, 0), /* flags, not used */
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_local_storage),
 		BPF_MOV64_IMM(BPF_REG_1, 1),
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 8c80855..83319c0 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -208,7 +208,7 @@ static void bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
 		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_6);
 		insn[i++] = BPF_MOV64_IMM(BPF_REG_2, 1);
 		insn[i++] = BPF_MOV64_IMM(BPF_REG_3, 2);
-		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		insn[i++] = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_skb_vlan_push),
 		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
 		i++;
@@ -219,7 +219,7 @@ static void bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
 		insn[i] = BPF_JMP32_IMM(BPF_JNE, BPF_REG_0, 0x34, len - i - 3);
 		i++;
 		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_6);
-		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		insn[i++] = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_skb_vlan_pop),
 		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
 		i++;
@@ -294,7 +294,7 @@ static void bpf_fill_scale1(struct bpf_test *self)
 	insn[i++] = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
 	/* test to check that the long sequence of jumps is acceptable */
 	while (k++ < MAX_JMP_SEQ) {
-		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		insn[i++] = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_get_prandom_u32);
 		insn[i++] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, bpf_semi_rand_get(), 2);
 		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_10);
@@ -326,7 +326,7 @@ static void bpf_fill_scale2(struct bpf_test *self)
 	/* test to check that the long sequence of jumps is acceptable */
 	k = 0;
 	while (k++ < MAX_JMP_SEQ) {
-		insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		insn[i++] = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_get_prandom_u32);
 		insn[i++] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, bpf_semi_rand_get(), 2);
 		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_10);
@@ -405,8 +405,8 @@ static void bpf_fill_torturous_jumps(struct bpf_test *self)
 		return;
 	case 3:
 		/* main */
-		insn[i++] = BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 4);
-		insn[i++] = BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 262);
+		insn[i++] = BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 4);
+		insn[i++] = BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 262);
 		insn[i++] = BPF_ST_MEM(BPF_B, BPF_REG_10, -32, 0);
 		insn[i++] = BPF_MOV64_IMM(BPF_REG_0, 3);
 		insn[i++] = BPF_EXIT_INSN();
@@ -448,7 +448,7 @@ static void bpf_fill_big_prog_with_loop_1(struct bpf_test *self)
 	insn[i++] = BPF_RAW_INSN(0, 0, 0, 0, 0);
 	insn[i++] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0);
 	insn[i++] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0);
-	insn[i++] = BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop);
+	insn[i++] = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_loop);
 
 	while (i < len - 3)
 		insn[i++] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0);
@@ -501,7 +501,7 @@ static void bpf_fill_big_prog_with_loop_1(struct bpf_test *self)
  * positive u32, and zero-extend it into 64-bit.
  */
 #define BPF_RAND_UEXT_R7						\
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,			\
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,			\
 		     BPF_FUNC_get_prandom_u32),				\
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),				\
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_7, 33),				\
@@ -511,7 +511,7 @@ static void bpf_fill_big_prog_with_loop_1(struct bpf_test *self)
  * negative u32, and sign-extend it into 64-bit.
  */
 #define BPF_RAND_SEXT_R7						\
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,			\
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,			\
 		     BPF_FUNC_get_prandom_u32),				\
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),				\
 	BPF_ALU64_IMM(BPF_OR, BPF_REG_7, 0x80000000),			\
@@ -594,7 +594,7 @@ static int create_prog_dummy_loop(enum bpf_prog_type prog_type, int mfd,
 	struct bpf_insn prog[] = {
 		BPF_MOV64_IMM(BPF_REG_3, idx),
 		BPF_LD_MAP_FD(BPF_REG_2, mfd),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_tail_call),
 		BPF_MOV64_IMM(BPF_REG_0, ret),
 		BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/test_verifier_log.c b/tools/testing/selftests/bpf/test_verifier_log.c
index 70feda9..6cb0d17 100644
--- a/tools/testing/selftests/bpf/test_verifier_log.c
+++ b/tools/testing/selftests/bpf/test_verifier_log.c
@@ -33,7 +33,7 @@ static const struct bpf_insn code_sample[] = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 		     BPF_FUNC_map_lookup_elem),
 	BPF_EXIT_INSN(),
 };
diff --git a/tools/testing/selftests/bpf/verifier/and.c b/tools/testing/selftests/bpf/verifier/and.c
index 7d7ebee..6edbfe3 100644
--- a/tools/testing/selftests/bpf/verifier/and.c
+++ b/tools/testing/selftests/bpf/verifier/and.c
@@ -5,7 +5,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, -4),
@@ -26,7 +26,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 12),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
@@ -51,7 +51,7 @@
 {
 	"check known subreg with unknown reg",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_0, 32),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xFFFF1234),
diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
index 1b138cd..f32bd8b 100644
--- a/tools/testing/selftests/bpf/verifier/array_access.c
+++ b/tools/testing/selftests/bpf/verifier/array_access.c
@@ -5,7 +5,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
 	BPF_EXIT_INSN(),
@@ -22,7 +22,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
@@ -43,7 +43,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES, 3),
@@ -65,7 +65,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_JMP32_IMM(BPF_JSGT, BPF_REG_1, 0xffffffff, 1),
@@ -91,7 +91,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, (MAX_ENTRIES + 1) << 2,
 		   offsetof(struct test_val, foo)),
@@ -108,7 +108,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, MAX_ENTRIES + 1),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
@@ -128,7 +128,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
@@ -148,7 +148,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES),
@@ -173,7 +173,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES + 1),
@@ -198,14 +198,14 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0,
@@ -224,7 +224,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -240,7 +240,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -248,7 +248,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 		     BPF_FUNC_csum_diff),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 0xffff),
 	BPF_EXIT_INSN(),
@@ -265,7 +265,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
 	BPF_EXIT_INSN(),
@@ -282,13 +282,13 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 		     BPF_FUNC_skb_load_bytes),
 	BPF_EXIT_INSN(),
 	},
@@ -304,7 +304,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -322,13 +322,13 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 		     BPF_FUNC_skb_load_bytes),
 	BPF_EXIT_INSN(),
 	},
@@ -344,7 +344,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -360,7 +360,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -368,7 +368,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 		     BPF_FUNC_csum_diff),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/basic_call.c b/tools/testing/selftests/bpf/verifier/basic_call.c
index a8c6ab4..55a0aad 100644
--- a/tools/testing/selftests/bpf/verifier/basic_call.c
+++ b/tools/testing/selftests/bpf/verifier/basic_call.c
@@ -1,7 +1,7 @@
 {
 	"invalid call insn1",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL | BPF_X, 0, 0, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL | BPF_X, 0, 0, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "unknown opcode 8d",
@@ -10,7 +10,7 @@
 {
 	"invalid call insn2",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 1, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 1, 0),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "BPF_CALL uses reserved",
@@ -19,7 +19,7 @@
 {
 	"invalid function call",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 1234567),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, 1234567),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "invalid func unknown#1234567",
@@ -28,8 +28,8 @@
 {
 	"invalid argument register",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "R1 !read_ok",
@@ -40,9 +40,9 @@
 	"non-invalid argument register",
 	.insns = {
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_cgroup_classid),
 	BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
diff --git a/tools/testing/selftests/bpf/verifier/basic_stack.c b/tools/testing/selftests/bpf/verifier/basic_stack.c
index f995777..20ab2d3 100644
--- a/tools/testing/selftests/bpf/verifier/basic_stack.c
+++ b/tools/testing/selftests/bpf/verifier/basic_stack.c
@@ -13,7 +13,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_8b = { 2 },
diff --git a/tools/testing/selftests/bpf/verifier/bounds.c b/tools/testing/selftests/bpf/verifier/bounds.c
index 33125d5..f822f2b 100644
--- a/tools/testing/selftests/bpf/verifier/bounds.c
+++ b/tools/testing/selftests/bpf/verifier/bounds.c
@@ -5,7 +5,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 7),
@@ -30,7 +30,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 6),
@@ -56,14 +56,14 @@
 	BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_ARG2, 0, 9),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_9, BPF_REG_FP),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_ARG1, 0),
 	BPF_MOV64_REG(BPF_REG_ARG2, BPF_REG_FP),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_ARG2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
@@ -82,7 +82,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	/* r2 = 0x0000'0000'ffff'ffff */
 	BPF_MOV32_IMM(BPF_REG_2, 0xffffffff),
@@ -106,7 +106,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	/* r2 = 0xffff'ffff'ffff'ffff */
 	BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
@@ -131,7 +131,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	/* r2 = 0xffff'ffff'ffff'ffff */
 	BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
@@ -158,7 +158,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 29) - 1),
@@ -182,7 +182,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 30) - 1),
@@ -204,7 +204,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	/* r1 = [0x00, 0xff] */
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -237,7 +237,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	/* r1 = [0x00, 0xff] */
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -271,7 +271,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	/* r1 = [0x00, 0xff] */
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -306,7 +306,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	/* r1 = 0x7fff'ffff */
 	BPF_MOV64_IMM(BPF_REG_1, 0x7fffffff),
@@ -332,7 +332,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_IMM(BPF_REG_2, 32),
 	BPF_MOV64_IMM(BPF_REG_1, 1),
@@ -359,7 +359,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	/* r1 = [0x00, 0xff] */
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -388,7 +388,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	/* r1 = 2 */
 	BPF_MOV64_IMM(BPF_REG_1, 2),
@@ -415,7 +415,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7ffffffe),
@@ -434,7 +434,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
@@ -456,7 +456,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
@@ -477,7 +477,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 1000000),
@@ -563,7 +563,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 0),
@@ -585,7 +585,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV32_IMM(BPF_REG_1, 0),
@@ -607,7 +607,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 2),
@@ -629,7 +629,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
@@ -651,7 +651,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
@@ -673,7 +673,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
@@ -696,7 +696,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
@@ -719,7 +719,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	/* This used to reduce the max bound to 0x7fffffff */
@@ -740,7 +740,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JSLT, BPF_REG_1, 1, 1),
diff --git a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c b/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
index c2aa6f2..47b56b0ed 100644
--- a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
+++ b/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
@@ -5,7 +5,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -28,7 +28,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -51,7 +51,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -76,7 +76,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -100,7 +100,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -122,7 +122,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -153,7 +153,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 1),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
 	BPF_ST_MEM(BPF_H, BPF_REG_10, -512, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -167,7 +167,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -189,7 +189,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -214,7 +214,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -238,7 +238,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -263,7 +263,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -289,7 +289,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -314,7 +314,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -344,7 +344,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
@@ -371,7 +371,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
index 3e024c89..6c02db4 100644
--- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
+++ b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
@@ -6,7 +6,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 28),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_9, sizeof(struct test_val)/2),
@@ -52,7 +52,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/bpf_loop_inline.c b/tools/testing/selftests/bpf/verifier/bpf_loop_inline.c
index a535d41..31c71dc 100644
--- a/tools/testing/selftests/bpf/verifier/bpf_loop_inline.c
+++ b/tools/testing/selftests/bpf/verifier/bpf_loop_inline.c
@@ -21,14 +21,14 @@
  * fields for pseudo calls
  */
 #define PSEUDO_CALL_INSN() \
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_CALL, \
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_CALL, \
 		     INSN_OFF_MASK, INSN_IMM_MASK)
 
 /* can't use BPF_FUNC_loop constant,
  * do_mix_fixups adjusts the IMM field
  */
 #define HELPER_CALL_INSN() \
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, INSN_OFF_MASK, INSN_IMM_MASK)
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, INSN_OFF_MASK, INSN_IMM_MASK)
 
 {
 	"inline simple bpf_loop call",
@@ -37,7 +37,7 @@
 	/* force verifier state branching to verify logic on first and
 	 * subsequent bpf_loop insn processing steps
 	 */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 777, 2),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
@@ -47,7 +47,7 @@
 	BPF_RAW_INSN(0, 0, 0, 0, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	/* callback */
@@ -67,9 +67,9 @@
 	"don't inline bpf_loop call, flags non-zero",
 	.insns = {
 	/* main */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_7, BPF_REG_0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 9),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
@@ -78,7 +78,7 @@
 	BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 7),
 	BPF_RAW_INSN(0, 0, 0, 0, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 1),
@@ -100,7 +100,7 @@
 	"don't inline bpf_loop call, callback non-constant",
 	.insns = {
 	/* main */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 777, 4), /* pick a random callback */
 
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
@@ -114,7 +114,7 @@
 
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	/* callback */
@@ -152,7 +152,7 @@
 	BPF_RAW_INSN(0, 0, 0, 0, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	/* callback */
@@ -186,14 +186,14 @@
 	BPF_RAW_INSN(0, 0, 0, 0, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
 	/* bpf_loop call #2 */
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 2),
 	BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 16),
 	BPF_RAW_INSN(0, 0, 0, 0, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
 	/* call func and exit */
 	BPF_CALL_REL(2),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
@@ -205,7 +205,7 @@
 	BPF_RAW_INSN(0, 0, 0, 0, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_3, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_loop),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	/* callback */
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index 9d99392..986bf68 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -1,7 +1,7 @@
 {
 	"calls: invalid kfunc call not eliminated",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -14,7 +14,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_JMP_IMM(BPF_JGT, BPF_REG_0, 0, 2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -26,7 +26,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -41,7 +41,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -56,7 +56,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -71,7 +71,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -86,7 +86,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -102,9 +102,9 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -121,12 +121,12 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -142,11 +142,11 @@
 	"calls: invalid kfunc call: don't match first member type when passed to release kfunc",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -164,13 +164,13 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, 16),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -189,21 +189,21 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, 4),
 	BPF_JMP_IMM(BPF_JLE, BPF_REG_2, 4, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -224,14 +224,14 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -251,14 +251,14 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -274,7 +274,7 @@
 {
 	"calls: basic sanity",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -286,7 +286,7 @@
 {
 	"calls: not on unprivileged",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -301,7 +301,7 @@
 	"calls: div by 0 in subprog",
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
 		    offsetof(struct __sk_buff, data_end)),
@@ -326,7 +326,7 @@
 	"calls: multiple ret types in subprog 1",
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
 		    offsetof(struct __sk_buff, data_end)),
@@ -350,7 +350,7 @@
 	"calls: multiple ret types in subprog 2",
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
 		    offsetof(struct __sk_buff, data_end)),
@@ -368,7 +368,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6,
 		    offsetof(struct __sk_buff, data)),
@@ -383,7 +383,7 @@
 {
 	"calls: overlapping caller/callee",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -396,9 +396,9 @@
 	.insns = {
 	BPF_JMP_IMM(BPF_JA, 0, 0, 4),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -409,7 +409,7 @@
 {
 	"calls: wrong src reg",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 3, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 3, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -420,7 +420,7 @@
 {
 	"calls: wrong off value",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, -1, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, -1, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -433,7 +433,7 @@
 {
 	"calls: jump back loop",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -1),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -447,7 +447,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -463,7 +463,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -500,7 +500,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -517,7 +517,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -534,7 +534,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, -3),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
@@ -548,7 +548,7 @@
 {
 	"calls: using r0 returned by callee",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
@@ -559,7 +559,7 @@
 {
 	"calls: using uninit r0 from callee",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_EXIT_INSN(),
 	},
@@ -570,7 +570,7 @@
 {
 	"calls: callee is using r1",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, len)),
@@ -583,7 +583,7 @@
 {
 	"calls: callee using args1",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
@@ -596,7 +596,7 @@
 {
 	"calls: callee using wrong args2",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_EXIT_INSN(),
@@ -613,7 +613,7 @@
 		    offsetof(struct __sk_buff, len)),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_6,
 		    offsetof(struct __sk_buff, len)),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
@@ -633,7 +633,7 @@
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_8, 8),
 	BPF_JMP_REG(BPF_JGT, BPF_REG_8, BPF_REG_7, 2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	/* clear_all_pkt_pointers() has to walk all frames
 	 * to make sure that pkt pointers in the caller
 	 * are cleared when callee is calling a helper that
@@ -643,7 +643,7 @@
 	BPF_MOV32_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_xdp_adjust_head),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_xdp_adjust_head),
 	BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
@@ -661,7 +661,7 @@
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
@@ -679,13 +679,13 @@
 {
 	"calls: two calls with args",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 6),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 6),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_7),
 	BPF_EXIT_INSN(),
@@ -702,10 +702,10 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -64),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
@@ -721,10 +721,10 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -63),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -61),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -63),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
@@ -757,7 +757,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 43),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -3),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -767,13 +767,13 @@
 {
 	"calls: two calls with bad jump",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 6),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 6),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_7),
 	BPF_EXIT_INSN(),
@@ -789,9 +789,9 @@
 {
 	"calls: recursive call. test1",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -1),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
@@ -801,9 +801,9 @@
 {
 	"calls: recursive call. test2",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -3),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
@@ -813,9 +813,9 @@
 {
 	"calls: unreachable code",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -829,9 +829,9 @@
 {
 	"calls: invalid call",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -4),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
@@ -841,9 +841,9 @@
 {
 	"calls: invalid call 2",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 0x7fffffff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 0x7fffffff),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
@@ -853,7 +853,7 @@
 {
 	"calls: jumping across function bodies. test1",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -3),
@@ -867,7 +867,7 @@
 	"calls: jumping across function bodies. test2",
 	.insns = {
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EXIT_INSN(),
@@ -879,9 +879,9 @@
 {
 	"calls: call without exit",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -2),
@@ -893,8 +893,8 @@
 {
 	"calls: call into middle of ld_imm64",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_0, 0),
@@ -907,8 +907,8 @@
 {
 	"calls: call into middle of other call",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -928,7 +928,7 @@
 	BPF_LD_ABS(BPF_W, 0),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 5),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
 	BPF_LD_ABS(BPF_B, 0),
 	BPF_LD_ABS(BPF_H, 0),
@@ -936,7 +936,7 @@
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_vlan_push),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_vlan_push),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -945,13 +945,13 @@
 {
 	"calls: two calls with bad fallthrough",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 6),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 6),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_7),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_0),
@@ -969,13 +969,13 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 6),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 6),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_7),
 	BPF_EXIT_INSN(),
@@ -994,17 +994,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -16),
 	BPF_EXIT_INSN(),
 
 	/* subprog 1 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 7),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 7),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_8, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_8),
 	/* write into stack frame of main prog */
@@ -1024,7 +1024,7 @@
 	.insns = {
 	/* prog 1 */
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -300, 0),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 
 	/* prog 2 */
@@ -1040,7 +1040,7 @@
 	"calls: stack overflow using two frames (post-call access)",
 	.insns = {
 	/* prog 1 */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 2),
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -300, 0),
 	BPF_EXIT_INSN(),
 
@@ -1057,8 +1057,8 @@
 	"calls: stack depth check using three frames. test1",
 	.insns = {
 	/* main */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 4), /* call A */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 5), /* call B */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 4), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 5), /* call B */
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -32, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1066,7 +1066,7 @@
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -256, 0),
 	BPF_EXIT_INSN(),
 	/* B */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, -3), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, -3), /* call A */
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -64, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -1080,8 +1080,8 @@
 	"calls: stack depth check using three frames. test2",
 	.insns = {
 	/* main */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 4), /* call A */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 5), /* call B */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 4), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 5), /* call B */
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -32, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1089,7 +1089,7 @@
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -64, 0),
 	BPF_EXIT_INSN(),
 	/* B */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, -3), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, -3), /* call A */
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -256, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -1104,9 +1104,9 @@
 	.insns = {
 	/* main */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 6), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 6), /* call A */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 8), /* call B */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 8), /* call B */
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_6, 0, 1),
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -64, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1118,7 +1118,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, -3),
 	/* B */
 	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 2, 1),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, -6), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, -6), /* call A */
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -256, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -1152,18 +1152,18 @@
 	.insns = {
 	/* main */
 	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 6), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 6), /* call A */
 	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 4), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 4), /* call A */
 	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 7), /* call B */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 7), /* call B */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	/* A */
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 2),
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -300, 0),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call B */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call B */
 	BPF_EXIT_INSN(),
 	/* B */
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
@@ -1178,28 +1178,28 @@
 	"calls: stack depth check using three frames. test5",
 	.insns = {
 	/* main */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call A */
 	BPF_EXIT_INSN(),
 	/* A */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call B */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call B */
 	BPF_EXIT_INSN(),
 	/* B */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call C */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call C */
 	BPF_EXIT_INSN(),
 	/* C */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call D */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call D */
 	BPF_EXIT_INSN(),
 	/* D */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call E */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call E */
 	BPF_EXIT_INSN(),
 	/* E */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call F */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call F */
 	BPF_EXIT_INSN(),
 	/* F */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call G */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call G */
 	BPF_EXIT_INSN(),
 	/* G */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call H */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call H */
 	BPF_EXIT_INSN(),
 	/* H */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1214,30 +1214,30 @@
 	.insns = {
 	/* main */
 	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call A */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call A */
 	BPF_EXIT_INSN(),
 	/* A */
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 2), /* call B */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 2), /* call B */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	/* B */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call C */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call C */
 	BPF_EXIT_INSN(),
 	/* C */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call D */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call D */
 	BPF_EXIT_INSN(),
 	/* D */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call E */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call E */
 	BPF_EXIT_INSN(),
 	/* E */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call F */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call F */
 	BPF_EXIT_INSN(),
 	/* F */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call G */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call G */
 	BPF_EXIT_INSN(),
 	/* G */
-	BPF_RAW_INSN(BPF_JMP|BPF_CALL, 0, 1, 0, 1), /* call H */
+	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call H */
 	BPF_EXIT_INSN(),
 	/* H */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1253,7 +1253,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1269,7 +1269,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 42),
@@ -1283,7 +1283,7 @@
 {
 	"calls: write into callee stack frame",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
@@ -1303,16 +1303,16 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -16),
 	BPF_EXIT_INSN(),
 
 	/* subprog 1 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 
 	/* subprog 2 */
@@ -1327,10 +1327,10 @@
 	"calls: ambiguous return value",
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
@@ -1351,7 +1351,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 8),
 
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
@@ -1371,10 +1371,10 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
 	/* first time with fp-8 */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	/* second time with fp-16 */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 
 	/* subprog 2 */
@@ -1384,7 +1384,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr into stack frame of main prog */
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1403,7 +1403,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -1412,7 +1412,7 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
 	/* first time with fp-8 */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 9),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 9),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
@@ -1420,7 +1420,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	/* second time with fp-16 */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	/* fetch second map_value_ptr from the stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
@@ -1435,7 +1435,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(), /* return 0 */
@@ -1457,7 +1457,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -1466,7 +1466,7 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
 	/* first time with fp-8 */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 9),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 9),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
@@ -1474,7 +1474,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	/* second time with fp-16 */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	/* fetch second map_value_ptr from the stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
@@ -1489,7 +1489,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(), /* return 0 */
@@ -1512,7 +1512,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -1524,7 +1524,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
@@ -1536,7 +1536,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), /* 20 */
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, /* 24 */
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, /* 24 */
 		     BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
@@ -1550,7 +1550,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_7),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_9),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),  /* 34 */
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),  /* 34 */
 	BPF_EXIT_INSN(),
 
 	/* subprog 2 */
@@ -1584,7 +1584,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -1596,7 +1596,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
@@ -1608,7 +1608,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), /* 20 */
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, /* 24 */
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, /* 24 */
 		     BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
@@ -1622,7 +1622,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_7),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_9),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),  /* 34 */
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),  /* 34 */
 	BPF_EXIT_INSN(),
 
 	/* subprog 2 */
@@ -1666,7 +1666,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -24),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
@@ -1678,7 +1678,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -24),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_9, 0),  // 26
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
@@ -1725,7 +1725,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -1737,7 +1737,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr_or_null into stack frame of main prog at fp-8 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
@@ -1749,7 +1749,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr_or_null into stack frame of main prog at fp-16 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
@@ -1762,7 +1762,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_7),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_9),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 
 	/* subprog 2 */
@@ -1794,7 +1794,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -1806,7 +1806,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr_or_null into stack frame of main prog at fp-8 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
@@ -1818,7 +1818,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr_or_null into stack frame of main prog at fp-16 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
@@ -1831,7 +1831,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_7),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_9),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 
 	/* subprog 2 */
@@ -1860,7 +1860,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 
 	/* subprog 1 */
@@ -1889,7 +1889,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	/* Marking is still kept, but not in all cases safe. */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_ST_MEM(BPF_W, BPF_REG_4, 0, 0),
@@ -1921,7 +1921,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	/* Marking is still kept and safe here. */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
@@ -1957,7 +1957,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	/* Check marking propagated. */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
@@ -1993,7 +1993,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_4, 0),
 	BPF_EXIT_INSN(),
@@ -2029,7 +2029,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_4, 0),
 	BPF_EXIT_INSN(),
@@ -2064,7 +2064,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_4, 0),
 	BPF_EXIT_INSN(),
@@ -2106,7 +2106,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_4, 0),
 	BPF_EXIT_INSN(),
@@ -2147,7 +2147,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_4, 0),
 	BPF_EXIT_INSN(),
@@ -2182,7 +2182,7 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	/* fetch map_value_or_null or const_zero from stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
@@ -2199,7 +2199,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr_or_null into stack frame of main prog at fp-8 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -2227,7 +2227,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_48b = { 6 },
@@ -2239,10 +2239,10 @@
 	"calls: ctx read at start of subprog",
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 5),
 	BPF_JMP_REG(BPF_JSGT, BPF_REG_0, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_1, 0),
@@ -2262,12 +2262,12 @@
 	 * if (r8)
 	 *     do something bad;
 	 */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -2283,16 +2283,16 @@
 {
 	"calls: cross frame pruning - liveness propagation",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
diff --git a/tools/testing/selftests/bpf/verifier/cgroup_storage.c b/tools/testing/selftests/bpf/verifier/cgroup_storage.c
index 97057c0..89e245b 100644
--- a/tools/testing/selftests/bpf/verifier/cgroup_storage.c
+++ b/tools/testing/selftests/bpf/verifier/cgroup_storage.c
@@ -3,7 +3,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
@@ -18,7 +18,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
@@ -34,7 +34,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -47,7 +47,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 256),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -63,7 +63,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -2),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
@@ -80,7 +80,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 7),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
@@ -96,7 +96,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
@@ -113,7 +113,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
@@ -128,7 +128,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
@@ -144,7 +144,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -157,7 +157,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 256),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -173,7 +173,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, -2),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
@@ -190,7 +190,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_2, 7),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
@@ -206,7 +206,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_local_storage),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
diff --git a/tools/testing/selftests/bpf/verifier/ctx.c b/tools/testing/selftests/bpf/verifier/ctx.c
index c8eaf05..a778020 100644
--- a/tools/testing/selftests/bpf/verifier/ctx.c
+++ b/tools/testing/selftests/bpf/verifier/ctx.c
@@ -38,7 +38,7 @@
 	"pass unmodified ctx pointer to helper",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_2, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_csum_update),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -51,7 +51,7 @@
 	.insns = {
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
 		BPF_MOV64_IMM(BPF_REG_2, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_csum_update),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -64,7 +64,7 @@
 	"pass modified ctx pointer to helper, 2",
 	.insns = {
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_socket_cookie),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -81,7 +81,7 @@
 		BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 4),
 		BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 		BPF_MOV64_IMM(BPF_REG_2, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_csum_update),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -93,7 +93,7 @@
 {
 	"pass ctx or null check, 1: ctx",
 	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_netns_cookie),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -106,7 +106,7 @@
 	"pass ctx or null check, 2: null",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_netns_cookie),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -119,7 +119,7 @@
 	"pass ctx or null check, 3: 1",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_1, 1),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_netns_cookie),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -133,7 +133,7 @@
 	"pass ctx or null check, 4: ctx - const",
 	.insns = {
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -612),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_netns_cookie),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -147,7 +147,7 @@
 	"pass ctx or null check, 5: null (connect)",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_netns_cookie),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -160,7 +160,7 @@
 	"pass ctx or null check, 6: null (bind)",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_netns_cookie),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -172,7 +172,7 @@
 {
 	"pass ctx or null check, 7: ctx (bind)",
 	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_socket_cookie),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -185,7 +185,7 @@
 	"pass ctx or null check, 8: null (bind)",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_get_socket_cookie),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/ctx_skb.c b/tools/testing/selftests/bpf/verifier/ctx_skb.c
index 83cecfb..fc55789 100644
--- a/tools/testing/selftests/bpf/verifier/ctx_skb.c
+++ b/tools/testing/selftests/bpf/verifier/ctx_skb.c
@@ -46,7 +46,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -70,7 +70,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -93,7 +93,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
diff --git a/tools/testing/selftests/bpf/verifier/d_path.c b/tools/testing/selftests/bpf/verifier/d_path.c
index b988396..5b6eb548 100644
--- a/tools/testing/selftests/bpf/verifier/d_path.c
+++ b/tools/testing/selftests/bpf/verifier/d_path.c
@@ -7,7 +7,7 @@
 	BPF_MOV64_IMM(BPF_REG_6, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 0),
 	BPF_LD_IMM64(BPF_REG_3, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_d_path),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_d_path),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -25,7 +25,7 @@
 	BPF_MOV64_IMM(BPF_REG_6, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 0),
 	BPF_LD_IMM64(BPF_REG_3, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_d_path),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_d_path),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/dead_code.c b/tools/testing/selftests/bpf/verifier/dead_code.c
index ee45432..6cd9d1b 100644
--- a/tools/testing/selftests/bpf/verifier/dead_code.c
+++ b/tools/testing/selftests/bpf/verifier/dead_code.c
@@ -27,7 +27,7 @@
 {
 	"dead code: mid 2",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 4),
 	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
@@ -82,7 +82,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 7),
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 8, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 12),
 	BPF_EXIT_INSN(),
@@ -98,9 +98,9 @@
 	BPF_MOV64_IMM(BPF_REG_0, 7),
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 8, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 12),
 	BPF_EXIT_INSN(),
@@ -114,13 +114,13 @@
 	"dead code: function in the middle and mid of another func",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 12),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 7),
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 7, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -5),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -5),
 	BPF_EXIT_INSN(),
 	},
 	.errstr_unpriv = "loading/calling other bpf or kernel functions are allowed for",
@@ -134,7 +134,7 @@
 	BPF_MOV64_IMM(BPF_REG_1, 2),
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 2, 1),
 	BPF_MOV64_IMM(BPF_REG_1, 5),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
@@ -148,7 +148,7 @@
 	"dead code: start of a function",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
diff --git a/tools/testing/selftests/bpf/verifier/direct_packet_access.c b/tools/testing/selftests/bpf/verifier/direct_packet_access.c
index dce2e28..46bf9e6 100644
--- a/tools/testing/selftests/bpf/verifier/direct_packet_access.c
+++ b/tools/testing/selftests/bpf/verifier/direct_packet_access.c
@@ -641,7 +641,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
diff --git a/tools/testing/selftests/bpf/verifier/event_output.c b/tools/testing/selftests/bpf/verifier/event_output.c
index c5e8059..0a1bc01 100644
--- a/tools/testing/selftests/bpf/verifier/event_output.c
+++ b/tools/testing/selftests/bpf/verifier/event_output.c
@@ -32,7 +32,7 @@
 	BPF_LD_MAP_FD(BPF_REG_2, 0),				\
 	BPF_MOV64_IMM(BPF_REG_3, 0),				\
 	BPF_MOV64_IMM(BPF_REG_5, 8),				\
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,		\
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,		\
 		     BPF_FUNC_perf_event_output),		\
 	BPF_MOV64_IMM(BPF_REG_0, 1),				\
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
index a6c869a..37be14d 100644
--- a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
+++ b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
@@ -378,7 +378,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
@@ -399,7 +399,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 7),
@@ -423,7 +423,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
diff --git a/tools/testing/selftests/bpf/verifier/helper_packet_access.c b/tools/testing/selftests/bpf/verifier/helper_packet_access.c
index ae54587e..926ef8d 100644
--- a/tools/testing/selftests/bpf/verifier/helper_packet_access.c
+++ b/tools/testing/selftests/bpf/verifier/helper_packet_access.c
@@ -10,7 +10,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -24,7 +24,7 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -50,7 +50,7 @@
 	BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -70,7 +70,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -90,7 +90,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 7),
 	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -112,7 +112,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -126,7 +126,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
 		    offsetof(struct __sk_buff, data)),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -153,7 +153,7 @@
 	BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -174,7 +174,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -195,7 +195,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 7),
 	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -218,7 +218,7 @@
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 42),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_store_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_store_bytes),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -238,7 +238,7 @@
 	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -262,7 +262,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -285,7 +285,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -308,7 +308,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -332,7 +332,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -356,7 +356,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -380,7 +380,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -404,7 +404,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -427,7 +427,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -450,7 +450,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_csum_diff),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/helper_restricted.c b/tools/testing/selftests/bpf/verifier/helper_restricted.c
index a067b70..423556b 100644
--- a/tools/testing/selftests/bpf/verifier/helper_restricted.c
+++ b/tools/testing/selftests/bpf/verifier/helper_restricted.c
@@ -1,7 +1,7 @@
 {
 	"bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_KPROBE",
 	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
 	},
@@ -12,7 +12,7 @@
 {
 	"bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_TRACEPOINT",
 	.insns = {
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
 	},
@@ -23,7 +23,7 @@
 {
 	"bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_PERF_EVENT",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -34,7 +34,7 @@
 {
 	"bpf_ktime_get_coarse_ns is forbidden in BPF_PROG_TYPE_RAW_TRACEPOINT",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ktime_get_coarse_ns),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -49,7 +49,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
@@ -69,7 +69,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
@@ -89,7 +89,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
@@ -109,7 +109,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
@@ -129,7 +129,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
@@ -147,7 +147,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
@@ -165,7 +165,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
@@ -183,7 +183,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
diff --git a/tools/testing/selftests/bpf/verifier/jmp32.c b/tools/testing/selftests/bpf/verifier/jmp32.c
index 1a27a62..59be762 100644
--- a/tools/testing/selftests/bpf/verifier/jmp32.c
+++ b/tools/testing/selftests/bpf/verifier/jmp32.c
@@ -789,7 +789,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
@@ -816,7 +816,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
@@ -844,7 +844,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
diff --git a/tools/testing/selftests/bpf/verifier/jset.c b/tools/testing/selftests/bpf/verifier/jset.c
index 11fc68d..feb1c01 100644
--- a/tools/testing/selftests/bpf/verifier/jset.c
+++ b/tools/testing/selftests/bpf/verifier/jset.c
@@ -104,7 +104,7 @@
 {
 	"jset: unknown const compare taken",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
@@ -119,7 +119,7 @@
 {
 	"jset: unknown const compare not taken",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
@@ -133,7 +133,7 @@
 {
 	"jset: half-known const compare",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_IMM(BPF_OR, BPF_REG_0, 2),
 	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
@@ -148,7 +148,7 @@
 {
 	"jset: range",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xff),
diff --git a/tools/testing/selftests/bpf/verifier/jump.c b/tools/testing/selftests/bpf/verifier/jump.c
index 497fe17..f5c0866 100644
--- a/tools/testing/selftests/bpf/verifier/jump.c
+++ b/tools/testing/selftests/bpf/verifier/jump.c
@@ -78,7 +78,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -56, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -56),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_delete_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_delete_elem),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_8b = { 24 },
@@ -291,7 +291,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -20),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -20),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -302,7 +302,7 @@
 	"jump/call test 10",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 2, 16),
@@ -322,7 +322,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -20),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -20),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -333,7 +333,7 @@
 	"jump/call test 11",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
@@ -366,7 +366,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -31),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -31),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
diff --git a/tools/testing/selftests/bpf/verifier/junk_insn.c b/tools/testing/selftests/bpf/verifier/junk_insn.c
index 89d690f..4ec4fb8 100644
--- a/tools/testing/selftests/bpf/verifier/junk_insn.c
+++ b/tools/testing/selftests/bpf/verifier/junk_insn.c
@@ -40,6 +40,6 @@
 	BPF_RAW_INSN(0x7f, -1, -1, -1, -1),
 	BPF_EXIT_INSN(),
 	},
-	.errstr = "BPF_ALU uses reserved fields",
+	.errstr = "BPF_ALU32 uses reserved fields",
 	.result = REJECT,
 },
diff --git a/tools/testing/selftests/bpf/verifier/ld_abs.c b/tools/testing/selftests/bpf/verifier/ld_abs.c
index f6599d2..06e5ad0 100644
--- a/tools/testing/selftests/bpf/verifier/ld_abs.c
+++ b/tools/testing/selftests/bpf/verifier/ld_abs.c
@@ -81,7 +81,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_MOV64_IMM(BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_vlan_push),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_vlan_push),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
 	BPF_LD_ABS(BPF_B, 0),
 	BPF_LD_ABS(BPF_H, 0),
@@ -257,7 +257,7 @@
 		BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 		BPF_MOV64_IMM(BPF_REG_2, 1),
 		BPF_MOV64_IMM(BPF_REG_3, 2),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_skb_vlan_push),
 		BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
 		BPF_LD_ABS(BPF_B, 0),
diff --git a/tools/testing/selftests/bpf/verifier/leak_ptr.c b/tools/testing/selftests/bpf/verifier/leak_ptr.c
index 73f0dea..892eb00 100644
--- a/tools/testing/selftests/bpf/verifier/leak_ptr.c
+++ b/tools/testing/selftests/bpf/verifier/leak_ptr.c
@@ -52,7 +52,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
diff --git a/tools/testing/selftests/bpf/verifier/loops1.c b/tools/testing/selftests/bpf/verifier/loops1.c
index 1af3718..eb69c65 100644
--- a/tools/testing/selftests/bpf/verifier/loops1.c
+++ b/tools/testing/selftests/bpf/verifier/loops1.c
@@ -24,7 +24,7 @@
 {
 	"bounded loop, count from positive unknown to 4",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_JMP_IMM(BPF_JSLT, BPF_REG_0, 0, 2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
 	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
@@ -37,7 +37,7 @@
 {
 	"bounded loop, count from totally unknown to 4",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
 	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
 	BPF_EXIT_INSN(),
@@ -89,7 +89,7 @@
 	BPF_MOV64_IMM(BPF_REG_6, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_JMP_IMM(BPF_JGT, BPF_REG_6, 10000, 2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_JMP_A(-4),
 	BPF_EXIT_INSN(),
 	},
@@ -113,13 +113,13 @@
 	"bounded recursion",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 4, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, -5),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -5),
 	BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
@@ -175,7 +175,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 10),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
@@ -192,7 +192,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 10),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
diff --git a/tools/testing/selftests/bpf/verifier/map_in_map.c b/tools/testing/selftests/bpf/verifier/map_in_map.c
index 128a348..7e58a19 100644
--- a/tools/testing/selftests/bpf/verifier/map_in_map.c
+++ b/tools/testing/selftests/bpf/verifier/map_in_map.c
@@ -5,13 +5,13 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -26,21 +26,21 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 11),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
@@ -59,14 +59,14 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -81,12 +81,12 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/map_kptr.c b/tools/testing/selftests/bpf/verifier/map_kptr.c
index 6914904..2aca724 100644
--- a/tools/testing/selftests/bpf/verifier/map_kptr.c
+++ b/tools/testing/selftests/bpf/verifier/map_kptr.c
@@ -8,7 +8,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 1),
@@ -28,7 +28,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_W, BPF_REG_0, 0, 0),
@@ -48,7 +48,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
@@ -78,7 +78,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
@@ -93,7 +93,7 @@
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -110,7 +110,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 7),
@@ -131,7 +131,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
@@ -161,7 +161,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
@@ -185,7 +185,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
@@ -206,7 +206,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
@@ -229,14 +229,14 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 16),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_this_cpu_ptr),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_this_cpu_ptr),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -253,7 +253,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
@@ -274,12 +274,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -297,13 +297,13 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -325,12 +325,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_this_cpu_ptr),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_this_cpu_ptr),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -347,21 +347,21 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -378,7 +378,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
@@ -386,12 +386,12 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -412,7 +412,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, 0),
@@ -433,7 +433,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 8, 0),
@@ -453,13 +453,13 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_delete_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_delete_elem),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
diff --git a/tools/testing/selftests/bpf/verifier/map_ptr.c b/tools/testing/selftests/bpf/verifier/map_ptr.c
index 17ee84d..a544056 100644
--- a/tools/testing/selftests/bpf/verifier/map_ptr.c
+++ b/tools/testing/selftests/bpf/verifier/map_ptr.c
@@ -70,7 +70,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -88,7 +88,7 @@
 	BPF_MOV64_IMM(BPF_REG_1, 0),
 	BPF_LD_MAP_FD(BPF_REG_0, 0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c b/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
index 1f2b8c4..253d711 100644
--- a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
+++ b/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
@@ -10,7 +10,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -40,7 +40,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -68,7 +68,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -89,7 +89,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/map_ret_val.c b/tools/testing/selftests/bpf/verifier/map_ret_val.c
index bdd0e8d..24078fe 100644
--- a/tools/testing/selftests/bpf/verifier/map_ret_val.c
+++ b/tools/testing/selftests/bpf/verifier/map_ret_val.c
@@ -5,7 +5,7 @@
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_delete_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_delete_elem),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "fd 0 is not pointing to valid bpf_map",
@@ -18,7 +18,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -33,7 +33,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 4, 0),
 	BPF_EXIT_INSN(),
@@ -50,7 +50,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/meta_access.c b/tools/testing/selftests/bpf/verifier/meta_access.c
index b45e8af..54e5a0b 100644
--- a/tools/testing/selftests/bpf/verifier/meta_access.c
+++ b/tools/testing/selftests/bpf/verifier/meta_access.c
@@ -80,7 +80,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_4, 3),
 	BPF_MOV64_IMM(BPF_REG_2, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_xdp_adjust_meta),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_xdp_adjust_meta),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index 6c03a7d..0fdfa42 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -118,16 +118,16 @@
 {
 	"precise: cross frame pruning",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -168,7 +168,7 @@
 {
 	"precise: STX insn causing spi > allocated_stack",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 123, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, -8),
@@ -202,12 +202,12 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_2, 0x1000),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 42),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c b/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c
index fc4e301..0b09eb5 100644
--- a/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c
+++ b/tools/testing/selftests/bpf/verifier/prevent_map_lookup.c
@@ -5,7 +5,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_stacktrace = { 3 },
@@ -20,7 +20,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_prog2 = { 3 },
diff --git a/tools/testing/selftests/bpf/verifier/raw_stack.c b/tools/testing/selftests/bpf/verifier/raw_stack.c
index eb5ed9365..6f2cee2 100644
--- a/tools/testing/selftests/bpf/verifier/raw_stack.c
+++ b/tools/testing/selftests/bpf/verifier/raw_stack.c
@@ -22,7 +22,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -38,7 +38,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, ~0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -54,7 +54,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -70,7 +70,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -86,7 +86,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_6, 0, 0xcafe),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -103,7 +103,7 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1,  8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6,  8),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0,
@@ -125,7 +125,7 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0,
 		    offsetof(struct __sk_buff, mark)),
@@ -147,7 +147,7 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1,  8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6,  8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_6,  0),
@@ -177,7 +177,7 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1,  8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6,  8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_6,  0),
@@ -200,7 +200,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -513),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -216,7 +216,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -1),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -232,7 +232,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 0xffffffff),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 0xffffffff),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -248,7 +248,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -1),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 0x7fffffff),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -264,7 +264,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -512),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 0x7fffffff),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -280,7 +280,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -512),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -296,7 +296,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -512),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_4, 512),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/raw_tp_writable.c b/tools/testing/selftests/bpf/verifier/raw_tp_writable.c
index 2978fb5..cc66892 100644
--- a/tools/testing/selftests/bpf/verifier/raw_tp_writable.c
+++ b/tools/testing/selftests/bpf/verifier/raw_tp_writable.c
@@ -11,7 +11,7 @@
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 		BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
 		/* lookup in the map */
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_map_lookup_elem),
 
 		/* exit clean if null */
diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c
index 9540164..f3cf02e 100644
--- a/tools/testing/selftests/bpf/verifier/ref_tracking.c
+++ b/tools/testing/selftests/bpf/verifier/ref_tracking.c
@@ -89,10 +89,10 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, -3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -110,10 +110,10 @@
 	"reference tracking: acquire/release system key reference",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -132,9 +132,9 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, -3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -153,9 +153,9 @@
 	"reference tracking: release system key reference without check",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -174,7 +174,7 @@
 	"reference tracking: release with NULL key pointer",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -193,7 +193,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, -3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_LSM,
@@ -210,7 +210,7 @@
 	"reference tracking: leak potential reference to system key",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_LSM,
@@ -382,7 +382,7 @@
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), /* unchecked reference */
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -401,7 +401,7 @@
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), /* unchecked reference */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -421,7 +421,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 3),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -442,7 +442,7 @@
 	"reference tracking in call: alloc in subprog, release outside",
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 4),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -461,7 +461,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -469,7 +469,7 @@
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 5),
 	/* spill unchecked sk_ptr into stack of caller */
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8),
@@ -490,7 +490,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -498,7 +498,7 @@
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 8),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 8),
 	/* spill unchecked sk_ptr into stack of caller */
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8),
@@ -597,7 +597,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 3),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -620,7 +620,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 3),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -639,7 +639,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 3),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	/* if (sk) bpf_sk_release() */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
@@ -665,7 +665,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
diff --git a/tools/testing/selftests/bpf/verifier/regalloc.c b/tools/testing/selftests/bpf/verifier/regalloc.c
index bb0dd89..ead6db9 100644
--- a/tools/testing/selftests/bpf/verifier/regalloc.c
+++ b/tools/testing/selftests/bpf/verifier/regalloc.c
@@ -203,7 +203,7 @@
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 6),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 6),
 	BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 20, 4),
 	BPF_JMP_IMM(BPF_JSLT, BPF_REG_9, 0, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_8),
@@ -233,7 +233,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 1),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 20, 5),
 	BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 4),
diff --git a/tools/testing/selftests/bpf/verifier/ringbuf.c b/tools/testing/selftests/bpf/verifier/ringbuf.c
index 92e3f6a..d288253 100644
--- a/tools/testing/selftests/bpf/verifier/ringbuf.c
+++ b/tools/testing/selftests/bpf/verifier/ringbuf.c
@@ -6,7 +6,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
 	/* store a pointer to the reserved memory in R6 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	/* check whether the reservation was successful */
@@ -22,7 +22,7 @@
 	/* add invalid offset to reserved ringbuf memory */
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xcafe),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -38,7 +38,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
 	/* store a pointer to the reserved memory in R6 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	/* check whether the reservation was successful */
@@ -54,7 +54,7 @@
 	/* submit the reserved ringbuf memory */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -71,7 +71,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	/* check whether the reservation was successful */
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
@@ -81,11 +81,11 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, 8),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_fib_lookup),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_fib_lookup),
 	/* submit the ringbuf memory */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/runtime_jit.c b/tools/testing/selftests/bpf/verifier/runtime_jit.c
index 94c399d..160911b 100644
--- a/tools/testing/selftests/bpf/verifier/runtime_jit.c
+++ b/tools/testing/selftests/bpf/verifier/runtime_jit.c
@@ -3,7 +3,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -16,7 +16,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_3, 1),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -29,7 +29,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_3, 3),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -42,7 +42,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -64,7 +64,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -86,7 +86,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -108,7 +108,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -130,7 +130,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -152,7 +152,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -177,7 +177,7 @@
 	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -193,7 +193,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_3, 256),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
@@ -206,7 +206,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_3, -1),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
@@ -219,7 +219,7 @@
 	.insns = {
 	BPF_LD_IMM64(BPF_REG_3, 0x100000000ULL),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/search_pruning.c b/tools/testing/selftests/bpf/verifier/search_pruning.c
index 68b14fd..1a4d06b 100644
--- a/tools/testing/selftests/bpf/verifier/search_pruning.c
+++ b/tools/testing/selftests/bpf/verifier/search_pruning.c
@@ -5,7 +5,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_JMP_A(1),
@@ -26,7 +26,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
 	BPF_JMP_A(1),
@@ -62,7 +62,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES),
@@ -154,7 +154,7 @@
 		BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
 		BPF_LD_MAP_FD(BPF_REG_1, 0),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
 		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
@@ -207,7 +207,7 @@
 	"allocated_stack",
 	.insns = {
 		BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
-		BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 		BPF_ALU64_REG(BPF_MOV, BPF_REG_7, BPF_REG_0),
 		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c
index 9bb302d..5b8d764 100644
--- a/tools/testing/selftests/bpf/verifier/spill_fill.c
+++ b/tools/testing/selftests/bpf/verifier/spill_fill.c
@@ -36,7 +36,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
 	/* store a pointer to the reserved memory in R6 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	/* check whether the reservation was successful */
@@ -50,7 +50,7 @@
 	/* submit the reserved ringbuf memory */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -66,7 +66,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
 	/* store a pointer to the reserved memory in R6 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	/* add invalid offset to memory or NULL */
@@ -78,7 +78,7 @@
 	/* submit the reserved ringbuf memory */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_submit),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/spin_lock.c b/tools/testing/selftests/bpf/verifier/spin_lock.c
index eaf114f..8f24b17 100644
--- a/tools/testing/selftests/bpf/verifier/spin_lock.c
+++ b/tools/testing/selftests/bpf/verifier/spin_lock.c
@@ -6,17 +6,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -34,17 +34,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -63,17 +63,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -93,17 +93,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_6, 3),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -123,17 +123,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -152,18 +152,18 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -182,18 +182,18 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -212,20 +212,20 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -244,7 +244,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
@@ -252,16 +252,16 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -280,19 +280,19 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -312,17 +312,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_LD_ABS(BPF_B, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -341,26 +341,26 @@
 	BPF_ST_MEM(BPF_W, BPF_REG_10, -4, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_9),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 1),
 	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/subreg.c b/tools/testing/selftests/bpf/verifier/subreg.c
index 4c4133c..ee18bb0 100644
--- a/tools/testing/selftests/bpf/verifier/subreg.c
+++ b/tools/testing/selftests/bpf/verifier/subreg.c
@@ -1,6 +1,6 @@
 /* This file contains sub-register zero extension checks for insns defining
  * sub-registers, meaning:
- *   - All insns under BPF_ALU class. Their BPF_ALU32 variants or narrow width
+ *   - All insns under the BPF_ALU32 class. Their 32-bit operations or narrow width
  *     forms (BPF_END) could define sub-registers.
  *   - Narrow direct loads, BPF_B/H/W | BPF_LDX.
  *   - BPF_LD is not exposed to JIT back-ends, so no need for testing.
@@ -13,7 +13,7 @@
 {
 	"add32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL),
 	BPF_ALU32_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -26,7 +26,7 @@
 {
 	"add32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	/* An insn could have no effect on the low 32-bit, for example:
@@ -38,7 +38,7 @@
 	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, -2),
@@ -52,7 +52,7 @@
 {
 	"sub32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_IMM64(BPF_REG_0, 0x1ffffffffULL),
 	BPF_ALU32_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
@@ -65,13 +65,13 @@
 {
 	"sub32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_SUB, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_SUB, BPF_REG_0, 1),
@@ -85,7 +85,7 @@
 {
 	"mul32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL),
 	BPF_ALU32_REG(BPF_MUL, BPF_REG_0, BPF_REG_1),
@@ -98,13 +98,13 @@
 {
 	"mul32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_MUL, BPF_REG_0, 1),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_MUL, BPF_REG_0, -1),
@@ -118,7 +118,7 @@
 {
 	"div32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_0, -1),
 	BPF_ALU32_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
@@ -131,13 +131,13 @@
 {
 	"div32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_DIV, BPF_REG_0, 1),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_DIV, BPF_REG_0, 2),
@@ -151,7 +151,7 @@
 {
 	"or32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_IMM64(BPF_REG_0, 0x100000001ULL),
 	BPF_ALU32_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
@@ -164,13 +164,13 @@
 {
 	"or32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_OR, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_OR, BPF_REG_0, 1),
@@ -184,7 +184,7 @@
 {
 	"and32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x100000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_1, BPF_REG_0),
 	BPF_LD_IMM64(BPF_REG_0, 0x1ffffffffULL),
@@ -198,13 +198,13 @@
 {
 	"and32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_AND, BPF_REG_0, -1),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_AND, BPF_REG_0, -2),
@@ -218,7 +218,7 @@
 {
 	"lsh32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x100000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_IMM(BPF_REG_1, 1),
@@ -232,13 +232,13 @@
 {
 	"lsh32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_LSH, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_LSH, BPF_REG_0, 1),
@@ -252,7 +252,7 @@
 {
 	"rsh32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_IMM(BPF_REG_1, 1),
@@ -266,13 +266,13 @@
 {
 	"rsh32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_RSH, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_RSH, BPF_REG_0, 1),
@@ -286,7 +286,7 @@
 {
 	"neg32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_NEG, BPF_REG_0, 0),
@@ -299,7 +299,7 @@
 {
 	"mod32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_0, -1),
 	BPF_ALU32_REG(BPF_MOD, BPF_REG_0, BPF_REG_1),
@@ -312,13 +312,13 @@
 {
 	"mod32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, 1),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_MOD, BPF_REG_0, 2),
@@ -332,7 +332,7 @@
 {
 	"xor32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL),
 	BPF_ALU32_REG(BPF_XOR, BPF_REG_0, BPF_REG_1),
@@ -345,7 +345,7 @@
 {
 	"xor32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_XOR, BPF_REG_0, 1),
@@ -358,7 +358,7 @@
 {
 	"mov32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x100000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_1, BPF_REG_0),
 	BPF_LD_IMM64(BPF_REG_0, 0x100000000ULL),
@@ -372,13 +372,13 @@
 {
 	"mov32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_MOV32_IMM(BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_MOV32_IMM(BPF_REG_0, 1),
@@ -392,7 +392,7 @@
 {
 	"arsh32 reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_IMM(BPF_REG_1, 1),
@@ -406,13 +406,13 @@
 {
 	"arsh32 imm zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_ARSH, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_ALU32_IMM(BPF_ARSH, BPF_REG_0, 1),
@@ -426,10 +426,10 @@
 {
 	"end16 (to_le) reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
 	BPF_ENDIAN(BPF_TO_LE, BPF_REG_0, 16),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
@@ -441,10 +441,10 @@
 {
 	"end32 (to_le) reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
 	BPF_ENDIAN(BPF_TO_LE, BPF_REG_0, 32),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
@@ -456,10 +456,10 @@
 {
 	"end16 (to_be) reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
 	BPF_ENDIAN(BPF_TO_BE, BPF_REG_0, 16),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
@@ -471,10 +471,10 @@
 {
 	"end32 (to_be) reg zero extend check",
 	.insns = {
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_6, 32),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_6),
 	BPF_ENDIAN(BPF_TO_BE, BPF_REG_0, 32),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_0, 32),
@@ -489,7 +489,7 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4),
 	BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0xfaceb00c),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0),
@@ -505,7 +505,7 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4),
 	BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0xfaceb00c),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_6, 0),
@@ -521,7 +521,7 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -4),
 	BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0xfaceb00c),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_LD_IMM64(BPF_REG_1, 0x1000000000ULL),
 	BPF_ALU64_REG(BPF_OR, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c
index 878ca26..e035e92 100644
--- a/tools/testing/selftests/bpf/verifier/unpriv.c
+++ b/tools/testing/selftests/bpf/verifier/unpriv.c
@@ -69,7 +69,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_1),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_trace_printk),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_trace_printk),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -87,7 +87,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_update_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -103,7 +103,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -178,7 +178,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -193,7 +193,7 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
 	BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
@@ -210,7 +210,7 @@
 	BPF_RAW_INSN(BPF_STX | BPF_ATOMIC | BPF_DW,
 		     BPF_REG_10, BPF_REG_0, -8, BPF_ADD),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_hash_recalc),
 	BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
@@ -400,7 +400,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -441,7 +441,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_1),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/value_or_null.c b/tools/testing/selftests/bpf/verifier/value_or_null.c
index 52a8bca..1ea97759 100644
--- a/tools/testing/selftests/bpf/verifier/value_or_null.c
+++ b/tools/testing/selftests/bpf/verifier/value_or_null.c
@@ -6,7 +6,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
@@ -24,7 +24,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 2),
@@ -45,7 +45,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_4, -1),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
@@ -65,7 +65,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_4, 1),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
@@ -87,11 +87,11 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
 	BPF_EXIT_INSN(),
@@ -111,12 +111,12 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_2, 10),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
@@ -133,7 +133,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES-1, 1),
@@ -158,7 +158,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 2),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 1),
diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
index 249187d..af7a406 100644
--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
@@ -10,7 +10,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
@@ -43,7 +43,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
@@ -76,7 +76,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
@@ -107,7 +107,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
@@ -136,7 +136,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
@@ -169,7 +169,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
@@ -204,7 +204,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
@@ -239,7 +239,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
@@ -264,7 +264,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
@@ -292,7 +292,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
@@ -475,7 +475,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 48),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -497,7 +497,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 49),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -519,7 +519,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 47),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -539,7 +539,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_IMM(BPF_REG_1, 47),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -562,7 +562,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_IMM(BPF_REG_1, 47),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -587,7 +587,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_IMM(BPF_REG_1, 47),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -608,7 +608,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
@@ -627,7 +627,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -646,7 +646,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 49),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -665,7 +665,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, -1),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -684,7 +684,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_IMM(BPF_REG_1, 5),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -707,7 +707,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, (6 + 1) * sizeof(int)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
@@ -725,7 +725,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_IMM(BPF_REG_1, (3 + 1) * sizeof(int)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -745,7 +745,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV32_IMM(BPF_REG_1, 0x12345678),
 	BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
@@ -766,7 +766,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
@@ -786,7 +786,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31),
@@ -806,7 +806,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_MOV64_IMM(BPF_REG_1, -1),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -832,7 +832,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_IMM(BPF_REG_1, 19),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -855,7 +855,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
@@ -875,7 +875,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31),
@@ -895,7 +895,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 8),
@@ -922,7 +922,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -940,7 +940,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
@@ -959,7 +959,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
@@ -978,7 +978,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
@@ -999,7 +999,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
@@ -1019,7 +1019,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
@@ -1039,7 +1039,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
@@ -1065,7 +1065,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -1085,7 +1085,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c
index d37f512..769b20f 100644
--- a/tools/testing/selftests/bpf/verifier/var_off.c
+++ b/tools/testing/selftests/bpf/verifier/var_off.c
@@ -178,7 +178,7 @@
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
 	/* dereference it indirectly */
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -203,7 +203,7 @@
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_10),
 	/* dereference it indirectly */
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/xadd.c b/tools/testing/selftests/bpf/verifier/xadd.c
index b96ef35..8ce0171 100644
--- a/tools/testing/selftests/bpf/verifier/xadd.c
+++ b/tools/testing/selftests/bpf/verifier/xadd.c
@@ -18,7 +18,7 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 1),
diff --git a/tools/testing/selftests/net/csum.c b/tools/testing/selftests/net/csum.c
index 82a1c183..3a2d2c8 100644
--- a/tools/testing/selftests/net/csum.c
+++ b/tools/testing/selftests/net/csum.c
@@ -494,11 +494,11 @@ static void __recv_prepare_packet_filter(int fd, int off_nexthdr, int off_dport)
 {
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD + BPF_B + BPF_ABS, SKF_AD_OFF + SKF_AD_PKTTYPE),
-		BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, PACKET_HOST, 0, 4),
+		BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, PACKET_HOST, 0, 4),
 		BPF_STMT(BPF_LD + BPF_B + BPF_ABS, off_nexthdr),
-		BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, cfg_encap ? IPPROTO_UDP : cfg_proto, 0, 2),
+		BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, cfg_encap ? IPPROTO_UDP : cfg_proto, 0, 2),
 		BPF_STMT(BPF_LD + BPF_H + BPF_ABS, off_dport),
-		BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, cfg_port_dst, 1, 0),
+		BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, cfg_port_dst, 1, 0),
 		BPF_STMT(BPF_RET + BPF_K, 0),
 		BPF_STMT(BPF_RET + BPF_K, 0xFFFF),
 	};
diff --git a/tools/testing/selftests/net/gro.c b/tools/testing/selftests/net/gro.c
index 30024d0..7709741 100644
--- a/tools/testing/selftests/net/gro.c
+++ b/tools/testing/selftests/net/gro.c
@@ -122,13 +122,13 @@ static void setup_sock_filter(int fd)
 
 	struct sock_filter filter[] = {
 			BPF_STMT(BPF_LD  + BPF_H   + BPF_ABS, ethproto_off),
-			BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, ntohs(ethhdr_proto), 0, 7),
+			BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, ntohs(ethhdr_proto), 0, 7),
 			BPF_STMT(BPF_LD  + BPF_B   + BPF_ABS, ipproto_off),
-			BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, IPPROTO_TCP, 0, 5),
+			BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, IPPROTO_TCP, 0, 5),
 			BPF_STMT(BPF_LD  + BPF_H   + BPF_ABS, dport_off),
-			BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, DPORT, 2, 0),
+			BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, DPORT, 2, 0),
 			BPF_STMT(BPF_LD  + BPF_H   + BPF_ABS, dport_off + optlen),
-			BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, DPORT, 0, 1),
+			BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, DPORT, 0, 1),
 			BPF_STMT(BPF_RET + BPF_K, 0xFFFFFFFF),
 			BPF_STMT(BPF_RET + BPF_K, 0),
 	};
diff --git a/tools/testing/selftests/net/psock_fanout.c b/tools/testing/selftests/net/psock_fanout.c
index 1a736f7..2fe7ead 100644
--- a/tools/testing/selftests/net/psock_fanout.c
+++ b/tools/testing/selftests/net/psock_fanout.c
@@ -149,13 +149,13 @@ static void sock_fanout_set_ebpf(int fd)
 	struct bpf_insn prog[] = {
 		{ BPF_ALU64 | BPF_MOV | BPF_X,   6, 1, 0, 0 },
 		{ BPF_LDX   | BPF_W   | BPF_MEM, 0, 6, len_off, 0 },
-		{ BPF_JMP   | BPF_JGE | BPF_K,   0, 0, 1, DATA_LEN },
-		{ BPF_JMP   | BPF_JA  | BPF_K,   0, 0, 4, 0 },
+		{ BPF_JMP64   | BPF_JGE | BPF_K,   0, 0, 1, DATA_LEN },
+		{ BPF_JMP64   | BPF_JA  | BPF_K,   0, 0, 4, 0 },
 		{ BPF_LD    | BPF_B   | BPF_ABS, 0, 0, 0, 0x50 },
-		{ BPF_JMP   | BPF_JEQ | BPF_K,   0, 0, 2, DATA_CHAR },
-		{ BPF_JMP   | BPF_JEQ | BPF_K,   0, 0, 1, DATA_CHAR_1 },
-		{ BPF_ALU   | BPF_MOV | BPF_K,   0, 0, 0, 0 },
-		{ BPF_JMP   | BPF_EXIT,          0, 0, 0, 0 }
+		{ BPF_JMP64   | BPF_JEQ | BPF_K,   0, 0, 2, DATA_CHAR },
+		{ BPF_JMP64   | BPF_JEQ | BPF_K,   0, 0, 1, DATA_CHAR_1 },
+		{ BPF_ALU32   | BPF_MOV | BPF_K,   0, 0, 0, 0 },
+		{ BPF_JMP64   | BPF_EXIT,          0, 0, 0, 0 }
 	};
 	union bpf_attr attr;
 	int pfd;
diff --git a/tools/testing/selftests/net/reuseport_bpf.c b/tools/testing/selftests/net/reuseport_bpf.c
index 65aea27..a4eab6d 100644
--- a/tools/testing/selftests/net/reuseport_bpf.c
+++ b/tools/testing/selftests/net/reuseport_bpf.c
@@ -103,7 +103,7 @@ static void attach_ebpf(int fd, uint16_t mod)
 		/* BPF_ALU64_IMM(BPF_MOD, BPF_REG_0, mod) */
 		{ BPF_ALU64 | BPF_MOD | BPF_K, BPF_REG_0, 0, 0, mod },
 		/* BPF_EXIT_INSN() */
-		{ BPF_JMP | BPF_EXIT, 0, 0, 0, 0 }
+		{ BPF_JMP64 | BPF_EXIT, 0, 0, 0, 0 }
 	};
 	union bpf_attr attr;
 
@@ -134,7 +134,7 @@ static void attach_cbpf(int fd, uint16_t mod)
 		/* A = (uint32_t)skb[0] */
 		{ BPF_LD  | BPF_W | BPF_ABS, 0, 0, 0 },
 		/* A = A % mod */
-		{ BPF_ALU | BPF_MOD, 0, 0, mod },
+		{ BPF_ALU32 | BPF_MOD, 0, 0, mod },
 		/* return A */
 		{ BPF_RET | BPF_A, 0, 0, 0 },
 	};
@@ -341,7 +341,7 @@ static void test_filter_no_reuseport(const struct test_params p)
 	const char bpf_license[] = "GPL";
 	struct bpf_insn ecode[] = {
 		{ BPF_ALU64 | BPF_MOV | BPF_K, BPF_REG_0, 0, 0, 10 },
-		{ BPF_JMP | BPF_EXIT, 0, 0, 0, 0 }
+		{ BPF_JMP64 | BPF_EXIT, 0, 0, 0, 0 }
 	};
 	struct sock_filter ccode[] = {{ BPF_RET | BPF_A, 0, 0, 0 }};
 	union bpf_attr eprog;
diff --git a/tools/testing/selftests/net/reuseport_bpf_numa.c b/tools/testing/selftests/net/reuseport_bpf_numa.c
index c9ba36a..8def0fe 100644
--- a/tools/testing/selftests/net/reuseport_bpf_numa.c
+++ b/tools/testing/selftests/net/reuseport_bpf_numa.c
@@ -78,9 +78,9 @@ static void attach_bpf(int fd)
 	int bpf_fd;
 	const struct bpf_insn prog[] = {
 		/* R0 = bpf_get_numa_node_id() */
-		{ BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_get_numa_node_id },
+		{ BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_numa_node_id },
 		/* return R0 */
-		{ BPF_JMP | BPF_EXIT, 0, 0, 0, 0 }
+		{ BPF_JMP64 | BPF_EXIT, 0, 0, 0, 0 }
 	};
 	union bpf_attr attr;
 
diff --git a/tools/testing/selftests/net/toeplitz.c b/tools/testing/selftests/net/toeplitz.c
index 9ba0316..6774571 100644
--- a/tools/testing/selftests/net/toeplitz.c
+++ b/tools/testing/selftests/net/toeplitz.c
@@ -288,11 +288,11 @@ static void __set_filter(int fd, int off_proto, uint8_t proto, int off_dport)
 {
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD  + BPF_B   + BPF_ABS, SKF_AD_OFF + SKF_AD_PKTTYPE),
-		BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, PACKET_HOST, 0, 4),
+		BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, PACKET_HOST, 0, 4),
 		BPF_STMT(BPF_LD  + BPF_B   + BPF_ABS, off_proto),
-		BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, proto, 0, 2),
+		BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, proto, 0, 2),
 		BPF_STMT(BPF_LD  + BPF_H   + BPF_ABS, off_dport),
-		BPF_JUMP(BPF_JMP + BPF_JEQ + BPF_K, cfg_dport, 1, 0),
+		BPF_JUMP(BPF_JMP64 + BPF_JEQ + BPF_K, cfg_dport, 1, 0),
 		BPF_STMT(BPF_RET + BPF_K, 0),
 		BPF_STMT(BPF_RET + BPF_K, 0xFFFF),
 	};
diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
index 9c2f448..9537f27 100644
--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
+++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
@@ -662,7 +662,7 @@ TEST_SIGNAL(KILL_one, SIGSYS)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getpid, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -690,11 +690,11 @@ TEST_SIGNAL(KILL_one_arg_one, SIGSYS)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_times, 1, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_times, 1, 0),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 		/* Only both with lower 32-bit for now. */
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS, syscall_arg(0)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K,
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K,
 			(unsigned long)&fatal_address, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
@@ -730,11 +730,11 @@ TEST_SIGNAL(KILL_one_arg_six, SIGSYS)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, sysno, 1, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, sysno, 1, 0),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 		/* Only both with lower 32-bit for now. */
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS, syscall_arg(5)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, 0x0C0FFEE, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, 0x0C0FFEE, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -803,7 +803,7 @@ void kill_thread_or_group(struct __test_metadata *_metadata,
 	struct sock_filter filter_thread[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_prctl, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_prctl, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL_THREAD),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -815,7 +815,7 @@ void kill_thread_or_group(struct __test_metadata *_metadata,
 	struct sock_filter filter_process[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_prctl, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_prctl, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, kill),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -941,7 +941,7 @@ TEST(arg_out_of_range)
 	struct sock_filter _read_filter_##name[] = {			\
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,				\
 			offsetof(struct seccomp_data, nr)),		\
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_read, 0, 1),	\
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_read, 0, 1),	\
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO | errno),	\
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),		\
 	};								\
@@ -1048,7 +1048,7 @@ FIXTURE_SETUP(TRAP)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getpid, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRAP),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -1167,35 +1167,35 @@ FIXTURE_SETUP(precedence)
 	struct sock_filter log_insns[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_LOG),
 	};
 	struct sock_filter trace_insns[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE),
 	};
 	struct sock_filter error_insns[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO),
 	};
 	struct sock_filter trap_insns[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRAP),
 	};
 	struct sock_filter kill_insns[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getpid, 1, 0),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL),
 	};
@@ -1662,7 +1662,7 @@ FIXTURE_SETUP(TRACE_poke)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_read, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_read, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE | 0x1001),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -2117,13 +2117,13 @@ FIXTURE_SETUP(TRACE_syscall)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getpid, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE | 0x1002),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_gettid, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_gettid, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE | 0x1003),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_openat, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_openat, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE | 0x1004),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getppid, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getppid, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE | 0x1005),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -2221,7 +2221,7 @@ TEST_F_SIGNAL(TRACE_syscall, kill_immediate, SIGSYS)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_mknodat, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_mknodat, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL_THREAD),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -2244,7 +2244,7 @@ TEST_F(TRACE_syscall, skip_after)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getppid, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getppid, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO | EPERM),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -2269,7 +2269,7 @@ TEST_F_SIGNAL(TRACE_syscall, kill_after, SIGSYS)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getppid, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getppid, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -2539,7 +2539,7 @@ FIXTURE_SETUP(TSYNC)
 	struct sock_filter apply_filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_read, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_read, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -2648,7 +2648,7 @@ TEST_F(TSYNC, siblings_fail_prctl)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_prctl, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_prctl, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ERRNO | EINVAL),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -2999,17 +2999,17 @@ TEST(syscall_restart)
 			 offsetof(struct seccomp_data, nr)),
 
 #ifdef __NR_sigreturn
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_sigreturn, 7, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_sigreturn, 7, 0),
 #endif
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_read, 6, 0),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_exit, 5, 0),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_rt_sigreturn, 4, 0),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_nanosleep, 5, 0),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_clock_nanosleep, 4, 0),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_restart_syscall, 4, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_read, 6, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_exit, 5, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_rt_sigreturn, 4, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_nanosleep, 5, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_clock_nanosleep, 4, 0),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_restart_syscall, 4, 0),
 
 		/* Allow __NR_write for easy logging. */
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_write, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_write, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL),
 		/* The nanosleep jump target. */
@@ -3169,7 +3169,7 @@ TEST_SIGNAL(filter_flag_log, SIGSYS)
 	struct sock_filter kill_filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getpid, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, __NR_getpid, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_KILL),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
@@ -3328,7 +3328,7 @@ static int user_notif_syscall(int nr, unsigned int flags)
 	struct sock_filter filter[] = {
 		BPF_STMT(BPF_LD|BPF_W|BPF_ABS,
 			offsetof(struct seccomp_data, nr)),
-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, nr, 0, 1),
+		BPF_JUMP(BPF_JMP64|BPF_JEQ|BPF_K, nr, 0, 1),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_USER_NOTIF),
 		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
 	};
-- 
2.1.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH bpf-next 3/4] bpf: treewide: Clean up BPF_ALU_* and BPF_JMP_*
  2023-02-01 12:36 [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Tiezhu Yang
  2023-02-01 12:36 ` [PATCH bpf-next 1/4] bpf: Add new macro " Tiezhu Yang
  2023-02-01 12:36 ` [PATCH bpf-next 2/4] bpf: treewide: Clean up BPF_ALU and BPF_JMP Tiezhu Yang
@ 2023-02-01 12:36 ` Tiezhu Yang
  2023-02-01 12:36 ` [PATCH bpf-next 4/4] bpf: Mark BPF_ALU and BPF_JMP as deprecated Tiezhu Yang
  2023-02-02 11:26 ` [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Alexei Starovoitov
  4 siblings, 0 replies; 9+ messages in thread
From: Tiezhu Yang @ 2023-02-01 12:36 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko; +Cc: bpf, linux-kernel

Replace the ambiguous macros BPF_ALU_* and BPF_JMP_* with the new macros
BPF_ALU32_* and BPF_JMP64_* throughout the kernel tree. The conversion was
done mechanically with:

sed -i "s/BPF_ALU_/BPF_ALU32_/g" `grep BPF_ALU_ -rl .`
sed -i "s/BPF_JMP_/BPF_JMP64_/g" `grep BPF_JMP_ -rl .`

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
 Documentation/bpf/verifier.rst                     |   4 +-
 include/linux/bpf_verifier.h                       |  14 +-
 include/linux/filter.h                             |  10 +-
 kernel/bpf/arraymap.c                              |  14 +-
 kernel/bpf/core.c                                  |   2 +-
 kernel/bpf/hashtab.c                               |   8 +-
 kernel/bpf/verifier.c                              |  40 +-
 lib/test_bpf.c                                     | 596 ++++++++++-----------
 net/core/filter.c                                  |  60 +--
 net/xdp/xskmap.c                                   |   4 +-
 samples/bpf/bpf_insn.h                             |   8 +-
 samples/bpf/cookie_uid_helper_example.c            |   4 +-
 samples/bpf/sock_example.c                         |   2 +-
 samples/bpf/test_cgrp2_attach.c                    |   4 +-
 samples/bpf/test_cgrp2_sock.c                      |   2 +-
 tools/bpf/bpf_dbg.c                                | 252 ++++-----
 tools/bpf/bpftool/feature.c                        |   4 +-
 tools/include/linux/filter.h                       |  10 +-
 tools/lib/bpf/gen_loader.c                         |  36 +-
 tools/perf/util/bpf-prologue.c                     |   6 +-
 tools/testing/selftests/bpf/prog_tests/align.c     |  24 +-
 tools/testing/selftests/bpf/prog_tests/btf.c       |  24 +-
 .../selftests/bpf/prog_tests/cgroup_attach_multi.c |   2 +-
 .../bpf/prog_tests/flow_dissector_load_bytes.c     |   2 +-
 tools/testing/selftests/bpf/prog_tests/sockopt.c   |  40 +-
 tools/testing/selftests/bpf/test_lru_map.c         |   4 +-
 tools/testing/selftests/bpf/test_sock.c            |  30 +-
 tools/testing/selftests/bpf/test_sock_addr.c       |   6 +-
 tools/testing/selftests/bpf/test_sysctl.c          | 172 +++---
 tools/testing/selftests/bpf/test_verifier.c        |  24 +-
 tools/testing/selftests/bpf/verifier/and.c         |   4 +-
 .../testing/selftests/bpf/verifier/array_access.c  |  46 +-
 tools/testing/selftests/bpf/verifier/atomic_and.c  |  14 +-
 .../testing/selftests/bpf/verifier/atomic_bounds.c |   2 +-
 .../selftests/bpf/verifier/atomic_cmpxchg.c        |  12 +-
 .../testing/selftests/bpf/verifier/atomic_fetch.c  |  12 +-
 .../selftests/bpf/verifier/atomic_fetch_add.c      |   8 +-
 tools/testing/selftests/bpf/verifier/atomic_or.c   |  12 +-
 tools/testing/selftests/bpf/verifier/atomic_xchg.c |   4 +-
 tools/testing/selftests/bpf/verifier/atomic_xor.c  |  10 +-
 tools/testing/selftests/bpf/verifier/basic_instr.c |  14 +-
 tools/testing/selftests/bpf/verifier/bounds.c      |  96 ++--
 .../selftests/bpf/verifier/bounds_deduction.c      |  24 +-
 .../bpf/verifier/bounds_mix_sign_unsign.c          | 100 ++--
 .../testing/selftests/bpf/verifier/bpf_get_stack.c |  12 +-
 .../selftests/bpf/verifier/bpf_loop_inline.c       |  14 +-
 tools/testing/selftests/bpf/verifier/calls.c       | 234 ++++----
 tools/testing/selftests/bpf/verifier/cfg.c         |  14 +-
 tools/testing/selftests/bpf/verifier/cgroup_skb.c  |   2 +-
 tools/testing/selftests/bpf/verifier/ctx_sk_msg.c  |   8 +-
 tools/testing/selftests/bpf/verifier/ctx_skb.c     | 124 ++---
 tools/testing/selftests/bpf/verifier/dead_code.c   |  40 +-
 .../selftests/bpf/verifier/direct_packet_access.c  |  90 ++--
 .../testing/selftests/bpf/verifier/div_overflow.c  |   8 +-
 .../selftests/bpf/verifier/helper_access_var_len.c |  74 +--
 .../selftests/bpf/verifier/helper_packet_access.c  |  42 +-
 .../selftests/bpf/verifier/helper_restricted.c     |  16 +-
 .../selftests/bpf/verifier/helper_value_access.c   | 132 ++---
 .../selftests/bpf/verifier/jeq_infer_not_null.c    |  32 +-
 tools/testing/selftests/bpf/verifier/jit.c         |  50 +-
 tools/testing/selftests/bpf/verifier/jmp32.c       |  26 +-
 tools/testing/selftests/bpf/verifier/jset.c        |  32 +-
 tools/testing/selftests/bpf/verifier/jump.c        | 230 ++++----
 tools/testing/selftests/bpf/verifier/ld_abs.c      |  10 +-
 tools/testing/selftests/bpf/verifier/ld_imm64.c    |   6 +-
 tools/testing/selftests/bpf/verifier/leak_ptr.c    |   2 +-
 tools/testing/selftests/bpf/verifier/loops1.c      |  44 +-
 tools/testing/selftests/bpf/verifier/lwt.c         |  16 +-
 tools/testing/selftests/bpf/verifier/map_in_map.c  |  12 +-
 tools/testing/selftests/bpf/verifier/map_kptr.c    |  68 +--
 .../selftests/bpf/verifier/map_ptr_mixing.c        |  20 +-
 tools/testing/selftests/bpf/verifier/map_ret_val.c |   4 +-
 tools/testing/selftests/bpf/verifier/meta_access.c |  30 +-
 tools/testing/selftests/bpf/verifier/precise.c     |  32 +-
 .../selftests/bpf/verifier/raw_tp_writable.c       |   2 +-
 .../testing/selftests/bpf/verifier/ref_tracking.c  | 118 ++--
 tools/testing/selftests/bpf/verifier/regalloc.c    |  60 +--
 tools/testing/selftests/bpf/verifier/ringbuf.c     |   6 +-
 tools/testing/selftests/bpf/verifier/runtime_jit.c |  24 +-
 .../selftests/bpf/verifier/search_pruning.c        |  56 +-
 tools/testing/selftests/bpf/verifier/sock.c        | 114 ++--
 tools/testing/selftests/bpf/verifier/spill_fill.c  |  18 +-
 tools/testing/selftests/bpf/verifier/spin_lock.c   |  44 +-
 tools/testing/selftests/bpf/verifier/stack_ptr.c   |   6 +-
 tools/testing/selftests/bpf/verifier/uninit.c      |   2 +-
 tools/testing/selftests/bpf/verifier/unpriv.c      |  44 +-
 tools/testing/selftests/bpf/verifier/value.c       |  10 +-
 .../selftests/bpf/verifier/value_adj_spill.c       |   4 +-
 .../selftests/bpf/verifier/value_illegal_alu.c     |  10 +-
 .../testing/selftests/bpf/verifier/value_or_null.c |  26 +-
 .../selftests/bpf/verifier/value_ptr_arith.c       | 186 +++----
 tools/testing/selftests/bpf/verifier/var_off.c     |   2 +-
 tools/testing/selftests/bpf/verifier/xadd.c        |  14 +-
 tools/testing/selftests/bpf/verifier/xdp.c         |   2 +-
 .../bpf/verifier/xdp_direct_packet_access.c        | 228 ++++----
 95 files changed, 2083 insertions(+), 2083 deletions(-)

diff --git a/Documentation/bpf/verifier.rst b/Documentation/bpf/verifier.rst
index ecf6ead..16d0d19 100644
--- a/Documentation/bpf/verifier.rst
+++ b/Documentation/bpf/verifier.rst
@@ -427,7 +427,7 @@ accesses the memory with incorrect alignment::
   BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
   BPF_LD_MAP_FD(BPF_REG_1, 0),
   BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-  BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+  BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
   BPF_ST_MEM(BPF_DW, BPF_REG_0, 4, 0),
   BPF_EXIT_INSN(),
 
@@ -452,7 +452,7 @@ to do so in the other side of 'if' branch::
   BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
   BPF_LD_MAP_FD(BPF_REG_1, 0),
   BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-  BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+  BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
   BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
   BPF_EXIT_INSN(),
   BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 1),
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index aa83de1..1fdbca0 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -409,13 +409,13 @@ struct bpf_loop_inline_state {
 };
 
 /* Possible states for alu_state member. */
-#define BPF_ALU_SANITIZE_SRC		(1U << 0)
-#define BPF_ALU_SANITIZE_DST		(1U << 1)
-#define BPF_ALU_NEG_VALUE		(1U << 2)
-#define BPF_ALU_NON_POINTER		(1U << 3)
-#define BPF_ALU_IMMEDIATE		(1U << 4)
-#define BPF_ALU_SANITIZE		(BPF_ALU_SANITIZE_SRC | \
-					 BPF_ALU_SANITIZE_DST)
+#define BPF_ALU32_SANITIZE_SRC		(1U << 0)
+#define BPF_ALU32_SANITIZE_DST		(1U << 1)
+#define BPF_ALU32_NEG_VALUE		(1U << 2)
+#define BPF_ALU32_NON_POINTER		(1U << 3)
+#define BPF_ALU32_IMMEDIATE		(1U << 4)
+#define BPF_ALU32_SANITIZE		(BPF_ALU32_SANITIZE_SRC | \
+					 BPF_ALU32_SANITIZE_DST)
 
 struct bpf_insn_aux_data {
 	union {
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 347fcfa..c1f0182 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -303,7 +303,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 /* Conditional jumps against registers, if (dst_reg 'op' src_reg) goto pc + off16 */
 
-#define BPF_JMP_REG(OP, DST, SRC, OFF)				\
+#define BPF_JMP64_REG(OP, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
@@ -313,7 +313,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 /* Conditional jumps against immediates, if (dst_reg 'op' imm32) goto pc + off16 */
 
-#define BPF_JMP_IMM(OP, DST, IMM, OFF)				\
+#define BPF_JMP64_IMM(OP, DST, IMM, OFF)			\
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_K,	\
 		.dst_reg = DST,					\
@@ -321,7 +321,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 		.off   = OFF,					\
 		.imm   = IMM })
 
-/* Like BPF_JMP_REG, but with 32-bit wide operands for comparison. */
+/* Like BPF_JMP64_REG, but with 32-bit wide operands for comparison. */
 
 #define BPF_JMP32_REG(OP, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
@@ -331,7 +331,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 		.off   = OFF,					\
 		.imm   = 0 })
 
-/* Like BPF_JMP_IMM, but with 32-bit wide operands for comparison. */
+/* Like BPF_JMP64_IMM, but with 32-bit wide operands for comparison. */
 
 #define BPF_JMP32_IMM(OP, DST, IMM, OFF)			\
 	((struct bpf_insn) {					\
@@ -343,7 +343,7 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
 
 /* Unconditional jumps, goto pc + off16 */
 
-#define BPF_JMP_A(OFF)						\
+#define BPF_JMP64_A(OFF)					\
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP64 | BPF_JA,			\
 		.dst_reg = 0,					\
diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
index 4847069..601e465 100644
--- a/kernel/bpf/arraymap.c
+++ b/kernel/bpf/arraymap.c
@@ -217,10 +217,10 @@ static int array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, map_ptr, offsetof(struct bpf_array, value));
 	*insn++ = BPF_LDX_MEM(BPF_W, ret, index, 0);
 	if (!map->bypass_spec_v1) {
-		*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 4);
+		*insn++ = BPF_JMP64_IMM(BPF_JGE, ret, map->max_entries, 4);
 		*insn++ = BPF_ALU32_IMM(BPF_AND, ret, array->index_mask);
 	} else {
-		*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 3);
+		*insn++ = BPF_JMP64_IMM(BPF_JGE, ret, map->max_entries, 3);
 	}
 
 	if (is_power_of_2(elem_size)) {
@@ -229,7 +229,7 @@ static int array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
 		*insn++ = BPF_ALU64_IMM(BPF_MUL, ret, elem_size);
 	}
 	*insn++ = BPF_ALU64_REG(BPF_ADD, ret, map_ptr);
-	*insn++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
+	*insn++ = BPF_JMP64_IMM(BPF_JA, 0, 0, 1);
 	*insn++ = BPF_MOV64_IMM(ret, 0);
 	return insn - insn_buf;
 }
@@ -1347,10 +1347,10 @@ static int array_of_map_gen_lookup(struct bpf_map *map,
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, map_ptr, offsetof(struct bpf_array, value));
 	*insn++ = BPF_LDX_MEM(BPF_W, ret, index, 0);
 	if (!map->bypass_spec_v1) {
-		*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 6);
+		*insn++ = BPF_JMP64_IMM(BPF_JGE, ret, map->max_entries, 6);
 		*insn++ = BPF_ALU32_IMM(BPF_AND, ret, array->index_mask);
 	} else {
-		*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 5);
+		*insn++ = BPF_JMP64_IMM(BPF_JGE, ret, map->max_entries, 5);
 	}
 	if (is_power_of_2(elem_size))
 		*insn++ = BPF_ALU64_IMM(BPF_LSH, ret, ilog2(elem_size));
@@ -1358,8 +1358,8 @@ static int array_of_map_gen_lookup(struct bpf_map *map,
 		*insn++ = BPF_ALU64_IMM(BPF_MUL, ret, elem_size);
 	*insn++ = BPF_ALU64_REG(BPF_ADD, ret, map_ptr);
 	*insn++ = BPF_LDX_MEM(BPF_DW, ret, ret, 0);
-	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 1);
-	*insn++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
+	*insn++ = BPF_JMP64_IMM(BPF_JEQ, ret, 0, 1);
+	*insn++ = BPF_JMP64_IMM(BPF_JA, 0, 0, 1);
 	*insn++ = BPF_MOV64_IMM(ret, 0);
 
 	return insn - insn_buf;
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index d31b38c..ebc6c97 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1297,7 +1297,7 @@ static int bpf_jit_blind_insn(const struct bpf_insn *from,
 			off -= 2;
 		*to++ = BPF_ALU64_IMM(BPF_MOV, BPF_REG_AX, imm_rnd ^ from->imm);
 		*to++ = BPF_ALU64_IMM(BPF_XOR, BPF_REG_AX, imm_rnd);
-		*to++ = BPF_JMP_REG(from->code, from->dst_reg, BPF_REG_AX, off);
+		*to++ = BPF_JMP64_REG(from->code, from->dst_reg, BPF_REG_AX, off);
 		break;
 
 	case BPF_JMP32 | BPF_JEQ  | BPF_K:
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 66bded1..aa74085 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -700,7 +700,7 @@ static int htab_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
 	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
 		     (void *(*)(struct bpf_map *map, void *key))NULL));
 	*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
-	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 1);
+	*insn++ = BPF_JMP64_IMM(BPF_JEQ, ret, 0, 1);
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
 				offsetof(struct htab_elem, key) +
 				round_up(map->key_size, 8));
@@ -741,11 +741,11 @@ static int htab_lru_map_gen_lookup(struct bpf_map *map,
 	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
 		     (void *(*)(struct bpf_map *map, void *key))NULL));
 	*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
-	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 4);
+	*insn++ = BPF_JMP64_IMM(BPF_JEQ, ret, 0, 4);
 	*insn++ = BPF_LDX_MEM(BPF_B, ref_reg, ret,
 			      offsetof(struct htab_elem, lru_node) +
 			      offsetof(struct bpf_lru_node, ref));
-	*insn++ = BPF_JMP_IMM(BPF_JNE, ref_reg, 0, 1);
+	*insn++ = BPF_JMP64_IMM(BPF_JNE, ref_reg, 0, 1);
 	*insn++ = BPF_ST_MEM(BPF_B, ret,
 			     offsetof(struct htab_elem, lru_node) +
 			     offsetof(struct bpf_lru_node, ref),
@@ -2492,7 +2492,7 @@ static int htab_of_map_gen_lookup(struct bpf_map *map,
 	BUILD_BUG_ON(!__same_type(&__htab_map_lookup_elem,
 		     (void *(*)(struct bpf_map *map, void *key))NULL));
 	*insn++ = BPF_EMIT_CALL(__htab_map_lookup_elem);
-	*insn++ = BPF_JMP_IMM(BPF_JEQ, ret, 0, 2);
+	*insn++ = BPF_JMP64_IMM(BPF_JEQ, ret, 0, 2);
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, ret,
 				offsetof(struct htab_elem, key) +
 				round_up(map->key_size, 8));
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 0869c50..46a4783 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -9794,7 +9794,7 @@ static int sanitize_val_alu(struct bpf_verifier_env *env,
 	if (can_skip_alu_sanitation(env, insn))
 		return 0;
 
-	return update_alu_sanitation_state(aux, BPF_ALU_NON_POINTER, 0);
+	return update_alu_sanitation_state(aux, BPF_ALU32_NON_POINTER, 0);
 }
 
 static bool sanitize_needed(u8 opcode)
@@ -9877,10 +9877,10 @@ static int sanitize_ptr_alu(struct bpf_verifier_env *env,
 		alu_state = info->aux.alu_state;
 		alu_limit = abs(info->aux.alu_limit - alu_limit);
 	} else {
-		alu_state  = off_is_neg ? BPF_ALU_NEG_VALUE : 0;
-		alu_state |= off_is_imm ? BPF_ALU_IMMEDIATE : 0;
+		alu_state  = off_is_neg ? BPF_ALU32_NEG_VALUE : 0;
+		alu_state |= off_is_imm ? BPF_ALU32_IMMEDIATE : 0;
 		alu_state |= ptr_is_dst_reg ?
-			     BPF_ALU_SANITIZE_SRC : BPF_ALU_SANITIZE_DST;
+			     BPF_ALU32_SANITIZE_SRC : BPF_ALU32_SANITIZE_DST;
 
 		/* Limit pruning on unknown scalars to enable deep search for
 		 * potential masking differences from other program paths.
@@ -15169,7 +15169,7 @@ static int verifier_remove_insns(struct bpf_verifier_env *env, u32 off, u32 cnt)
 static void sanitize_dead_code(struct bpf_verifier_env *env)
 {
 	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
-	struct bpf_insn trap = BPF_JMP_IMM(BPF_JA, 0, 0, -1);
+	struct bpf_insn trap = BPF_JMP64_IMM(BPF_JA, 0, 0, -1);
 	struct bpf_insn *insn = env->prog->insnsi;
 	const int insn_cnt = env->prog->len;
 	int i;
@@ -15199,7 +15199,7 @@ static bool insn_is_cond_jump(u8 code)
 static void opt_hard_wire_dead_code_branches(struct bpf_verifier_env *env)
 {
 	struct bpf_insn_aux_data *aux_data = env->insn_aux_data;
-	struct bpf_insn ja = BPF_JMP_IMM(BPF_JA, 0, 0, 0);
+	struct bpf_insn ja = BPF_JMP64_IMM(BPF_JA, 0, 0, 0);
 	struct bpf_insn *insn = env->prog->insnsi;
 	const int insn_cnt = env->prog->len;
 	int i;
@@ -15248,7 +15248,7 @@ static int opt_remove_dead_code(struct bpf_verifier_env *env)
 
 static int opt_remove_nops(struct bpf_verifier_env *env)
 {
-	const struct bpf_insn ja = BPF_JMP_IMM(BPF_JA, 0, 0, 0);
+	const struct bpf_insn ja = BPF_JMP64_IMM(BPF_JA, 0, 0, 0);
 	struct bpf_insn *insn = env->prog->insnsi;
 	int insn_cnt = env->prog->len;
 	int i, err;
@@ -15937,7 +15937,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 					     BPF_JNE | BPF_K, insn->src_reg,
 					     0, 2, 0),
 				BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
-				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+				BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 				*insn,
 			};
 			struct bpf_insn chk_and_mod[] = {
@@ -15946,7 +15946,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 					     BPF_JEQ | BPF_K, insn->src_reg,
 					     0, 1 + (is64 ? 0 : 1), 0),
 				*insn,
-				BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+				BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 				BPF_MOV32_REG(insn->dst_reg, insn->dst_reg),
 			};
 
@@ -15995,13 +15995,13 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 
 			aux = &env->insn_aux_data[i + delta];
 			if (!aux->alu_state ||
-			    aux->alu_state == BPF_ALU_NON_POINTER)
+			    aux->alu_state == BPF_ALU32_NON_POINTER)
 				continue;
 
-			isneg = aux->alu_state & BPF_ALU_NEG_VALUE;
-			issrc = (aux->alu_state & BPF_ALU_SANITIZE) ==
-				BPF_ALU_SANITIZE_SRC;
-			isimm = aux->alu_state & BPF_ALU_IMMEDIATE;
+			isneg = aux->alu_state & BPF_ALU32_NEG_VALUE;
+			issrc = (aux->alu_state & BPF_ALU32_SANITIZE) ==
+				BPF_ALU32_SANITIZE_SRC;
+			isimm = aux->alu_state & BPF_ALU32_IMMEDIATE;
 
 			off_reg = issrc ? insn->src_reg : insn->dst_reg;
 			if (isimm) {
@@ -16121,7 +16121,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			}
 
 			map_ptr = BPF_MAP_PTR(aux->map_ptr_state);
-			insn_buf[0] = BPF_JMP_IMM(BPF_JGE, BPF_REG_3,
+			insn_buf[0] = BPF_JMP64_IMM(BPF_JGE, BPF_REG_3,
 						  map_ptr->max_entries, 2);
 			insn_buf[1] = BPF_ALU32_IMM(BPF_AND, BPF_REG_3,
 						    container_of(map_ptr,
@@ -16326,7 +16326,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
 			insn_buf[4] = BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 0);
 			insn_buf[5] = BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0);
 			insn_buf[6] = BPF_MOV64_IMM(BPF_REG_0, 0);
-			insn_buf[7] = BPF_JMP_A(1);
+			insn_buf[7] = BPF_JMP64_A(1);
 			insn_buf[8] = BPF_MOV64_IMM(BPF_REG_0, -EINVAL);
 			cnt = 9;
 
@@ -16459,9 +16459,9 @@ static struct bpf_prog *inline_bpf_loop(struct bpf_verifier_env *env,
 		/* Return error and jump to the end of the patch if
 		 * expected number of iterations is too big.
 		 */
-		BPF_JMP_IMM(BPF_JLE, BPF_REG_1, BPF_MAX_LOOPS, 2),
+		BPF_JMP64_IMM(BPF_JLE, BPF_REG_1, BPF_MAX_LOOPS, 2),
 		BPF_MOV32_IMM(BPF_REG_0, -E2BIG),
-		BPF_JMP_IMM(BPF_JA, 0, 0, 16),
+		BPF_JMP64_IMM(BPF_JA, 0, 0, 16),
 		/* spill R6, R7, R8 to use these as loop vars */
 		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, r6_offset),
 		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_7, r7_offset),
@@ -16473,7 +16473,7 @@ static struct bpf_prog *inline_bpf_loop(struct bpf_verifier_env *env,
 		/* loop header,
 		 * if reg_loop_cnt >= reg_loop_max skip the loop body
 		 */
-		BPF_JMP_REG(BPF_JGE, reg_loop_cnt, reg_loop_max, 5),
+		BPF_JMP64_REG(BPF_JGE, reg_loop_cnt, reg_loop_max, 5),
 		/* callback call,
 		 * correct callback offset would be set after patching
 		 */
@@ -16483,7 +16483,7 @@ static struct bpf_prog *inline_bpf_loop(struct bpf_verifier_env *env,
 		/* increment loop counter */
 		BPF_ALU64_IMM(BPF_ADD, reg_loop_cnt, 1),
 		/* jump to loop header if callback returned 0 */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, -6),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, -6),
 		/* return value of bpf_loop,
 		 * set R0 to the number of iterations
 		 */
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index be5f161..f84fc19 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -276,7 +276,7 @@ static int bpf_fill_maxinsns9(struct bpf_test *self)
 	if (!insn)
 		return -ENOMEM;
 
-	insn[0] = BPF_JMP_IMM(BPF_JA, 0, 0, len - 2);
+	insn[0] = BPF_JMP64_IMM(BPF_JA, 0, 0, len - 2);
 	insn[1] = BPF_ALU32_IMM(BPF_MOV, R0, 0xcbababab);
 	insn[2] = BPF_EXIT_INSN();
 
@@ -284,7 +284,7 @@ static int bpf_fill_maxinsns9(struct bpf_test *self)
 		insn[i] = BPF_ALU32_IMM(BPF_MOV, R0, 0xfefefefe);
 
 	insn[len - 2] = BPF_EXIT_INSN();
-	insn[len - 1] = BPF_JMP_IMM(BPF_JA, 0, 0, -(len - 1));
+	insn[len - 1] = BPF_JMP64_IMM(BPF_JA, 0, 0, -(len - 1));
 
 	self->u.ptr.insns = insn;
 	self->u.ptr.len = len;
@@ -303,11 +303,11 @@ static int bpf_fill_maxinsns10(struct bpf_test *self)
 		return -ENOMEM;
 
 	for (i = 0; i < hlen / 2; i++)
-		insn[i] = BPF_JMP_IMM(BPF_JA, 0, 0, hlen - 2 - 2 * i);
+		insn[i] = BPF_JMP64_IMM(BPF_JA, 0, 0, hlen - 2 - 2 * i);
 	for (i = hlen - 1; i > hlen / 2; i--)
-		insn[i] = BPF_JMP_IMM(BPF_JA, 0, 0, hlen - 1 - 2 * i);
+		insn[i] = BPF_JMP64_IMM(BPF_JA, 0, 0, hlen - 1 - 2 * i);
 
-	insn[hlen / 2] = BPF_JMP_IMM(BPF_JA, 0, 0, hlen / 2 - 1);
+	insn[hlen / 2] = BPF_JMP64_IMM(BPF_JA, 0, 0, hlen / 2 - 1);
 	insn[hlen]     = BPF_ALU32_IMM(BPF_MOV, R0, 0xabababac);
 	insn[hlen + 1] = BPF_EXIT_INSN();
 
@@ -490,7 +490,7 @@ static int __bpf_fill_max_jmp(struct bpf_test *self, int jmp, int imm)
 
 	i = __bpf_ld_imm64(insns, R1, 0x0123456789abcdefULL);
 	insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 1);
-	insns[i++] = BPF_JMP_IMM(jmp, R0, imm, S16_MAX);
+	insns[i++] = BPF_JMP64_IMM(jmp, R0, imm, S16_MAX);
 	insns[i++] = BPF_ALU64_IMM(BPF_MOV, R0, 2);
 	insns[i++] = BPF_EXIT_INSN();
 
@@ -651,7 +651,7 @@ static int __bpf_fill_alu_shift(struct bpf_test *self, u8 op,
 
 			/* Load reference and check the result */
 			i += __bpf_ld_imm64(&insn[i], R4, val);
-			insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R4, 1);
+			insn[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R4, 1);
 			insn[i++] = BPF_EXIT_INSN();
 		}
 	}
@@ -762,7 +762,7 @@ static int __bpf_fill_alu_shift_same_reg(struct bpf_test *self, u8 op,
 		i += __bpf_ld_imm64(&insn[i], R2, res);
 
 		/* Check the actual result */
-		insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R2, 1);
+		insn[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R2, 1);
 		insn[i++] = BPF_EXIT_INSN();
 	}
 
@@ -927,7 +927,7 @@ static int __bpf_emit_alu64_imm(struct bpf_test *self, void *arg,
 		i += __bpf_ld_imm64(&insns[i], R1, dst);
 		i += __bpf_ld_imm64(&insns[i], R3, res);
 		insns[i++] = BPF_ALU64_IMM(op, R1, imm);
-		insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1);
+		insns[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R3, 1);
 		insns[i++] = BPF_EXIT_INSN();
 	}
 
@@ -948,7 +948,7 @@ static int __bpf_emit_alu32_imm(struct bpf_test *self, void *arg,
 		i += __bpf_ld_imm64(&insns[i], R1, dst);
 		i += __bpf_ld_imm64(&insns[i], R3, (u32)res);
 		insns[i++] = BPF_ALU32_IMM(op, R1, imm);
-		insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1);
+		insns[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R3, 1);
 		insns[i++] = BPF_EXIT_INSN();
 	}
 
@@ -970,7 +970,7 @@ static int __bpf_emit_alu64_reg(struct bpf_test *self, void *arg,
 		i += __bpf_ld_imm64(&insns[i], R2, src);
 		i += __bpf_ld_imm64(&insns[i], R3, res);
 		insns[i++] = BPF_ALU64_REG(op, R1, R2);
-		insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1);
+		insns[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R3, 1);
 		insns[i++] = BPF_EXIT_INSN();
 	}
 
@@ -992,7 +992,7 @@ static int __bpf_emit_alu32_reg(struct bpf_test *self, void *arg,
 		i += __bpf_ld_imm64(&insns[i], R2, src);
 		i += __bpf_ld_imm64(&insns[i], R3, (u32)res);
 		insns[i++] = BPF_ALU32_REG(op, R1, R2);
-		insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1);
+		insns[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R3, 1);
 		insns[i++] = BPF_EXIT_INSN();
 	}
 
@@ -1626,13 +1626,13 @@ static int __bpf_emit_atomic64(struct bpf_test *self, void *arg,
 	insns[i++] = BPF_ATOMIC_OP(BPF_DW, op, R10, R2, -8);
 	insns[i++] = BPF_LDX_MEM(BPF_DW, R1, R10, -8);
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R3, 1);
 	insns[i++] = BPF_EXIT_INSN();
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R2, R4, 1);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R2, R4, 1);
 	insns[i++] = BPF_EXIT_INSN();
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R5, 1);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R0, R5, 1);
 	insns[i++] = BPF_EXIT_INSN();
 
 	return i;
@@ -1673,13 +1673,13 @@ static int __bpf_emit_atomic32(struct bpf_test *self, void *arg,
 	insns[i++] = BPF_ATOMIC_OP(BPF_W, op, R10, R2, -4);
 	insns[i++] = BPF_LDX_MEM(BPF_W, R1, R10, -4);
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 1);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R3, 1);
 	insns[i++] = BPF_EXIT_INSN();
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R2, R4, 1);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R2, R4, 1);
 	insns[i++] = BPF_EXIT_INSN();
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R5, 1);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R0, R5, 1);
 	insns[i++] = BPF_EXIT_INSN();
 
 	return i;
@@ -1702,11 +1702,11 @@ static int __bpf_emit_cmpxchg64(struct bpf_test *self, void *arg,
 	insns[i++] = BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -8);
 	insns[i++] = BPF_LDX_MEM(BPF_DW, R3, R10, -8);
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R1, R3, 2);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R3, 2);
 	insns[i++] = BPF_MOV64_IMM(R0, __LINE__);
 	insns[i++] = BPF_EXIT_INSN();
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R3, 2);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R0, R3, 2);
 	insns[i++] = BPF_MOV64_IMM(R0, __LINE__);
 	insns[i++] = BPF_EXIT_INSN();
 
@@ -1714,11 +1714,11 @@ static int __bpf_emit_cmpxchg64(struct bpf_test *self, void *arg,
 	insns[i++] = BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -8);
 	insns[i++] = BPF_LDX_MEM(BPF_DW, R3, R10, -8);
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R2, R3, 2);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R2, R3, 2);
 	insns[i++] = BPF_MOV64_IMM(R0, __LINE__);
 	insns[i++] = BPF_EXIT_INSN();
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R1, 2);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R0, R1, 2);
 	insns[i++] = BPF_MOV64_IMM(R0, __LINE__);
 	insns[i++] = BPF_EXIT_INSN();
 
@@ -1747,7 +1747,7 @@ static int __bpf_emit_cmpxchg32(struct bpf_test *self, void *arg,
 	insns[i++] = BPF_MOV32_IMM(R0, __LINE__);
 	insns[i++] = BPF_EXIT_INSN();
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R3, 2);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R0, R3, 2);
 	insns[i++] = BPF_MOV32_IMM(R0, __LINE__);
 	insns[i++] = BPF_EXIT_INSN();
 
@@ -1761,7 +1761,7 @@ static int __bpf_emit_cmpxchg32(struct bpf_test *self, void *arg,
 	insns[i++] = BPF_MOV32_IMM(R0, __LINE__);
 	insns[i++] = BPF_EXIT_INSN();
 
-	insns[i++] = BPF_JMP_REG(BPF_JEQ, R0, R1, 2);
+	insns[i++] = BPF_JMP64_REG(BPF_JEQ, R0, R1, 2);
 	insns[i++] = BPF_MOV32_IMM(R0, __LINE__);
 	insns[i++] = BPF_EXIT_INSN();
 
@@ -1987,7 +1987,7 @@ static int __bpf_fill_atomic_reg_pairs(struct bpf_test *self, u8 width, u8 op)
 			/* Check destination register value */
 			if (!(rd == R0 && op == BPF_CMPXCHG) &&
 			    !(rd == rs && (op & BPF_FETCH))) {
-				insn[i++] = BPF_JMP_REG(BPF_JEQ, rd, R10, 2);
+				insn[i++] = BPF_JMP64_REG(BPF_JEQ, rd, R10, 2);
 				insn[i++] = BPF_MOV32_IMM(R0, __LINE__);
 				insn[i++] = BPF_EXIT_INSN();
 			}
@@ -2006,7 +2006,7 @@ static int __bpf_fill_atomic_reg_pairs(struct bpf_test *self, u8 width, u8 op)
 
 			insn[i++] = BPF_LDX_MEM(width, R0, R10, -8);
 			if (width == BPF_DW)
-				insn[i++] = BPF_JMP_REG(BPF_JEQ, R0, R1, 2);
+				insn[i++] = BPF_JMP64_REG(BPF_JEQ, R0, R1, 2);
 			else /* width == BPF_W */
 				insn[i++] = BPF_JMP32_REG(BPF_JEQ, R0, R1, 2);
 			insn[i++] = BPF_MOV32_IMM(R0, __LINE__);
@@ -2165,7 +2165,7 @@ static int bpf_fill_ld_imm64_magn(struct bpf_test *self)
 				insn[i++] = BPF_ALU64_REG(BPF_OR, R2, R3);
 
 				/* Check result */
-				insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R2, 1);
+				insn[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R2, 1);
 				insn[i++] = BPF_EXIT_INSN();
 			}
 		}
@@ -2229,7 +2229,7 @@ static int __bpf_fill_ld_imm64_bytes(struct bpf_test *self,
 		insn[i++] = BPF_ALU64_REG(BPF_OR, R2, R3);
 
 		/* Check result */
-		insn[i++] = BPF_JMP_REG(BPF_JEQ, R1, R2, 1);
+		insn[i++] = BPF_JMP64_REG(BPF_JEQ, R1, R2, 1);
 		insn[i++] = BPF_EXIT_INSN();
 	}
 
@@ -2311,9 +2311,9 @@ static int __bpf_emit_jmp_imm(struct bpf_test *self, void *arg,
 		insns[i++] = BPF_ALU32_IMM(BPF_MOV, R0, match);
 
 		i += __bpf_ld_imm64(&insns[i], R1, dst);
-		insns[i++] = BPF_JMP_IMM(op, R1, imm, 1);
+		insns[i++] = BPF_JMP64_IMM(op, R1, imm, 1);
 		if (!match)
-			insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
+			insns[i++] = BPF_JMP64_IMM(BPF_JA, 0, 0, 1);
 		insns[i++] = BPF_EXIT_INSN();
 
 		return i;
@@ -2334,7 +2334,7 @@ static int __bpf_emit_jmp32_imm(struct bpf_test *self, void *arg,
 		i += __bpf_ld_imm64(&insns[i], R1, dst);
 		insns[i++] = BPF_JMP32_IMM(op, R1, imm, 1);
 		if (!match)
-			insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
+			insns[i++] = BPF_JMP64_IMM(BPF_JA, 0, 0, 1);
 		insns[i++] = BPF_EXIT_INSN();
 
 		return i;
@@ -2354,9 +2354,9 @@ static int __bpf_emit_jmp_reg(struct bpf_test *self, void *arg,
 
 		i += __bpf_ld_imm64(&insns[i], R1, dst);
 		i += __bpf_ld_imm64(&insns[i], R2, src);
-		insns[i++] = BPF_JMP_REG(op, R1, R2, 1);
+		insns[i++] = BPF_JMP64_REG(op, R1, R2, 1);
 		if (!match)
-			insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
+			insns[i++] = BPF_JMP64_IMM(BPF_JA, 0, 0, 1);
 		insns[i++] = BPF_EXIT_INSN();
 
 		return i;
@@ -2378,7 +2378,7 @@ static int __bpf_emit_jmp32_reg(struct bpf_test *self, void *arg,
 		i += __bpf_ld_imm64(&insns[i], R2, src);
 		insns[i++] = BPF_JMP32_REG(op, R1, R2, 1);
 		if (!match)
-			insns[i++] = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
+			insns[i++] = BPF_JMP64_IMM(BPF_JA, 0, 0, 1);
 		insns[i++] = BPF_EXIT_INSN();
 
 		return i;
@@ -2712,7 +2712,7 @@ static int __bpf_fill_staggered_jumps(struct bpf_test *self,
 	insns[0] = BPF_ALU64_IMM(BPF_MOV, R0, 0);
 	insns[1] = BPF_ALU64_IMM(BPF_MOV, R1, r1);
 	insns[2] = BPF_ALU64_IMM(BPF_MOV, R2, r2);
-	insns[3] = BPF_JMP_IMM(BPF_JA, 0, 0, 3 * size / 2);
+	insns[3] = BPF_JMP64_IMM(BPF_JA, 0, 0, 3 * size / 2);
 
 	/* Sequence */
 	for (ind = 0, off = size; ind <= size; ind++, off -= 2) {
@@ -2723,7 +2723,7 @@ static int __bpf_fill_staggered_jumps(struct bpf_test *self,
 			off--;
 
 		loc = abs(off);
-		ins[0] = BPF_JMP_IMM(BPF_JNE, R0, loc - 1,
+		ins[0] = BPF_JMP64_IMM(BPF_JNE, R0, loc - 1,
 				     3 * (size - ind) + 1);
 		ins[1] = BPF_ALU64_IMM(BPF_MOV, R0, loc);
 		ins[2] = *jmp;
@@ -2742,7 +2742,7 @@ static int __bpf_fill_staggered_jumps(struct bpf_test *self,
 /* 64-bit unconditional jump */
 static int bpf_fill_staggered_ja(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JA, 0, 0, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JA, 0, 0, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 0, 0);
 }
@@ -2750,77 +2750,77 @@ static int bpf_fill_staggered_ja(struct bpf_test *self)
 /* 64-bit immediate jumps */
 static int bpf_fill_staggered_jeq_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JEQ, R1, 1234, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JEQ, R1, 1234, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0);
 }
 
 static int bpf_fill_staggered_jne_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JNE, R1, 1234, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JNE, R1, 1234, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 4321, 0);
 }
 
 static int bpf_fill_staggered_jset_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSET, R1, 0x82, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JSET, R1, 0x82, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 0x86, 0);
 }
 
 static int bpf_fill_staggered_jgt_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JGT, R1, 1234, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JGT, R1, 1234, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 0);
 }
 
 static int bpf_fill_staggered_jge_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JGE, R1, 1234, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JGE, R1, 1234, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0);
 }
 
 static int bpf_fill_staggered_jlt_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JLT, R1, 0x80000000, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JLT, R1, 0x80000000, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0);
 }
 
 static int bpf_fill_staggered_jle_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JLE, R1, 1234, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JLE, R1, 1234, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0);
 }
 
 static int bpf_fill_staggered_jsgt_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSGT, R1, -2, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JSGT, R1, -2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, -1, 0);
 }
 
 static int bpf_fill_staggered_jsge_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSGE, R1, -2, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JSGE, R1, -2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, -2, 0);
 }
 
 static int bpf_fill_staggered_jslt_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSLT, R1, -1, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JSLT, R1, -1, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, -2, 0);
 }
 
 static int bpf_fill_staggered_jsle_imm(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_IMM(BPF_JSLE, R1, -1, 0);
+	struct bpf_insn jmp = BPF_JMP64_IMM(BPF_JSLE, R1, -1, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, -1, 0);
 }
@@ -2828,77 +2828,77 @@ static int bpf_fill_staggered_jsle_imm(struct bpf_test *self)
 /* 64-bit register jumps */
 static int bpf_fill_staggered_jeq_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JEQ, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JEQ, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234);
 }
 
 static int bpf_fill_staggered_jne_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JNE, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JNE, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 4321, 1234);
 }
 
 static int bpf_fill_staggered_jset_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JSET, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JSET, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 0x86, 0x82);
 }
 
 static int bpf_fill_staggered_jgt_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JGT, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JGT, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 0x80000000, 1234);
 }
 
 static int bpf_fill_staggered_jge_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JGE, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JGE, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234);
 }
 
 static int bpf_fill_staggered_jlt_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JLT, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JLT, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 1234, 0x80000000);
 }
 
 static int bpf_fill_staggered_jle_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JLE, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JLE, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, 1234, 1234);
 }
 
 static int bpf_fill_staggered_jsgt_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JSGT, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JSGT, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, -1, -2);
 }
 
 static int bpf_fill_staggered_jsge_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JSGE, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JSGE, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, -2, -2);
 }
 
 static int bpf_fill_staggered_jslt_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JSLT, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JSLT, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, -2, -1);
 }
 
 static int bpf_fill_staggered_jsle_reg(struct bpf_test *self)
 {
-	struct bpf_insn jmp = BPF_JMP_REG(BPF_JSLE, R1, R2, 0);
+	struct bpf_insn jmp = BPF_JMP64_REG(BPF_JSLE, R1, R2, 0);
 
 	return __bpf_fill_staggered_jumps(self, &jmp, -1, -1);
 }
@@ -3725,7 +3725,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_MOV, R1, -1),
 			BPF_ALU64_IMM(BPF_MOV, R2, 3),
 			BPF_ALU64_REG(BPF_MUL, R1, R2),
-			BPF_JMP_IMM(BPF_JEQ, R1, 0xfffffffd, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R1, 0xfffffffd, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -3742,7 +3742,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R2, 3),
 			BPF_ALU64_REG(BPF_MUL, R1, R2),
 			BPF_ALU64_IMM(BPF_RSH, R1, 8),
-			BPF_JMP_IMM(BPF_JEQ, R1, 0x2ffffff, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R1, 0x2ffffff, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -3759,7 +3759,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R2, 3),
 			BPF_ALU32_REG(BPF_MUL, R1, R2),
 			BPF_ALU64_IMM(BPF_RSH, R1, 8),
-			BPF_JMP_IMM(BPF_JEQ, R1, 0xffffff, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R1, 0xffffff, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -3815,7 +3815,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, R0, R7),
 			BPF_ALU64_REG(BPF_ADD, R0, R8),
 			BPF_ALU64_REG(BPF_ADD, R0, R9), /* R0 == 155 */
-			BPF_JMP_IMM(BPF_JEQ, R0, 155, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 155, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_ADD, R1, R0),
 			BPF_ALU64_REG(BPF_ADD, R1, R1),
@@ -3827,7 +3827,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, R1, R7),
 			BPF_ALU64_REG(BPF_ADD, R1, R8),
 			BPF_ALU64_REG(BPF_ADD, R1, R9), /* R1 == 456 */
-			BPF_JMP_IMM(BPF_JEQ, R1, 456, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R1, 456, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_ADD, R2, R0),
 			BPF_ALU64_REG(BPF_ADD, R2, R1),
@@ -3839,7 +3839,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, R2, R7),
 			BPF_ALU64_REG(BPF_ADD, R2, R8),
 			BPF_ALU64_REG(BPF_ADD, R2, R9), /* R2 == 1358 */
-			BPF_JMP_IMM(BPF_JEQ, R2, 1358, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R2, 1358, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_ADD, R3, R0),
 			BPF_ALU64_REG(BPF_ADD, R3, R1),
@@ -3851,7 +3851,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, R3, R7),
 			BPF_ALU64_REG(BPF_ADD, R3, R8),
 			BPF_ALU64_REG(BPF_ADD, R3, R9), /* R3 == 4063 */
-			BPF_JMP_IMM(BPF_JEQ, R3, 4063, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R3, 4063, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_ADD, R4, R0),
 			BPF_ALU64_REG(BPF_ADD, R4, R1),
@@ -3863,7 +3863,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, R4, R7),
 			BPF_ALU64_REG(BPF_ADD, R4, R8),
 			BPF_ALU64_REG(BPF_ADD, R4, R9), /* R4 == 12177 */
-			BPF_JMP_IMM(BPF_JEQ, R4, 12177, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R4, 12177, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_ADD, R5, R0),
 			BPF_ALU64_REG(BPF_ADD, R5, R1),
@@ -3875,7 +3875,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, R5, R7),
 			BPF_ALU64_REG(BPF_ADD, R5, R8),
 			BPF_ALU64_REG(BPF_ADD, R5, R9), /* R5 == 36518 */
-			BPF_JMP_IMM(BPF_JEQ, R5, 36518, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R5, 36518, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_ADD, R6, R0),
 			BPF_ALU64_REG(BPF_ADD, R6, R1),
@@ -3887,7 +3887,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, R6, R7),
 			BPF_ALU64_REG(BPF_ADD, R6, R8),
 			BPF_ALU64_REG(BPF_ADD, R6, R9), /* R6 == 109540 */
-			BPF_JMP_IMM(BPF_JEQ, R6, 109540, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R6, 109540, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_ADD, R7, R0),
 			BPF_ALU64_REG(BPF_ADD, R7, R1),
@@ -3899,7 +3899,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, R7, R7),
 			BPF_ALU64_REG(BPF_ADD, R7, R8),
 			BPF_ALU64_REG(BPF_ADD, R7, R9), /* R7 == 328605 */
-			BPF_JMP_IMM(BPF_JEQ, R7, 328605, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R7, 328605, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_ADD, R8, R0),
 			BPF_ALU64_REG(BPF_ADD, R8, R1),
@@ -3911,7 +3911,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, R8, R7),
 			BPF_ALU64_REG(BPF_ADD, R8, R8),
 			BPF_ALU64_REG(BPF_ADD, R8, R9), /* R8 == 985799 */
-			BPF_JMP_IMM(BPF_JEQ, R8, 985799, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R8, 985799, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_ADD, R9, R0),
 			BPF_ALU64_REG(BPF_ADD, R9, R1),
@@ -3961,7 +3961,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_REG(BPF_ADD, R0, R7),
 			BPF_ALU32_REG(BPF_ADD, R0, R8),
 			BPF_ALU32_REG(BPF_ADD, R0, R9), /* R0 == 155 */
-			BPF_JMP_IMM(BPF_JEQ, R0, 155, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 155, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_REG(BPF_ADD, R1, R0),
 			BPF_ALU32_REG(BPF_ADD, R1, R1),
@@ -3973,7 +3973,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_REG(BPF_ADD, R1, R7),
 			BPF_ALU32_REG(BPF_ADD, R1, R8),
 			BPF_ALU32_REG(BPF_ADD, R1, R9), /* R1 == 456 */
-			BPF_JMP_IMM(BPF_JEQ, R1, 456, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R1, 456, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_REG(BPF_ADD, R2, R0),
 			BPF_ALU32_REG(BPF_ADD, R2, R1),
@@ -3985,7 +3985,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_REG(BPF_ADD, R2, R7),
 			BPF_ALU32_REG(BPF_ADD, R2, R8),
 			BPF_ALU32_REG(BPF_ADD, R2, R9), /* R2 == 1358 */
-			BPF_JMP_IMM(BPF_JEQ, R2, 1358, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R2, 1358, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_REG(BPF_ADD, R3, R0),
 			BPF_ALU32_REG(BPF_ADD, R3, R1),
@@ -3997,7 +3997,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_REG(BPF_ADD, R3, R7),
 			BPF_ALU32_REG(BPF_ADD, R3, R8),
 			BPF_ALU32_REG(BPF_ADD, R3, R9), /* R3 == 4063 */
-			BPF_JMP_IMM(BPF_JEQ, R3, 4063, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R3, 4063, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_REG(BPF_ADD, R4, R0),
 			BPF_ALU32_REG(BPF_ADD, R4, R1),
@@ -4009,7 +4009,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_REG(BPF_ADD, R4, R7),
 			BPF_ALU32_REG(BPF_ADD, R4, R8),
 			BPF_ALU32_REG(BPF_ADD, R4, R9), /* R4 == 12177 */
-			BPF_JMP_IMM(BPF_JEQ, R4, 12177, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R4, 12177, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_REG(BPF_ADD, R5, R0),
 			BPF_ALU32_REG(BPF_ADD, R5, R1),
@@ -4021,7 +4021,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_REG(BPF_ADD, R5, R7),
 			BPF_ALU32_REG(BPF_ADD, R5, R8),
 			BPF_ALU32_REG(BPF_ADD, R5, R9), /* R5 == 36518 */
-			BPF_JMP_IMM(BPF_JEQ, R5, 36518, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R5, 36518, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_REG(BPF_ADD, R6, R0),
 			BPF_ALU32_REG(BPF_ADD, R6, R1),
@@ -4033,7 +4033,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_REG(BPF_ADD, R6, R7),
 			BPF_ALU32_REG(BPF_ADD, R6, R8),
 			BPF_ALU32_REG(BPF_ADD, R6, R9), /* R6 == 109540 */
-			BPF_JMP_IMM(BPF_JEQ, R6, 109540, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R6, 109540, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_REG(BPF_ADD, R7, R0),
 			BPF_ALU32_REG(BPF_ADD, R7, R1),
@@ -4045,7 +4045,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_REG(BPF_ADD, R7, R7),
 			BPF_ALU32_REG(BPF_ADD, R7, R8),
 			BPF_ALU32_REG(BPF_ADD, R7, R9), /* R7 == 328605 */
-			BPF_JMP_IMM(BPF_JEQ, R7, 328605, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R7, 328605, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_REG(BPF_ADD, R8, R0),
 			BPF_ALU32_REG(BPF_ADD, R8, R1),
@@ -4057,7 +4057,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_REG(BPF_ADD, R8, R7),
 			BPF_ALU32_REG(BPF_ADD, R8, R8),
 			BPF_ALU32_REG(BPF_ADD, R8, R9), /* R8 == 985799 */
-			BPF_JMP_IMM(BPF_JEQ, R8, 985799, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R8, 985799, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_REG(BPF_ADD, R9, R0),
 			BPF_ALU32_REG(BPF_ADD, R9, R1),
@@ -4100,7 +4100,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_SUB, R0, R8),
 			BPF_ALU64_REG(BPF_SUB, R0, R9),
 			BPF_ALU64_IMM(BPF_SUB, R0, 10),
-			BPF_JMP_IMM(BPF_JEQ, R0, -55, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R0, -55, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_SUB, R1, R0),
 			BPF_ALU64_REG(BPF_SUB, R1, R2),
@@ -4214,58 +4214,58 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU64_REG(BPF_SUB, R0, R0),
 			BPF_ALU64_REG(BPF_XOR, R1, R1),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 1),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_IMM(BPF_MOV, R0, 10),
 			BPF_ALU64_IMM(BPF_MOV, R1, -1),
 			BPF_ALU64_REG(BPF_SUB, R1, R1),
 			BPF_ALU64_REG(BPF_XOR, R2, R2),
-			BPF_JMP_REG(BPF_JEQ, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JEQ, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_SUB, R2, R2),
 			BPF_ALU64_REG(BPF_XOR, R3, R3),
 			BPF_ALU64_IMM(BPF_MOV, R0, 10),
 			BPF_ALU64_IMM(BPF_MOV, R1, -1),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 1),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_SUB, R3, R3),
 			BPF_ALU64_REG(BPF_XOR, R4, R4),
 			BPF_ALU64_IMM(BPF_MOV, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R5, -1),
-			BPF_JMP_REG(BPF_JEQ, R3, R4, 1),
+			BPF_JMP64_REG(BPF_JEQ, R3, R4, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_SUB, R4, R4),
 			BPF_ALU64_REG(BPF_XOR, R5, R5),
 			BPF_ALU64_IMM(BPF_MOV, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R7, -1),
-			BPF_JMP_REG(BPF_JEQ, R5, R4, 1),
+			BPF_JMP64_REG(BPF_JEQ, R5, R4, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_IMM(BPF_MOV, R5, 1),
 			BPF_ALU64_REG(BPF_SUB, R5, R5),
 			BPF_ALU64_REG(BPF_XOR, R6, R6),
 			BPF_ALU64_IMM(BPF_MOV, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R8, -1),
-			BPF_JMP_REG(BPF_JEQ, R5, R6, 1),
+			BPF_JMP64_REG(BPF_JEQ, R5, R6, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_SUB, R6, R6),
 			BPF_ALU64_REG(BPF_XOR, R7, R7),
-			BPF_JMP_REG(BPF_JEQ, R7, R6, 1),
+			BPF_JMP64_REG(BPF_JEQ, R7, R6, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_SUB, R7, R7),
 			BPF_ALU64_REG(BPF_XOR, R8, R8),
-			BPF_JMP_REG(BPF_JEQ, R7, R8, 1),
+			BPF_JMP64_REG(BPF_JEQ, R7, R8, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_SUB, R8, R8),
 			BPF_ALU64_REG(BPF_XOR, R9, R9),
-			BPF_JMP_REG(BPF_JEQ, R9, R8, 1),
+			BPF_JMP64_REG(BPF_JEQ, R9, R8, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_SUB, R9, R9),
 			BPF_ALU64_REG(BPF_XOR, R0, R0),
-			BPF_JMP_REG(BPF_JEQ, R9, R0, 1),
+			BPF_JMP64_REG(BPF_JEQ, R9, R0, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_SUB, R1, R1),
 			BPF_ALU64_REG(BPF_XOR, R0, R0),
-			BPF_JMP_REG(BPF_JEQ, R9, R0, 2),
+			BPF_JMP64_REG(BPF_JEQ, R9, R0, 2),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
@@ -4299,7 +4299,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_MUL, R0, R8),
 			BPF_ALU64_REG(BPF_MUL, R0, R9),
 			BPF_ALU64_IMM(BPF_MUL, R0, 10),
-			BPF_JMP_IMM(BPF_JEQ, R0, 439084800, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 439084800, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_MUL, R1, R0),
 			BPF_ALU64_REG(BPF_MUL, R1, R2),
@@ -4313,11 +4313,11 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_MUL, R1, 10),
 			BPF_ALU64_REG(BPF_MOV, R2, R1),
 			BPF_ALU64_IMM(BPF_RSH, R2, 32),
-			BPF_JMP_IMM(BPF_JEQ, R2, 0x5a924, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R2, 0x5a924, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_IMM(BPF_LSH, R1, 32),
 			BPF_ALU64_IMM(BPF_ARSH, R1, 32),
-			BPF_JMP_IMM(BPF_JEQ, R1, 0xebb90000, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R1, 0xebb90000, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_REG(BPF_MUL, R2, R0),
 			BPF_ALU64_REG(BPF_MUL, R2, R1),
@@ -4465,10 +4465,10 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_MOV, R2, 2),
 			BPF_ALU64_IMM(BPF_XOR, R2, 3),
 			BPF_ALU64_REG(BPF_DIV, R0, R2),
-			BPF_JMP_IMM(BPF_JEQ, R0, 10, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 10, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_IMM(BPF_MOD, R0, 3),
-			BPF_JMP_IMM(BPF_JEQ, R0, 1, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_IMM(BPF_MOV, R0, -1),
 			BPF_EXIT_INSN(),
@@ -4483,30 +4483,30 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_IMM(R0, -1234),
 			BPF_MOV64_IMM(R1, 1),
 			BPF_ALU32_REG(BPF_RSH, R0, R1),
-			BPF_JMP_IMM(BPF_JEQ, R0, 0x7ffffd97, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 0x7ffffd97, 1),
 			BPF_EXIT_INSN(),
 			BPF_MOV64_IMM(R2, 1),
 			BPF_ALU64_REG(BPF_LSH, R0, R2),
 			BPF_MOV32_IMM(R4, -1234),
-			BPF_JMP_REG(BPF_JEQ, R0, R4, 1),
+			BPF_JMP64_REG(BPF_JEQ, R0, R4, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU64_IMM(BPF_AND, R4, 63),
 			BPF_ALU64_REG(BPF_LSH, R0, R4), /* R0 <= 46 */
 			BPF_MOV64_IMM(R3, 47),
 			BPF_ALU64_REG(BPF_ARSH, R0, R3),
-			BPF_JMP_IMM(BPF_JEQ, R0, -617, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R0, -617, 1),
 			BPF_EXIT_INSN(),
 			BPF_MOV64_IMM(R2, 1),
 			BPF_ALU64_REG(BPF_LSH, R4, R2), /* R4 = 46 << 1 */
-			BPF_JMP_IMM(BPF_JEQ, R4, 92, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R4, 92, 1),
 			BPF_EXIT_INSN(),
 			BPF_MOV64_IMM(R4, 4),
 			BPF_ALU64_REG(BPF_LSH, R4, R4), /* R4 = 4 << 4 */
-			BPF_JMP_IMM(BPF_JEQ, R4, 64, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R4, 64, 1),
 			BPF_EXIT_INSN(),
 			BPF_MOV64_IMM(R4, 5),
 			BPF_ALU32_REG(BPF_LSH, R4, R4), /* R4 = 5 << 5 */
-			BPF_JMP_IMM(BPF_JEQ, R4, 160, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R4, 160, 1),
 			BPF_EXIT_INSN(),
 			BPF_MOV64_IMM(R0, -1),
 			BPF_EXIT_INSN(),
@@ -4881,9 +4881,9 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_LSH, R3, 32),
 			BPF_ALU64_IMM(BPF_RSH, R3, 32),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
-			BPF_JMP_IMM(BPF_JEQ, R2, 0x5678, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R2, 0x5678, 1),
 			BPF_EXIT_INSN(),
-			BPF_JMP_IMM(BPF_JEQ, R3, 0x1234, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R3, 0x1234, 1),
 			BPF_EXIT_INSN(),
 			BPF_LD_IMM64(R0, 0x1ffffffffLL),
 			BPF_ALU64_IMM(BPF_RSH, R0, 32), /* R0 = 1 */
@@ -4965,7 +4965,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x00000000ffffffffLL),
 			BPF_ALU32_IMM(BPF_MOV, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5043,7 +5043,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0),
 			BPF_ALU64_IMM(BPF_MOV, R2, 0x0),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5059,7 +5059,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0xffffffffffffffffLL),
 			BPF_ALU64_IMM(BPF_MOV, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5142,7 +5142,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 2),
 			BPF_LD_IMM64(R1, 4294967294U),
 			BPF_ALU32_REG(BPF_ADD, R0, R1),
-			BPF_JMP_IMM(BPF_JEQ, R0, 0, 2),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 0, 2),
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
@@ -5183,7 +5183,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R1, 4294967294U),
 			BPF_LD_IMM64(R2, 4294967296ULL),
 			BPF_ALU64_REG(BPF_ADD, R0, R1),
-			BPF_JMP_REG(BPF_JEQ, R0, R2, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R2, 2),
 			BPF_MOV32_IMM(R0, 0),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5232,7 +5232,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_LD_IMM64(R0, 4294967294U),
 			BPF_ALU32_IMM(BPF_ADD, R0, 2),
-			BPF_JMP_IMM(BPF_JEQ, R0, 0, 2),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 0, 2),
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
@@ -5248,7 +5248,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0x00000000ffffffff),
 			BPF_ALU32_IMM(BPF_ADD, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5264,7 +5264,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0xffff),
 			BPF_ALU32_IMM(BPF_ADD, R2, 0xffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5280,7 +5280,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0x7fffffff),
 			BPF_ALU32_IMM(BPF_ADD, R2, 0x7fffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5296,7 +5296,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0x80000000),
 			BPF_ALU32_IMM(BPF_ADD, R2, 0x80000000),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5312,7 +5312,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0x80008000),
 			BPF_ALU32_IMM(BPF_ADD, R2, 0x80008000),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5361,7 +5361,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 4294967294U),
 			BPF_LD_IMM64(R1, 4294967296ULL),
 			BPF_ALU64_IMM(BPF_ADD, R0, 2),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
@@ -5388,7 +5388,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x1),
 			BPF_LD_IMM64(R3, 0x1),
 			BPF_ALU64_IMM(BPF_ADD, R2, 0x0),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5404,7 +5404,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0xffffffffffffffffLL),
 			BPF_ALU64_IMM(BPF_ADD, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5420,7 +5420,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0xffff),
 			BPF_ALU64_IMM(BPF_ADD, R2, 0xffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5436,7 +5436,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0x7fffffff),
 			BPF_ALU64_IMM(BPF_ADD, R2, 0x7fffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5452,7 +5452,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0xffffffff80000000LL),
 			BPF_ALU64_IMM(BPF_ADD, R2, 0x80000000),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5468,7 +5468,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0),
 			BPF_LD_IMM64(R3, 0xffffffff80008000LL),
 			BPF_ALU64_IMM(BPF_ADD, R2, 0x80008000),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5731,7 +5731,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x1),
 			BPF_LD_IMM64(R3, 0x00000000ffffffff),
 			BPF_ALU32_IMM(BPF_MUL, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5791,7 +5791,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x1),
 			BPF_LD_IMM64(R3, 0xffffffffffffffffLL),
 			BPF_ALU64_IMM(BPF_MUL, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5880,7 +5880,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R4, 0xffffffffffffffffLL),
 			BPF_LD_IMM64(R3, 0x0000000000000001LL),
 			BPF_ALU64_REG(BPF_DIV, R2, R4),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5930,7 +5930,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0xffffffffffffffffLL),
 			BPF_LD_IMM64(R3, 0x1UL),
 			BPF_ALU32_IMM(BPF_DIV, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -5979,7 +5979,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0xffffffffffffffffLL),
 			BPF_LD_IMM64(R3, 0x0000000000000001LL),
 			BPF_ALU64_IMM(BPF_DIV, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6205,7 +6205,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
 			BPF_LD_IMM64(R1, 0x0000000080a0c0e0LL),
 			BPF_ALU32_IMM(BPF_AND, R0, 0xf0f0f0f0),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6243,7 +6243,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000000000000000LL),
 			BPF_ALU64_IMM(BPF_AND, R2, 0x0),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6259,7 +6259,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000ffffffff0000LL),
 			BPF_ALU64_IMM(BPF_AND, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6275,7 +6275,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0xffffffffffffffffLL),
 			BPF_LD_IMM64(R3, 0xffffffffffffffffLL),
 			BPF_ALU64_IMM(BPF_AND, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6291,7 +6291,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
 			BPF_LD_IMM64(R1, 0x00000000090b0d0fLL),
 			BPF_ALU64_IMM(BPF_AND, R0, 0x0f0f0f0f),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6307,7 +6307,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
 			BPF_LD_IMM64(R1, 0x0123456780a0c0e0LL),
 			BPF_ALU64_IMM(BPF_AND, R0, 0xf0f0f0f0),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6417,7 +6417,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
 			BPF_LD_IMM64(R1, 0x00000000f9fbfdffLL),
 			BPF_ALU32_IMM(BPF_OR, R0, 0xf0f0f0f0),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6455,7 +6455,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000ffffffff0000LL),
 			BPF_ALU64_IMM(BPF_OR, R2, 0x0),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6471,7 +6471,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0xffffffffffffffffLL),
 			BPF_ALU64_IMM(BPF_OR, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6487,7 +6487,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000000000000000LL),
 			BPF_LD_IMM64(R3, 0xffffffffffffffffLL),
 			BPF_ALU64_IMM(BPF_OR, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6503,7 +6503,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
 			BPF_LD_IMM64(R1, 0x012345678fafcfefLL),
 			BPF_ALU64_IMM(BPF_OR, R0, 0x0f0f0f0f),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6519,7 +6519,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
 			BPF_LD_IMM64(R1, 0xfffffffff9fbfdffLL),
 			BPF_ALU64_IMM(BPF_OR, R0, 0xf0f0f0f0),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6629,7 +6629,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
 			BPF_LD_IMM64(R1, 0x00000000795b3d1fLL),
 			BPF_ALU32_IMM(BPF_XOR, R0, 0xf0f0f0f0),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6667,7 +6667,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0x0000ffffffff0000LL),
 			BPF_ALU64_IMM(BPF_XOR, R2, 0x0),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6683,7 +6683,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000ffffffff0000LL),
 			BPF_LD_IMM64(R3, 0xffff00000000ffffLL),
 			BPF_ALU64_IMM(BPF_XOR, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6699,7 +6699,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x0000000000000000LL),
 			BPF_LD_IMM64(R3, 0xffffffffffffffffLL),
 			BPF_ALU64_IMM(BPF_XOR, R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6715,7 +6715,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
 			BPF_LD_IMM64(R1, 0x0123456786a4c2e0LL),
 			BPF_ALU64_IMM(BPF_XOR, R0, 0x0f0f0f0f),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -6731,7 +6731,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
 			BPF_LD_IMM64(R1, 0xfedcba98795b3d1fLL),
 			BPF_ALU64_IMM(BPF_XOR, R0, 0xf0f0f0f0),
-			BPF_JMP_REG(BPF_JEQ, R0, R1, 2),
+			BPF_JMP64_REG(BPF_JEQ, R0, R1, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -7849,7 +7849,7 @@ static struct bpf_test tests[] = {
 #else
 			BPF_LDX_MEM(BPF_B, R0, R10, -8),
 #endif
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -7869,7 +7869,7 @@ static struct bpf_test tests[] = {
 #else
 			BPF_LDX_MEM(BPF_B, R0, R10, -8),
 #endif
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -7886,7 +7886,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_ADD, R1, 512),
 			BPF_STX_MEM(BPF_B, R1, R2, -256),
 			BPF_LDX_MEM(BPF_B, R0, R1, -256),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -7902,7 +7902,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R3, 0x0000000000000088ULL),
 			BPF_STX_MEM(BPF_B, R1, R2, 256),
 			BPF_LDX_MEM(BPF_B, R0, R1, 256),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -7918,7 +7918,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R3, 0x0000000000000088ULL),
 			BPF_STX_MEM(BPF_B, R1, R2, 4096),
 			BPF_LDX_MEM(BPF_B, R0, R1, 4096),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -7938,7 +7938,7 @@ static struct bpf_test tests[] = {
 #else
 			BPF_LDX_MEM(BPF_H, R0, R10, -8),
 #endif
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -7958,7 +7958,7 @@ static struct bpf_test tests[] = {
 #else
 			BPF_LDX_MEM(BPF_H, R0, R10, -8),
 #endif
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -7975,7 +7975,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_ADD, R1, 512),
 			BPF_STX_MEM(BPF_H, R1, R2, -256),
 			BPF_LDX_MEM(BPF_H, R0, R1, -256),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -7991,7 +7991,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R3, 0x0000000000008788ULL),
 			BPF_STX_MEM(BPF_H, R1, R2, 256),
 			BPF_LDX_MEM(BPF_H, R0, R1, 256),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8007,7 +8007,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R3, 0x0000000000008788ULL),
 			BPF_STX_MEM(BPF_H, R1, R2, 8192),
 			BPF_LDX_MEM(BPF_H, R0, R1, 8192),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8023,7 +8023,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R3, 0x0000000000008788ULL),
 			BPF_STX_MEM(BPF_H, R1, R2, 13),
 			BPF_LDX_MEM(BPF_H, R0, R1, 13),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8043,7 +8043,7 @@ static struct bpf_test tests[] = {
 #else
 			BPF_LDX_MEM(BPF_W, R0, R10, -8),
 #endif
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8063,7 +8063,7 @@ static struct bpf_test tests[] = {
 #else
 			BPF_LDX_MEM(BPF_W, R0, R10, -8),
 #endif
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8080,7 +8080,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_ADD, R1, 512),
 			BPF_STX_MEM(BPF_W, R1, R2, -256),
 			BPF_LDX_MEM(BPF_W, R0, R1, -256),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8096,7 +8096,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R3, 0x0000000085868788ULL),
 			BPF_STX_MEM(BPF_W, R1, R2, 256),
 			BPF_LDX_MEM(BPF_W, R0, R1, 256),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8112,7 +8112,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R3, 0x0000000085868788ULL),
 			BPF_STX_MEM(BPF_W, R1, R2, 16384),
 			BPF_LDX_MEM(BPF_W, R0, R1, 16384),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8128,7 +8128,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R3, 0x0000000085868788ULL),
 			BPF_STX_MEM(BPF_W, R1, R2, 13),
 			BPF_LDX_MEM(BPF_W, R0, R1, 13),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8143,7 +8143,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R1, 0x0102030405060708ULL),
 			BPF_STX_MEM(BPF_DW, R10, R1, -8),
 			BPF_LDX_MEM(BPF_DW, R0, R10, -8),
-			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8158,7 +8158,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R1, 0x8182838485868788ULL),
 			BPF_STX_MEM(BPF_DW, R10, R1, -8),
 			BPF_LDX_MEM(BPF_DW, R0, R10, -8),
-			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8174,7 +8174,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_ADD, R1, 512),
 			BPF_STX_MEM(BPF_DW, R1, R2, -256),
 			BPF_LDX_MEM(BPF_DW, R0, R1, -256),
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8189,7 +8189,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x8182838485868788ULL),
 			BPF_STX_MEM(BPF_DW, R1, R2, 256),
 			BPF_LDX_MEM(BPF_DW, R0, R1, 256),
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8204,7 +8204,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x8182838485868788ULL),
 			BPF_STX_MEM(BPF_DW, R1, R2, 32760),
 			BPF_LDX_MEM(BPF_DW, R0, R1, 32760),
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8219,7 +8219,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R2, 0x8182838485868788ULL),
 			BPF_STX_MEM(BPF_DW, R1, R2, 13),
 			BPF_LDX_MEM(BPF_DW, R0, R1, 13),
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8242,7 +8242,7 @@ static struct bpf_test tests[] = {
 			BPF_STX_MEM(BPF_B, R10, R2, -8),
 #endif
 			BPF_LDX_MEM(BPF_DW, R0, R10, -8),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8264,7 +8264,7 @@ static struct bpf_test tests[] = {
 			BPF_STX_MEM(BPF_B, R10, R2, -8),
 #endif
 			BPF_LDX_MEM(BPF_DW, R0, R10, -8),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8286,7 +8286,7 @@ static struct bpf_test tests[] = {
 			BPF_STX_MEM(BPF_H, R10, R2, -8),
 #endif
 			BPF_LDX_MEM(BPF_DW, R0, R10, -8),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8308,7 +8308,7 @@ static struct bpf_test tests[] = {
 			BPF_STX_MEM(BPF_H, R10, R2, -8),
 #endif
 			BPF_LDX_MEM(BPF_DW, R0, R10, -8),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8330,7 +8330,7 @@ static struct bpf_test tests[] = {
 			BPF_STX_MEM(BPF_W, R10, R2, -8),
 #endif
 			BPF_LDX_MEM(BPF_DW, R0, R10, -8),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8352,7 +8352,7 @@ static struct bpf_test tests[] = {
 			BPF_STX_MEM(BPF_W, R10, R2, -8),
 #endif
 			BPF_LDX_MEM(BPF_DW, R0, R10, -8),
-			BPF_JMP_REG(BPF_JNE, R0, R3, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R3, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -8502,7 +8502,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R3, 0xffffffffffffffffLL),
 			BPF_ST_MEM(BPF_DW, R10, -40, 0xffffffff),
 			BPF_LDX_MEM(BPF_DW, R2, R10, -40),
-			BPF_JMP_REG(BPF_JEQ, R2, R3, 2),
+			BPF_JMP64_REG(BPF_JEQ, R2, R3, 2),
 			BPF_MOV32_IMM(R0, 2),
 			BPF_EXIT_INSN(),
 			BPF_MOV32_IMM(R0, 1),
@@ -8855,7 +8855,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_MOV, R0, R1),
 			BPF_STX_MEM(BPF_DW, R10, R1, -40),
 			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
-			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R1, 1),
 			BPF_ALU64_REG(BPF_SUB, R0, R1),
 			BPF_EXIT_INSN(),
 		},
@@ -8873,7 +8873,7 @@ static struct bpf_test tests[] = {
 			BPF_STX_MEM(BPF_DW, R10, R0, -40),
 			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
 			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_REG(BPF_SUB, R0, R2),
 			BPF_EXIT_INSN(),
 		},
@@ -8891,7 +8891,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_ADD, R0, 1),
 			BPF_STX_MEM(BPF_DW, R10, R1, -40),
 			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
-			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R1, 1),
 			BPF_ALU64_REG(BPF_SUB, R0, R1),
 			BPF_EXIT_INSN(),
 		},
@@ -8910,7 +8910,7 @@ static struct bpf_test tests[] = {
 			BPF_STX_MEM(BPF_DW, R10, R1, -40),
 			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
 			BPF_LDX_MEM(BPF_DW, R0, R10, -40),
-			BPF_JMP_REG(BPF_JNE, R0, R1, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R1, 1),
 			BPF_ALU64_REG(BPF_SUB, R0, R1),
 			BPF_EXIT_INSN(),
 		},
@@ -8928,7 +8928,7 @@ static struct bpf_test tests[] = {
 			BPF_STX_MEM(BPF_DW, R10, R1, -40),
 			BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, R10, R2, -40),
 			BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_REG(BPF_SUB, R0, R2),
 			BPF_EXIT_INSN(),
 		},
@@ -9465,7 +9465,7 @@ static struct bpf_test tests[] = {
 		"JMP_JA: Unconditional jump: if (true) return 1",
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
-			BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+			BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9480,7 +9480,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 0xfffffffffffffffeLL),
-			BPF_JMP_IMM(BPF_JSLT, R1, -1, 1),
+			BPF_JMP64_IMM(BPF_JSLT, R1, -1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9494,7 +9494,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_LD_IMM64(R1, 0xffffffffffffffffLL),
-			BPF_JMP_IMM(BPF_JSLT, R1, -1, 1),
+			BPF_JMP64_IMM(BPF_JSLT, R1, -1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
@@ -9509,7 +9509,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 0xffffffffffffffffLL),
-			BPF_JMP_IMM(BPF_JSGT, R1, -2, 1),
+			BPF_JMP64_IMM(BPF_JSGT, R1, -2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9523,7 +9523,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_LD_IMM64(R1, 0xffffffffffffffffLL),
-			BPF_JMP_IMM(BPF_JSGT, R1, -1, 1),
+			BPF_JMP64_IMM(BPF_JSGT, R1, -1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
@@ -9538,7 +9538,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 0xfffffffffffffffeLL),
-			BPF_JMP_IMM(BPF_JSLE, R1, -1, 1),
+			BPF_JMP64_IMM(BPF_JSLE, R1, -1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9552,7 +9552,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 0xffffffffffffffffLL),
-			BPF_JMP_IMM(BPF_JSLE, R1, -1, 1),
+			BPF_JMP64_IMM(BPF_JSLE, R1, -1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9566,13 +9566,13 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JSLE, R1, 0, 6),
+			BPF_JMP64_IMM(BPF_JSLE, R1, 0, 6),
 			BPF_ALU64_IMM(BPF_SUB, R1, 1),
-			BPF_JMP_IMM(BPF_JSLE, R1, 0, 4),
+			BPF_JMP64_IMM(BPF_JSLE, R1, 0, 4),
 			BPF_ALU64_IMM(BPF_SUB, R1, 1),
-			BPF_JMP_IMM(BPF_JSLE, R1, 0, 2),
+			BPF_JMP64_IMM(BPF_JSLE, R1, 0, 2),
 			BPF_ALU64_IMM(BPF_SUB, R1, 1),
-			BPF_JMP_IMM(BPF_JSLE, R1, 0, 1),
+			BPF_JMP64_IMM(BPF_JSLE, R1, 0, 1),
 			BPF_EXIT_INSN(),		/* bad exit */
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),	/* good exit */
 			BPF_EXIT_INSN(),
@@ -9586,11 +9586,11 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JSLE, R1, 0, 4),
+			BPF_JMP64_IMM(BPF_JSLE, R1, 0, 4),
 			BPF_ALU64_IMM(BPF_SUB, R1, 2),
-			BPF_JMP_IMM(BPF_JSLE, R1, 0, 2),
+			BPF_JMP64_IMM(BPF_JSLE, R1, 0, 2),
 			BPF_ALU64_IMM(BPF_SUB, R1, 2),
-			BPF_JMP_IMM(BPF_JSLE, R1, 0, 1),
+			BPF_JMP64_IMM(BPF_JSLE, R1, 0, 1),
 			BPF_EXIT_INSN(),		/* bad exit */
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),	/* good exit */
 			BPF_EXIT_INSN(),
@@ -9605,7 +9605,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 0xffffffffffffffffLL),
-			BPF_JMP_IMM(BPF_JSGE, R1, -2, 1),
+			BPF_JMP64_IMM(BPF_JSGE, R1, -2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9619,7 +9619,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 0xffffffffffffffffLL),
-			BPF_JMP_IMM(BPF_JSGE, R1, -1, 1),
+			BPF_JMP64_IMM(BPF_JSGE, R1, -1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9633,13 +9633,13 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -3),
-			BPF_JMP_IMM(BPF_JSGE, R1, 0, 6),
+			BPF_JMP64_IMM(BPF_JSGE, R1, 0, 6),
 			BPF_ALU64_IMM(BPF_ADD, R1, 1),
-			BPF_JMP_IMM(BPF_JSGE, R1, 0, 4),
+			BPF_JMP64_IMM(BPF_JSGE, R1, 0, 4),
 			BPF_ALU64_IMM(BPF_ADD, R1, 1),
-			BPF_JMP_IMM(BPF_JSGE, R1, 0, 2),
+			BPF_JMP64_IMM(BPF_JSGE, R1, 0, 2),
 			BPF_ALU64_IMM(BPF_ADD, R1, 1),
-			BPF_JMP_IMM(BPF_JSGE, R1, 0, 1),
+			BPF_JMP64_IMM(BPF_JSGE, R1, 0, 1),
 			BPF_EXIT_INSN(),		/* bad exit */
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),	/* good exit */
 			BPF_EXIT_INSN(),
@@ -9653,11 +9653,11 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -3),
-			BPF_JMP_IMM(BPF_JSGE, R1, 0, 4),
+			BPF_JMP64_IMM(BPF_JSGE, R1, 0, 4),
 			BPF_ALU64_IMM(BPF_ADD, R1, 2),
-			BPF_JMP_IMM(BPF_JSGE, R1, 0, 2),
+			BPF_JMP64_IMM(BPF_JSGE, R1, 0, 2),
 			BPF_ALU64_IMM(BPF_ADD, R1, 2),
-			BPF_JMP_IMM(BPF_JSGE, R1, 0, 1),
+			BPF_JMP64_IMM(BPF_JSGE, R1, 0, 1),
 			BPF_EXIT_INSN(),		/* bad exit */
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),	/* good exit */
 			BPF_EXIT_INSN(),
@@ -9672,7 +9672,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JGT, R1, 2, 1),
+			BPF_JMP64_IMM(BPF_JGT, R1, 2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9686,7 +9686,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -1),
-			BPF_JMP_IMM(BPF_JGT, R1, 1, 1),
+			BPF_JMP64_IMM(BPF_JGT, R1, 1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9701,7 +9701,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 2),
-			BPF_JMP_IMM(BPF_JLT, R1, 3, 1),
+			BPF_JMP64_IMM(BPF_JLT, R1, 3, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9715,7 +9715,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 1),
-			BPF_JMP_IMM(BPF_JLT, R1, -1, 1),
+			BPF_JMP64_IMM(BPF_JLT, R1, -1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9730,7 +9730,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JGE, R1, 2, 1),
+			BPF_JMP64_IMM(BPF_JGE, R1, 2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9745,7 +9745,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 2),
-			BPF_JMP_IMM(BPF_JLE, R1, 3, 1),
+			BPF_JMP64_IMM(BPF_JLE, R1, 3, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9758,12 +9758,12 @@ static struct bpf_test tests[] = {
 	{
 		"JMP_JGT_K: if (3 > 2) return 1 (jump backwards)",
 		.u.insns_int = {
-			BPF_JMP_IMM(BPF_JA, 0, 0, 2), /* goto start */
+			BPF_JMP64_IMM(BPF_JA, 0, 0, 2), /* goto start */
 			BPF_ALU32_IMM(BPF_MOV, R0, 1), /* out: */
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 0), /* start: */
 			BPF_LD_IMM64(R1, 3), /* note: this takes 2 insns */
-			BPF_JMP_IMM(BPF_JGT, R1, 2, -6), /* goto out */
+			BPF_JMP64_IMM(BPF_JGT, R1, 2, -6), /* goto out */
 			BPF_EXIT_INSN(),
 		},
 		INTERNAL,
@@ -9775,7 +9775,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JGE, R1, 3, 1),
+			BPF_JMP64_IMM(BPF_JGE, R1, 3, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9788,12 +9788,12 @@ static struct bpf_test tests[] = {
 	{
 		"JMP_JGT_K: if (2 < 3) return 1 (jump backwards)",
 		.u.insns_int = {
-			BPF_JMP_IMM(BPF_JA, 0, 0, 2), /* goto start */
+			BPF_JMP64_IMM(BPF_JA, 0, 0, 2), /* goto start */
 			BPF_ALU32_IMM(BPF_MOV, R0, 1), /* out: */
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 0), /* start: */
 			BPF_LD_IMM64(R1, 2), /* note: this takes 2 insns */
-			BPF_JMP_IMM(BPF_JLT, R1, 3, -6), /* goto out */
+			BPF_JMP64_IMM(BPF_JLT, R1, 3, -6), /* goto out */
 			BPF_EXIT_INSN(),
 		},
 		INTERNAL,
@@ -9805,7 +9805,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JLE, R1, 3, 1),
+			BPF_JMP64_IMM(BPF_JLE, R1, 3, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9820,7 +9820,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JNE, R1, 2, 1),
+			BPF_JMP64_IMM(BPF_JNE, R1, 2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9835,7 +9835,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JEQ, R1, 3, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R1, 3, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9850,7 +9850,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JSET, R1, 2, 1),
+			BPF_JMP64_IMM(BPF_JSET, R1, 2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9864,7 +9864,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
-			BPF_JMP_IMM(BPF_JSET, R1, 0xffffffff, 1),
+			BPF_JMP64_IMM(BPF_JSET, R1, 0xffffffff, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9880,7 +9880,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, -2),
-			BPF_JMP_REG(BPF_JSGT, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JSGT, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9895,7 +9895,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, -1),
-			BPF_JMP_REG(BPF_JSGT, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JSGT, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
@@ -9911,7 +9911,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, -2),
-			BPF_JMP_REG(BPF_JSLT, R2, R1, 1),
+			BPF_JMP64_REG(BPF_JSLT, R2, R1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9926,7 +9926,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, -1),
-			BPF_JMP_REG(BPF_JSLT, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JSLT, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
@@ -9942,7 +9942,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, -2),
-			BPF_JMP_REG(BPF_JSGE, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JSGE, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9957,7 +9957,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, -1),
-			BPF_JMP_REG(BPF_JSGE, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JSGE, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9973,7 +9973,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, -2),
-			BPF_JMP_REG(BPF_JSLE, R2, R1, 1),
+			BPF_JMP64_REG(BPF_JSLE, R2, R1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -9988,7 +9988,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, -1),
-			BPF_JMP_REG(BPF_JSLE, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JSLE, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10004,7 +10004,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JGT, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JGT, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10019,7 +10019,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, 1),
-			BPF_JMP_REG(BPF_JGT, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JGT, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10035,7 +10035,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JLT, R2, R1, 1),
+			BPF_JMP64_REG(BPF_JLT, R2, R1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10050,7 +10050,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, -1),
 			BPF_LD_IMM64(R2, 1),
-			BPF_JMP_REG(BPF_JLT, R2, R1, 1),
+			BPF_JMP64_REG(BPF_JLT, R2, R1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10066,7 +10066,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JGE, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JGE, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10081,7 +10081,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 3),
-			BPF_JMP_REG(BPF_JGE, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JGE, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10097,7 +10097,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JLE, R2, R1, 1),
+			BPF_JMP64_REG(BPF_JLE, R2, R1, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10112,7 +10112,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 3),
-			BPF_JMP_REG(BPF_JLE, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JLE, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10128,7 +10128,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JGE, R1, R2, 2),
+			BPF_JMP64_REG(BPF_JGE, R1, R2, 2),
 			BPF_LD_IMM64(R0, 0xffffffffffffffffULL),
 			BPF_LD_IMM64(R0, 0xeeeeeeeeeeeeeeeeULL),
 			BPF_EXIT_INSN(),
@@ -10143,7 +10143,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JGE, R1, R2, 0),
+			BPF_JMP64_REG(BPF_JGE, R1, R2, 0),
 			BPF_LD_IMM64(R0, 0xffffffffffffffffULL),
 			BPF_EXIT_INSN(),
 		},
@@ -10157,7 +10157,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JGE, R1, R2, 4),
+			BPF_JMP64_REG(BPF_JGE, R1, R2, 4),
 			BPF_LD_IMM64(R0, 0xffffffffffffffffULL),
 			BPF_LD_IMM64(R0, 0xeeeeeeeeeeeeeeeeULL),
 			BPF_EXIT_INSN(),
@@ -10172,7 +10172,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JLE, R2, R1, 2),
+			BPF_JMP64_REG(BPF_JLE, R2, R1, 2),
 			BPF_LD_IMM64(R0, 0xffffffffffffffffULL),
 			BPF_LD_IMM64(R0, 0xeeeeeeeeeeeeeeeeULL),
 			BPF_EXIT_INSN(),
@@ -10187,7 +10187,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JLE, R2, R1, 0),
+			BPF_JMP64_REG(BPF_JLE, R2, R1, 0),
 			BPF_LD_IMM64(R0, 0xffffffffffffffffULL),
 			BPF_EXIT_INSN(),
 		},
@@ -10201,7 +10201,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JLE, R2, R1, 4),
+			BPF_JMP64_REG(BPF_JLE, R2, R1, 4),
 			BPF_LD_IMM64(R0, 0xffffffffffffffffULL),
 			BPF_LD_IMM64(R0, 0xeeeeeeeeeeeeeeeeULL),
 			BPF_EXIT_INSN(),
@@ -10217,7 +10217,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JNE, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10233,7 +10233,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 3),
-			BPF_JMP_REG(BPF_JEQ, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JEQ, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10249,7 +10249,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 2),
-			BPF_JMP_REG(BPF_JSET, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JSET, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -10264,7 +10264,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU32_IMM(BPF_MOV, R0, 0),
 			BPF_LD_IMM64(R1, 3),
 			BPF_LD_IMM64(R2, 0xffffffff),
-			BPF_JMP_REG(BPF_JSET, R1, R2, 1),
+			BPF_JMP64_REG(BPF_JSET, R1, R2, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
 			BPF_EXIT_INSN(),
@@ -11391,7 +11391,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_REG(R2, R1),
 			BPF_ALU64_REG(BPF_AND, R2, R3),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JNE, R2, -17104896, 1),
+			BPF_JMP64_IMM(BPF_JNE, R2, -17104896, 1),
 			BPF_ALU32_IMM(BPF_MOV, R0, 2),
 			BPF_EXIT_INSN(),
 		},
@@ -11407,7 +11407,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_REG(R2, R1),
 			BPF_ALU64_REG(BPF_AND, R2, R3),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JNE, R2, 0xfefb0000, 1),
+			BPF_JMP64_IMM(BPF_JNE, R2, 0xfefb0000, 1),
 			BPF_ALU32_IMM(BPF_MOV, R0, 2),
 			BPF_EXIT_INSN(),
 		},
@@ -11424,7 +11424,7 @@ static struct bpf_test tests[] = {
 			BPF_MOV64_REG(R2, R1),
 			BPF_ALU64_REG(BPF_AND, R2, R3),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JNE, R2, R4, 1),
+			BPF_JMP64_REG(BPF_JNE, R2, R4, 1),
 			BPF_ALU32_IMM(BPF_MOV, R0, 2),
 			BPF_EXIT_INSN(),
 		},
@@ -11437,7 +11437,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_LD_IMM64(R1, -17104896),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JNE, R1, -17104896, 1),
+			BPF_JMP64_IMM(BPF_JNE, R1, -17104896, 1),
 			BPF_ALU32_IMM(BPF_MOV, R0, 2),
 			BPF_EXIT_INSN(),
 		},
@@ -11450,7 +11450,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_LD_IMM64(R1, 0xfefb0000),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JNE, R1, 0xfefb0000, 1),
+			BPF_JMP64_IMM(BPF_JNE, R1, 0xfefb0000, 1),
 			BPF_ALU32_IMM(BPF_MOV, R0, 2),
 			BPF_EXIT_INSN(),
 		},
@@ -11463,7 +11463,7 @@ static struct bpf_test tests[] = {
 		.u.insns_int = {
 			BPF_LD_IMM64(R1, 0x7efb0000),
 			BPF_ALU32_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JNE, R1, 0x7efb0000, 1),
+			BPF_JMP64_IMM(BPF_JNE, R1, 0x7efb0000, 1),
 			BPF_ALU32_IMM(BPF_MOV, R0, 2),
 			BPF_EXIT_INSN(),
 		},
@@ -11572,16 +11572,16 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_IMM(BPF_MOV, R9, R9),		\
 			BPF_##alu(BPF_ ##op, dst, src),		\
 			BPF_ALU32_IMM(BPF_MOV, dst, dst),	\
-			BPF_JMP_IMM(BPF_JNE, R0, R0, 10),	\
-			BPF_JMP_IMM(BPF_JNE, R1, R1, 9),	\
-			BPF_JMP_IMM(BPF_JNE, R2, R2, 8),	\
-			BPF_JMP_IMM(BPF_JNE, R3, R3, 7),	\
-			BPF_JMP_IMM(BPF_JNE, R4, R4, 6),	\
-			BPF_JMP_IMM(BPF_JNE, R5, R5, 5),	\
-			BPF_JMP_IMM(BPF_JNE, R6, R6, 4),	\
-			BPF_JMP_IMM(BPF_JNE, R7, R7, 3),	\
-			BPF_JMP_IMM(BPF_JNE, R8, R8, 2),	\
-			BPF_JMP_IMM(BPF_JNE, R9, R9, 1),	\
+			BPF_JMP64_IMM(BPF_JNE, R0, R0, 10),	\
+			BPF_JMP64_IMM(BPF_JNE, R1, R1, 9),	\
+			BPF_JMP64_IMM(BPF_JNE, R2, R2, 8),	\
+			BPF_JMP64_IMM(BPF_JNE, R3, R3, 7),	\
+			BPF_JMP64_IMM(BPF_JNE, R4, R4, 6),	\
+			BPF_JMP64_IMM(BPF_JNE, R5, R5, 5),	\
+			BPF_JMP64_IMM(BPF_JNE, R6, R6, 4),	\
+			BPF_JMP64_IMM(BPF_JNE, R7, R7, 3),	\
+			BPF_JMP64_IMM(BPF_JNE, R8, R8, 2),	\
+			BPF_JMP64_IMM(BPF_JNE, R9, R9, 1),	\
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),		\
 			BPF_EXIT_INSN(),			\
 		},						\
@@ -11700,16 +11700,16 @@ static struct bpf_test tests[] = {
 				   (op) == BPF_CMPXCHG ? 0 :	\
 				   (op) & BPF_FETCH ? 1 : 0),	\
 			BPF_ATOMIC_OP(width, op, R10, R1, -8),	\
-			BPF_JMP_IMM(BPF_JNE, R0, 0, 10),	\
-			BPF_JMP_IMM(BPF_JNE, R1, 1, 9),		\
-			BPF_JMP_IMM(BPF_JNE, R2, 2, 8),		\
-			BPF_JMP_IMM(BPF_JNE, R3, 3, 7),		\
-			BPF_JMP_IMM(BPF_JNE, R4, 4, 6),		\
-			BPF_JMP_IMM(BPF_JNE, R5, 5, 5),		\
-			BPF_JMP_IMM(BPF_JNE, R6, 6, 4),		\
-			BPF_JMP_IMM(BPF_JNE, R7, 7, 3),		\
-			BPF_JMP_IMM(BPF_JNE, R8, 8, 2),		\
-			BPF_JMP_IMM(BPF_JNE, R9, 9, 1),		\
+			BPF_JMP64_IMM(BPF_JNE, R0, 0, 10),	\
+			BPF_JMP64_IMM(BPF_JNE, R1, 1, 9),	\
+			BPF_JMP64_IMM(BPF_JNE, R2, 2, 8),	\
+			BPF_JMP64_IMM(BPF_JNE, R3, 3, 7),	\
+			BPF_JMP64_IMM(BPF_JNE, R4, 4, 6),	\
+			BPF_JMP64_IMM(BPF_JNE, R5, 5, 5),	\
+			BPF_JMP64_IMM(BPF_JNE, R6, 6, 4),	\
+			BPF_JMP64_IMM(BPF_JNE, R7, 7, 3),	\
+			BPF_JMP64_IMM(BPF_JNE, R8, 8, 2),	\
+			BPF_JMP64_IMM(BPF_JNE, R9, 9, 1),	\
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),		\
 			BPF_EXIT_INSN(),			\
 		},						\
@@ -11824,7 +11824,7 @@ static struct bpf_test tests[] = {
 			BPF_LD_IMM64(R0, 0x0123456789acbdefULL),\
 			BPF_ALU64_REG(BPF_MOV, R1, R0),		\
 			BPF_JMP32_IMM(BPF_##op, R0, 1234, 1),	\
-			BPF_JMP_A(0), /* Nop */			\
+			BPF_JMP64_A(0), /* Nop */		\
 			BPF_ALU64_REG(BPF_SUB, R0, R1),		\
 			BPF_ALU64_REG(BPF_MOV, R1, R0),		\
 			BPF_ALU64_IMM(BPF_RSH, R1, 32),		\
@@ -11858,7 +11858,7 @@ static struct bpf_test tests[] = {
 			BPF_ALU64_REG(BPF_MOV, R2, R0),		\
 			BPF_ALU64_REG(BPF_MOV, R3, R1),		\
 			BPF_JMP32_IMM(BPF_##op, R0, R1, 1),	\
-			BPF_JMP_A(0), /* Nop */			\
+			BPF_JMP64_A(0), /* Nop */		\
 			BPF_ALU64_REG(BPF_SUB, R0, R2),		\
 			BPF_ALU64_REG(BPF_SUB, R1, R3),		\
 			BPF_ALU64_REG(BPF_OR, R0, R1),		\
@@ -13584,7 +13584,7 @@ static struct bpf_test tests[] = {
 		"JMP_JSET_K: imm = 0 -> never taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JSET, R1, 0, 1),
+			BPF_JMP64_IMM(BPF_JSET, R1, 0, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13596,7 +13596,7 @@ static struct bpf_test tests[] = {
 		"JMP_JLT_K: imm = 0 -> never taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JLT, R1, 0, 1),
+			BPF_JMP64_IMM(BPF_JLT, R1, 0, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13608,7 +13608,7 @@ static struct bpf_test tests[] = {
 		"JMP_JGE_K: imm = 0 -> always taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JGE, R1, 0, 1),
+			BPF_JMP64_IMM(BPF_JGE, R1, 0, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13620,7 +13620,7 @@ static struct bpf_test tests[] = {
 		"JMP_JGT_K: imm = 0xffffffff -> never taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JGT, R1, U32_MAX, 1),
+			BPF_JMP64_IMM(BPF_JGT, R1, U32_MAX, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13632,7 +13632,7 @@ static struct bpf_test tests[] = {
 		"JMP_JLE_K: imm = 0xffffffff -> always taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_IMM(BPF_JLE, R1, U32_MAX, 1),
+			BPF_JMP64_IMM(BPF_JLE, R1, U32_MAX, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13692,7 +13692,7 @@ static struct bpf_test tests[] = {
 		"JMP_JEQ_X: dst = src -> always taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JEQ, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JEQ, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13704,7 +13704,7 @@ static struct bpf_test tests[] = {
 		"JMP_JGE_X: dst = src -> always taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JGE, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JGE, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13716,7 +13716,7 @@ static struct bpf_test tests[] = {
 		"JMP_JLE_X: dst = src -> always taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JLE, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JLE, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13728,7 +13728,7 @@ static struct bpf_test tests[] = {
 		"JMP_JSGE_X: dst = src -> always taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JSGE, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JSGE, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13740,7 +13740,7 @@ static struct bpf_test tests[] = {
 		"JMP_JSLE_X: dst = src -> always taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JSLE, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JSLE, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13752,7 +13752,7 @@ static struct bpf_test tests[] = {
 		"JMP_JNE_X: dst = src -> never taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JNE, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JNE, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13764,7 +13764,7 @@ static struct bpf_test tests[] = {
 		"JMP_JGT_X: dst = src -> never taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JGT, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JGT, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13776,7 +13776,7 @@ static struct bpf_test tests[] = {
 		"JMP_JLT_X: dst = src -> never taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JLT, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JLT, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13788,7 +13788,7 @@ static struct bpf_test tests[] = {
 		"JMP_JSGT_X: dst = src -> never taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JSGT, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JSGT, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13800,7 +13800,7 @@ static struct bpf_test tests[] = {
 		"JMP_JSLT_X: dst = src -> never taken",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 1),
-			BPF_JMP_REG(BPF_JSLT, R1, R1, 1),
+			BPF_JMP64_REG(BPF_JSLT, R1, R1, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -13813,7 +13813,7 @@ static struct bpf_test tests[] = {
 		"Short relative jump: offset=0",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
-			BPF_JMP_IMM(BPF_JEQ, R0, 0, 0),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 0, 0),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, -1),
 		},
@@ -13825,7 +13825,7 @@ static struct bpf_test tests[] = {
 		"Short relative jump: offset=1",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
-			BPF_JMP_IMM(BPF_JEQ, R0, 0, 1),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 0, 1),
 			BPF_ALU32_IMM(BPF_ADD, R0, 1),
 			BPF_EXIT_INSN(),
 			BPF_ALU32_IMM(BPF_MOV, R0, -1),
@@ -13838,7 +13838,7 @@ static struct bpf_test tests[] = {
 		"Short relative jump: offset=2",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
-			BPF_JMP_IMM(BPF_JEQ, R0, 0, 2),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 0, 2),
 			BPF_ALU32_IMM(BPF_ADD, R0, 1),
 			BPF_ALU32_IMM(BPF_ADD, R0, 1),
 			BPF_EXIT_INSN(),
@@ -13852,7 +13852,7 @@ static struct bpf_test tests[] = {
 		"Short relative jump: offset=3",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
-			BPF_JMP_IMM(BPF_JEQ, R0, 0, 3),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 0, 3),
 			BPF_ALU32_IMM(BPF_ADD, R0, 1),
 			BPF_ALU32_IMM(BPF_ADD, R0, 1),
 			BPF_ALU32_IMM(BPF_ADD, R0, 1),
@@ -13867,7 +13867,7 @@ static struct bpf_test tests[] = {
 		"Short relative jump: offset=4",
 		.u.insns_int = {
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
-			BPF_JMP_IMM(BPF_JEQ, R0, 0, 4),
+			BPF_JMP64_IMM(BPF_JEQ, R0, 0, 4),
 			BPF_ALU32_IMM(BPF_ADD, R0, 1),
 			BPF_ALU32_IMM(BPF_ADD, R0, 1),
 			BPF_ALU32_IMM(BPF_ADD, R0, 1),
@@ -14876,7 +14876,7 @@ struct tail_call_test {
 	BPF_LD_IMM64(R2, TAIL_CALL_MARKER),	       \
 	BPF_RAW_INSN(BPF_ALU32 | BPF_MOV | BPF_K, R3, 0, \
 		     offset, TAIL_CALL_MARKER),	       \
-	BPF_JMP_IMM(BPF_TAIL_CALL, 0, 0, 0)
+	BPF_JMP64_IMM(BPF_TAIL_CALL, 0, 0, 0)
 
 /*
  * A test function to be called from a BPF program, clobbering a lot of
@@ -14958,9 +14958,9 @@ static struct tail_call_test tail_call_tests[] = {
 			BPF_STX_MEM(BPF_DW, R3, R1, -8),
 			BPF_STX_MEM(BPF_DW, R3, R2, -16),
 			BPF_LDX_MEM(BPF_DW, R0, BPF_REG_FP, -8),
-			BPF_JMP_REG(BPF_JNE, R0, R1, 3),
+			BPF_JMP64_REG(BPF_JNE, R0, R1, 3),
 			BPF_LDX_MEM(BPF_DW, R0, BPF_REG_FP, -16),
-			BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+			BPF_JMP64_REG(BPF_JNE, R0, R2, 1),
 			BPF_ALU64_IMM(BPF_MOV, R0, 0),
 			BPF_EXIT_INSN(),
 		},
@@ -15141,7 +15141,7 @@ static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
 				}
 				*insn = BPF_EMIT_CALL(addr);
 				if ((long)__bpf_call_base + insn->imm != addr)
-					*insn = BPF_JMP_A(0); /* Skip: NOP */
+					*insn = BPF_JMP64_A(0); /* Skip: NOP */
 				break;
 			}
 		}
diff --git a/net/core/filter.c b/net/core/filter.c
index 1cd5897..e1576e0 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -329,7 +329,7 @@ static u32 convert_skb_access(int skb_field, int dst_reg, int src_reg,
 		BUILD_BUG_ON(sizeof_field(struct sk_buff, vlan_all) != 4);
 		*insn++ = BPF_LDX_MEM(BPF_W, dst_reg, src_reg,
 				      offsetof(struct sk_buff, vlan_all));
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, dst_reg, 0, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, dst_reg, 0, 1);
 		*insn++ = BPF_ALU32_IMM(BPF_MOV, dst_reg, 1);
 		break;
 	}
@@ -368,7 +368,7 @@ static bool convert_bpf_extensions(struct sock_filter *fp,
 				      BPF_REG_TMP, BPF_REG_CTX,
 				      offsetof(struct sk_buff, dev));
 		/* if (tmp != 0) goto pc + 1 */
-		*insn++ = BPF_JMP_IMM(BPF_JNE, BPF_REG_TMP, 0, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JNE, BPF_REG_TMP, 0, 1);
 		*insn++ = BPF_EXIT_INSN();
 		if (fp->k == SKF_AD_OFF + SKF_AD_IFINDEX)
 			*insn = BPF_LDX_MEM(BPF_W, BPF_REG_A, BPF_REG_TMP,
@@ -488,7 +488,7 @@ static bool convert_bpf_ld_abs(struct sock_filter *fp, struct bpf_insn **insnp)
 		*insn++ = BPF_MOV64_REG(BPF_REG_TMP, BPF_REG_H);
 		if (offset)
 			*insn++ = BPF_ALU64_IMM(BPF_SUB, BPF_REG_TMP, offset);
-		*insn++ = BPF_JMP_IMM(BPF_JSLT, BPF_REG_TMP,
+		*insn++ = BPF_JMP64_IMM(BPF_JSLT, BPF_REG_TMP,
 				      size, 2 + endian + (!ldx_off_ok * 2));
 		if (ldx_off_ok) {
 			*insn++ = BPF_LDX_MEM(BPF_SIZE(fp->code), BPF_REG_A,
@@ -501,7 +501,7 @@ static bool convert_bpf_ld_abs(struct sock_filter *fp, struct bpf_insn **insnp)
 		}
 		if (endian)
 			*insn++ = BPF_ENDIAN(BPF_FROM_BE, BPF_REG_A, size * 8);
-		*insn++ = BPF_JMP_A(8);
+		*insn++ = BPF_JMP64_A(8);
 	}
 
 	*insn++ = BPF_MOV64_REG(BPF_REG_ARG1, BPF_REG_CTX);
@@ -529,7 +529,7 @@ static bool convert_bpf_ld_abs(struct sock_filter *fp, struct bpf_insn **insnp)
 		return false;
 	}
 
-	*insn++ = BPF_JMP_IMM(BPF_JSGE, BPF_REG_A, 0, 2);
+	*insn++ = BPF_JMP64_IMM(BPF_JSGE, BPF_REG_A, 0, 2);
 	*insn++ = BPF_ALU32_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
 	*insn   = BPF_EXIT_INSN();
 
@@ -672,7 +672,7 @@ static int bpf_convert_filter(struct sock_filter *prog, int len,
 				/* Error with exception code on div/mod by 0.
 				 * For cBPF programs, this was always return 0.
 				 */
-				*insn++ = BPF_JMP_IMM(BPF_JNE, BPF_REG_X, 0, 2);
+				*insn++ = BPF_JMP64_IMM(BPF_JNE, BPF_REG_X, 0, 2);
 				*insn++ = BPF_ALU32_REG(BPF_XOR, BPF_REG_A, BPF_REG_A);
 				*insn++ = BPF_EXIT_INSN();
 			}
@@ -8602,7 +8602,7 @@ static int bpf_unclone_prologue(struct bpf_insn *insn_buf, bool direct_write,
 	 */
 	*insn++ = BPF_LDX_MEM(BPF_B, BPF_REG_6, BPF_REG_1, CLONED_OFFSET);
 	*insn++ = BPF_ALU32_IMM(BPF_AND, BPF_REG_6, CLONED_MASK);
-	*insn++ = BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 7);
+	*insn++ = BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 7);
 
 	/* ret = bpf_skb_pull_data(skb, 0); */
 	*insn++ = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
@@ -8613,7 +8613,7 @@ static int bpf_unclone_prologue(struct bpf_insn *insn_buf, bool direct_write,
 	 *      goto restore;
 	 * return TC_ACT_SHOT;
 	 */
-	*insn++ = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2);
+	*insn++ = BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2);
 	*insn++ = BPF_ALU32_IMM(BPF_MOV, BPF_REG_0, drop_verdict);
 	*insn++ = BPF_EXIT_INSN();
 
@@ -8653,7 +8653,7 @@ static int bpf_gen_ld_abs(const struct bpf_insn *orig,
 		break;
 	}
 
-	*insn++ = BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 2);
+	*insn++ = BPF_JMP64_IMM(BPF_JSGE, BPF_REG_0, 0, 2);
 	*insn++ = BPF_ALU32_REG(BPF_XOR, BPF_REG_0, BPF_REG_0);
 	*insn++ = BPF_EXIT_INSN();
 
@@ -9164,7 +9164,7 @@ static struct bpf_insn *bpf_convert_tstamp_type_read(const struct bpf_insn *si,
 	*insn++ = BPF_JMP32_IMM(BPF_JSET, tmp_reg,
 				SKB_MONO_DELIVERY_TIME_MASK, 2);
 	*insn++ = BPF_MOV32_IMM(value_reg, BPF_SKB_TSTAMP_UNSPEC);
-	*insn++ = BPF_JMP_A(1);
+	*insn++ = BPF_JMP64_A(1);
 	*insn++ = BPF_MOV32_IMM(value_reg, BPF_SKB_TSTAMP_DELIVERY_MONO);
 
 	return insn;
@@ -9216,7 +9216,7 @@ static struct bpf_insn *bpf_convert_tstamp_read(const struct bpf_prog *prog,
 		 * read 0 as the (rcv) timestamp.
 		 */
 		*insn++ = BPF_MOV64_IMM(value_reg, 0);
-		*insn++ = BPF_JMP_A(1);
+		*insn++ = BPF_JMP64_A(1);
 	}
 #endif
 
@@ -9246,7 +9246,7 @@ static struct bpf_insn *bpf_convert_tstamp_write(const struct bpf_prog *prog,
 		/* Writing __sk_buff->tstamp as ingress, goto <clear> */
 		*insn++ = BPF_JMP32_IMM(BPF_JSET, tmp_reg, TC_AT_INGRESS_MASK, 1);
 		/* goto <store> */
-		*insn++ = BPF_JMP_A(2);
+		*insn++ = BPF_JMP64_A(2);
 		/* <clear>: mono_delivery_time */
 		*insn++ = BPF_ALU32_IMM(BPF_AND, tmp_reg, ~SKB_MONO_DELIVERY_TIME_MASK);
 		*insn++ = BPF_STX_MEM(BPF_B, skb_reg, tmp_reg, PKT_VLAN_PRESENT_OFFSET);
@@ -9307,7 +9307,7 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, dev),
 				      si->dst_reg, si->src_reg,
 				      offsetof(struct sk_buff, dev));
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, si->dst_reg, 0, 1);
 		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg,
 				      bpf_target_off(struct net_device, ifindex, 4,
 						     target_size));
@@ -9342,7 +9342,7 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 
 	case offsetof(struct __sk_buff, queue_mapping):
 		if (type == BPF_WRITE) {
-			*insn++ = BPF_JMP_IMM(BPF_JGE, si->src_reg, NO_QUEUE_MAPPING, 1);
+			*insn++ = BPF_JMP64_IMM(BPF_JGE, si->src_reg, NO_QUEUE_MAPPING, 1);
 			*insn++ = BPF_STX_MEM(BPF_H, si->dst_reg, si->src_reg,
 					      bpf_target_off(struct sk_buff,
 							     queue_mapping,
@@ -9359,7 +9359,7 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
 				      bpf_target_off(struct sk_buff,
 						     vlan_all, 4, target_size));
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, si->dst_reg, 0, 1);
 		*insn++ = BPF_ALU32_IMM(BPF_MOV, si->dst_reg, 1);
 		break;
 
@@ -9453,7 +9453,7 @@ static u32 bpf_convert_ctx_access(enum bpf_access_type type,
 		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->src_reg,
 				      bpf_target_off(struct sk_buff, napi_id, 4,
 						     target_size));
-		*insn++ = BPF_JMP_IMM(BPF_JGE, si->dst_reg, MIN_NAPI_ID, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JGE, si->dst_reg, MIN_NAPI_ID, 1);
 		*insn++ = BPF_MOV64_IMM(si->dst_reg, 0);
 #else
 		*target_size = 4;
@@ -9784,7 +9784,7 @@ u32 bpf_sock_convert_ctx_access(enum bpf_access_type type,
 				       sizeof_field(struct sock,
 						    sk_rx_queue_mapping),
 				       target_size));
-		*insn++ = BPF_JMP_IMM(BPF_JNE, si->dst_reg, NO_QUEUE_MAPPING,
+		*insn++ = BPF_JMP64_IMM(BPF_JNE, si->dst_reg, NO_QUEUE_MAPPING,
 				      1);
 		*insn++ = BPF_MOV64_IMM(si->dst_reg, -1);
 #else
@@ -10068,7 +10068,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 				      fullsock_reg, si->src_reg,	      \
 				      offsetof(struct bpf_sock_ops_kern,      \
 					       is_fullsock));		      \
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, fullsock_reg, 0, jmp);	      \
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, fullsock_reg, 0, jmp);	      \
 		if (si->dst_reg == si->src_reg)				      \
 			*insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg,	      \
 				      offsetof(struct bpf_sock_ops_kern,      \
@@ -10082,14 +10082,14 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 				      si->dst_reg, si->dst_reg,		      \
 				      offsetof(OBJ, OBJ_FIELD));	      \
 		if (si->dst_reg == si->src_reg)	{			      \
-			*insn++ = BPF_JMP_A(1);				      \
+			*insn++ = BPF_JMP64_A(1);			      \
 			*insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg,	      \
 				      offsetof(struct bpf_sock_ops_kern,      \
 				      temp));				      \
 		}							      \
 	} while (0)
 
-#define SOCK_OPS_GET_SK()							      \
+#define SOCK_OPS_GET_SK()						      \
 	do {								      \
 		int fullsock_reg = si->dst_reg, reg = BPF_REG_9, jmp = 1;     \
 		if (si->dst_reg == reg || si->src_reg == reg)		      \
@@ -10109,7 +10109,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 				      fullsock_reg, si->src_reg,	      \
 				      offsetof(struct bpf_sock_ops_kern,      \
 					       is_fullsock));		      \
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, fullsock_reg, 0, jmp);	      \
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, fullsock_reg, 0, jmp);	      \
 		if (si->dst_reg == si->src_reg)				      \
 			*insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg,	      \
 				      offsetof(struct bpf_sock_ops_kern,      \
@@ -10119,7 +10119,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 				      si->dst_reg, si->src_reg,		      \
 				      offsetof(struct bpf_sock_ops_kern, sk));\
 		if (si->dst_reg == si->src_reg)	{			      \
-			*insn++ = BPF_JMP_A(1);				      \
+			*insn++ = BPF_JMP64_A(1);			      \
 			*insn++ = BPF_LDX_MEM(BPF_DW, reg, si->src_reg,	      \
 				      offsetof(struct bpf_sock_ops_kern,      \
 				      temp));				      \
@@ -10156,7 +10156,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 				      reg, si->dst_reg,			      \
 				      offsetof(struct bpf_sock_ops_kern,      \
 					       is_fullsock));		      \
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, reg, 0, 2);		      \
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, reg, 0, 2);		      \
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(			      \
 						struct bpf_sock_ops_kern, sk),\
 				      reg, si->dst_reg,			      \
@@ -10427,7 +10427,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 				      si->dst_reg, si->src_reg,
 				      offsetof(struct bpf_sock_ops_kern,
 					       skb));
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, si->dst_reg, 0, 1);
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, data),
 				      si->dst_reg, si->dst_reg,
 				      offsetof(struct sk_buff, data));
@@ -10438,7 +10438,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 				      si->dst_reg, si->src_reg,
 				      offsetof(struct bpf_sock_ops_kern,
 					       skb));
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, si->dst_reg, 0, 1);
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct sk_buff, len),
 				      si->dst_reg, si->dst_reg,
 				      offsetof(struct sk_buff, len));
@@ -10452,7 +10452,7 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 				      si->dst_reg, si->src_reg,
 				      offsetof(struct bpf_sock_ops_kern,
 					       skb));
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, si->dst_reg, 0, 1);
 		*insn++ = BPF_LDX_MEM(BPF_FIELD_SIZEOF(struct tcp_skb_cb,
 						       tcp_flags),
 				      si->dst_reg, si->dst_reg, off);
@@ -10472,8 +10472,8 @@ static u32 sock_ops_convert_ctx_access(enum bpf_access_type type,
 				      bpf_target_off(struct skb_shared_info,
 						     hwtstamps, 8,
 						     target_size));
-		*jmp_on_null_skb = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0,
-					       insn - jmp_on_null_skb - 1);
+		*jmp_on_null_skb = BPF_JMP64_IMM(BPF_JEQ, si->dst_reg, 0,
+						 insn - jmp_on_null_skb - 1);
 		break;
 	}
 	}
@@ -11337,7 +11337,7 @@ static u32 sk_lookup_convert_ctx_access(enum bpf_access_type type,
 		off += bpf_target_off(struct in6_addr, s6_addr32[0], 4, target_size);
 		*insn++ = BPF_LDX_MEM(BPF_SIZEOF(void *), si->dst_reg, si->src_reg,
 				      offsetof(struct bpf_sk_lookup_kern, v6.saddr));
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, si->dst_reg, 0, 1);
 		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg, off);
 #else
 		*insn++ = BPF_MOV32_IMM(si->dst_reg, 0);
@@ -11353,7 +11353,7 @@ static u32 sk_lookup_convert_ctx_access(enum bpf_access_type type,
 		off += bpf_target_off(struct in6_addr, s6_addr32[0], 4, target_size);
 		*insn++ = BPF_LDX_MEM(BPF_SIZEOF(void *), si->dst_reg, si->src_reg,
 				      offsetof(struct bpf_sk_lookup_kern, v6.daddr));
-		*insn++ = BPF_JMP_IMM(BPF_JEQ, si->dst_reg, 0, 1);
+		*insn++ = BPF_JMP64_IMM(BPF_JEQ, si->dst_reg, 0, 1);
 		*insn++ = BPF_LDX_MEM(BPF_W, si->dst_reg, si->dst_reg, off);
 #else
 		*insn++ = BPF_MOV32_IMM(si->dst_reg, 0);
diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
index 771d0fa..2f16ab1 100644
--- a/net/xdp/xskmap.c
+++ b/net/xdp/xskmap.c
@@ -116,12 +116,12 @@ static int xsk_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
 	struct bpf_insn *insn = insn_buf;
 
 	*insn++ = BPF_LDX_MEM(BPF_W, ret, index, 0);
-	*insn++ = BPF_JMP_IMM(BPF_JGE, ret, map->max_entries, 5);
+	*insn++ = BPF_JMP64_IMM(BPF_JGE, ret, map->max_entries, 5);
 	*insn++ = BPF_ALU64_IMM(BPF_LSH, ret, ilog2(sizeof(struct xsk_sock *)));
 	*insn++ = BPF_ALU64_IMM(BPF_ADD, mp, offsetof(struct xsk_map, xsk_map));
 	*insn++ = BPF_ALU64_REG(BPF_ADD, ret, mp);
 	*insn++ = BPF_LDX_MEM(BPF_SIZEOF(struct xsk_sock *), ret, ret, 0);
-	*insn++ = BPF_JMP_IMM(BPF_JA, 0, 0, 1);
+	*insn++ = BPF_JMP64_IMM(BPF_JA, 0, 0, 1);
 	*insn++ = BPF_MOV64_IMM(ret, 0);
 	return insn - insn_buf;
 }
diff --git a/samples/bpf/bpf_insn.h b/samples/bpf/bpf_insn.h
index 1c55a77..7ba92d6 100644
--- a/samples/bpf/bpf_insn.h
+++ b/samples/bpf/bpf_insn.h
@@ -172,7 +172,7 @@ struct bpf_insn;
 
 /* Conditional jumps against registers, if (dst_reg 'op' src_reg) goto pc + off16 */
 
-#define BPF_JMP_REG(OP, DST, SRC, OFF)				\
+#define BPF_JMP64_REG(OP, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
@@ -180,7 +180,7 @@ struct bpf_insn;
 		.off   = OFF,					\
 		.imm   = 0 })
 
-/* Like BPF_JMP_REG, but with 32-bit wide operands for comparison. */
+/* Like BPF_JMP64_REG, but with 32-bit wide operands for comparison. */
 
 #define BPF_JMP32_REG(OP, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
@@ -192,7 +192,7 @@ struct bpf_insn;
 
 /* Conditional jumps against immediates, if (dst_reg 'op' imm32) goto pc + off16 */
 
-#define BPF_JMP_IMM(OP, DST, IMM, OFF)				\
+#define BPF_JMP64_IMM(OP, DST, IMM, OFF)			\
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_K,	\
 		.dst_reg = DST,					\
@@ -200,7 +200,7 @@ struct bpf_insn;
 		.off   = OFF,					\
 		.imm   = IMM })
 
-/* Like BPF_JMP_IMM, but with 32-bit wide operands for comparison. */
+/* Like BPF_JMP64_IMM, but with 32-bit wide operands for comparison. */
 
 #define BPF_JMP32_IMM(OP, DST, IMM, OFF)			\
 	((struct bpf_insn) {					\
diff --git a/samples/bpf/cookie_uid_helper_example.c b/samples/bpf/cookie_uid_helper_example.c
index ddc6223..7579460 100644
--- a/samples/bpf/cookie_uid_helper_example.c
+++ b/samples/bpf/cookie_uid_helper_example.c
@@ -106,7 +106,7 @@ static void prog_load(void)
 		 * stored already
 		 * Otherwise do pc10-22 to setup a new data entry.
 		 */
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 14),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 14),
 		BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 				BPF_FUNC_get_socket_uid),
@@ -139,7 +139,7 @@ static void prog_load(void)
 		BPF_MOV64_IMM(BPF_REG_4, 0),
 		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 				BPF_FUNC_map_update_elem),
-		BPF_JMP_IMM(BPF_JA, 0, 0, 5),
+		BPF_JMP64_IMM(BPF_JA, 0, 0, 5),
 		/*
 		 * pc24-30 update the packet info to a exist data entry, it can
 		 * be done by directly write to pointers instead of using
diff --git a/samples/bpf/sock_example.c b/samples/bpf/sock_example.c
index 3e8d74d..123cd75 100644
--- a/samples/bpf/sock_example.c
+++ b/samples/bpf/sock_example.c
@@ -53,7 +53,7 @@ static int test_sock(void)
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
 		BPF_LD_MAP_FD(BPF_REG_1, map_fd),
 		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
 		BPF_MOV64_IMM(BPF_REG_0, 0), /* r0 = 0 */
diff --git a/samples/bpf/test_cgrp2_attach.c b/samples/bpf/test_cgrp2_attach.c
index b8331e7..ed330f7 100644
--- a/samples/bpf/test_cgrp2_attach.c
+++ b/samples/bpf/test_cgrp2_attach.c
@@ -52,7 +52,7 @@ static int prog_load(int map_fd, int verdict)
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
 		BPF_LD_MAP_FD(BPF_REG_1, map_fd), /* load map fd to r1 */
 		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 1), /* r1 = 1 */
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
 
@@ -63,7 +63,7 @@ static int prog_load(int map_fd, int verdict)
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
 		BPF_LD_MAP_FD(BPF_REG_1, map_fd),
 		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_6, offsetof(struct __sk_buff, len)), /* r1 = skb->len */
 
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
diff --git a/samples/bpf/test_cgrp2_sock.c b/samples/bpf/test_cgrp2_sock.c
index 5447bce..18ddee1 100644
--- a/samples/bpf/test_cgrp2_sock.c
+++ b/samples/bpf/test_cgrp2_sock.c
@@ -54,7 +54,7 @@ static int prog_load(__u32 idx, __u32 mark, __u32 prio)
 
 		/* if uid is 0, use given mark, else use the uid as the mark */
 		BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 		BPF_MOV64_IMM(BPF_REG_3, mark),
 
 		/* set the mark on the new socket */
diff --git a/tools/bpf/bpf_dbg.c b/tools/bpf/bpf_dbg.c
index 6f7ed34..fa78389 100644
--- a/tools/bpf/bpf_dbg.c
+++ b/tools/bpf/bpf_dbg.c
@@ -56,22 +56,22 @@
 
 #define BPF_LDX_B	(BPF_LDX | BPF_B)
 #define BPF_LDX_W	(BPF_LDX | BPF_W)
-#define BPF_JMP_JA	(BPF_JMP64 | BPF_JA)
-#define BPF_JMP_JEQ	(BPF_JMP64 | BPF_JEQ)
-#define BPF_JMP_JGT	(BPF_JMP64 | BPF_JGT)
-#define BPF_JMP_JGE	(BPF_JMP64 | BPF_JGE)
-#define BPF_JMP_JSET	(BPF_JMP64 | BPF_JSET)
-#define BPF_ALU_ADD	(BPF_ALU32 | BPF_ADD)
-#define BPF_ALU_SUB	(BPF_ALU32 | BPF_SUB)
-#define BPF_ALU_MUL	(BPF_ALU32 | BPF_MUL)
-#define BPF_ALU_DIV	(BPF_ALU32 | BPF_DIV)
-#define BPF_ALU_MOD	(BPF_ALU32 | BPF_MOD)
-#define BPF_ALU_NEG	(BPF_ALU32 | BPF_NEG)
-#define BPF_ALU_AND	(BPF_ALU32 | BPF_AND)
-#define BPF_ALU_OR	(BPF_ALU32 | BPF_OR)
-#define BPF_ALU_XOR	(BPF_ALU32 | BPF_XOR)
-#define BPF_ALU_LSH	(BPF_ALU32 | BPF_LSH)
-#define BPF_ALU_RSH	(BPF_ALU32 | BPF_RSH)
+#define BPF_JMP64_JA	(BPF_JMP64 | BPF_JA)
+#define BPF_JMP64_JEQ	(BPF_JMP64 | BPF_JEQ)
+#define BPF_JMP64_JGT	(BPF_JMP64 | BPF_JGT)
+#define BPF_JMP64_JGE	(BPF_JMP64 | BPF_JGE)
+#define BPF_JMP64_JSET	(BPF_JMP64 | BPF_JSET)
+#define BPF_ALU32_ADD	(BPF_ALU32 | BPF_ADD)
+#define BPF_ALU32_SUB	(BPF_ALU32 | BPF_SUB)
+#define BPF_ALU32_MUL	(BPF_ALU32 | BPF_MUL)
+#define BPF_ALU32_DIV	(BPF_ALU32 | BPF_DIV)
+#define BPF_ALU32_MOD	(BPF_ALU32 | BPF_MOD)
+#define BPF_ALU32_NEG	(BPF_ALU32 | BPF_NEG)
+#define BPF_ALU32_AND	(BPF_ALU32 | BPF_AND)
+#define BPF_ALU32_OR	(BPF_ALU32 | BPF_OR)
+#define BPF_ALU32_XOR	(BPF_ALU32 | BPF_XOR)
+#define BPF_ALU32_LSH	(BPF_ALU32 | BPF_LSH)
+#define BPF_ALU32_RSH	(BPF_ALU32 | BPF_RSH)
 #define BPF_MISC_TAX	(BPF_MISC | BPF_TAX)
 #define BPF_MISC_TXA	(BPF_MISC | BPF_TXA)
 #define BPF_LD_B	(BPF_LD | BPF_B)
@@ -149,22 +149,22 @@ static const char * const op_table[] = {
 	[BPF_LD_W]	= "ld",
 	[BPF_LDX]	= "ldx",
 	[BPF_LDX_B]	= "ldxb",
-	[BPF_JMP_JA]	= "ja",
-	[BPF_JMP_JEQ]	= "jeq",
-	[BPF_JMP_JGT]	= "jgt",
-	[BPF_JMP_JGE]	= "jge",
-	[BPF_JMP_JSET]	= "jset",
-	[BPF_ALU_ADD]	= "add",
-	[BPF_ALU_SUB]	= "sub",
-	[BPF_ALU_MUL]	= "mul",
-	[BPF_ALU_DIV]	= "div",
-	[BPF_ALU_MOD]	= "mod",
-	[BPF_ALU_NEG]	= "neg",
-	[BPF_ALU_AND]	= "and",
-	[BPF_ALU_OR]	= "or",
-	[BPF_ALU_XOR]	= "xor",
-	[BPF_ALU_LSH]	= "lsh",
-	[BPF_ALU_RSH]	= "rsh",
+	[BPF_JMP64_JA]	= "ja",
+	[BPF_JMP64_JEQ]	= "jeq",
+	[BPF_JMP64_JGT]	= "jgt",
+	[BPF_JMP64_JGE]	= "jge",
+	[BPF_JMP64_JSET] = "jset",
+	[BPF_ALU32_ADD]	= "add",
+	[BPF_ALU32_SUB]	= "sub",
+	[BPF_ALU32_MUL]	= "mul",
+	[BPF_ALU32_DIV]	= "div",
+	[BPF_ALU32_MOD]	= "mod",
+	[BPF_ALU32_NEG]	= "neg",
+	[BPF_ALU32_AND]	= "and",
+	[BPF_ALU32_OR]	= "or",
+	[BPF_ALU32_XOR]	= "xor",
+	[BPF_ALU32_LSH]	= "lsh",
+	[BPF_ALU32_RSH]	= "rsh",
 	[BPF_MISC_TAX]	= "tax",
 	[BPF_MISC_TXA]	= "txa",
 	[BPF_RET]	= "ret",
@@ -296,125 +296,125 @@ static void bpf_disasm(const struct sock_filter f, unsigned int i)
 		op = op_table[BPF_LDX];
 		fmt = "M[%d]";
 		break;
-	case BPF_JMP_JA:
-		op = op_table[BPF_JMP_JA];
+	case BPF_JMP64_JA:
+		op = op_table[BPF_JMP64_JA];
 		fmt = "%d";
 		val = i + 1 + f.k;
 		break;
-	case BPF_JMP_JGT | BPF_X:
-		op = op_table[BPF_JMP_JGT];
+	case BPF_JMP64_JGT | BPF_X:
+		op = op_table[BPF_JMP64_JGT];
 		fmt = "x";
 		break;
-	case BPF_JMP_JGT | BPF_K:
-		op = op_table[BPF_JMP_JGT];
+	case BPF_JMP64_JGT | BPF_K:
+		op = op_table[BPF_JMP64_JGT];
 		fmt = "#%#x";
 		break;
-	case BPF_JMP_JGE | BPF_X:
-		op = op_table[BPF_JMP_JGE];
+	case BPF_JMP64_JGE | BPF_X:
+		op = op_table[BPF_JMP64_JGE];
 		fmt = "x";
 		break;
-	case BPF_JMP_JGE | BPF_K:
-		op = op_table[BPF_JMP_JGE];
+	case BPF_JMP64_JGE | BPF_K:
+		op = op_table[BPF_JMP64_JGE];
 		fmt = "#%#x";
 		break;
-	case BPF_JMP_JEQ | BPF_X:
-		op = op_table[BPF_JMP_JEQ];
+	case BPF_JMP64_JEQ | BPF_X:
+		op = op_table[BPF_JMP64_JEQ];
 		fmt = "x";
 		break;
-	case BPF_JMP_JEQ | BPF_K:
-		op = op_table[BPF_JMP_JEQ];
+	case BPF_JMP64_JEQ | BPF_K:
+		op = op_table[BPF_JMP64_JEQ];
 		fmt = "#%#x";
 		break;
-	case BPF_JMP_JSET | BPF_X:
-		op = op_table[BPF_JMP_JSET];
+	case BPF_JMP64_JSET | BPF_X:
+		op = op_table[BPF_JMP64_JSET];
 		fmt = "x";
 		break;
-	case BPF_JMP_JSET | BPF_K:
-		op = op_table[BPF_JMP_JSET];
+	case BPF_JMP64_JSET | BPF_K:
+		op = op_table[BPF_JMP64_JSET];
 		fmt = "#%#x";
 		break;
-	case BPF_ALU_NEG:
-		op = op_table[BPF_ALU_NEG];
+	case BPF_ALU32_NEG:
+		op = op_table[BPF_ALU32_NEG];
 		fmt = "";
 		break;
-	case BPF_ALU_LSH | BPF_X:
-		op = op_table[BPF_ALU_LSH];
+	case BPF_ALU32_LSH | BPF_X:
+		op = op_table[BPF_ALU32_LSH];
 		fmt = "x";
 		break;
-	case BPF_ALU_LSH | BPF_K:
-		op = op_table[BPF_ALU_LSH];
+	case BPF_ALU32_LSH | BPF_K:
+		op = op_table[BPF_ALU32_LSH];
 		fmt = "#%d";
 		break;
-	case BPF_ALU_RSH | BPF_X:
-		op = op_table[BPF_ALU_RSH];
+	case BPF_ALU32_RSH | BPF_X:
+		op = op_table[BPF_ALU32_RSH];
 		fmt = "x";
 		break;
-	case BPF_ALU_RSH | BPF_K:
-		op = op_table[BPF_ALU_RSH];
+	case BPF_ALU32_RSH | BPF_K:
+		op = op_table[BPF_ALU32_RSH];
 		fmt = "#%d";
 		break;
-	case BPF_ALU_ADD | BPF_X:
-		op = op_table[BPF_ALU_ADD];
+	case BPF_ALU32_ADD | BPF_X:
+		op = op_table[BPF_ALU32_ADD];
 		fmt = "x";
 		break;
-	case BPF_ALU_ADD | BPF_K:
-		op = op_table[BPF_ALU_ADD];
+	case BPF_ALU32_ADD | BPF_K:
+		op = op_table[BPF_ALU32_ADD];
 		fmt = "#%d";
 		break;
-	case BPF_ALU_SUB | BPF_X:
-		op = op_table[BPF_ALU_SUB];
+	case BPF_ALU32_SUB | BPF_X:
+		op = op_table[BPF_ALU32_SUB];
 		fmt = "x";
 		break;
-	case BPF_ALU_SUB | BPF_K:
-		op = op_table[BPF_ALU_SUB];
+	case BPF_ALU32_SUB | BPF_K:
+		op = op_table[BPF_ALU32_SUB];
 		fmt = "#%d";
 		break;
-	case BPF_ALU_MUL | BPF_X:
-		op = op_table[BPF_ALU_MUL];
+	case BPF_ALU32_MUL | BPF_X:
+		op = op_table[BPF_ALU32_MUL];
 		fmt = "x";
 		break;
-	case BPF_ALU_MUL | BPF_K:
-		op = op_table[BPF_ALU_MUL];
+	case BPF_ALU32_MUL | BPF_K:
+		op = op_table[BPF_ALU32_MUL];
 		fmt = "#%d";
 		break;
-	case BPF_ALU_DIV | BPF_X:
-		op = op_table[BPF_ALU_DIV];
+	case BPF_ALU32_DIV | BPF_X:
+		op = op_table[BPF_ALU32_DIV];
 		fmt = "x";
 		break;
-	case BPF_ALU_DIV | BPF_K:
-		op = op_table[BPF_ALU_DIV];
+	case BPF_ALU32_DIV | BPF_K:
+		op = op_table[BPF_ALU32_DIV];
 		fmt = "#%d";
 		break;
-	case BPF_ALU_MOD | BPF_X:
-		op = op_table[BPF_ALU_MOD];
+	case BPF_ALU32_MOD | BPF_X:
+		op = op_table[BPF_ALU32_MOD];
 		fmt = "x";
 		break;
-	case BPF_ALU_MOD | BPF_K:
-		op = op_table[BPF_ALU_MOD];
+	case BPF_ALU32_MOD | BPF_K:
+		op = op_table[BPF_ALU32_MOD];
 		fmt = "#%d";
 		break;
-	case BPF_ALU_AND | BPF_X:
-		op = op_table[BPF_ALU_AND];
+	case BPF_ALU32_AND | BPF_X:
+		op = op_table[BPF_ALU32_AND];
 		fmt = "x";
 		break;
-	case BPF_ALU_AND | BPF_K:
-		op = op_table[BPF_ALU_AND];
+	case BPF_ALU32_AND | BPF_K:
+		op = op_table[BPF_ALU32_AND];
 		fmt = "#%#x";
 		break;
-	case BPF_ALU_OR | BPF_X:
-		op = op_table[BPF_ALU_OR];
+	case BPF_ALU32_OR | BPF_X:
+		op = op_table[BPF_ALU32_OR];
 		fmt = "x";
 		break;
-	case BPF_ALU_OR | BPF_K:
-		op = op_table[BPF_ALU_OR];
+	case BPF_ALU32_OR | BPF_K:
+		op = op_table[BPF_ALU32_OR];
 		fmt = "#%#x";
 		break;
-	case BPF_ALU_XOR | BPF_X:
-		op = op_table[BPF_ALU_XOR];
+	case BPF_ALU32_XOR | BPF_X:
+		op = op_table[BPF_ALU32_XOR];
 		fmt = "x";
 		break;
-	case BPF_ALU_XOR | BPF_K:
-		op = op_table[BPF_ALU_XOR];
+	case BPF_ALU32_XOR | BPF_K:
+		op = op_table[BPF_ALU32_XOR];
 		fmt = "#%#x";
 		break;
 	default:
@@ -727,111 +727,111 @@ static void bpf_single_step(struct bpf_regs *r, struct sock_filter *f,
 	case BPF_LDX | BPF_MEM:
 		r->X = r->M[K];
 		break;
-	case BPF_JMP_JA:
+	case BPF_JMP64_JA:
 		r->Pc += K;
 		break;
-	case BPF_JMP_JGT | BPF_X:
+	case BPF_JMP64_JGT | BPF_X:
 		r->Pc += r->A > r->X ? f->jt : f->jf;
 		break;
-	case BPF_JMP_JGT | BPF_K:
+	case BPF_JMP64_JGT | BPF_K:
 		r->Pc += r->A > K ? f->jt : f->jf;
 		break;
-	case BPF_JMP_JGE | BPF_X:
+	case BPF_JMP64_JGE | BPF_X:
 		r->Pc += r->A >= r->X ? f->jt : f->jf;
 		break;
-	case BPF_JMP_JGE | BPF_K:
+	case BPF_JMP64_JGE | BPF_K:
 		r->Pc += r->A >= K ? f->jt : f->jf;
 		break;
-	case BPF_JMP_JEQ | BPF_X:
+	case BPF_JMP64_JEQ | BPF_X:
 		r->Pc += r->A == r->X ? f->jt : f->jf;
 		break;
-	case BPF_JMP_JEQ | BPF_K:
+	case BPF_JMP64_JEQ | BPF_K:
 		r->Pc += r->A == K ? f->jt : f->jf;
 		break;
-	case BPF_JMP_JSET | BPF_X:
+	case BPF_JMP64_JSET | BPF_X:
 		r->Pc += r->A & r->X ? f->jt : f->jf;
 		break;
-	case BPF_JMP_JSET | BPF_K:
+	case BPF_JMP64_JSET | BPF_K:
 		r->Pc += r->A & K ? f->jt : f->jf;
 		break;
-	case BPF_ALU_NEG:
+	case BPF_ALU32_NEG:
 		r->A = -r->A;
 		break;
-	case BPF_ALU_LSH | BPF_X:
+	case BPF_ALU32_LSH | BPF_X:
 		r->A <<= r->X;
 		break;
-	case BPF_ALU_LSH | BPF_K:
+	case BPF_ALU32_LSH | BPF_K:
 		r->A <<= K;
 		break;
-	case BPF_ALU_RSH | BPF_X:
+	case BPF_ALU32_RSH | BPF_X:
 		r->A >>= r->X;
 		break;
-	case BPF_ALU_RSH | BPF_K:
+	case BPF_ALU32_RSH | BPF_K:
 		r->A >>= K;
 		break;
-	case BPF_ALU_ADD | BPF_X:
+	case BPF_ALU32_ADD | BPF_X:
 		r->A += r->X;
 		break;
-	case BPF_ALU_ADD | BPF_K:
+	case BPF_ALU32_ADD | BPF_K:
 		r->A += K;
 		break;
-	case BPF_ALU_SUB | BPF_X:
+	case BPF_ALU32_SUB | BPF_X:
 		r->A -= r->X;
 		break;
-	case BPF_ALU_SUB | BPF_K:
+	case BPF_ALU32_SUB | BPF_K:
 		r->A -= K;
 		break;
-	case BPF_ALU_MUL | BPF_X:
+	case BPF_ALU32_MUL | BPF_X:
 		r->A *= r->X;
 		break;
-	case BPF_ALU_MUL | BPF_K:
+	case BPF_ALU32_MUL | BPF_K:
 		r->A *= K;
 		break;
-	case BPF_ALU_DIV | BPF_X:
-	case BPF_ALU_MOD | BPF_X:
+	case BPF_ALU32_DIV | BPF_X:
+	case BPF_ALU32_MOD | BPF_X:
 		if (r->X == 0) {
 			set_return(r);
 			break;
 		}
 		goto do_div;
-	case BPF_ALU_DIV | BPF_K:
-	case BPF_ALU_MOD | BPF_K:
+	case BPF_ALU32_DIV | BPF_K:
+	case BPF_ALU32_MOD | BPF_K:
 		if (K == 0) {
 			set_return(r);
 			break;
 		}
 do_div:
 		switch (f->code) {
-		case BPF_ALU_DIV | BPF_X:
+		case BPF_ALU32_DIV | BPF_X:
 			r->A /= r->X;
 			break;
-		case BPF_ALU_DIV | BPF_K:
+		case BPF_ALU32_DIV | BPF_K:
 			r->A /= K;
 			break;
-		case BPF_ALU_MOD | BPF_X:
+		case BPF_ALU32_MOD | BPF_X:
 			r->A %= r->X;
 			break;
-		case BPF_ALU_MOD | BPF_K:
+		case BPF_ALU32_MOD | BPF_K:
 			r->A %= K;
 			break;
 		}
 		break;
-	case BPF_ALU_AND | BPF_X:
+	case BPF_ALU32_AND | BPF_X:
 		r->A &= r->X;
 		break;
-	case BPF_ALU_AND | BPF_K:
+	case BPF_ALU32_AND | BPF_K:
 		r->A &= K;
 		break;
-	case BPF_ALU_OR | BPF_X:
+	case BPF_ALU32_OR | BPF_X:
 		r->A |= r->X;
 		break;
-	case BPF_ALU_OR | BPF_K:
+	case BPF_ALU32_OR | BPF_K:
 		r->A |= K;
 		break;
-	case BPF_ALU_XOR | BPF_X:
+	case BPF_ALU32_XOR | BPF_X:
 		r->A ^= r->X;
 		break;
-	case BPF_ALU_XOR | BPF_K:
+	case BPF_ALU32_XOR | BPF_K:
 		r->A ^= K;
 		break;
 	}
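
Likewise for the classic-BPF debugger: the renamed composites are plain ORs of the instruction class and the operation, so decoded opcodes still split the same way. A minimal sketch, assuming the #defines above and the standard BPF_CLASS()/BPF_OP() masks; illustrative only, not part of the patch:

	/* BPF_ALU32_ADD | BPF_X is just BPF_ALU32 | BPF_ADD | BPF_X,
	 * so an opcode can still be tested field by field.
	 */
	static int is_alu32_add(const struct sock_filter *f)
	{
		return BPF_CLASS(f->code) == BPF_ALU32 &&
		       BPF_OP(f->code) == BPF_ADD;
	}
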
diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c
index da16e6a..58e4f8f 100644
--- a/tools/bpf/bpftool/feature.c
+++ b/tools/bpf/bpftool/feature.c
@@ -835,7 +835,7 @@ probe_bounded_loops(const char *define_prefix, __u32 ifindex)
 	struct bpf_insn insns[4] = {
 		BPF_MOV64_IMM(BPF_REG_0, 10),
 		BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 1),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, -2),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, -2),
 		BPF_EXIT_INSN()
 	};
 
@@ -855,7 +855,7 @@ probe_v2_isa_extension(const char *define_prefix, __u32 ifindex)
 {
 	struct bpf_insn insns[4] = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 0, 1),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN()
 	};
diff --git a/tools/include/linux/filter.h b/tools/include/linux/filter.h
index cf111f1..97169b1 100644
--- a/tools/include/linux/filter.h
+++ b/tools/include/linux/filter.h
@@ -207,7 +207,7 @@
 
 /* Conditional jumps against registers, if (dst_reg 'op' src_reg) goto pc + off16 */
 
-#define BPF_JMP_REG(OP, DST, SRC, OFF)				\
+#define BPF_JMP64_REG(OP, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_X,	\
 		.dst_reg = DST,					\
@@ -215,7 +215,7 @@
 		.off   = OFF,					\
 		.imm   = 0 })
 
-/* Like BPF_JMP_REG, but with 32-bit wide operands for comparison. */
+/* Like BPF_JMP64_REG, but with 32-bit wide operands for comparison. */
 
 #define BPF_JMP32_REG(OP, DST, SRC, OFF)			\
 	((struct bpf_insn) {					\
@@ -227,7 +227,7 @@
 
 /* Conditional jumps against immediates, if (dst_reg 'op' imm32) goto pc + off16 */
 
-#define BPF_JMP_IMM(OP, DST, IMM, OFF)				\
+#define BPF_JMP64_IMM(OP, DST, IMM, OFF)			\
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP64 | BPF_OP(OP) | BPF_K,	\
 		.dst_reg = DST,					\
@@ -235,7 +235,7 @@
 		.off   = OFF,					\
 		.imm   = IMM })
 
-/* Like BPF_JMP_IMM, but with 32-bit wide operands for comparison. */
+/* Like BPF_JMP64_IMM, but with 32-bit wide operands for comparison. */
 
 #define BPF_JMP32_IMM(OP, DST, IMM, OFF)			\
 	((struct bpf_insn) {					\
@@ -247,7 +247,7 @@
 
 /* Unconditional jumps, goto pc + off16 */
 
-#define BPF_JMP_A(OFF)						\
+#define BPF_JMP64_A(OFF)					\
 	((struct bpf_insn) {					\
 		.code  = BPF_JMP64 | BPF_JA,			\
 		.dst_reg = 0,					\
diff --git a/tools/lib/bpf/gen_loader.c b/tools/lib/bpf/gen_loader.c
index 23f5c46..d1dc6d5 100644
--- a/tools/lib/bpf/gen_loader.c
+++ b/tools/lib/bpf/gen_loader.c
@@ -130,7 +130,7 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps
 	/* amount of stack actually used, only used to calculate iterations, not stack offset */
 	nr_progs_sz = offsetof(struct loader_stack, prog_fd[nr_progs]);
 	/* jump over cleanup code */
-	emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0,
+	emit(gen, BPF_JMP64_IMM(BPF_JA, 0, 0,
 			      /* size of cleanup code below (including map fd cleanup) */
 			      (nr_progs_sz / 4) * 3 + 2 +
 			      /* 6 insns for emit_sys_close_blob,
@@ -143,7 +143,7 @@ void bpf_gen__init(struct bpf_gen *gen, int log_level, int nr_progs, int nr_maps
 	/* emit cleanup code: close all temp FDs */
 	for (i = 0; i < nr_progs_sz; i += 4) {
 		emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -stack_sz + i));
-		emit(gen, BPF_JMP_IMM(BPF_JSLE, BPF_REG_1, 0, 1));
+		emit(gen, BPF_JMP64_IMM(BPF_JSLE, BPF_REG_1, 0, 1));
 		emit(gen, BPF_EMIT_CALL(BPF_FUNC_sys_close));
 	}
 	for (i = 0; i < nr_maps; i++)
@@ -243,7 +243,7 @@ static void move_ctx2blob(struct bpf_gen *gen, int off, int size, int ctx_off,
 		/* If value in ctx is zero don't update the blob.
 		 * For example: when ctx->map.max_entries == 0, keep default max_entries from bpf.c
 		 */
-		emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3));
+		emit(gen, BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3));
 	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE,
 					 0, 0, 0, off));
 	emit(gen, BPF_STX_MEM(insn_bytes_to_bpf_size(size), BPF_REG_1, BPF_REG_0, 0));
@@ -287,10 +287,10 @@ static void emit_check_err(struct bpf_gen *gen)
 	 * if (R7 < 0) goto cleanup;
 	 */
 	if (is_simm16(off)) {
-		emit(gen, BPF_JMP_IMM(BPF_JSLT, BPF_REG_7, 0, off));
+		emit(gen, BPF_JMP64_IMM(BPF_JSLT, BPF_REG_7, 0, off));
 	} else {
 		gen->error = -ERANGE;
-		emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, -1));
+		emit(gen, BPF_JMP64_IMM(BPF_JA, 0, 0, -1));
 	}
 }
 
@@ -343,7 +343,7 @@ static void debug_ret(struct bpf_gen *gen, const char *fmt, ...)
 
 static void __emit_sys_close(struct bpf_gen *gen)
 {
-	emit(gen, BPF_JMP_IMM(BPF_JSLE, BPF_REG_1, 0,
+	emit(gen, BPF_JMP64_IMM(BPF_JSLE, BPF_REG_1, 0,
 			      /* 2 is the number of the following insns
 			       * * 6 is additional insns in debug_regs
 			       */
@@ -688,23 +688,23 @@ static void emit_relo_kfunc_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo
 	}
 	kdesc->off = btf_fd_idx;
 	/* jump to success case */
-	emit(gen, BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0, 3));
+	emit(gen, BPF_JMP64_IMM(BPF_JSGE, BPF_REG_7, 0, 3));
 	/* set value for imm, off as 0 */
 	emit(gen, BPF_ST_MEM(BPF_W, BPF_REG_8, offsetof(struct bpf_insn, imm), 0));
 	emit(gen, BPF_ST_MEM(BPF_H, BPF_REG_8, offsetof(struct bpf_insn, off), 0));
 	/* skip success case for ret < 0 */
-	emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 10));
+	emit(gen, BPF_JMP64_IMM(BPF_JA, 0, 0, 10));
 	/* store btf_id into insn[insn_idx].imm */
 	emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_8, BPF_REG_7, offsetof(struct bpf_insn, imm)));
 	/* obtain fd in BPF_REG_9 */
 	emit(gen, BPF_MOV64_REG(BPF_REG_9, BPF_REG_7));
 	emit(gen, BPF_ALU64_IMM(BPF_RSH, BPF_REG_9, 32));
 	/* jump to fd_array store if fd denotes module BTF */
-	emit(gen, BPF_JMP_IMM(BPF_JNE, BPF_REG_9, 0, 2));
+	emit(gen, BPF_JMP64_IMM(BPF_JNE, BPF_REG_9, 0, 2));
 	/* set the default value for off */
 	emit(gen, BPF_ST_MEM(BPF_H, BPF_REG_8, offsetof(struct bpf_insn, off), 0));
 	/* skip BTF fd store for vmlinux BTF */
-	emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 4));
+	emit(gen, BPF_JMP64_IMM(BPF_JA, 0, 0, 4));
 	/* load fd_array slot pointer */
 	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_0, BPF_PSEUDO_MAP_IDX_VALUE,
 					 0, 0, 0, blob_fd_array_off(gen, btf_fd_idx)));
@@ -768,7 +768,7 @@ static void emit_relo_ksym_typeless(struct bpf_gen *gen,
 	/* skip typeless ksym_desc in fd closing loop in cleanup_relos */
 	kdesc->typeless = true;
 	emit_bpf_kallsyms_lookup_name(gen, relo);
-	emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_7, -ENOENT, 1));
+	emit(gen, BPF_JMP64_IMM(BPF_JEQ, BPF_REG_7, -ENOENT, 1));
 	emit_check_err(gen);
 	/* store lower half of addr into insn[insn_idx].imm */
 	emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_8, BPF_REG_9, offsetof(struct bpf_insn, imm)));
@@ -809,7 +809,7 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo,
 		move_blob2blob(gen, insn + sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm), 4,
 			       kdesc->insn + sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm));
 		/* jump over src_reg adjustment if imm is not 0, reuse BPF_REG_0 from move_blob2blob */
-		emit(gen, BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3));
+		emit(gen, BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3));
 		goto clear_src_reg;
 	}
 	/* remember insn offset, so we can copy BTF ID and FD later */
@@ -818,12 +818,12 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo,
 	if (!relo->is_weak)
 		emit_check_err(gen);
 	/* jump to success case */
-	emit(gen, BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0, 3));
+	emit(gen, BPF_JMP64_IMM(BPF_JSGE, BPF_REG_7, 0, 3));
 	/* set values for insn[insn_idx].imm, insn[insn_idx + 1].imm as 0 */
 	emit(gen, BPF_ST_MEM(BPF_W, BPF_REG_8, offsetof(struct bpf_insn, imm), 0));
 	emit(gen, BPF_ST_MEM(BPF_W, BPF_REG_8, sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm), 0));
 	/* skip success case for ret < 0 */
-	emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 4));
+	emit(gen, BPF_JMP64_IMM(BPF_JA, 0, 0, 4));
 	/* store btf_id into insn[insn_idx].imm */
 	emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_8, BPF_REG_7, offsetof(struct bpf_insn, imm)));
 	/* store btf_obj_fd into insn[insn_idx + 1].imm */
@@ -831,7 +831,7 @@ static void emit_relo_ksym_btf(struct bpf_gen *gen, struct ksym_relo_desc *relo,
 	emit(gen, BPF_STX_MEM(BPF_W, BPF_REG_8, BPF_REG_7,
 			      sizeof(struct bpf_insn) + offsetof(struct bpf_insn, imm)));
 	/* skip src_reg adjustment */
-	emit(gen, BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0, 3));
+	emit(gen, BPF_JMP64_IMM(BPF_JSGE, BPF_REG_7, 0, 3));
 clear_src_reg:
 	/* clear bpf_object__relocate_data's src_reg assignment, otherwise we get a verifier failure */
 	reg_mask = src_reg_mask();
@@ -1054,15 +1054,15 @@ void bpf_gen__map_update_elem(struct bpf_gen *gen, int map_idx, void *pvalue,
 			      sizeof(struct bpf_loader_ctx) +
 			      sizeof(struct bpf_map_desc) * map_idx +
 			      offsetof(struct bpf_map_desc, initial_value)));
-	emit(gen, BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 8));
+	emit(gen, BPF_JMP64_IMM(BPF_JEQ, BPF_REG_3, 0, 8));
 	emit2(gen, BPF_LD_IMM64_RAW_FULL(BPF_REG_1, BPF_PSEUDO_MAP_IDX_VALUE,
 					 0, 0, 0, value));
 	emit(gen, BPF_MOV64_IMM(BPF_REG_2, value_size));
 	emit(gen, BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6,
 			      offsetof(struct bpf_loader_ctx, flags)));
-	emit(gen, BPF_JMP_IMM(BPF_JSET, BPF_REG_0, BPF_SKEL_KERNEL, 2));
+	emit(gen, BPF_JMP64_IMM(BPF_JSET, BPF_REG_0, BPF_SKEL_KERNEL, 2));
 	emit(gen, BPF_EMIT_CALL(BPF_FUNC_copy_from_user));
-	emit(gen, BPF_JMP_IMM(BPF_JA, 0, 0, 1));
+	emit(gen, BPF_JMP64_IMM(BPF_JA, 0, 0, 1));
 	emit(gen, BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel));
 
 	map_update_attr = add_data(gen, &attr, attr_size);
diff --git a/tools/perf/util/bpf-prologue.c b/tools/perf/util/bpf-prologue.c
index 16b3957..b4ea8cc 100644
--- a/tools/perf/util/bpf-prologue.c
+++ b/tools/perf/util/bpf-prologue.c
@@ -166,7 +166,7 @@ gen_read_mem(struct bpf_insn_pos *pos,
 	 * will be relocated. Target should be the start of
 	 * error processing code.
 	 */
-	ins(BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, JMP_TO_ERROR_CODE),
+	ins(BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, JMP_TO_ERROR_CODE),
 	    pos);
 
 	return check_pos(pos);
@@ -314,7 +314,7 @@ gen_prologue_slowpath(struct bpf_insn_pos *pos,
 				BPF_REG_FP, -BPF_REG_SIZE * (i + 1)), pos);
 	}
 
-	ins(BPF_JMP_IMM(BPF_JA, BPF_REG_0, 0, JMP_TO_SUCCESS_CODE), pos);
+	ins(BPF_JMP64_IMM(BPF_JA, BPF_REG_0, 0, JMP_TO_SUCCESS_CODE), pos);
 
 	return check_pos(pos);
 errout:
@@ -468,7 +468,7 @@ int bpf__gen_prologue(struct probe_trace_arg *args, int nargs,
 					  BPF_PROLOGUE_START_ARG_REG + i,
 					  0),
 			    &pos);
-		ins(BPF_JMP_IMM(BPF_JA, BPF_REG_0, 0, JMP_TO_USER_CODE),
+		ins(BPF_JMP64_IMM(BPF_JA, BPF_REG_0, 0, JMP_TO_USER_CODE),
 				&pos);
 	}
 
diff --git a/tools/testing/selftests/bpf/prog_tests/align.c b/tools/testing/selftests/bpf/prog_tests/align.c
index 4666f88..d7fba94 100644
--- a/tools/testing/selftests/bpf/prog_tests/align.c
+++ b/tools/testing/selftests/bpf/prog_tests/align.c
@@ -138,7 +138,7 @@ static struct bpf_align_test tests[] = {
 	PREP_PKT_POINTERS, \
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), \
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8), \
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 1), \
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 1), \
 	BPF_EXIT_INSN(), \
 	BPF_LDX_MEM(BPF_B, DST_REG, BPF_REG_2, 0)
 
@@ -218,7 +218,7 @@ static struct bpf_align_test tests[] = {
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-			BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
+			BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
 			BPF_EXIT_INSN(),
 
 			BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_5, 0),
@@ -258,7 +258,7 @@ static struct bpf_align_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, BPF_REG_5, BPF_REG_6),
 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-			BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
+			BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
 			BPF_EXIT_INSN(),
 			BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_5, 0),
 
@@ -271,7 +271,7 @@ static struct bpf_align_test tests[] = {
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-			BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
+			BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
 			BPF_EXIT_INSN(),
 			BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_5, 0),
 
@@ -286,7 +286,7 @@ static struct bpf_align_test tests[] = {
 			BPF_ALU64_REG(BPF_ADD, BPF_REG_5, BPF_REG_6),
 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-			BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
+			BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
 			BPF_EXIT_INSN(),
 			BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_5, 0),
 
@@ -374,7 +374,7 @@ static struct bpf_align_test tests[] = {
 			/* Check bounds and perform a read */
 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-			BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
+			BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
 			BPF_EXIT_INSN(),
 			BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_5, 0),
 			/* Make a (4n) offset from the value we just read */
@@ -385,7 +385,7 @@ static struct bpf_align_test tests[] = {
 			/* Check bounds and perform a read */
 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-			BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
+			BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
 			BPF_EXIT_INSN(),
 			BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_5, 0),
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -442,7 +442,7 @@ static struct bpf_align_test tests[] = {
 			 */
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
 			/* Then make sure it's nonnegative */
-			BPF_JMP_IMM(BPF_JSGE, BPF_REG_5, 0, 1),
+			BPF_JMP64_IMM(BPF_JSGE, BPF_REG_5, 0, 1),
 			BPF_EXIT_INSN(),
 			/* Add it to packet pointer */
 			BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
@@ -450,7 +450,7 @@ static struct bpf_align_test tests[] = {
 			/* Check bounds and perform a read */
 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_6),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-			BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
+			BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
 			BPF_EXIT_INSN(),
 			BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_6, 0),
 			BPF_EXIT_INSN(),
@@ -494,7 +494,7 @@ static struct bpf_align_test tests[] = {
 			BPF_ALU64_IMM(BPF_LSH, BPF_REG_7, 2),
 			BPF_ALU64_REG(BPF_SUB, BPF_REG_6, BPF_REG_7),
 			/* Bounds-check the result */
-			BPF_JMP_IMM(BPF_JSGE, BPF_REG_6, 0, 1),
+			BPF_JMP64_IMM(BPF_JSGE, BPF_REG_6, 0, 1),
 			BPF_EXIT_INSN(),
 			/* Add it to the packet pointer */
 			BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
@@ -502,7 +502,7 @@ static struct bpf_align_test tests[] = {
 			/* Check bounds and perform a read */
 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-			BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
+			BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
 			BPF_EXIT_INSN(),
 			BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_5, 0),
 			BPF_EXIT_INSN(),
@@ -556,7 +556,7 @@ static struct bpf_align_test tests[] = {
 			/* Check bounds and perform a read */
 			BPF_MOV64_REG(BPF_REG_4, BPF_REG_5),
 			BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-			BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
+			BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_4, 1),
 			BPF_EXIT_INSN(),
 			BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_5, 0),
 			BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
index e9a214b..a429f8b 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf.c
@@ -5986,7 +5986,7 @@ static struct prog_info_raw_test {
 	},
 	BTF_STR_SEC("\0int\0/* dead jmp */\0int a=1;\0int b=2;\0return a + b;\0return a + b;"),
 	.insns = {
-		BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+		BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_MOV64_IMM(BPF_REG_1, 2),
 		BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -6019,7 +6019,7 @@ static struct prog_info_raw_test {
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_MOV64_IMM(BPF_REG_1, 2),
 		BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-		BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 1),
+		BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 10, 1),
 		BPF_EXIT_INSN(),
 		BPF_EXIT_INSN(),
 	},
@@ -6058,7 +6058,7 @@ static struct prog_info_raw_test {
 		BPF_MOV64_IMM(BPF_REG_2, 1),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1),
 		BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
-		BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 8),
+		BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 8),
 		BPF_MOV64_IMM(BPF_REG_2, 1),
 		BPF_MOV64_IMM(BPF_REG_2, 1),
 		BPF_MOV64_IMM(BPF_REG_2, 1),
@@ -6116,7 +6116,7 @@ static struct prog_info_raw_test {
 		    "\0return bla + 1;\0return func(a);\0b+=1;\0return b;"),
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_2, 1),
-		BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
+		BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 1),
 		BPF_CALL_REL(3),
 		BPF_CALL_REL(5),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -6167,7 +6167,7 @@ static struct prog_info_raw_test {
 		    "\0return 0;\0/* dead */\0/* dead */"),
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_2, 1),
-		BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
+		BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 1),
 		BPF_CALL_REL(2),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -6210,9 +6210,9 @@ static struct prog_info_raw_test {
 		    "\0/* dead */\0/* dead */\0/* dead */\0/* dead */"
 		    "\0return b + 1;\0return b + 1;\0return b + 1;"),
 	.insns = {
-		BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+		BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 		BPF_MOV64_IMM(BPF_REG_2, 1),
-		BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
+		BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 1),
 		BPF_CALL_REL(3),
 		BPF_CALL_REL(5),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -6220,7 +6220,7 @@ static struct prog_info_raw_test {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_CALL_REL(1),
 		BPF_EXIT_INSN(),
-		BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+		BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 		BPF_MOV64_REG(BPF_REG_0, 2),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
@@ -6269,7 +6269,7 @@ static struct prog_info_raw_test {
 		    "\0return bla + 1;\0return func(a);\0b+=1;\0return b;"),
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_2, 1),
-		BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
+		BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 1),
 		BPF_CALL_REL(3),
 		BPF_CALL_REL(5),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -6277,7 +6277,7 @@ static struct prog_info_raw_test {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_CALL_REL(1),
 		BPF_EXIT_INSN(),
-		BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+		BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 		BPF_MOV64_REG(BPF_REG_0, 2),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
@@ -6320,12 +6320,12 @@ static struct prog_info_raw_test {
 	BTF_STR_SEC("\0int\0x\0main\0func\0/* main linfo */\0/* func linfo */"),
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 1, 3),
+		BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 1, 3),
 		BPF_CALL_REL(3),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
 		BPF_EXIT_INSN(),
-		BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+		BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 		BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
diff --git a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
index 87c5434..2fe3d4d 100644
--- a/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/cgroup_attach_multi.c
@@ -43,7 +43,7 @@ static int prog_load_cnt(int verdict, int val)
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4), /* r2 = fp - 4 */
 		BPF_LD_MAP_FD(BPF_REG_1, map_fd),
 		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_1, val), /* r1 = 1 */
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_1, 0),
 
diff --git a/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c b/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
index ca3bcd7..9f26fde 100644
--- a/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
+++ b/tools/testing/selftests/bpf/prog_tests/flow_dissector_load_bytes.c
@@ -17,7 +17,7 @@ void serial_test_flow_dissector_load_bytes(void)
 		// bpf_skb_load_bytes(ctx, sizeof(pkt_v4), ptr, 1)
 		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 			     BPF_FUNC_skb_load_bytes),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 		// if (ret == 0) return BPF_DROP (2)
 		BPF_MOV64_IMM(BPF_REG_0, BPF_DROP),
 		BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/prog_tests/sockopt.c b/tools/testing/selftests/bpf/prog_tests/sockopt.c
index 3656ed2..d3be966 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockopt.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockopt.c
@@ -125,14 +125,14 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, level)),
 
 			/* if (ctx->level == 123) { */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 123, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 123, 4),
 			/* ctx->retval = 0 */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
 			BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_0,
 				    offsetof(struct bpf_sockopt, retval)),
 			/* return 1 */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 			/* } else { */
 			/* return 0 */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -168,14 +168,14 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, optname)),
 
 			/* if (ctx->optname == 123) { */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 123, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 123, 4),
 			/* ctx->retval = 0 */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
 			BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_0,
 				    offsetof(struct bpf_sockopt, retval)),
 			/* return 1 */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 			/* } else { */
 			/* return 0 */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -229,14 +229,14 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, optlen)),
 
 			/* if (ctx->optlen == 64) { */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 64, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 64, 4),
 			/* ctx->retval = 0 */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
 			BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_0,
 				    offsetof(struct bpf_sockopt, retval)),
 			/* return 1 */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 			/* } else { */
 			/* return 0 */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -356,7 +356,7 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, optval_end)),
 
 			/* if (ctx->optval + 1 <= ctx->optval_end) { */
-			BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 1),
+			BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 1),
 			/* ctx->optval[0] = 0xF0 */
 			BPF_ST_MEM(BPF_B, BPF_REG_2, 0, 0xF0),
 			/* } */
@@ -470,14 +470,14 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, level)),
 
 			/* if (ctx->level == 123) { */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 123, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 123, 4),
 			/* ctx->optlen = -1 */
 			BPF_MOV64_IMM(BPF_REG_0, -1),
 			BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_0,
 				    offsetof(struct bpf_sockopt, optlen)),
 			/* return 1 */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 			/* } else { */
 			/* return 0 */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -524,14 +524,14 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, optname)),
 
 			/* if (ctx->optname == 123) { */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 123, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 123, 4),
 			/* ctx->optlen = -1 */
 			BPF_MOV64_IMM(BPF_REG_0, -1),
 			BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_0,
 				    offsetof(struct bpf_sockopt, optlen)),
 			/* return 1 */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 			/* } else { */
 			/* return 0 */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -578,14 +578,14 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, optlen)),
 
 			/* if (ctx->optlen == 64) { */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 64, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 64, 4),
 			/* ctx->optlen = -1 */
 			BPF_MOV64_IMM(BPF_REG_0, -1),
 			BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_0,
 				    offsetof(struct bpf_sockopt, optlen)),
 			/* return 1 */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 			/* } else { */
 			/* return 0 */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -664,7 +664,7 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, optval_end)),
 
 			/* if (ctx->optval + 1 <= ctx->optval_end) { */
-			BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 1),
+			BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 1),
 			/* ctx->optval[0] = 1 << 3 */
 			BPF_ST_MEM(BPF_B, BPF_REG_2, 0, 1 << 3),
 			/* } */
@@ -768,15 +768,15 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, optval_end)),
 
 			/* if (ctx->optval + 1 <= ctx->optval_end) { */
-			BPF_JMP_REG(BPF_JGT, BPF_REG_7, BPF_REG_8, 4),
+			BPF_JMP64_REG(BPF_JGT, BPF_REG_7, BPF_REG_8, 4),
 
 			/* r9 = ctx->optval[0] */
 			BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_6, 0),
 
 			/* if (ctx->optval[0] < 128) */
-			BPF_JMP_IMM(BPF_JGT, BPF_REG_9, 128, 2),
+			BPF_JMP64_IMM(BPF_JGT, BPF_REG_9, 128, 2),
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 			/* } */
 
 			/* } else { */
@@ -814,15 +814,15 @@ static struct sockopt_test {
 				    offsetof(struct bpf_sockopt, optval_end)),
 
 			/* if (ctx->optval + 1 <= ctx->optval_end) { */
-			BPF_JMP_REG(BPF_JGT, BPF_REG_7, BPF_REG_8, 4),
+			BPF_JMP64_REG(BPF_JGT, BPF_REG_7, BPF_REG_8, 4),
 
 			/* r9 = ctx->optval[0] */
 			BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_6, 0),
 
 			/* if (ctx->optval[0] < 128) */
-			BPF_JMP_IMM(BPF_JGT, BPF_REG_9, 128, 2),
+			BPF_JMP64_IMM(BPF_JGT, BPF_REG_9, 128, 2),
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 			/* } */
 
 			/* } else { */
diff --git a/tools/testing/selftests/bpf/test_lru_map.c b/tools/testing/selftests/bpf/test_lru_map.c
index 4d0650c..0aa4287 100644
--- a/tools/testing/selftests/bpf/test_lru_map.c
+++ b/tools/testing/selftests/bpf/test_lru_map.c
@@ -50,11 +50,11 @@ static int bpf_map_lookup_elem_with_ref_bit(int fd, unsigned long long key,
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 		BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, 0),
 		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 		BPF_STX_MEM(BPF_DW, BPF_REG_9, BPF_REG_1, 0),
 		BPF_MOV64_IMM(BPF_REG_0, 42),
-		BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+		BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 	};
diff --git a/tools/testing/selftests/bpf/test_sock.c b/tools/testing/selftests/bpf/test_sock.c
index 810c374..6f6dcae 100644
--- a/tools/testing/selftests/bpf/test_sock.c
+++ b/tools/testing/selftests/bpf/test_sock.c
@@ -201,15 +201,15 @@ static struct sock_test tests[] = {
 			/* if (ip == expected && port == expected) */
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_ip6[3])),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7,
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7,
 				    __bpf_constant_ntohl(0x00000001), 4),
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_port)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0x2001, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 0x2001, 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -231,15 +231,15 @@ static struct sock_test tests[] = {
 			/* if (ip == expected && port == expected) */
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_ip4)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7,
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7,
 				    __bpf_constant_ntohl(0x7F000001), 4),
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_port)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0x1002, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 0x1002, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -261,15 +261,15 @@ static struct sock_test tests[] = {
 			/* if (ip == expected && port == expected) */
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_ip4)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7,
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7,
 				    __bpf_constant_ntohl(0x7F000001), 4),
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_port)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0x1002, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 0x1002, 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -292,15 +292,15 @@ static struct sock_test tests[] = {
 			/* if (ip == expected && port == expected) */
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_ip4)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7,
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7,
 				    __bpf_constant_ntohl(0x7F000001), 4),
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_port)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0x1002, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 0x1002, 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -323,15 +323,15 @@ static struct sock_test tests[] = {
 			/* if (ip == expected && port == expected) */
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_ip6[3])),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7,
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7,
 				    __bpf_constant_ntohl(0x00000001), 4),
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 				    offsetof(struct bpf_sock, src_port)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0x2001, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 0x2001, 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
diff --git a/tools/testing/selftests/bpf/test_sock_addr.c b/tools/testing/selftests/bpf/test_sock_addr.c
index 2c89674..5703546 100644
--- a/tools/testing/selftests/bpf/test_sock_addr.c
+++ b/tools/testing/selftests/bpf/test_sock_addr.c
@@ -766,12 +766,12 @@ static int sendmsg4_rw_asm_prog_load(const struct sock_addr_test *test)
 		/* if (sk.family == AF_INET && */
 		BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 			    offsetof(struct bpf_sock_addr, family)),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_7, AF_INET, 8),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, AF_INET, 8),
 
 		/*     sk.type == SOCK_DGRAM)  { */
 		BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 			    offsetof(struct bpf_sock_addr, type)),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_7, SOCK_DGRAM, 6),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, SOCK_DGRAM, 6),
 
 		/*      msg_src_ip4 = src4_rw_ip */
 		BPF_MOV32_IMM(BPF_REG_7, src4_rw_ip.s_addr),
@@ -829,7 +829,7 @@ static int sendmsg6_rw_dst_asm_prog_load(const struct sock_addr_test *test,
 		/* if (sk.family == AF_INET6) { */
 		BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_6,
 			    offsetof(struct bpf_sock_addr, family)),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_7, AF_INET6, 18),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, AF_INET6, 18),
 
 #define STORE_IPV6_WORD_N(DST, SRC, N)					       \
 		BPF_MOV32_IMM(BPF_REG_7, SRC[N]),			       \
diff --git a/tools/testing/selftests/bpf/test_sysctl.c b/tools/testing/selftests/bpf/test_sysctl.c
index bcdbd27..4ab32dd 100644
--- a/tools/testing/selftests/bpf/test_sysctl.c
+++ b/tools/testing/selftests/bpf/test_sysctl.c
@@ -83,11 +83,11 @@ static struct sysctl_test tests[] = {
 			/* If (write) */
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
 				    offsetof(struct bpf_sysctl, write)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 1, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 1, 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -104,11 +104,11 @@ static struct sysctl_test tests[] = {
 			/* If (write) */
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
 				    offsetof(struct bpf_sysctl, write)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 1, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 1, 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -164,11 +164,11 @@ static struct sysctl_test tests[] = {
 			/* If (file_pos == X) */
 			BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_1,
 				    offsetof(struct bpf_sysctl, file_pos)),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 3, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 3, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -191,11 +191,11 @@ static struct sysctl_test tests[] = {
 			BPF_LDX_MEM(BPF_B, BPF_REG_7, BPF_REG_1,
 				    offsetof(struct bpf_sysctl, file_pos) + 3),
 #endif
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 4, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 4, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -244,16 +244,16 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_name),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, sizeof("tcp_mem") - 1, 6),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, sizeof("tcp_mem") - 1, 6),
 			/*     buf == "tcp_mem\0") */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x7463705f6d656d00ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -285,17 +285,17 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_name),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 6),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 6),
 
 			/*     buf[0:7] == "tcp_me\0") */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x7463705f6d650000ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -329,28 +329,28 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_name),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 16, 14),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 16, 14),
 
 			/*     buf[0:8] == "net/ipv4" && */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x6e65742f69707634ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 10),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 10),
 
 			/*     buf[8:16] == "/tcp_mem" && */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x2f7463705f6d656dULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 8),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 6),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 6),
 
 			/*     buf[16:24] == "\0") */
 			BPF_LD_IMM64(BPF_REG_8, 0x0ULL),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 16),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -383,23 +383,23 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_name),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 10),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 10),
 
 			/*     buf[0:8] == "net/ipv4" && */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x6e65742f69707634ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 6),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 6),
 
 			/*     buf[8:16] == "/tcp_me\0") */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x2f7463705f6d6500ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 8),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -431,17 +431,17 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_name),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 6),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 6),
 
 			/*     buf[0:8] == "net/ip\0") */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x6e65742f69700000ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -467,17 +467,17 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_current_value),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 6, 6),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 6, 6),
 
 			/*     buf[0:6] == "Linux\n\0") */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x4c696e75780a0000ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -506,17 +506,17 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_current_value),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 6, 6),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 6, 6),
 
 			/*     buf[0:6] == "Linux\n\0") */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x4c696e75780a0000ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -545,17 +545,17 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_current_value),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 6),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 6),
 
 			/*     buf[0:6] == "Linux\0") */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x4c696e7578000000ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -582,15 +582,15 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_current_value),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 4),
 
 			/*     buf[0:8] is NUL-filled) */
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_9, 0, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_9, 0, 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -618,16 +618,16 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_current_value),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 4, 6),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 4, 6),
 
 			/*     buf[0:4] == expected) */
 			BPF_LD_IMM64(BPF_REG_8, FIXUP_SYSCTL_VALUE),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -657,11 +657,11 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_new_value),
 
 			/* if (ret == expected) */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -688,16 +688,16 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_new_value),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 3, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 3, 4),
 
 			/*     buf[0:4] == "606\0") */
 			BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_9,
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_9,
 				    bpf_ntohl(0x36303600), 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -725,29 +725,29 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_new_value),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 23, 14),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 23, 14),
 
 			/*     buf[0:8] == "3000000 " && */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x3330303030303020ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 10),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 10),
 
 			/*     buf[8:16] == "4000000 " && */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x3430303030303020ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 8),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 6),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 6),
 
 			/*     buf[16:24] == "6000000\0") */
 			BPF_LD_IMM64(BPF_REG_8,
 				     bpf_be64_to_cpu(0x3630303030303000ULL)),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 16),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -777,16 +777,16 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_get_new_value),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -E2BIG, 4),
 
 			/*     buf[0:3] == "60\0") */
 			BPF_LDX_MEM(BPF_W, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_9,
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_9,
 				    bpf_ntohl(0x36300000), 2),
 
 			/* return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -817,11 +817,11 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_set_new_value),
 
 			/* if (ret == expected) */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -851,11 +851,11 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_sysctl_set_new_value),
 
 			/* if (ret == expected) */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -893,14 +893,14 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtoul),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 3, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 3, 4),
 			/*     res == expected) */
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_9, 600, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_9, 600, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -937,10 +937,10 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtoul),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 3, 18),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 3, 18),
 			/*     res == expected) */
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_9, 600, 16),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_9, 600, 16),
 
 			/*     arg1 (buf) */
 			BPF_MOV64_REG(BPF_REG_7, BPF_REG_10),
@@ -963,14 +963,14 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtoul),
 
 			/*     if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 4, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 4, 4),
 			/*         res == expected) */
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_9, 602, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_9, 602, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1040,14 +1040,14 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtoul),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 3, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 3, 4),
 			/*     res == expected) */
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_9, 63, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_9, 63, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1084,11 +1084,11 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtoul),
 
 			/* if (ret == expected) */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1125,11 +1125,11 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtoul),
 
 			/* if (ret == expected) */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1167,11 +1167,11 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtoul),
 
 			/* if (ret == expected) */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -EINVAL, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1209,14 +1209,14 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtol),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 3, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 3, 4),
 			/*     res == expected) */
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_9, -6, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_9, -6, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1254,14 +1254,14 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtol),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 4, 4),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 4, 4),
 			/*     res == expected) */
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_9, 254, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_9, 254, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1304,15 +1304,15 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtol),
 
 			/* if (ret == expected && */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 19, 6),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 19, 6),
 			/*     res == expected) */
 			BPF_LD_IMM64(BPF_REG_8, 0x7fffffffffffffffULL),
 			BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_7, 0),
-			BPF_JMP_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
+			BPF_JMP64_REG(BPF_JNE, BPF_REG_8, BPF_REG_9, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1355,11 +1355,11 @@ static struct sysctl_test tests[] = {
 			BPF_EMIT_CALL(BPF_FUNC_strtol),
 
 			/* if (ret == expected) */
-			BPF_JMP_IMM(BPF_JNE, BPF_REG_0, -ERANGE, 2),
+			BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, -ERANGE, 2),
 
 			/* return ALLOW; */
 			BPF_MOV64_IMM(BPF_REG_0, 1),
-			BPF_JMP_A(1),
+			BPF_JMP64_A(1),
 
 			/* else return DENY; */
 			BPF_MOV64_IMM(BPF_REG_0, 0),
diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
index 83319c0..6864c95 100644
--- a/tools/testing/selftests/bpf/test_verifier.c
+++ b/tools/testing/selftests/bpf/test_verifier.c
@@ -210,7 +210,7 @@ static void bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
 		insn[i++] = BPF_MOV64_IMM(BPF_REG_3, 2);
 		insn[i++] = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_skb_vlan_push),
-		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
+		insn[i] = BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
 		i++;
 	}
 
@@ -221,7 +221,7 @@ static void bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
 		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_6);
 		insn[i++] = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_skb_vlan_pop),
-		insn[i] = BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
+		insn[i] = BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, len - i - 3);
 		i++;
 	}
 	if (++k < 5)
@@ -229,7 +229,7 @@ static void bpf_fill_ld_abs_vlan_push_pop(struct bpf_test *self)
 
 	for (; i < len - 3; i++)
 		insn[i] = BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0xbef);
-	insn[len - 3] = BPF_JMP_A(1);
+	insn[len - 3] = BPF_JMP64_A(1);
 	/* error label */
 	insn[len - 2] = BPF_MOV32_IMM(BPF_REG_0, 0);
 	insn[len - 1] = BPF_EXIT_INSN();
@@ -250,7 +250,7 @@ static void bpf_fill_jump_around_ld_abs(struct bpf_test *self)
 
 	insn[i++] = BPF_MOV64_REG(BPF_REG_6, BPF_REG_1);
 	insn[i++] = BPF_LD_ABS(BPF_B, 0);
-	insn[i] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 10, len - i - 2);
+	insn[i] = BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 10, len - i - 2);
 	i++;
 	while (i < len - 1)
 		insn[i++] = BPF_LD_ABS(BPF_B, 1);
@@ -296,7 +296,7 @@ static void bpf_fill_scale1(struct bpf_test *self)
 	while (k++ < MAX_JMP_SEQ) {
 		insn[i++] = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_get_prandom_u32);
-		insn[i++] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, bpf_semi_rand_get(), 2);
+		insn[i++] = BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, bpf_semi_rand_get(), 2);
 		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_10);
 		insn[i++] = BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6,
 					-8 * (k % 64 + 1));
@@ -328,7 +328,7 @@ static void bpf_fill_scale2(struct bpf_test *self)
 	while (k++ < MAX_JMP_SEQ) {
 		insn[i++] = BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0,
 					 BPF_FUNC_get_prandom_u32);
-		insn[i++] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, bpf_semi_rand_get(), 2);
+		insn[i++] = BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, bpf_semi_rand_get(), 2);
 		insn[i++] = BPF_MOV64_REG(BPF_REG_1, BPF_REG_10);
 		insn[i++] = BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6,
 					-8 * (k % (64 - 4 * FUNC_NEST) + 1));
@@ -360,8 +360,8 @@ static int bpf_fill_torturous_jumps_insn_1(struct bpf_insn *insn)
 
 	insn[0] = BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32);
 	for (i = 1; i <= hlen; i++) {
-		insn[i]        = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, i, hlen);
-		insn[i + hlen] = BPF_JMP_A(hlen - i);
+		insn[i]        = BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, i, hlen);
+		insn[i + hlen] = BPF_JMP64_A(hlen - i);
 	}
 	insn[len - 2] = BPF_MOV64_IMM(BPF_REG_0, 1);
 	insn[len - 1] = BPF_EXIT_INSN();
@@ -376,12 +376,12 @@ static int bpf_fill_torturous_jumps_insn_2(struct bpf_insn *insn)
 
 	insn[0] = BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32);
 	for (i = 1; i <= jmp_off; i++) {
-		insn[i] = BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, i, jmp_off);
+		insn[i] = BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, i, jmp_off);
 	}
-	insn[i++] = BPF_JMP_A(jmp_off);
+	insn[i++] = BPF_JMP64_A(jmp_off);
 	for (; i <= jmp_off * 2 + 1; i+=16) {
 		for (j = 0; j < 16; j++) {
-			insn[i + j] = BPF_JMP_A(16 - j - 1);
+			insn[i + j] = BPF_JMP64_A(16 - j - 1);
 		}
 	}
 
@@ -494,7 +494,7 @@ static void bpf_fill_big_prog_with_loop_1(struct bpf_test *self)
 		    offsetof(struct __sk_buff, data_end)),		\
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),				\
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8),				\
-	BPF_JMP_REG(BPF_JLE, BPF_REG_4, BPF_REG_3, 1),			\
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_4, BPF_REG_3, 1),		\
 	BPF_EXIT_INSN()
 
 /* BPF_RAND_UEXT_R7 contains 4 instructions, it initializes R7 into a random
diff --git a/tools/testing/selftests/bpf/verifier/and.c b/tools/testing/selftests/bpf/verifier/and.c
index 6edbfe3..b6d7a804 100644
--- a/tools/testing/selftests/bpf/verifier/and.c
+++ b/tools/testing/selftests/bpf/verifier/and.c
@@ -6,7 +6,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, -4),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
@@ -27,7 +27,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 12),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 12),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
 	BPF_ALU32_IMM(BPF_MOD, BPF_REG_1, 2),
diff --git a/tools/testing/selftests/bpf/verifier/array_access.c b/tools/testing/selftests/bpf/verifier/array_access.c
index f32bd8b..69a9ed9 100644
--- a/tools/testing/selftests/bpf/verifier/array_access.c
+++ b/tools/testing/selftests/bpf/verifier/array_access.c
@@ -6,7 +6,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
 	BPF_EXIT_INSN(),
 	},
@@ -23,7 +23,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -44,9 +44,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES, 3),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES, 3),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
@@ -66,12 +66,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_JMP32_IMM(BPF_JSGT, BPF_REG_1, 0xffffffff, 1),
 	BPF_MOV32_IMM(BPF_REG_1, 0),
 	BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
 	BPF_MOV32_IMM(BPF_REG_1, 0),
 	BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -92,7 +92,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, (MAX_ENTRIES + 1) << 2,
 		   offsetof(struct test_val, foo)),
 	BPF_EXIT_INSN(),
@@ -109,7 +109,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, MAX_ENTRIES + 1),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -129,7 +129,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -149,10 +149,10 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
 	BPF_MOV32_IMM(BPF_REG_1, 0),
 	BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -174,10 +174,10 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES + 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
 	BPF_MOV32_IMM(BPF_REG_1, 0),
 	BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -199,14 +199,14 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0,
 		    offsetof(struct test_val, foo)),
@@ -225,7 +225,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -241,7 +241,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
@@ -266,7 +266,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
 	BPF_EXIT_INSN(),
 	},
@@ -283,7 +283,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
@@ -305,7 +305,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -323,7 +323,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
@@ -345,7 +345,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -361,7 +361,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
diff --git a/tools/testing/selftests/bpf/verifier/atomic_and.c b/tools/testing/selftests/bpf/verifier/atomic_and.c
index fe4bb70..2c67275 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_and.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_and.c
@@ -8,12 +8,12 @@
 		BPF_ATOMIC_OP(BPF_DW, BPF_AND, BPF_REG_10, BPF_REG_1, -8),
 		/* if (val != 0x010) exit(2); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x010, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0x010, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
 		BPF_EXIT_INSN(),
 		/* r1 should not be clobbered, no BPF_FETCH flag */
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 	},
@@ -29,16 +29,16 @@
 		BPF_MOV64_IMM(BPF_REG_1, 0x011),
 		BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_10, BPF_REG_1, -8),
 		/* if (old != 0x110) exit(3); */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 3),
 		BPF_EXIT_INSN(),
 		/* if (val != 0x010) exit(2); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 2),
 		BPF_EXIT_INSN(),
 		/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		/* exit(0); */
@@ -84,12 +84,12 @@
 		BPF_MOV64_IMM(BPF_REG_0, 0x011),
 		BPF_ATOMIC_OP(BPF_DW, BPF_AND | BPF_FETCH, BPF_REG_10, BPF_REG_0, -8),
 		/* if (old != 0x110) exit(3); */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x110, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0x110, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 3),
 		BPF_EXIT_INSN(),
 		/* if (val != 0x010) exit(2); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x010, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 2),
 		BPF_EXIT_INSN(),
 		/* exit(0); */
diff --git a/tools/testing/selftests/bpf/verifier/atomic_bounds.c b/tools/testing/selftests/bpf/verifier/atomic_bounds.c
index e82183e..436372f 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_bounds.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_bounds.c
@@ -18,7 +18,7 @@
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD | BPF_FETCH, BPF_REG_10, BPF_REG_1, -8),
 		/* Verifier should be able to tell that this infinite loop isn't reachable. */
 		/* if (b) while (true) continue; */
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, -1),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, -1),
 		BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
diff --git a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
index b39665f..4a334dc 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_cmpxchg.c
@@ -8,12 +8,12 @@
 		BPF_MOV64_IMM(BPF_REG_0, 2),
 		BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_1, -8),
 		/* if (old != 3) exit(2); */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
 		BPF_EXIT_INSN(),
 		/* if (val != 3) exit(3); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 3),
 		BPF_EXIT_INSN(),
 		/* old = atomic_cmpxchg(&val, 3, 4); */
@@ -21,12 +21,12 @@
 		BPF_MOV64_IMM(BPF_REG_0, 3),
 		BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_1, -8),
 		/* if (old != 3) exit(4); */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 3, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 4),
 		BPF_EXIT_INSN(),
 		/* if (val != 4) exit(5); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 5),
 		BPF_EXIT_INSN(),
 		/* exit(0); */
@@ -110,7 +110,7 @@
 		BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 32),
 		BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
 		/* if (r0 != r1) exit(1); */
-		BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 2),
+		BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 2),
 		BPF_MOV32_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		/* exit(0); */
@@ -130,7 +130,7 @@
 		BPF_MOV64_IMM(BPF_REG_1, 1),
 		BPF_ATOMIC_OP(BPF_DW, BPF_CMPXCHG, BPF_REG_10, BPF_REG_1, -8),
 		/* if (r0 != 0) exit(1); */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		/* exit(0); */
diff --git a/tools/testing/selftests/bpf/verifier/atomic_fetch.c b/tools/testing/selftests/bpf/verifier/atomic_fetch.c
index 5bf03fb..6941206 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_fetch.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_fetch.c
@@ -12,7 +12,7 @@
 		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -36,7 +36,7 @@
 		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -60,7 +60,7 @@
 		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -83,7 +83,7 @@
 		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 		BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_EXIT_INSN(),
@@ -104,12 +104,12 @@
 			BPF_ATOMIC_OP(BPF_DW, op,			\
 				      dst_reg, src_reg, -8),		\
 			/* if (old != operand1) exit(1); */		\
-			BPF_JMP_IMM(BPF_JEQ, src_reg, operand1, 2),	\
+			BPF_JMP64_IMM(BPF_JEQ, src_reg, operand1, 2),	\
 			BPF_MOV64_IMM(BPF_REG_0, 1),			\
 			BPF_EXIT_INSN(),				\
 			/* if (val != result) exit (2); */		\
 			BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),	\
-			BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, expect, 2),	\
+			BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, expect, 2),	\
 			BPF_MOV64_IMM(BPF_REG_0, 2),			\
 			BPF_EXIT_INSN(),				\
 			/* exit(0); */					\
diff --git a/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
index a91de8c..5602b5d 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_fetch_add.c
@@ -8,13 +8,13 @@
 		BPF_MOV64_IMM(BPF_REG_1, 1),
 		BPF_ATOMIC_OP(BPF_DW, BPF_ADD | BPF_FETCH, BPF_REG_10, BPF_REG_1, -8),
 		/* Check the value we loaded back was 3 */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		/* Load value from stack */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
 		/* Check value loaded from stack was 4 */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
 		BPF_EXIT_INSN(),
 	},
@@ -30,13 +30,13 @@
 		BPF_MOV32_IMM(BPF_REG_1, 1),
 		BPF_ATOMIC_OP(BPF_W, BPF_ADD | BPF_FETCH, BPF_REG_10, BPF_REG_1, -4),
 		/* Check the value we loaded back was 3 */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		/* Load value from stack */
 		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_10, -4),
 		/* Check value loaded from stack was 4 */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
 		BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/atomic_or.c b/tools/testing/selftests/bpf/verifier/atomic_or.c
index 9d0716a..c1baeb6 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_or.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_or.c
@@ -8,12 +8,12 @@
 		BPF_ATOMIC_OP(BPF_DW, BPF_OR, BPF_REG_10, BPF_REG_1, -8),
 		/* if (val != 0x111) exit(2); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x111, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0x111, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
 		BPF_EXIT_INSN(),
 		/* r1 should not be clobbered, no BPF_FETCH flag */
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 	},
@@ -29,16 +29,16 @@
 		BPF_MOV64_IMM(BPF_REG_1, 0x011),
 		BPF_ATOMIC_OP(BPF_DW, BPF_OR | BPF_FETCH, BPF_REG_10, BPF_REG_1, -8),
 		/* if (old != 0x110) exit(3); */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 3),
 		BPF_EXIT_INSN(),
 		/* if (val != 0x111) exit(2); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x111, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 2),
 		BPF_EXIT_INSN(),
 		/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		/* exit(0); */
@@ -91,7 +91,7 @@
 		BPF_ALU64_IMM(BPF_LSH, BPF_REG_2, 32),
 		BPF_ALU64_IMM(BPF_SUB, BPF_REG_2, 1),
 		/* if (r2 != r1) exit(1); */
-		BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 2),
+		BPF_JMP64_REG(BPF_JEQ, BPF_REG_2, BPF_REG_1, 2),
 		BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 		BPF_EXIT_INSN(),
 		/* exit(0); */
diff --git a/tools/testing/selftests/bpf/verifier/atomic_xchg.c b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
index 33e2d6c..1ec8295 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_xchg.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_xchg.c
@@ -7,12 +7,12 @@
 		BPF_MOV64_IMM(BPF_REG_1, 4),
 		BPF_ATOMIC_OP(BPF_DW, BPF_XCHG, BPF_REG_10, BPF_REG_1, -8),
 		/* if (old != 3) exit(1); */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		/* if (val != 4) exit(2); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 4, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
 		BPF_EXIT_INSN(),
 		/* exit(0); */
diff --git a/tools/testing/selftests/bpf/verifier/atomic_xor.c b/tools/testing/selftests/bpf/verifier/atomic_xor.c
index 74e8fb4..34c8009 100644
--- a/tools/testing/selftests/bpf/verifier/atomic_xor.c
+++ b/tools/testing/selftests/bpf/verifier/atomic_xor.c
@@ -8,12 +8,12 @@
 		BPF_ATOMIC_OP(BPF_DW, BPF_XOR, BPF_REG_10, BPF_REG_1, -8),
 		/* if (val != 0x101) exit(2); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0x101, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0x101, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 2),
 		BPF_EXIT_INSN(),
 		/* r1 should not be clobbered, no BPF_FETCH flag */
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x011, 1),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 	},
@@ -29,16 +29,16 @@
 		BPF_MOV64_IMM(BPF_REG_1, 0x011),
 		BPF_ATOMIC_OP(BPF_DW, BPF_XOR | BPF_FETCH, BPF_REG_10, BPF_REG_1, -8),
 		/* if (old != 0x110) exit(3); */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x110, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 3),
 		BPF_EXIT_INSN(),
 		/* if (val != 0x101) exit(2); */
 		BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -8),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x101, 2),
 		BPF_MOV64_IMM(BPF_REG_1, 2),
 		BPF_EXIT_INSN(),
 		/* Check R0 wasn't clobbered (for fear of x86 JIT bug) */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 123, 2),
 		BPF_MOV64_IMM(BPF_REG_0, 1),
 		BPF_EXIT_INSN(),
 		/* exit(0); */
diff --git a/tools/testing/selftests/bpf/verifier/basic_instr.c b/tools/testing/selftests/bpf/verifier/basic_instr.c
index 071dbc8..2630d8c 100644
--- a/tools/testing/selftests/bpf/verifier/basic_instr.c
+++ b/tools/testing/selftests/bpf/verifier/basic_instr.c
@@ -21,7 +21,7 @@
 	BPF_ALU64_IMM(BPF_OR, BPF_REG_2, 0xffff),
 	BPF_ALU32_REG(BPF_XOR, BPF_REG_2, BPF_REG_2),
 	BPF_MOV32_IMM(BPF_REG_0, 2),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 0, 1),
 	BPF_MOV32_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -96,7 +96,7 @@
 	BPF_LD_IMM64(BPF_REG_0, 1),
 	BPF_LD_IMM64(BPF_REG_1, 1),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 1, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
@@ -110,7 +110,7 @@
 	BPF_LD_IMM64(BPF_REG_1, 0x100000000LL),
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 0),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
@@ -124,7 +124,7 @@
 	BPF_LD_IMM64(BPF_REG_1, 0x100000000LL),
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_1, 0),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
@@ -138,7 +138,7 @@
 	BPF_LD_IMM64(BPF_REG_1, 1),
 	BPF_LD_IMM64(BPF_REG_2, 0),
 	BPF_ALU64_REG(BPF_LSH, BPF_REG_1, BPF_REG_2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 1, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
@@ -153,7 +153,7 @@
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_1),
 	BPF_LD_IMM64(BPF_REG_3, 0),
 	BPF_ALU64_REG(BPF_RSH, BPF_REG_1, BPF_REG_3),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
@@ -168,7 +168,7 @@
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_1),
 	BPF_LD_IMM64(BPF_REG_3, 0),
 	BPF_ALU64_REG(BPF_ARSH, BPF_REG_1, BPF_REG_3),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/bounds.c b/tools/testing/selftests/bpf/verifier/bounds.c
index f822f2b..80066f7 100644
--- a/tools/testing/selftests/bpf/verifier/bounds.c
+++ b/tools/testing/selftests/bpf/verifier/bounds.c
@@ -6,11 +6,11 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 7),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_1, 0xff, 7),
 	BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 0xff, 5),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_3, 0xff, 5),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_3),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 56),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -31,11 +31,11 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0xff, 6),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_1, 0xff, 6),
 	BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, 0xff, 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_3, 0xff, 4),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
@@ -64,7 +64,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_ARG2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_9, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -83,7 +83,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	/* r2 = 0x0000'0000'ffff'ffff */
 	BPF_MOV32_IMM(BPF_REG_2, 0xffffffff),
 	/* r2 = 0 */
@@ -107,7 +107,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	/* r2 = 0xffff'ffff'ffff'ffff */
 	BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
 	/* r2 = 0xffff'ffff */
@@ -132,7 +132,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	/* r2 = 0xffff'ffff'ffff'ffff */
 	BPF_MOV64_IMM(BPF_REG_2, 0xffffffff),
 	/* r2 = 0xfff'ffff */
@@ -159,7 +159,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 29) - 1),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
@@ -183,7 +183,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_6, 1),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, (1 << 30) - 1),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_6),
@@ -205,7 +205,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	/* r1 = [0x00, 0xff] */
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_2, 1),
@@ -238,7 +238,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	/* r1 = [0x00, 0xff] */
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
@@ -272,7 +272,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	/* r1 = [0x00, 0xff] */
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 0xffffff80 >> 1),
@@ -307,7 +307,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	/* r1 = 0x7fff'ffff */
 	BPF_MOV64_IMM(BPF_REG_1, 0x7fffffff),
 	/* r1 = 0xffff'fffe */
@@ -333,7 +333,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_IMM(BPF_REG_2, 32),
 	BPF_MOV64_IMM(BPF_REG_1, 1),
 	/* r1 = (u32)1 << (u32)32 = ? */
@@ -360,7 +360,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	/* r1 = [0x00, 0xff] */
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	/* r1 = [-0x01, 0xfe] */
@@ -389,7 +389,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	/* r1 = 2 */
 	BPF_MOV64_IMM(BPF_REG_1, 2),
 	/* r1 = 1<<32 */
@@ -416,11 +416,11 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7ffffffe),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_JMP_A(0),
+	BPF_JMP64_A(0),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_8b = { 3 },
@@ -435,13 +435,13 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x1fffffff),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_JMP_A(0),
+	BPF_JMP64_A(0),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_8b = { 3 },
@@ -457,12 +457,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 0x1fffffff),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
-	BPF_JMP_A(0),
+	BPF_JMP64_A(0),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_8b = { 3 },
@@ -478,13 +478,13 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 1000000),
 	BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, 1000000),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 2),
-	BPF_JMP_A(0),
+	BPF_JMP64_A(0),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_8b = { 3 },
@@ -503,7 +503,7 @@
 	/* check ALU64 op keeps 32bit bounds */
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
 	BPF_JMP32_IMM(BPF_JGT, BPF_REG_1, 2, 1),
-	BPF_JMP_A(1),
+	BPF_JMP64_A(1),
 	/* invalid ldx if bounds are lost above */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
 	BPF_EXIT_INSN(),
@@ -524,8 +524,8 @@
 	/* r1 = 0x2 */
 	BPF_ALU32_IMM(BPF_ADD, BPF_REG_1, 1),
 	/* check ALU32 op zero extends 64bit bounds */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 1),
-	BPF_JMP_A(1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 1),
+	BPF_JMP64_A(1),
 	/* invalid ldx if bounds are lost above */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, -1),
 	BPF_EXIT_INSN(),
@@ -547,7 +547,7 @@
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_6, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_8, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_8, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_5, BPF_REG_6, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -564,11 +564,11 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -586,7 +586,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV32_IMM(BPF_REG_1, 0),
 	BPF_ALU32_IMM(BPF_XOR, BPF_REG_1, 1),
@@ -608,11 +608,11 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 2),
 	BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_1, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -630,11 +630,11 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -652,7 +652,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU32_IMM(BPF_XOR, BPF_REG_1, 3),
@@ -674,12 +674,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JLE, BPF_REG_1, 0, 3),
 	BPF_ALU64_IMM(BPF_XOR, BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -697,7 +697,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 	BPF_JMP32_IMM(BPF_JLE, BPF_REG_1, 0, 3),
@@ -720,11 +720,11 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	/* This used to reduce the max bound to 0x7fffffff */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 0x7fffffff, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_1, 0x7fffffff, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -741,9 +741,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_1, 1, 1),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_1, 1, 1),
 	BPF_JMP32_IMM(BPF_JSLT, BPF_REG_1, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/bounds_deduction.c b/tools/testing/selftests/bpf/verifier/bounds_deduction.c
index 3931c48..cea299f 100644
--- a/tools/testing/selftests/bpf/verifier/bounds_deduction.c
+++ b/tools/testing/selftests/bpf/verifier/bounds_deduction.c
@@ -2,7 +2,7 @@
 	"check deducing bounds from const, 1",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 1),
-		BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 0),
+		BPF_JMP64_IMM(BPF_JSGE, BPF_REG_0, 1, 0),
 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 		BPF_EXIT_INSN(),
 	},
@@ -14,9 +14,9 @@
 	"check deducing bounds from const, 2",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 1),
-		BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 1),
+		BPF_JMP64_IMM(BPF_JSGE, BPF_REG_0, 1, 1),
 		BPF_EXIT_INSN(),
-		BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 1, 1),
+		BPF_JMP64_IMM(BPF_JSLE, BPF_REG_0, 1, 1),
 		BPF_EXIT_INSN(),
 		BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
 		BPF_EXIT_INSN(),
@@ -30,7 +30,7 @@
 	"check deducing bounds from const, 3",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
+		BPF_JMP64_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 		BPF_EXIT_INSN(),
 	},
@@ -43,9 +43,9 @@
 	.insns = {
 		BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JSLE, BPF_REG_0, 0, 1),
 		BPF_EXIT_INSN(),
-		BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
 		BPF_EXIT_INSN(),
 		BPF_ALU64_REG(BPF_SUB, BPF_REG_6, BPF_REG_0),
 		BPF_EXIT_INSN(),
@@ -58,7 +58,7 @@
 	"check deducing bounds from const, 5",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 1, 1),
+		BPF_JMP64_IMM(BPF_JSGE, BPF_REG_0, 1, 1),
 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 		BPF_EXIT_INSN(),
 	},
@@ -70,7 +70,7 @@
 	"check deducing bounds from const, 6",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
 		BPF_EXIT_INSN(),
 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 		BPF_EXIT_INSN(),
@@ -83,7 +83,7 @@
 	"check deducing bounds from const, 7",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, ~0),
-		BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
+		BPF_JMP64_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
 		BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
 		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 			    offsetof(struct __sk_buff, mark)),
@@ -98,7 +98,7 @@
 	"check deducing bounds from const, 8",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, ~0),
-		BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JSGE, BPF_REG_0, 0, 1),
 		BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
 		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 			    offsetof(struct __sk_buff, mark)),
@@ -113,7 +113,7 @@
 	"check deducing bounds from const, 9",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
+		BPF_JMP64_IMM(BPF_JSGE, BPF_REG_0, 0, 0),
 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 		BPF_EXIT_INSN(),
 	},
@@ -125,7 +125,7 @@
 	"check deducing bounds from const, 10",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
+		BPF_JMP64_IMM(BPF_JSLE, BPF_REG_0, 0, 0),
 		/* Marks reg as unknown. */
 		BPF_ALU64_IMM(BPF_NEG, BPF_REG_0, 0),
 		BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
diff --git a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c b/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
index 47b56b0ed..839e316 100644
--- a/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
+++ b/tools/testing/selftests/bpf/verifier/bounds_mix_sign_unsign.c
@@ -6,12 +6,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, 2),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 4, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 3),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 4, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -29,12 +29,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -52,14 +52,14 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_8, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 1, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_8, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
 	BPF_ST_MEM(BPF_B, BPF_REG_8, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -77,13 +77,13 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 4),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 1, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_8, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
 	BPF_ST_MEM(BPF_B, BPF_REG_8, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -101,12 +101,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, 1),
 	BPF_ALU64_REG(BPF_AND, BPF_REG_1, BPF_REG_2),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -123,12 +123,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 5),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 4),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 4),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
@@ -148,8 +148,8 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_6, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_6, 5),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_4, 1, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_6, 5),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_4, 1, 4),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 1),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
 	BPF_ST_MEM(BPF_H, BPF_REG_10, -512, 0),
@@ -168,12 +168,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, 1024 * 1024 * 1024),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, 3),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -190,14 +190,14 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -215,14 +215,14 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_LD_IMM64(BPF_REG_2, -9223372036854775808ULL),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -239,14 +239,14 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -264,15 +264,15 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, -1),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
 	/* Dead branch. */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -290,14 +290,14 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, -6),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -315,17 +315,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, 2),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
 	BPF_MOV64_IMM(BPF_REG_7, 1),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, 0, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_7, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, 4, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_7, 4, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_7),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -345,20 +345,20 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, -1),
 	BPF_MOV64_IMM(BPF_REG_8, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_9, 42, 6),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_8, BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_9, 42, 6),
+	BPF_JMP64_REG(BPF_JSGT, BPF_REG_8, BPF_REG_1, 3),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, -3),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -7),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_2, -3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -7),
 	},
 	.fixup_map_hash_8b = { 4 },
 	.errstr = "unbounded min value",
@@ -372,15 +372,15 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -16, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_10, -16),
 	BPF_MOV64_IMM(BPF_REG_2, -6),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_2, BPF_REG_1, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_0, 1, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_B, BPF_REG_0, 0, 0),
diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
index 6c02db4..5c51c09 100644
--- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
+++ b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
@@ -7,7 +7,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 28),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 28),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_9, sizeof(struct test_val)/2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
@@ -19,7 +19,7 @@
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
 	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_8, 32),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16),
+	BPF_JMP64_REG(BPF_JSGT, BPF_REG_1, BPF_REG_8, 16),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_8),
@@ -31,7 +31,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_MOV64_IMM(BPF_REG_5, sizeof(struct test_val)/2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_5),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 4),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_9),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
@@ -53,10 +53,10 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -66,7 +66,7 @@
 	BPF_MOV64_IMM(BPF_REG_3, 48),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_EMIT_CALL(BPF_FUNC_get_task_stack),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
diff --git a/tools/testing/selftests/bpf/verifier/bpf_loop_inline.c b/tools/testing/selftests/bpf/verifier/bpf_loop_inline.c
index 31c71dc..ac62f40 100644
--- a/tools/testing/selftests/bpf/verifier/bpf_loop_inline.c
+++ b/tools/testing/selftests/bpf/verifier/bpf_loop_inline.c
@@ -38,9 +38,9 @@
 	 * subsequent bpf_loop insn processing steps
 	 */
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 777, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 777, 2),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 2),
 
 	BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 6),
@@ -71,9 +71,9 @@
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_7, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 9),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 0, 9),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 0),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 0, 0),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
 	BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 7),
 	BPF_RAW_INSN(0, 0, 0, 0, 0),
@@ -82,7 +82,7 @@
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_4, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -10),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -10),
 	/* callback */
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -101,12 +101,12 @@
 	.insns = {
 	/* main */
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_jiffies64),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 777, 4), /* pick a random callback */
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 777, 4), /* pick a random callback */
 
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
 	BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 10),
 	BPF_RAW_INSN(0, 0, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 
 	BPF_ALU64_IMM(BPF_MOV, BPF_REG_1, 1),
 	BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, BPF_REG_2, BPF_PSEUDO_FUNC, 0, 8),
diff --git a/tools/testing/selftests/bpf/verifier/calls.c b/tools/testing/selftests/bpf/verifier/calls.c
index 986bf68..4e0e559 100644
--- a/tools/testing/selftests/bpf/verifier/calls.c
+++ b/tools/testing/selftests/bpf/verifier/calls.c
@@ -13,7 +13,7 @@
 	"calls: invalid kfunc call unreachable",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_0, 0, 2),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -122,7 +122,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -143,7 +143,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
@@ -165,7 +165,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, 16),
@@ -190,15 +190,15 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, 4),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_2, 4, 3),
+	BPF_JMP64_IMM(BPF_JLE, BPF_REG_2, 4, 3),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 3),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 3),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -225,7 +225,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -252,7 +252,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -307,7 +307,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -332,13 +332,13 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, data)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV32_IMM(BPF_REG_0, 42),
 	BPF_EXIT_INSN(),
 	},
@@ -356,20 +356,20 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, data)),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6,
 		    offsetof(struct __sk_buff, data)),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 64),
@@ -394,8 +394,8 @@
 {
 	"calls: wrong recursive calls",
 	.insns = {
-	BPF_JMP_IMM(BPF_JA, 0, 0, 4),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 4),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 4),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 4),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -2),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -2),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -2),
@@ -446,7 +446,7 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -462,7 +462,7 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -479,14 +479,14 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -6),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -6),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -6),
 	},
 	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
 	.errstr_unpriv = "back-edge from insn",
@@ -499,12 +499,12 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -5),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
 	BPF_EXIT_INSN(),
 	},
@@ -516,12 +516,12 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -6),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
 	BPF_EXIT_INSN(),
 	},
@@ -535,7 +535,7 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, -3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, -3),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
@@ -632,7 +632,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_8, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_8, BPF_REG_7, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_8, BPF_REG_7, 2),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
 	/* clear_all_pkt_pointers() has to walk all frames
 	 * to make sure that pkt pointers in the caller
@@ -662,11 +662,11 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -740,10 +740,10 @@
 	"calls: calls control flow, jump test",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 42),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 43),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -3),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -754,9 +754,9 @@
 	"calls: calls control flow, jump test 2",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 42),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 43),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -3),
 	BPF_EXIT_INSN(),
 	},
@@ -779,7 +779,7 @@
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, len)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, -3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, -3),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
@@ -856,7 +856,7 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, -3),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
@@ -866,7 +866,7 @@
 {
 	"calls: jumping across function bodies. test2",
 	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -884,7 +884,7 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, -2),
 	},
 	.prog_type = BPF_PROG_TYPE_TRACEPOINT,
 	.errstr = "not an exit",
@@ -1107,17 +1107,17 @@
 	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 6), /* call A */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 8), /* call B */
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_6, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_6, 0, 1),
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -64, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	/* A */
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 10, 1),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_1, 10, 1),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -224, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -3),
 	/* B */
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 2, 1),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_1, 2, 1),
 	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, -6), /* call A */
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -256, 0),
 	BPF_EXIT_INSN(),
@@ -1160,13 +1160,13 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	/* A */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 2),
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -300, 0),
 	BPF_EXIT_INSN(),
 	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call B */
 	BPF_EXIT_INSN(),
 	/* B */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_ST_MEM(BPF_B, BPF_REG_10, -300, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -1217,7 +1217,7 @@
 	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 1), /* call A */
 	BPF_EXIT_INSN(),
 	/* A */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_RAW_INSN(BPF_JMP64|BPF_CALL, 0, 1, 0, 2), /* call B */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1333,7 +1333,7 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -1355,12 +1355,12 @@
 
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	/* write into map value */
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 	/* fetch second map_value_ptr from the stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -16),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	/* write into map value */
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1413,7 +1413,7 @@
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
 	/* first time with fp-8 */
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 9),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	/* write into map value */
@@ -1421,7 +1421,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	/* second time with fp-16 */
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	/* fetch second map_value_ptr from the stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
 	/* write into map value */
@@ -1436,7 +1436,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(), /* return 0 */
 	/* write map_value_ptr into stack frame of main prog */
@@ -1467,7 +1467,7 @@
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
 	/* first time with fp-8 */
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 9),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_6, 0),
 	/* write into map value */
@@ -1475,7 +1475,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	/* second time with fp-16 */
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	/* fetch second map_value_ptr from the stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
 	/* write into map value */
@@ -1490,7 +1490,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(), /* return 0 */
 	/* write map_value_ptr into stack frame of main prog */
@@ -1525,9 +1525,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	/* write map_value_ptr into stack frame of main prog at fp-8 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
@@ -1538,9 +1538,9 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, /* 24 */
 		     BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	/* write map_value_ptr into stack frame of main prog at fp-16 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
@@ -1555,14 +1555,14 @@
 
 	/* subprog 2 */
 	/* if arg2 == 1 do *arg1 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
 	/* write into map value */
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 
 	/* if arg4 == 1 do *arg3 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_4, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_4, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
 	/* write into map value */
@@ -1597,9 +1597,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	/* write map_value_ptr into stack frame of main prog at fp-8 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
@@ -1610,9 +1610,9 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, /* 24 */
 		     BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	/* write map_value_ptr into stack frame of main prog at fp-16 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
@@ -1627,14 +1627,14 @@
 
 	/* subprog 2 */
 	/* if arg2 == 1 do *arg1 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
 	/* write into map value */
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 
 	/* if arg4 == 1 do *arg3 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_4, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_4, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
 	/* write into map value */
@@ -1654,7 +1654,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 
@@ -1667,9 +1667,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -24),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	/* write map_value_ptr into stack frame of main prog at fp-8 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
@@ -1679,9 +1679,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -24),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_9, 0),  // 26
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	/* write map_value_ptr into stack frame of main prog at fp-16 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
@@ -1691,24 +1691,24 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_7),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_9),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1), // 34
-	BPF_JMP_IMM(BPF_JA, 0, 0, -30),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1), // 34
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -30),
 
 	/* subprog 2 */
 	/* if arg2 == 1 do *arg1 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
 	/* write into map value */
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 
 	/* if arg4 == 1 do *arg3 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_4, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_4, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
 	/* write into map value */
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -8),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -8),
 	},
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
 	.fixup_map_hash_8b = { 12, 22 },
@@ -1740,9 +1740,9 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr_or_null into stack frame of main prog at fp-8 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
 
 	/* 2nd lookup from map */
@@ -1752,9 +1752,9 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr_or_null into stack frame of main prog at fp-16 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
 
 	/* call 3rd func with fp-8, 0|1, fp-16, 0|1 */
@@ -1767,14 +1767,14 @@
 
 	/* subprog 2 */
 	/* if arg2 == 1 do *arg1 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
 	/* write into map value */
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 
 	/* if arg4 == 1 do *arg3 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_4, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_4, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
 	/* write into map value */
@@ -1809,9 +1809,9 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr_or_null into stack frame of main prog at fp-8 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
 
 	/* 2nd lookup from map */
@@ -1821,9 +1821,9 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	/* write map_value_ptr_or_null into stack frame of main prog at fp-16 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
 
 	/* call 3rd func with fp-8, 0|1, fp-16, 0|1 */
@@ -1836,14 +1836,14 @@
 
 	/* subprog 2 */
 	/* if arg2 == 1 do *arg1 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 1, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
 	/* write into map value */
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 
 	/* if arg4 == 0 do *arg3 = 0 */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_4, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_4, 0, 2),
 	/* fetch map_value_ptr from the stack of this function */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
 	/* write into map value */
@@ -1872,7 +1872,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	/* spill unchecked pkt_ptr into stack of caller */
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
 	/* now the pkt range is verified, read pkt_ptr from stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_4, 0),
 	/* write 4 bytes into packet */
@@ -1904,7 +1904,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	/* spill unchecked pkt_ptr into stack of caller */
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
 	/* now the pkt range is verified, read pkt_ptr from stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_4, 0),
 	/* write 4 bytes into packet */
@@ -1922,7 +1922,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	/* Marking is still kept and safe here. */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_ST_MEM(BPF_W, BPF_REG_4, 0, 0),
@@ -1938,7 +1938,7 @@
 	/* spill unchecked pkt_ptr into stack of caller */
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
 	BPF_MOV64_IMM(BPF_REG_5, 1),
 	/* now the pkt range is verified, read pkt_ptr from stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_4, 0),
@@ -1958,7 +1958,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	/* Check marking propagated. */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_ST_MEM(BPF_W, BPF_REG_4, 0, 0),
@@ -1974,7 +1974,7 @@
 	/* spill unchecked pkt_ptr into stack of caller */
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
 	BPF_MOV64_IMM(BPF_REG_5, 1),
 	/* don't read back pkt_ptr from stack here */
 	/* write 4 bytes into packet */
@@ -2006,7 +2006,7 @@
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
 	/* spill checked pkt_ptr into stack of caller */
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 1),
@@ -2042,7 +2042,7 @@
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
 	/* spill checked pkt_ptr into stack of caller */
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 1),
@@ -2077,7 +2077,7 @@
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
 	/* spill checked pkt_ptr into stack of caller */
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 1),
@@ -2101,7 +2101,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
@@ -2119,7 +2119,7 @@
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
 	/* spill checked pkt_ptr into stack of caller */
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 1),
@@ -2142,7 +2142,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
@@ -2162,7 +2162,7 @@
 	BPF_MOV64_IMM(BPF_REG_5, 0),
 	/* spill unchecked pkt_ptr into stack of caller */
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_2, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
 	BPF_MOV64_IMM(BPF_REG_5, 1),
 	/* don't read back pkt_ptr from stack here */
 	/* write 4 bytes into packet */
@@ -2185,14 +2185,14 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	/* fetch map_value_or_null or const_zero from stack */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	/* store into map_value */
 	BPF_ST_MEM(BPF_W, BPF_REG_0, 0, 0),
 	BPF_EXIT_INSN(),
 
 	/* subprog 1 */
 	/* if (ctx == 0) return; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 8),
 	/* else bpf_map_lookup() and *(fp - 8) = r0 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
@@ -2221,9 +2221,9 @@
 	 * that fp-8 stack slot was unused in the fall-through
 	 * branch and will accept the program incorrectly
 	 */
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_1, 2, 2),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_1, 2, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
@@ -2240,7 +2240,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 5),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_0, BPF_REG_0, 0),
+	BPF_JMP64_REG(BPF_JSGT, BPF_REG_0, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -2264,15 +2264,15 @@
 	 */
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_1, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
@@ -2285,19 +2285,19 @@
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_SOCKET_FILTER,
@@ -2356,7 +2356,7 @@
 	 *                         ; only in 'insn_idx'
 	 * r9 = r8
 	 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 1),
 	BPF_MOV64_REG(BPF_REG_9, BPF_REG_8),
 	/* r9 = *r9                ; verifier gets to this point via two paths:
 	 *                         ; (I) one including r9 = r8, verified first;
@@ -2369,7 +2369,7 @@
 	 * if r9 == 0 goto <exit>
 	 */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_9, BPF_REG_9, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_9, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_9, 0, 1),
 	/* r8 = *r8                ; read map value via r8, this is not safe
 	 * r0 = *r8                ; because r8 might be not equal to r9.
 	 */
diff --git a/tools/testing/selftests/bpf/verifier/cfg.c b/tools/testing/selftests/bpf/verifier/cfg.c
index 4eb76ed..54f7b02 100644
--- a/tools/testing/selftests/bpf/verifier/cfg.c
+++ b/tools/testing/selftests/bpf/verifier/cfg.c
@@ -10,8 +10,8 @@
 {
 	"unreachable2",
 	.insns = {
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "unreachable",
@@ -20,7 +20,7 @@
 {
 	"out of range jump",
 	.insns = {
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "jump out of range",
@@ -29,7 +29,7 @@
 {
 	"out of range jump2",
 	.insns = {
-	BPF_JMP_IMM(BPF_JA, 0, 0, -2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -2),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "jump out of range",
@@ -38,7 +38,7 @@
 {
 	"loop (back-edge)",
 	.insns = {
-	BPF_JMP_IMM(BPF_JA, 0, 0, -1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -1),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "unreachable insn 1",
@@ -51,7 +51,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -4),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -4),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "unreachable insn 4",
@@ -64,7 +64,7 @@
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, -3),
 	BPF_EXIT_INSN(),
 	},
 	.errstr = "infinite loop detected",
diff --git a/tools/testing/selftests/bpf/verifier/cgroup_skb.c b/tools/testing/selftests/bpf/verifier/cgroup_skb.c
index 52e4c03b..e2eb2bb 100644
--- a/tools/testing/selftests/bpf/verifier/cgroup_skb.c
+++ b/tools/testing/selftests/bpf/verifier/cgroup_skb.c
@@ -21,7 +21,7 @@
 		    offsetof(struct __sk_buff, vlan_present)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/ctx_sk_msg.c b/tools/testing/selftests/bpf/verifier/ctx_sk_msg.c
index c6c6922..bcc73a7 100644
--- a/tools/testing/selftests/bpf/verifier/ctx_sk_msg.c
+++ b/tools/testing/selftests/bpf/verifier/ctx_sk_msg.c
@@ -134,7 +134,7 @@
 		    offsetof(struct sk_msg_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -151,7 +151,7 @@
 		    offsetof(struct sk_msg_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -168,10 +168,10 @@
 		    offsetof(struct sk_msg_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_2, 6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/ctx_skb.c b/tools/testing/selftests/bpf/verifier/ctx_skb.c
index fc55789..bde7240 100644
--- a/tools/testing/selftests/bpf/verifier/ctx_skb.c
+++ b/tools/testing/selftests/bpf/verifier/ctx_skb.c
@@ -3,28 +3,28 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, len)),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, pkt_type)),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, queue_mapping)),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 0),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, protocol)),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 0),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, vlan_present)),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 0),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, vlan_tci)),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 0),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, napi_id)),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 0),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -41,13 +41,13 @@
 {
 	"access skb fields bad2",
 	.insns = {
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 9),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 9),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
@@ -62,7 +62,7 @@
 {
 	"access skb fields bad3",
 	.insns = {
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 2),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, pkt_type)),
 	BPF_EXIT_INSN(),
@@ -71,10 +71,10 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -12),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -12),
 	},
 	.fixup_map_hash_8b = { 6 },
 	.errstr = "different pointers",
@@ -84,7 +84,7 @@
 {
 	"access skb fields bad4",
 	.insns = {
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 3),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
 		    offsetof(struct __sk_buff, len)),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -94,10 +94,10 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -13),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -13),
 	},
 	.fixup_map_hash_8b = { 7 },
 	.errstr = "different pointers",
@@ -321,7 +321,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -338,7 +338,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -355,10 +355,10 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_2, 6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -907,12 +907,12 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, cb[4])),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, tc_index)),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 1),
 	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
 		    offsetof(struct __sk_buff, cb[0])),
 	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
@@ -1150,44 +1150,44 @@
 	.result = REJECT,
 },
 {
-       "pkt > pkt_end taken check",
-       .insns = {
-       BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,                //  0. r2 = *(u32 *)(r1 + data_end)
-                   offsetof(struct __sk_buff, data_end)),
-       BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,                //  1. r4 = *(u32 *)(r1 + data)
-                   offsetof(struct __sk_buff, data)),
-       BPF_MOV64_REG(BPF_REG_3, BPF_REG_4),                    //  2. r3 = r4
-       BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 42),                  //  3. r3 += 42
-       BPF_MOV64_IMM(BPF_REG_1, 0),                            //  4. r1 = 0
-       BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 2),          //  5. if r3 > r2 goto 8
-       BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 14),                  //  6. r4 += 14
-       BPF_MOV64_REG(BPF_REG_1, BPF_REG_4),                    //  7. r1 = r4
-       BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 1),          //  8. if r3 > r2 goto 10
-       BPF_LDX_MEM(BPF_H, BPF_REG_2, BPF_REG_1, 9),            //  9. r2 = *(u8 *)(r1 + 9)
-       BPF_MOV64_IMM(BPF_REG_0, 0),                            // 10. r0 = 0
-       BPF_EXIT_INSN(),                                        // 11. exit
-       },
-       .result = ACCEPT,
-       .prog_type = BPF_PROG_TYPE_SK_SKB,
-},
-{
-       "pkt_end < pkt taken check",
-       .insns = {
-       BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,                //  0. r2 = *(u32 *)(r1 + data_end)
-                   offsetof(struct __sk_buff, data_end)),
-       BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,                //  1. r4 = *(u32 *)(r1 + data)
-                   offsetof(struct __sk_buff, data)),
-       BPF_MOV64_REG(BPF_REG_3, BPF_REG_4),                    //  2. r3 = r4
-       BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 42),                  //  3. r3 += 42
-       BPF_MOV64_IMM(BPF_REG_1, 0),                            //  4. r1 = 0
-       BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 2),          //  5. if r3 > r2 goto 8
-       BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 14),                  //  6. r4 += 14
-       BPF_MOV64_REG(BPF_REG_1, BPF_REG_4),                    //  7. r1 = r4
-       BPF_JMP_REG(BPF_JLT, BPF_REG_2, BPF_REG_3, 1),          //  8. if r2 < r3 goto 10
-       BPF_LDX_MEM(BPF_H, BPF_REG_2, BPF_REG_1, 9),            //  9. r2 = *(u8 *)(r1 + 9)
-       BPF_MOV64_IMM(BPF_REG_0, 0),                            // 10. r0 = 0
-       BPF_EXIT_INSN(),                                        // 11. exit
-       },
-       .result = ACCEPT,
-       .prog_type = BPF_PROG_TYPE_SK_SKB,
+	"pkt > pkt_end taken check",
+	.insns = {
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,                //  0. r2 = *(u32 *)(r1 + data_end)
+		    offsetof(struct __sk_buff, data_end)),
+	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,                //  1. r4 = *(u32 *)(r1 + data)
+		    offsetof(struct __sk_buff, data)),
+	BPF_MOV64_REG(BPF_REG_3, BPF_REG_4),                    //  2. r3 = r4
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 42),                  //  3. r3 += 42
+	BPF_MOV64_IMM(BPF_REG_1, 0),                            //  4. r1 = 0
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 2),        //  5. if r3 > r2 goto 8
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 14),                  //  6. r4 += 14
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_4),                    //  7. r1 = r4
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 1),        //  8. if r3 > r2 goto 10
+	BPF_LDX_MEM(BPF_H, BPF_REG_2, BPF_REG_1, 9),            //  9. r2 = *(u8 *)(r1 + 9)
+	BPF_MOV64_IMM(BPF_REG_0, 0),                            // 10. r0 = 0
+	BPF_EXIT_INSN(),                                        // 11. exit
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_SK_SKB,
+},
+{
+	"pkt_end < pkt taken check",
+	.insns = {
+	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,                //  0. r2 = *(u32 *)(r1 + data_end)
+		    offsetof(struct __sk_buff, data_end)),
+	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1,                //  1. r4 = *(u32 *)(r1 + data)
+		    offsetof(struct __sk_buff, data)),
+	BPF_MOV64_REG(BPF_REG_3, BPF_REG_4),                    //  2. r3 = r4
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 42),                  //  3. r3 += 42
+	BPF_MOV64_IMM(BPF_REG_1, 0),                            //  4. r1 = 0
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 2),        //  5. if r3 > r2 goto 8
+	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 14),                  //  6. r4 += 14
+	BPF_MOV64_REG(BPF_REG_1, BPF_REG_4),                    //  7. r1 = r4
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_2, BPF_REG_3, 1),        //  8. if r2 < r3 goto 10
+	BPF_LDX_MEM(BPF_H, BPF_REG_2, BPF_REG_1, 9),            //  9. r2 = *(u8 *)(r1 + 9)
+	BPF_MOV64_IMM(BPF_REG_0, 0),                            // 10. r0 = 0
+	BPF_EXIT_INSN(),                                        // 11. exit
+	},
+	.result = ACCEPT,
+	.prog_type = BPF_PROG_TYPE_SK_SKB,
 },
diff --git a/tools/testing/selftests/bpf/verifier/dead_code.c b/tools/testing/selftests/bpf/verifier/dead_code.c
index 6cd9d1b..57d9d07 100644
--- a/tools/testing/selftests/bpf/verifier/dead_code.c
+++ b/tools/testing/selftests/bpf/verifier/dead_code.c
@@ -1,11 +1,11 @@
 {
 	"dead code: start",
 	.insns = {
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 7),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, -4),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 10, -4),
 	BPF_EXIT_INSN(),
 	},
 	.errstr_unpriv = "R9 !read_ok",
@@ -17,8 +17,8 @@
 	"dead code: mid 1",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 7),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 0),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 10, 0),
 	BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -28,9 +28,9 @@
 	"dead code: mid 2",
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 4),
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_0, 1, 4),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_0, 1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 7),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -43,7 +43,7 @@
 	"dead code: end 1",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 7),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 10, 1),
 	BPF_EXIT_INSN(),
 	BPF_EXIT_INSN(),
 	},
@@ -54,7 +54,7 @@
 	"dead code: end 2",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 7),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 10, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 12),
 	BPF_EXIT_INSN(),
@@ -66,12 +66,12 @@
 	"dead code: end 3",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 7),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 8, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 8, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 10, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 10, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 12),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -5),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -5),
 	},
 	.result = ACCEPT,
 	.retval = 7,
@@ -80,7 +80,7 @@
 	"dead code: tail of main + func",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 7),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 8, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 8, 1),
 	BPF_EXIT_INSN(),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
@@ -96,7 +96,7 @@
 	"dead code: tail of main + two functions",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 7),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 8, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 8, 1),
 	BPF_EXIT_INSN(),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
@@ -119,7 +119,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 12),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 7),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 7, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 7, 1),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -5),
 	BPF_EXIT_INSN(),
 	},
@@ -132,7 +132,7 @@
 	"dead code: middle of main before call",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 2),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 2, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 2, 1),
 	BPF_MOV64_IMM(BPF_REG_1, 5),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
@@ -150,7 +150,7 @@
 	BPF_MOV64_IMM(BPF_REG_1, 2),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
 	BPF_EXIT_INSN(),
 	},
@@ -164,7 +164,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -4),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -4),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/direct_packet_access.c b/tools/testing/selftests/bpf/verifier/direct_packet_access.c
index 46bf9e6..609963b 100644
--- a/tools/testing/selftests/bpf/verifier/direct_packet_access.c
+++ b/tools/testing/selftests/bpf/verifier/direct_packet_access.c
@@ -21,7 +21,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -39,7 +39,7 @@
 		    offsetof(struct __sk_buff, data)),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 14),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 15),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 15),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 7),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_3, 12),
 	BPF_ALU64_IMM(BPF_MUL, BPF_REG_4, 14),
@@ -55,7 +55,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1,
 		    offsetof(struct __sk_buff, data_end)),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_3, 4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -84,7 +84,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -101,7 +101,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
@@ -120,7 +120,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -140,7 +140,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 3),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -161,8 +161,8 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 4),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -182,10 +182,10 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -203,7 +203,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
@@ -223,7 +223,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 144),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
@@ -248,7 +248,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 144),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
@@ -273,13 +273,13 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 13),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 13),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
 	BPF_MOV64_IMM(BPF_REG_4, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 14),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_3, 24),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 23),
@@ -304,7 +304,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 22),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 7),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 7),
 	BPF_MOV64_IMM(BPF_REG_5, 12),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_5, 4),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
@@ -328,7 +328,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 8),
 	BPF_MOV64_IMM(BPF_REG_5, 4096),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -8),
@@ -354,7 +354,7 @@
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 16),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -374,13 +374,13 @@
 		    offsetof(struct __sk_buff, mark)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 14),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 1, 4),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_7, 1, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_A(-6),
+	BPF_JMP64_A(-6),
 	},
 	.errstr = "misaligned packet access off 2+(0x0; 0x0)+15+-4 size 4",
 	.result = REJECT,
@@ -396,7 +396,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_IMM(BPF_REG_0, 8),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -413,7 +413,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 3),
 	BPF_MOV64_IMM(BPF_REG_4, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
 	BPF_STX_MEM(BPF_B, BPF_REG_4, BPF_REG_4, 0),
@@ -438,7 +438,7 @@
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -456,7 +456,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 9),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 9),
 	BPF_MOV64_IMM(BPF_REG_4, 0xffffffff),
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
@@ -464,7 +464,7 @@
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0x7fff - 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -485,7 +485,7 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_3, -16),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -16),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 11),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 11),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8),
 	BPF_MOV64_IMM(BPF_REG_4, 0xffffffff),
 	BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_4, -8),
@@ -494,7 +494,7 @@
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 2),
 	BPF_MOV64_IMM(BPF_REG_2, 1),
 	BPF_STX_MEM(BPF_H, BPF_REG_4, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -522,7 +522,7 @@
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0xffff - 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -549,7 +549,7 @@
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 0x7fff - 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_DW, BPF_REG_5, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -567,11 +567,11 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -4),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -4),
 	},
 	.result = ACCEPT,
 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
@@ -585,11 +585,11 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 3),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_0, BPF_REG_3, 3),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -3),
 	},
 	.result = REJECT,
 	.errstr = "invalid access to packet",
@@ -604,7 +604,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -622,11 +622,11 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 2),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -4),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -4),
 	},
 	.result = REJECT,
 	.errstr = "invalid access to packet",
@@ -642,12 +642,12 @@
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_6, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -675,8 +675,8 @@
 	/* if r6 > 100 goto exit
 	 * if r7 > 100 goto exit
 	 */
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_6, 100, 9),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 100, 8),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_6, 100, 9),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_7, 100, 8),
 	/* r2 += r6              ; this forces assignment of ID to r2
 	 * r2 += 1               ; get some fixed off for r2
 	 * r3 += r7              ; this forces assignment of ID to r3
@@ -691,10 +691,10 @@
 	 *                       ; only in 'insn_idx'
 	 * r2 = r3               ; optionally share ID between r2 and r3
 	 */
-	BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_7, 1),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_6, BPF_REG_7, 1),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_3),
 	/* if r3 > ctx->data_end goto exit */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_4, 1),
 	/* r5 = *(u8 *) (r2 - 1) ; access packet memory using r2,
 	 *                       ; this is not always safe
 	 */
diff --git a/tools/testing/selftests/bpf/verifier/div_overflow.c b/tools/testing/selftests/bpf/verifier/div_overflow.c
index acab4f0..96f5277 100644
--- a/tools/testing/selftests/bpf/verifier/div_overflow.c
+++ b/tools/testing/selftests/bpf/verifier/div_overflow.c
@@ -32,7 +32,7 @@
 	BPF_LD_IMM64(BPF_REG_2, LLONG_MIN),
 	BPF_ALU64_REG(BPF_DIV, BPF_REG_2, BPF_REG_1),
 	BPF_MOV32_IMM(BPF_REG_0, 0),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 1),
 	BPF_MOV32_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -46,7 +46,7 @@
 	BPF_LD_IMM64(BPF_REG_1, LLONG_MIN),
 	BPF_ALU64_IMM(BPF_DIV, BPF_REG_1, -1),
 	BPF_MOV32_IMM(BPF_REG_0, 0),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_1, 1),
 	BPF_MOV32_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -85,7 +85,7 @@
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
 	BPF_ALU64_REG(BPF_MOD, BPF_REG_2, BPF_REG_1),
 	BPF_MOV32_IMM(BPF_REG_0, 0),
-	BPF_JMP_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1),
 	BPF_MOV32_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -100,7 +100,7 @@
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_MOD, BPF_REG_2, -1),
 	BPF_MOV32_IMM(BPF_REG_0, 0),
-	BPF_JMP_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_3, BPF_REG_2, 1),
 	BPF_MOV32_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
index 37be14d..7f665e8 100644
--- a/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
+++ b/tools/testing/selftests/bpf/verifier/helper_access_var_len.c
@@ -17,7 +17,7 @@
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 64),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -53,7 +53,7 @@
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_2, 65),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -80,9 +80,9 @@
 	BPF_MOV64_IMM(BPF_REG_2, 16),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 64, 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_2, 64, 4),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -108,9 +108,9 @@
 	BPF_MOV64_IMM(BPF_REG_2, 16),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, 64, 4),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_2, 64, 4),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -127,9 +127,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 64, 5),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_2, 64, 5),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 3),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 3),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
@@ -148,9 +148,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 65, 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_2, 65, 4),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -169,7 +169,7 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -188,7 +188,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 64, 3),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_2, 64, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -206,7 +206,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -64),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -128),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, 64, 3),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_2, 64, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -224,14 +224,14 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, sizeof(struct test_val)),
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val), 4),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val), 4),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -250,14 +250,14 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val) + 1, 4),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val) + 1, 4),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -276,15 +276,15 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 20),
 	BPF_MOV64_IMM(BPF_REG_2, sizeof(struct test_val)),
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val) - 20, 4),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val) - 20, 4),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -303,15 +303,15 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 20),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -128),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -128),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val) - 19, 4),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_2, sizeof(struct test_val) - 19, 4),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JSGE, BPF_REG_4, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -379,7 +379,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -400,9 +400,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 7),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_2, 8, 7),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_2, 0),
@@ -424,10 +424,10 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_2, 8, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
@@ -447,10 +447,10 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 7),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 7),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_6, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_2, 8, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
@@ -509,7 +509,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -528,9 +528,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_2, 8, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -549,10 +549,10 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_2, 8, 2),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_2, 8, 2),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_EMIT_CALL(BPF_FUNC_probe_read_kernel),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/helper_packet_access.c b/tools/testing/selftests/bpf/verifier/helper_packet_access.c
index 926ef8d..a7ce6b9 100644
--- a/tools/testing/selftests/bpf/verifier/helper_packet_access.c
+++ b/tools/testing/selftests/bpf/verifier/helper_packet_access.c
@@ -6,7 +6,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 5),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 5),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
@@ -41,13 +41,13 @@
 			offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 10),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 10),
 	BPF_LDX_MEM(BPF_B, BPF_REG_5, BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_5),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_4),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
@@ -66,7 +66,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
@@ -88,7 +88,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -108,7 +108,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 5),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 5),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
@@ -144,13 +144,13 @@
 			offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 10),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 10),
 	BPF_LDX_MEM(BPF_B, BPF_REG_5, BPF_REG_2, 0),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_4, BPF_REG_5),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_4),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_4),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
@@ -170,7 +170,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 4),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
@@ -193,7 +193,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -214,7 +214,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_7, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_7, 4),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 42),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
@@ -235,7 +235,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 4),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_skb_load_bytes),
@@ -256,7 +256,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -279,7 +279,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 4),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -302,7 +302,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 12),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -326,7 +326,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -350,7 +350,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, -9),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -374,7 +374,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, ~0),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -398,7 +398,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -421,7 +421,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -445,7 +445,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_7, 6),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
diff --git a/tools/testing/selftests/bpf/verifier/helper_restricted.c b/tools/testing/selftests/bpf/verifier/helper_restricted.c
index 423556b..bf75880 100644
--- a/tools/testing/selftests/bpf/verifier/helper_restricted.c
+++ b/tools/testing/selftests/bpf/verifier/helper_restricted.c
@@ -50,7 +50,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 1),
@@ -70,7 +70,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 1),
@@ -90,7 +90,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 1),
@@ -110,7 +110,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 1),
@@ -130,7 +130,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
 	BPF_EXIT_INSN(),
@@ -148,7 +148,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
 	BPF_EXIT_INSN(),
@@ -166,7 +166,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
 	BPF_EXIT_INSN(),
@@ -184,7 +184,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/helper_value_access.c b/tools/testing/selftests/bpf/verifier/helper_value_access.c
index 1c7882d..a1e1ffb 100644
--- a/tools/testing/selftests/bpf/verifier/helper_value_access.c
+++ b/tools/testing/selftests/bpf/verifier/helper_value_access.c
@@ -6,7 +6,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, sizeof(struct test_val)),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -25,7 +25,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -44,7 +44,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_EMIT_CALL(BPF_FUNC_trace_printk),
@@ -63,7 +63,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, sizeof(struct test_val) + 8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -83,7 +83,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, -8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -103,7 +103,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_2,
@@ -124,7 +124,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
@@ -144,7 +144,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
@@ -164,7 +164,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_2,
@@ -186,7 +186,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_2, -8),
@@ -207,7 +207,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_2, -1),
@@ -228,7 +228,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
@@ -250,7 +250,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
@@ -271,7 +271,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
@@ -292,7 +292,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
@@ -316,7 +316,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
@@ -338,7 +338,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, offsetof(struct test_val, foo)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
@@ -360,10 +360,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_MOV64_IMM(BPF_REG_2,
 		      sizeof(struct test_val) - offsetof(struct test_val, foo)),
@@ -383,10 +383,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_MOV64_IMM(BPF_REG_2, 8),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
@@ -405,10 +405,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 3),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_EMIT_CALL(BPF_FUNC_trace_printk),
@@ -427,7 +427,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
@@ -449,10 +449,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_3, offsetof(struct test_val, foo), 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_MOV64_IMM(BPF_REG_2,
 		      sizeof(struct test_val) -
@@ -474,10 +474,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_3, 32, 2),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_3, 32, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
@@ -497,10 +497,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_3, 32, 4),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_3, 32, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -521,10 +521,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_3, 32, 2),
+	BPF_JMP64_IMM(BPF_JLE, BPF_REG_3, 32, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
@@ -544,10 +544,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_3, 32, 4),
+	BPF_JMP64_IMM(BPF_JLE, BPF_REG_3, 32, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -568,13 +568,13 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, 32, 2),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_3, 32, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, 0, -3),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_3, 0, -3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -592,13 +592,13 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, 32, 2),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_3, 32, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, -3, -3),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_3, -3, -3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -616,13 +616,13 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, 32, 2),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_3, 32, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_3, -3, -3),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_3, -3, -3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -641,13 +641,13 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 32, 2),
+	BPF_JMP64_IMM(BPF_JSLE, BPF_REG_3, 32, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 0, -3),
+	BPF_JMP64_IMM(BPF_JSLE, BPF_REG_3, 0, -3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -665,13 +665,13 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 32, 2),
+	BPF_JMP64_IMM(BPF_JSLE, BPF_REG_3, 32, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, -3, -3),
+	BPF_JMP64_IMM(BPF_JSLE, BPF_REG_3, -3, -3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -689,13 +689,13 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 32, 2),
+	BPF_JMP64_IMM(BPF_JSLE, BPF_REG_3, 32, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, -3, -3),
+	BPF_JMP64_IMM(BPF_JSLE, BPF_REG_3, -3, -3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_3),
 	BPF_ST_MEM(BPF_B, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -714,7 +714,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
@@ -732,7 +732,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
@@ -752,7 +752,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
@@ -774,7 +774,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, offsetof(struct other_val, bar)),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
@@ -793,7 +793,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, sizeof(struct other_val) - 4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
@@ -813,7 +813,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
@@ -833,7 +833,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, offsetof(struct other_val, bar)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
@@ -853,7 +853,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, sizeof(struct other_val) - 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
@@ -874,7 +874,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, -4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
@@ -895,10 +895,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct other_val, bar), 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_3, offsetof(struct other_val, bar), 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
@@ -916,7 +916,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
@@ -937,10 +937,10 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_3, offsetof(struct other_val, bar) + 1, 4),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_3, offsetof(struct other_val, bar) + 1, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
diff --git a/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c b/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c
index 67a1c07..0a4e3ee 100644
--- a/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c
+++ b/tools/testing/selftests/bpf/verifier/jeq_infer_not_null.c
@@ -17,7 +17,7 @@
 	/* r6 = skb->sk; */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)),
 	/* if (r6 == 0) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 8),
 	/* r7 = sk_fullsock(skb); */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
@@ -26,9 +26,9 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
 	/* if (r0 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	/* if (r0 == r7) r0 = *(r7->type); */
-	BPF_JMP_REG(BPF_JNE, BPF_REG_0, BPF_REG_7, 1), /* Use ! JNE ! */
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_0, BPF_REG_7, 1), /* Use ! JNE ! */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)),
 	/* return 0 */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -48,7 +48,7 @@
 	/* r6 = skb->sk */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)),
 	/* if (r6 == 0) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 9),
 	/* r7 = sk_fullsock(skb); */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
@@ -57,10 +57,10 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
 	/* if (r0 == null) return 0; */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3),
 	/* if (r0 == r7) return 0; */
-	BPF_JMP_REG(BPF_JNE, BPF_REG_0, BPF_REG_7, 1), /* Use ! JNE ! */
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_0, BPF_REG_7, 1), /* Use ! JNE ! */
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	/* r0 = *(r7->type); */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)),
 	/* return 0 */
@@ -80,7 +80,7 @@
 	/* r6 = skb->sk; */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)),
 	/* if (r6 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 9),
 	/* r7 = sk_fullsock(skb); */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
@@ -89,10 +89,10 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
 	/* if (r0 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	/* if (r0 != r7) return 0; */
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_7, 1), /* Use ! JEQ ! */
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_7, 1), /* Use ! JEQ ! */
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	/* r0 = *(r7->type); */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)),
 	/* return 0; */
@@ -113,7 +113,7 @@
 	/* r6 = skb->sk; */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, offsetof(struct __sk_buff, sk)),
 	/* if (r6 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 8),
 	/* r7 = sk_fullsock(skb); */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
@@ -122,9 +122,9 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
 	/* if (r0 == null) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	/* if (r0 != r7) r0 = *(r7->type); */
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_7, 1), /* Use ! JEQ ! */
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_7, 1), /* Use ! JEQ ! */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_sock, type)),
 	/* return 0; */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -159,9 +159,9 @@
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	/* if (r6 == 0) return 0; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 2),
 	/* if (r6 != r7) return 0; */
-	BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_7, 1),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_6, BPF_REG_7, 1),
 	/* read *r7; */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_7, offsetof(struct bpf_xdp_sock, queue_id)),
 	/* return 0; */
diff --git a/tools/testing/selftests/bpf/verifier/jit.c b/tools/testing/selftests/bpf/verifier/jit.c
index 8bf37e5..2c6f883 100644
--- a/tools/testing/selftests/bpf/verifier/jit.c
+++ b/tools/testing/selftests/bpf/verifier/jit.c
@@ -5,14 +5,14 @@
 	BPF_MOV64_IMM(BPF_REG_1, 0xff),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 1),
 	BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x3fc, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x3fc, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 1),
 	BPF_ALU32_IMM(BPF_RSH, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0xff, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0xff, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x7f, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x7f, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
@@ -28,15 +28,15 @@
 	BPF_MOV64_IMM(BPF_REG_1, 0xff),
 	BPF_ALU64_REG(BPF_LSH, BPF_REG_1, BPF_REG_0),
 	BPF_ALU32_REG(BPF_LSH, BPF_REG_1, BPF_REG_4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0x3fc, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0x3fc, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_RSH, BPF_REG_1, BPF_REG_4),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_1),
 	BPF_ALU32_REG(BPF_RSH, BPF_REG_4, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 0xff, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 0xff, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ARSH, BPF_REG_4, BPF_REG_4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
@@ -51,7 +51,7 @@
 	BPF_LD_IMM64(BPF_REG_1, 0xfeffffffffffffffULL),
 	BPF_ALU64_IMM(BPF_RSH, BPF_REG_1, 32),
 	BPF_LD_IMM64(BPF_REG_2, 0xfeffffffULL),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
@@ -64,7 +64,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_LD_IMM64(BPF_REG_1, 0x1ffffffffULL),
 	BPF_LD_IMM64(BPF_REG_2, 0xffffffffULL),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	},
@@ -78,46 +78,46 @@
 	BPF_LD_IMM64(BPF_REG_0, 0xfefefeULL),
 	BPF_LD_IMM64(BPF_REG_1, 0xefefefULL),
 	BPF_ALU64_REG(BPF_MUL, BPF_REG_0, BPF_REG_1),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_3, 0xfefefeULL),
 	BPF_ALU64_REG(BPF_MUL, BPF_REG_3, BPF_REG_1),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_3, 0xfefefeULL),
 	BPF_ALU64_IMM(BPF_MUL, BPF_REG_3, 0xefefef),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV32_REG(BPF_REG_2, BPF_REG_2),
 	BPF_LD_IMM64(BPF_REG_0, 0xfefefeULL),
 	BPF_ALU32_REG(BPF_MUL, BPF_REG_0, BPF_REG_1),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_3, 0xfefefeULL),
 	BPF_ALU32_REG(BPF_MUL, BPF_REG_3, BPF_REG_1),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_3, 0xfefefeULL),
 	BPF_ALU32_IMM(BPF_MUL, BPF_REG_3, 0xefefef),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_0, 0xfefefeULL),
 	BPF_LD_IMM64(BPF_REG_2, 0x2ad4d4aaULL),
 	BPF_ALU32_IMM(BPF_MUL, BPF_REG_0, 0x2b),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_0, 0x952a7bbcULL),
 	BPF_LD_IMM64(BPF_REG_1, 0xfefefeULL),
 	BPF_LD_IMM64(BPF_REG_5, 0xeeff0d413122ULL),
 	BPF_ALU32_REG(BPF_MUL, BPF_REG_5, BPF_REG_1),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_5, BPF_REG_0, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_5, BPF_REG_0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -133,38 +133,38 @@
 	BPF_LD_IMM64(BPF_REG_0, 0xeeff0d413122ULL),
 	BPF_LD_IMM64(BPF_REG_1, 0xfefeeeULL),
 	BPF_ALU64_REG(BPF_DIV, BPF_REG_0, BPF_REG_1),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_3, 0xeeff0d413122ULL),
 	BPF_ALU64_IMM(BPF_DIV, BPF_REG_3, 0xfefeeeULL),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_3, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_2, 0xaa93ULL),
 	BPF_ALU64_IMM(BPF_MOD, BPF_REG_1, 0xbeefULL),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_1, 0xfefeeeULL),
 	BPF_LD_IMM64(BPF_REG_3, 0xbeefULL),
 	BPF_ALU64_REG(BPF_MOD, BPF_REG_1, BPF_REG_3),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LD_IMM64(BPF_REG_2, 0x5ee1dULL),
 	BPF_LD_IMM64(BPF_REG_1, 0xfefeeeULL),
 	BPF_LD_IMM64(BPF_REG_3, 0x2bULL),
 	BPF_ALU32_REG(BPF_DIV, BPF_REG_1, BPF_REG_3),
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU32_REG(BPF_DIV, BPF_REG_1, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_MOD, BPF_REG_2, BPF_REG_2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_2, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -178,11 +178,11 @@
 	.insns = {
 	BPF_LD_IMM64(BPF_REG_1, 0x80000000ULL),
 	BPF_LD_IMM64(BPF_REG_2, 0x0ULL),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_1, BPF_REG_2, 2),
+	BPF_JMP64_REG(BPF_JSGT, BPF_REG_1, BPF_REG_2, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 
-	BPF_JMP_REG(BPF_JSLT, BPF_REG_2, BPF_REG_1, 2),
+	BPF_JMP64_REG(BPF_JSLT, BPF_REG_2, BPF_REG_1, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 
diff --git a/tools/testing/selftests/bpf/verifier/jmp32.c b/tools/testing/selftests/bpf/verifier/jmp32.c
index 59be762..d88c1b1 100644
--- a/tools/testing/selftests/bpf/verifier/jmp32.c
+++ b/tools/testing/selftests/bpf/verifier/jmp32.c
@@ -5,7 +5,7 @@
 	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_2, 0),
 	/* reg, high bits shouldn't be tested */
 	BPF_JMP32_IMM(BPF_JSET, BPF_REG_7, -2, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_EXIT_INSN(),
 
 	BPF_JMP32_IMM(BPF_JSET, BPF_REG_7, 1, 1),
@@ -36,7 +36,7 @@
 	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_2, 0),
 	BPF_LD_IMM64(BPF_REG_8, 0x8000000000000000),
 	BPF_JMP32_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_EXIT_INSN(),
 
 	BPF_LD_IMM64(BPF_REG_8, 0x8000000000000001),
@@ -67,7 +67,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_LD_IMM64(BPF_REG_7, 0x8000000000000000),
 	BPF_LD_IMM64(BPF_REG_8, 0x8000000000000000),
-	BPF_JMP_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
+	BPF_JMP64_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 	BPF_JMP32_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -212,7 +212,7 @@
 	BPF_RAND_UEXT_R7,
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_JMP32_IMM(BPF_JNE, BPF_REG_7, 0x10, 1),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0x10, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 0x10, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
@@ -352,7 +352,7 @@
 	BPF_LD_IMM64(BPF_REG_8, 0x7ffffff0 | 1ULL << 32),
 	BPF_JMP32_REG(BPF_JGT, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_7, 0x7ffffff0, 1),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_7, 0x7ffffff0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -492,7 +492,7 @@
 	BPF_LD_IMM64(BPF_REG_8, 0x7ffffff0 | 1ULL << 32),
 	BPF_JMP32_REG(BPF_JLT, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_7, 0x7ffffff0, 1),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_7, 0x7ffffff0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -562,7 +562,7 @@
 	BPF_LD_IMM64(BPF_REG_8, 0x7ffffff0 | 1ULL << 32),
 	BPF_JMP32_REG(BPF_JSGE, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSGE, BPF_REG_7, 0x7ffffff0, 1),
+	BPF_JMP64_IMM(BPF_JSGE, BPF_REG_7, 0x7ffffff0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -632,7 +632,7 @@
 	BPF_LD_IMM64(BPF_REG_8, (__u32)(-2) | 1ULL << 32),
 	BPF_JMP32_REG(BPF_JSGT, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_7, -2, 1),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_7, -2, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -702,7 +702,7 @@
 	BPF_LD_IMM64(BPF_REG_8, 0x7ffffff0 | 1ULL << 32),
 	BPF_JMP32_REG(BPF_JSLE, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSLE, BPF_REG_7, 0x7ffffff0, 1),
+	BPF_JMP64_IMM(BPF_JSLE, BPF_REG_7, 0x7ffffff0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -790,7 +790,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_cgroup_classid),
@@ -817,7 +817,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_cgroup_classid),
@@ -845,7 +845,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_cgroup_classid),
@@ -873,7 +873,7 @@
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0),
 	BPF_ALU32_REG(BPF_OR, BPF_REG_2, BPF_REG_6),
 	BPF_JMP32_IMM(BPF_JNE, BPF_REG_2, 8, 5),
-	BPF_JMP_IMM(BPF_JSGE, BPF_REG_2, 500, 2),
+	BPF_JMP64_IMM(BPF_JSGE, BPF_REG_2, 500, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
diff --git a/tools/testing/selftests/bpf/verifier/jset.c b/tools/testing/selftests/bpf/verifier/jset.c
index feb1c01..49cbed5 100644
--- a/tools/testing/selftests/bpf/verifier/jset.c
+++ b/tools/testing/selftests/bpf/verifier/jset.c
@@ -6,21 +6,21 @@
 
 	/* reg, bit 63 or bit 0 set, taken */
 	BPF_LD_IMM64(BPF_REG_8, 0x8000000000000001),
-	BPF_JMP_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
+	BPF_JMP64_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
 	BPF_EXIT_INSN(),
 
 	/* reg, bit 62, not taken */
 	BPF_LD_IMM64(BPF_REG_8, 0x4000000000000000),
-	BPF_JMP_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JSET, BPF_REG_7, BPF_REG_8, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_EXIT_INSN(),
 
 	/* imm, any bit set, taken */
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_7, -1, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_7, -1, 1),
 	BPF_EXIT_INSN(),
 
 	/* imm, bit 31 set, taken */
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_7, 0x80000000, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_7, 0x80000000, 1),
 	BPF_EXIT_INSN(),
 
 	/* all good - return r0 == 2 */
@@ -61,7 +61,7 @@
 	BPF_DIRECT_PKT_R2,
 	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_2, 0),
 
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_7, 0x80000000, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_7, 0x80000000, 1),
 	BPF_EXIT_INSN(),
 
 	BPF_MOV64_IMM(BPF_REG_0, 2),
@@ -77,7 +77,7 @@
 	"jset: known const compare",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_0, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -91,7 +91,7 @@
 	"jset: known const compare bad",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_0, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -105,8 +105,8 @@
 	"jset: unknown const compare taken",
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_0, 1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -120,7 +120,7 @@
 	"jset: unknown const compare not taken",
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 1, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_0, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -135,7 +135,7 @@
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_IMM(BPF_OR, BPF_REG_0, 2),
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_0, 3, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_0, 3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -152,13 +152,13 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xff),
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_1, 0xf0, 3),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 0x10, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_1, 0xf0, 3),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_1, 0x10, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSET, BPF_REG_1, 0x10, 1),
+	BPF_JMP64_IMM(BPF_JSET, BPF_REG_1, 0x10, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0x10, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0x10, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_8, BPF_REG_9, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/jump.c b/tools/testing/selftests/bpf/verifier/jump.c
index f5c0866..f0baadd 100644
--- a/tools/testing/selftests/bpf/verifier/jump.c
+++ b/tools/testing/selftests/bpf/verifier/jump.c
@@ -3,17 +3,17 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 1, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -16, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 2, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 2, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 3, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -16, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 4, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 5, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 5, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -32, 5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -26,22 +26,22 @@
 	"jump test 2",
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 14),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 14),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 1, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -16, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 11),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 2, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 2, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -32, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 3, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -40, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 5),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 4, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -48, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 5, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 5, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -56, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -54,27 +54,27 @@
 	"jump test 3",
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 19),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 19),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 1, 3),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -16, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 15),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 2, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 15),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 2, 3),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -32, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -32),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 11),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 3, 3),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -40, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -40),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 7),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 4, 3),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -48, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -48),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 5, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 5, 0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, -56, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -56),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
@@ -90,46 +90,46 @@
 {
 	"jump test 4",
 	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -142,35 +142,35 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -183,27 +183,27 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_MOV64_IMM(BPF_REG_1, 2),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
-	BPF_JMP_REG(BPF_JNE, BPF_REG_0, BPF_REG_1, 16),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -20),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_0, BPF_REG_1, 16),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -20),
 	},
 	.result = ACCEPT,
 	.retval = 2,
@@ -212,10 +212,10 @@
 	"jump test 7",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 2, 16),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 2, 16),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
@@ -232,7 +232,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -20),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -20),
 	},
 	.result = ACCEPT,
 	.retval = 3,
@@ -242,10 +242,10 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_MOV64_IMM(BPF_REG_1, 2),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
 	BPF_EXIT_INSN(),
-	BPF_JMP_REG(BPF_JNE, BPF_REG_0, BPF_REG_1, 16),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_0, BPF_REG_1, 16),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
@@ -262,7 +262,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -20),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -20),
 	},
 	.result = ACCEPT,
 	.retval = 3,
@@ -271,10 +271,10 @@
 	"jump/call test 9",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 2, 16),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 2, 16),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
@@ -305,7 +305,7 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 3),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 2, 16),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 2, 16),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
@@ -339,7 +339,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 3),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 2, 26),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 2, 26),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
@@ -381,13 +381,13 @@
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0),
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0),
 	BPF_ALU64_IMM(BPF_OR, BPF_REG_3, 32767),
-	BPF_JMP_IMM(BPF_JSGE, BPF_REG_3, 0, 1),
+	BPF_JMP64_IMM(BPF_JSGE, BPF_REG_3, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 0x8000, 1),
+	BPF_JMP64_IMM(BPF_JSLE, BPF_REG_3, 0x8000, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -32767),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_3, 0, 1),
+	BPF_JMP64_IMM(BPF_JLE, BPF_REG_3, 0, 1),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/ld_abs.c b/tools/testing/selftests/bpf/verifier/ld_abs.c
index 06e5ad0..298ca01 100644
--- a/tools/testing/selftests/bpf/verifier/ld_abs.c
+++ b/tools/testing/selftests/bpf/verifier/ld_abs.c
@@ -121,9 +121,9 @@
 	.insns = {
 		BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 		BPF_LD_ABS(BPF_H, 12),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0x806, 28),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0x806, 28),
 		BPF_LD_ABS(BPF_H, 12),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0x806, 26),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0x806, 26),
 		BPF_MOV32_IMM(BPF_REG_0, 18),
 		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -64),
 		BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_10, -64),
@@ -134,9 +134,9 @@
 		BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_10, -56),
 		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -60),
 		BPF_ALU32_REG(BPF_SUB, BPF_REG_0, BPF_REG_7),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 15),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 15),
 		BPF_LD_ABS(BPF_H, 12),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0x806, 13),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0x806, 13),
 		BPF_MOV32_IMM(BPF_REG_0, 22),
 		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -56),
 		BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_10, -56),
@@ -147,7 +147,7 @@
 		BPF_LDX_MEM(BPF_W, BPF_REG_7, BPF_REG_10, -48),
 		BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -52),
 		BPF_ALU32_REG(BPF_SUB, BPF_REG_0, BPF_REG_7),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 		BPF_MOV32_IMM(BPF_REG_0, 256),
 		BPF_EXIT_INSN(),
 		BPF_MOV32_IMM(BPF_REG_0, 0),
diff --git a/tools/testing/selftests/bpf/verifier/ld_imm64.c b/tools/testing/selftests/bpf/verifier/ld_imm64.c
index f929790..630c560 100644
--- a/tools/testing/selftests/bpf/verifier/ld_imm64.c
+++ b/tools/testing/selftests/bpf/verifier/ld_imm64.c
@@ -1,7 +1,7 @@
 {
 	"test1 ld_imm64",
 	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_LD_IMM64(BPF_REG_0, 0),
 	BPF_LD_IMM64(BPF_REG_0, 0),
 	BPF_LD_IMM64(BPF_REG_0, 1),
@@ -16,7 +16,7 @@
 {
 	"test2 ld_imm64",
 	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_LD_IMM64(BPF_REG_0, 0),
 	BPF_LD_IMM64(BPF_REG_0, 0),
 	BPF_LD_IMM64(BPF_REG_0, 1),
@@ -30,7 +30,7 @@
 {
 	"test3 ld_imm64",
 	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, 0, 0, 0, 0),
 	BPF_LD_IMM64(BPF_REG_0, 0),
 	BPF_LD_IMM64(BPF_REG_0, 0),
diff --git a/tools/testing/selftests/bpf/verifier/leak_ptr.c b/tools/testing/selftests/bpf/verifier/leak_ptr.c
index 892eb00..f3f0c8a 100644
--- a/tools/testing/selftests/bpf/verifier/leak_ptr.c
+++ b/tools/testing/selftests/bpf/verifier/leak_ptr.c
@@ -53,7 +53,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
 	BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_0, BPF_REG_6, 0),
diff --git a/tools/testing/selftests/bpf/verifier/loops1.c b/tools/testing/selftests/bpf/verifier/loops1.c
index eb69c65..2a81e26 100644
--- a/tools/testing/selftests/bpf/verifier/loops1.c
+++ b/tools/testing/selftests/bpf/verifier/loops1.c
@@ -3,7 +3,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 4, -2),
 	BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -15,7 +15,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 3),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 20, -2),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 20, -2),
 	BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -25,9 +25,9 @@
 	"bounded loop, count from positive unknown to 4",
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_0, 0, 2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 4, -2),
 	BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -39,7 +39,7 @@
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 4, -2),
 	BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -50,7 +50,7 @@
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 4, -2),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 4, -2),
 		BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -60,9 +60,9 @@
 	"bounded loop, start in the middle",
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
-		BPF_JMP_A(1),
+		BPF_JMP64_A(1),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-		BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+		BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 4, -2),
 		BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
@@ -75,8 +75,8 @@
 	.insns = {
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-		BPF_JMP_REG(BPF_JEQ, BPF_REG_0, BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -3),
+		BPF_JMP64_REG(BPF_JEQ, BPF_REG_0, BPF_REG_0, 0),
+		BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 4, -3),
 		BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -88,9 +88,9 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_6, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 1),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_6, 10000, 2),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_6, 10000, 2),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
-	BPF_JMP_A(-4),
+	BPF_JMP64_A(-4),
 	BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -100,9 +100,9 @@
 	"infinite loop after a conditional jump",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 5),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, 2),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 4, 2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
-	BPF_JMP_A(-2),
+	BPF_JMP64_A(-2),
 	BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
@@ -117,7 +117,7 @@
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_1, 4, 1),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_1, 4, 1),
 	BPF_EXIT_INSN(),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, -5),
 	BPF_EXIT_INSN(),
@@ -130,8 +130,8 @@
 	"infinite loop in two jumps",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_A(0),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 4, -2),
+	BPF_JMP64_A(0),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 4, -2),
 	BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
@@ -144,15 +144,15 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, 1),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 2, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, 1),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 2, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 1),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_0, 2, -11),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_0, 2, -11),
 	BPF_EXIT_INSN(),
 	},
 	.result = REJECT,
@@ -163,7 +163,7 @@
 	"not-taken loop with back jump to 1st insn",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_0, 123),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 4, -2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 4, -2),
 	BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
@@ -179,7 +179,7 @@
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1),
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, -3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, -3),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/lwt.c b/tools/testing/selftests/bpf/verifier/lwt.c
index 5c8944d..0978328 100644
--- a/tools/testing/selftests/bpf/verifier/lwt.c
+++ b/tools/testing/selftests/bpf/verifier/lwt.c
@@ -7,7 +7,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -25,7 +25,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -43,7 +43,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_STX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -60,7 +60,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -77,7 +77,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -94,7 +94,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -111,10 +111,10 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_2, 6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/map_in_map.c b/tools/testing/selftests/bpf/verifier/map_in_map.c
index 7e58a19..bf9d225 100644
--- a/tools/testing/selftests/bpf/verifier/map_in_map.c
+++ b/tools/testing/selftests/bpf/verifier/map_in_map.c
@@ -6,7 +6,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
@@ -27,21 +27,21 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 11),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_6),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -60,7 +60,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_ST_MEM(0, BPF_REG_10, -4, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
diff --git a/tools/testing/selftests/bpf/verifier/map_kptr.c b/tools/testing/selftests/bpf/verifier/map_kptr.c
index 2aca724..750c687 100644
--- a/tools/testing/selftests/bpf/verifier/map_kptr.c
+++ b/tools/testing/selftests/bpf/verifier/map_kptr.c
@@ -9,7 +9,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
@@ -29,7 +29,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_W, BPF_REG_0, 0, 0),
 	BPF_EXIT_INSN(),
@@ -49,16 +49,16 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_2, 4, 1),
+	BPF_JMP64_IMM(BPF_JLE, BPF_REG_2, 4, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
@@ -79,16 +79,16 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_2, 4, 1),
+	BPF_JMP64_IMM(BPF_JLE, BPF_REG_2, 4, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_3),
@@ -111,7 +111,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 7),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
@@ -132,15 +132,15 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_2, 4, 1),
+	BPF_JMP64_IMM(BPF_JLE, BPF_REG_2, 4, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_2),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
@@ -162,10 +162,10 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
@@ -186,7 +186,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
@@ -207,10 +207,10 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 32),
 	BPF_EXIT_INSN(),
@@ -230,10 +230,10 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 16),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_this_cpu_ptr),
@@ -254,10 +254,10 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_EXIT_INSN(),
 	},
@@ -275,7 +275,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
@@ -298,7 +298,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
@@ -326,7 +326,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 8),
@@ -348,14 +348,14 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_kptr_xchg),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
@@ -379,7 +379,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
@@ -387,7 +387,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_1, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
@@ -413,7 +413,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 8),
@@ -434,7 +434,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 8, 0),
 	BPF_EXIT_INSN(),
@@ -454,7 +454,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
diff --git a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c b/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
index 253d711..329eef0 100644
--- a/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
+++ b/tools/testing/selftests/bpf/verifier/map_ptr_mixing.c
@@ -2,16 +2,16 @@
 	"calls: two calls returning different map pointers for lookup (hash, array)",
 	.insns = {
 	/* main prog */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_CALL_REL(11),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_CALL_REL(12),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -32,16 +32,16 @@
 	"calls: two calls returning different map pointers for lookup (hash, map in map)",
 	.insns = {
 	/* main prog */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_CALL_REL(11),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_CALL_REL(12),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
@@ -63,9 +63,9 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 0, 3),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 7),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
@@ -84,9 +84,9 @@
 	.insns = {
 	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_1,
 		    offsetof(struct __sk_buff, mark)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 3),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 7),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
diff --git a/tools/testing/selftests/bpf/verifier/map_ret_val.c b/tools/testing/selftests/bpf/verifier/map_ret_val.c
index 24078fe..81ae6fd 100644
--- a/tools/testing/selftests/bpf/verifier/map_ret_val.c
+++ b/tools/testing/selftests/bpf/verifier/map_ret_val.c
@@ -34,7 +34,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 4, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -51,7 +51,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0),
 	BPF_EXIT_INSN(),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 1),
diff --git a/tools/testing/selftests/bpf/verifier/meta_access.c b/tools/testing/selftests/bpf/verifier/meta_access.c
index 54e5a0b..a175909 100644
--- a/tools/testing/selftests/bpf/verifier/meta_access.c
+++ b/tools/testing/selftests/bpf/verifier/meta_access.c
@@ -6,7 +6,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -24,7 +24,7 @@
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_0, 8),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -42,7 +42,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -61,7 +61,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_4),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -78,7 +78,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_4, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_3),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_4, 3),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_4, 3),
 	BPF_MOV64_IMM(BPF_REG_2, -8),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_xdp_adjust_meta),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 0),
@@ -99,7 +99,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -118,7 +118,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -134,7 +134,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0xFFFF),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -151,7 +151,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 0xFFFF),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -173,12 +173,12 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_5, -8),
 	BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_6, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -8),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_5, 100, 6),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_5, 100, 6),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_5),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_5, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_5, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -198,12 +198,12 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_5, -8),
 	BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_6, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -8),
-	BPF_JMP_IMM(BPF_JGT, BPF_REG_5, 100, 6),
+	BPF_JMP64_IMM(BPF_JGT, BPF_REG_5, 100, 6),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_5),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_5, BPF_REG_5, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -221,11 +221,11 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_3),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 16),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 5),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_5, BPF_REG_4, 5),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_3, 0),
 	BPF_MOV64_REG(BPF_REG_5, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, 16),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_5, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/precise.c b/tools/testing/selftests/bpf/verifier/precise.c
index 0fdfa42..4247871 100644
--- a/tools/testing/selftests/bpf/verifier/precise.c
+++ b/tools/testing/selftests/bpf/verifier/precise.c
@@ -8,7 +8,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 
 	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
@@ -17,14 +17,14 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8), /* map_value_ptr -= map_value_ptr */
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_9),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_2, 8, 1),
 	BPF_EXIT_INSN(),
 
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */
@@ -68,7 +68,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -8, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 
 	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
@@ -77,14 +77,14 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_FP),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8), /* map_value_ptr -= map_value_ptr */
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_9),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 8, 1),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_2, 8, 1),
 	BPF_EXIT_INSN(),
 
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 1), /* R2=scalar(umin=1, umax=8) */
@@ -120,19 +120,19 @@
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_8, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_8, 1),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_IMM(BPF_REG_9, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_9, 1),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_8, 1, 1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_XDP,
@@ -144,11 +144,11 @@
 	"precise: ST insn causing spi > allocated_stack",
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 123, 0),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_3, 123, 0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_3, -8, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_MOV64_IMM(BPF_REG_0, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 0),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_XDP,
@@ -170,11 +170,11 @@
 	.insns = {
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_10),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_3, 123, 0),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_3, 123, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_10, -8),
 	BPF_MOV64_IMM(BPF_REG_0, -1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 0),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_4, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
 	.prog_type = BPF_PROG_TYPE_XDP,
@@ -200,10 +200,10 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_IMM(BPF_REG_2, 1),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_2, 0x1000),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 42),
diff --git a/tools/testing/selftests/bpf/verifier/raw_tp_writable.c b/tools/testing/selftests/bpf/verifier/raw_tp_writable.c
index cc66892..38389bb 100644
--- a/tools/testing/selftests/bpf/verifier/raw_tp_writable.c
+++ b/tools/testing/selftests/bpf/verifier/raw_tp_writable.c
@@ -15,7 +15,7 @@
 			     BPF_FUNC_map_lookup_elem),
 
 		/* exit clean if null */
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 		BPF_EXIT_INSN(),
 
 		/* shift the buffer pointer to a variable location */
diff --git a/tools/testing/selftests/bpf/verifier/ref_tracking.c b/tools/testing/selftests/bpf/verifier/ref_tracking.c
index f3cf02e..e8caeac 100644
--- a/tools/testing/selftests/bpf/verifier/ref_tracking.c
+++ b/tools/testing/selftests/bpf/verifier/ref_tracking.c
@@ -90,7 +90,7 @@
 	BPF_MOV64_IMM(BPF_REG_1, -3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -111,7 +111,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 1),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, BPF_PSEUDO_KFUNC_CALL, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -256,7 +256,7 @@
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
 	},
@@ -268,7 +268,7 @@
 	.insns = {
 	BPF_SK_LOOKUP(skc_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
 	},
@@ -280,7 +280,7 @@
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -294,7 +294,7 @@
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -310,7 +310,7 @@
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3), /* goto end */
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3), /* goto end */
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -330,15 +330,15 @@
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 16),
 	/* if (offsetof(skb, mark) > data_len) exit; */
-	BPF_JMP_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_2,
 		    offsetof(struct __sk_buff, mark)),
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 1), /* mark == 0? */
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 1), /* mark == 0? */
 	/* Leak reference in R0 */
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -358,17 +358,17 @@
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 16),
 	/* if (offsetof(skb, mark) > data_len) exit; */
-	BPF_JMP_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_0, BPF_REG_3, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_6, BPF_REG_2,
 		    offsetof(struct __sk_buff, mark)),
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 4), /* mark == 0? */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 4), /* mark == 0? */
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2), /* sk NULL? */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -388,7 +388,7 @@
 
 	/* subprog 1 */
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_2, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
 	},
@@ -408,7 +408,7 @@
 
 	/* subprog 1 */
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_2, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
 	},
@@ -444,7 +444,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_10),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
 
@@ -504,7 +504,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_5, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_5, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	/* now the sk_ptr is verified, free the reference */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_4, 0),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -523,7 +523,7 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_LD_ABS(BPF_B, 0),
 	BPF_LD_ABS(BPF_H, 0),
@@ -542,7 +542,7 @@
 	BPF_LD_ABS(BPF_H, 0),
 	BPF_LD_ABS(BPF_W, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
 	},
@@ -556,7 +556,7 @@
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_MOV64_IMM(BPF_REG_7, 1),
 	BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000),
@@ -577,7 +577,7 @@
 	BPF_LD_IND(BPF_W, BPF_REG_7, -0x200000),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_7),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_4),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
 	},
@@ -592,7 +592,7 @@
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	/* if (sk) bpf_sk_release() */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 7),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 7),
 	/* bpf_tail_call() */
 	BPF_MOV64_IMM(BPF_REG_3, 3),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
@@ -614,7 +614,7 @@
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	/* if (sk) bpf_sk_release() */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	/* bpf_tail_call() */
 	BPF_MOV64_IMM(BPF_REG_3, 3),
@@ -643,7 +643,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	/* if (sk) bpf_sk_release() */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
 	},
@@ -660,7 +660,7 @@
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	/* if (!sk) goto end */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	/* bpf_tail_call() */
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
@@ -682,7 +682,7 @@
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 5),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
 	},
@@ -695,7 +695,7 @@
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 5),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -709,7 +709,7 @@
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -723,7 +723,7 @@
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_LD_IMM64(BPF_REG_2, 42),
 	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_2,
@@ -742,7 +742,7 @@
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -757,7 +757,7 @@
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
 	BPF_EXIT_INSN(),
@@ -776,14 +776,14 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 64),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 9),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 9),
 	/* sk = sk_lookup_tcp(ctx, skb->data, ...) */
 	BPF_MOV64_IMM(BPF_REG_3, sizeof(struct bpf_sock_tuple)),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
 	BPF_MOV64_IMM(BPF_REG_5, 0),
 	BPF_EMIT_CALL(BPF_FUNC_sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_0, 4),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -796,12 +796,12 @@
 	"reference tracking: use ptr from bpf_tcp_sock() after release",
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -820,12 +820,12 @@
 	"reference tracking: use ptr from bpf_sk_fullsock() after release",
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -844,12 +844,12 @@
 	"reference tracking: use ptr from bpf_sk_fullsock(tp) after release",
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -858,7 +858,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, offsetof(struct bpf_sock, type)),
 	BPF_EXIT_INSN(),
@@ -872,12 +872,12 @@
 	"reference tracking: use sk after bpf_sk_release(tp)",
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -895,12 +895,12 @@
 	"reference tracking: use ptr from bpf_get_listener_sock() after bpf_sk_release(sk)",
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_listener_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -917,12 +917,12 @@
 	"reference tracking: bpf_sk_release(listen_sk)",
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_listener_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -942,7 +942,7 @@
 	"reference tracking: tp->snd_cwnd after bpf_sk_fullsock(sk) and bpf_tcp_sock(sk)",
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -951,7 +951,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_7, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_7, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -970,9 +970,9 @@
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -986,9 +986,9 @@
 	BPF_SK_LOOKUP(sk_lookup_tcp),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 1234, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 1234, 2),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -1001,12 +1001,12 @@
 	"reference tracking: bpf_sk_release(btf_tcp_sock)",
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -1023,12 +1023,12 @@
 	"reference tracking: use ptr from bpf_skc_to_tcp_sock() after release",
 	.insns = {
 	BPF_SK_LOOKUP(sk_lookup_tcp),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_EXIT_INSN(),
@@ -1053,7 +1053,7 @@
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 		BPF_LD_MAP_FD(BPF_REG_1, 0),
 		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 		BPF_EXIT_INSN(),
 		BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
 
@@ -1062,7 +1062,7 @@
 		BPF_MOV64_IMM(BPF_REG_2, 8),
 		BPF_MOV64_IMM(BPF_REG_3, 0),
 		BPF_EMIT_CALL(BPF_FUNC_ringbuf_reserve),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 		BPF_EXIT_INSN(),
 		BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 
diff --git a/tools/testing/selftests/bpf/verifier/regalloc.c b/tools/testing/selftests/bpf/verifier/regalloc.c
index ead6db9..2b1d572 100644
--- a/tools/testing/selftests/bpf/verifier/regalloc.c
+++ b/tools/testing/selftests/bpf/verifier/regalloc.c
@@ -7,12 +7,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 20, 4),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 3),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_0, 20, 4),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_2, 0, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
@@ -32,12 +32,12 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 24, 4),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 3),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_0, 24, 4),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_2, 0, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0),
@@ -57,13 +57,13 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 20, 5),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_0, 20, 5),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_3, BPF_REG_2, 3),
+	BPF_JMP64_REG(BPF_JSGE, BPF_REG_3, BPF_REG_2, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
@@ -83,13 +83,13 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 22, 5),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_0, 22, 5),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_3, BPF_REG_2, 3),
+	BPF_JMP64_REG(BPF_JSGE, BPF_REG_3, BPF_REG_2, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
@@ -110,17 +110,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 20, 7),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_0, 20, 7),
 	/* r0 has upper bound that should propagate into r2 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8), /* spill r2 */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_2, 0), /* clear r0 and r2 */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -8), /* fill r3 */
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_0, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JSGE, BPF_REG_0, BPF_REG_3, 2),
 	/* r3 has lower and upper bounds */
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_3),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
@@ -140,17 +140,17 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 48, 7),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_0, 48, 7),
 	/* r0 has upper bound that should propagate into r2 */
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8), /* spill r2 */
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_2, 0), /* clear r0 and r2 */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -8), /* fill r3 */
-	BPF_JMP_REG(BPF_JSGE, BPF_REG_0, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JSGE, BPF_REG_0, BPF_REG_3, 2),
 	/* r3 has lower and upper bounds */
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_3),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
@@ -171,13 +171,13 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_2),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_0, 12, 5),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 4),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_0, 12, 5),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_2, 0, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_0),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_4),
@@ -198,14 +198,14 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 10),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 6),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_8, 20, 4),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_9, 0, 3),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_8, 20, 4),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_9, 0, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_8),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_7, BPF_REG_9),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
@@ -227,7 +227,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -235,8 +235,8 @@
 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_7),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 1, 0, 1),
 	BPF_EXIT_INSN(),
-	BPF_JMP_IMM(BPF_JSGT, BPF_REG_1, 20, 5),
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_2, 0, 4),
+	BPF_JMP64_IMM(BPF_JSGT, BPF_REG_1, 20, 5),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_2, 0, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_1),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0),
@@ -259,14 +259,14 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8), /* spill r0 */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 0),
 	/* The verifier will walk the rest twice with r0 == 0 and r0 == map_value */
 	BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 20, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_2, 20, 0),
 	/* The verifier will walk the rest two more times with r0 == 20 and r0 == unknown */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_10, -8), /* fill r3 with map_value */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0, 1), /* skip ldx if map_value == NULL */
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_3, 0, 1), /* skip ldx if map_value == NULL */
 	/* Buggy verifier will think that r3 == 20 here */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_3, 0), /* read from map_value */
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/ringbuf.c b/tools/testing/selftests/bpf/verifier/ringbuf.c
index d288253..a2738c2 100644
--- a/tools/testing/selftests/bpf/verifier/ringbuf.c
+++ b/tools/testing/selftests/bpf/verifier/ringbuf.c
@@ -10,7 +10,7 @@
 	/* store a pointer to the reserved memory in R6 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	/* check whether the reservation was successful */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	/* spill R6(mem) into the stack */
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
 	/* fill it back in R7 */
@@ -42,7 +42,7 @@
 	/* store a pointer to the reserved memory in R6 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	/* check whether the reservation was successful */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	/* spill R6(mem) into the stack */
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
 	/* fill it back in R7 */
@@ -74,7 +74,7 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_ringbuf_reserve),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	/* check whether the reservation was successful */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	/* pass allocated ring buffer memory to fib lookup */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
diff --git a/tools/testing/selftests/bpf/verifier/runtime_jit.c b/tools/testing/selftests/bpf/verifier/runtime_jit.c
index 160911b..9944e90 100644
--- a/tools/testing/selftests/bpf/verifier/runtime_jit.c
+++ b/tools/testing/selftests/bpf/verifier/runtime_jit.c
@@ -58,10 +58,10 @@
 		    offsetof(struct __sk_buff, cb[0])),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
@@ -80,10 +80,10 @@
 		    offsetof(struct __sk_buff, cb[0])),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
@@ -102,10 +102,10 @@
 		    offsetof(struct __sk_buff, cb[0])),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
@@ -124,10 +124,10 @@
 		    offsetof(struct __sk_buff, cb[0])),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 2),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
@@ -146,10 +146,10 @@
 		    offsetof(struct __sk_buff, cb[0])),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
@@ -171,10 +171,10 @@
 		    offsetof(struct __sk_buff, cb[0])),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1,
 		    offsetof(struct __sk_buff, cb[0])),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 13, 4),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	BPF_LD_MAP_FD(BPF_REG_2, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_tail_call),
diff --git a/tools/testing/selftests/bpf/verifier/search_pruning.c b/tools/testing/selftests/bpf/verifier/search_pruning.c
index 1a4d06b..967ceeb 100644
--- a/tools/testing/selftests/bpf/verifier/search_pruning.c
+++ b/tools/testing/selftests/bpf/verifier/search_pruning.c
@@ -6,11 +6,11 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
-	BPF_JMP_A(1),
+	BPF_JMP64_A(1),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
-	BPF_JMP_A(0),
+	BPF_JMP64_A(0),
 	BPF_EXIT_INSN(),
 	},
 	.fixup_map_hash_8b = { 3 },
@@ -27,9 +27,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_REG(BPF_REG_0, BPF_REG_10),
-	BPF_JMP_A(1),
+	BPF_JMP64_A(1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -45,9 +45,9 @@
 	/* Get an unknown value */
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1, 0),
 	/* branch conditions teach us nothing about R2 */
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_2, 0, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_2, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -63,14 +63,14 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV32_IMM(BPF_REG_2, MAX_ENTRIES),
-	BPF_JMP_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JSGT, BPF_REG_2, BPF_REG_1, 1),
 	BPF_MOV32_IMM(BPF_REG_1, 0),
 	BPF_ALU32_IMM(BPF_LSH, BPF_REG_1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, offsetof(struct test_val, foo)),
 	BPF_EXIT_INSN(),
 	},
@@ -89,16 +89,16 @@
 		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 		BPF_LD_MAP_FD(BPF_REG_1, 0),
 		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 		BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0xbeef, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_3, 0xbeef, 2),
 		BPF_MOV64_IMM(BPF_REG_4, 0),
-		BPF_JMP_A(1),
+		BPF_JMP64_A(1),
 		BPF_MOV64_IMM(BPF_REG_4, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -16),
 		BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
 		BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -16),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_5, 0, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_5, 0, 2),
 		BPF_MOV64_IMM(BPF_REG_6, 0),
 		BPF_ST_MEM(BPF_DW, BPF_REG_6, 0, 0xdead),
 		BPF_EXIT_INSN(),
@@ -116,12 +116,12 @@
 		BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 		BPF_LD_MAP_FD(BPF_REG_1, 0),
 		BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 		BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 0),
 		BPF_MOV64_IMM(BPF_REG_4, 0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_3, 0xbeef, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_3, 0xbeef, 2),
 		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -16),
-		BPF_JMP_A(1),
+		BPF_JMP64_A(1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_4, -24),
 		BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
 		BPF_LDX_MEM(BPF_DW, BPF_REG_5, BPF_REG_10, -16),
@@ -138,13 +138,13 @@
 		BPF_MOV64_REG(BPF_REG_7, BPF_REG_1),
 		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 		BPF_MOV32_IMM(BPF_REG_6, 32),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 		BPF_MOV32_IMM(BPF_REG_6, 4),
 		/* Additional insns to introduce a pruning point. */
 		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
 		BPF_MOV64_IMM(BPF_REG_3, 0),
 		BPF_MOV64_IMM(BPF_REG_3, 0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 		BPF_MOV64_IMM(BPF_REG_3, 0),
 		/* u32 spill/fill */
 		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_6, -8),
@@ -155,7 +155,7 @@
 		BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
 		BPF_LD_MAP_FD(BPF_REG_1, 0),
 		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 		BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_8),
 		BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -178,7 +178,7 @@
 		BPF_MOV64_IMM(BPF_REG_3, 1),
 		BPF_MOV64_IMM(BPF_REG_3, 1),
 		BPF_EMIT_CALL(BPF_FUNC_get_prandom_u32),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 		BPF_MOV64_IMM(BPF_REG_3, 1),
 		BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0),
 		/* u32 spills, u64 fill */
@@ -186,14 +186,14 @@
 		BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_7, -8),
 		BPF_LDX_MEM(BPF_DW, BPF_REG_8, BPF_REG_10, -8),
 		/* if r8 != X goto pc+1  r8 known in fallthrough branch */
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0xffffffff, 1),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_8, 0xffffffff, 1),
 		BPF_MOV64_IMM(BPF_REG_3, 1),
 		/* if r8 == X goto pc+1  condition always true on first
 		 * traversal, so starts backtracking to mark r8 as requiring
 		 * precision. r7 marked as needing precision. r6 not marked
 		 * since it's not tracked.
 		 */
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_8, 0xffffffff, 1),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_8, 0xffffffff, 1),
 		/* fails if r8 correctly marked unknown after fill. */
 		BPF_ALU32_IMM(BPF_DIV, BPF_REG_3, 0),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -209,16 +209,16 @@
 		BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_1),
 		BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_get_prandom_u32),
 		BPF_ALU64_REG(BPF_MOV, BPF_REG_7, BPF_REG_0),
-		BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+		BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 		BPF_MOV64_IMM(BPF_REG_0, 0),
 		BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
 		BPF_LDX_MEM(BPF_DW, BPF_REG_6, BPF_REG_10, -8),
 		BPF_STX_MEM(BPF_B, BPF_REG_10, BPF_REG_7, -9),
 		BPF_LDX_MEM(BPF_B, BPF_REG_7, BPF_REG_10, -9),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0),
-		BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 0),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 0),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 0),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 0),
+		BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 0),
 		BPF_EXIT_INSN(),
 	},
 	.result = ACCEPT,
diff --git a/tools/testing/selftests/bpf/verifier/sock.c b/tools/testing/selftests/bpf/verifier/sock.c
index d11d0b2..9f89ade 100644
--- a/tools/testing/selftests/bpf/verifier/sock.c
+++ b/tools/testing/selftests/bpf/verifier/sock.c
@@ -14,7 +14,7 @@
 	"skb->sk: sk->family [non fullsock field]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, offsetof(struct bpf_sock, family)),
@@ -28,7 +28,7 @@
 	"skb->sk: sk->type [fullsock field]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, offsetof(struct bpf_sock, type)),
@@ -55,7 +55,7 @@
 	"sk_fullsock(skb->sk): no NULL check on ret",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
@@ -71,11 +71,11 @@
 	"sk_fullsock(skb->sk): sk->type [fullsock field]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)),
@@ -89,11 +89,11 @@
 	"sk_fullsock(skb->sk): sk->family [non fullsock field]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, family)),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -106,11 +106,11 @@
 	"sk_fullsock(skb->sk): sk->state [narrow load]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, state)),
@@ -124,11 +124,11 @@
 	"sk_fullsock(skb->sk): sk->dst_port [word load] (backward compatibility)",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
@@ -142,11 +142,11 @@
 	"sk_fullsock(skb->sk): sk->dst_port [half load]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
@@ -160,11 +160,11 @@
 	"sk_fullsock(skb->sk): sk->dst_port [half load] (invalid)",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
@@ -179,11 +179,11 @@
 	"sk_fullsock(skb->sk): sk->dst_port [byte load]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_2, BPF_REG_0, offsetof(struct bpf_sock, dst_port)),
@@ -198,11 +198,11 @@
 	"sk_fullsock(skb->sk): sk->dst_port [byte load] (invalid)",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_port) + 2),
@@ -217,11 +217,11 @@
 	"sk_fullsock(skb->sk): past sk->dst_port [half load] (invalid)",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, dst_port)),
@@ -236,11 +236,11 @@
 	"sk_fullsock(skb->sk): sk->dst_ip6 [load 2nd byte]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, dst_ip6[0]) + 1),
@@ -254,11 +254,11 @@
 	"sk_fullsock(skb->sk): sk->type [narrow load]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)),
@@ -272,11 +272,11 @@
 	"sk_fullsock(skb->sk): sk->protocol [narrow load]",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, protocol)),
@@ -290,11 +290,11 @@
 	"sk_fullsock(skb->sk): beyond last field",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_sock, rx_queue_mapping)),
@@ -321,7 +321,7 @@
 	"bpf_tcp_sock(skb->sk): no NULL check on ret",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
@@ -337,11 +337,11 @@
 	"bpf_tcp_sock(skb->sk): tp->snd_cwnd",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, snd_cwnd)),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -354,11 +354,11 @@
 	"bpf_tcp_sock(skb->sk): tp->bytes_acked",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, bytes_acked)),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -371,11 +371,11 @@
 	"bpf_tcp_sock(skb->sk): beyond last field",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, offsetofend(struct bpf_tcp_sock, bytes_acked)),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -389,15 +389,15 @@
 	"bpf_tcp_sock(bpf_sk_fullsock(skb->sk)): tp->snd_cwnd",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_tcp_sock, snd_cwnd)),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -410,7 +410,7 @@
 	"bpf_sk_release(skb->sk)",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -423,11 +423,11 @@
 	"bpf_sk_release(bpf_sk_fullsock(skb->sk))",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -442,11 +442,11 @@
 	"bpf_sk_release(bpf_tcp_sock(skb->sk))",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_tcp_sock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -461,11 +461,11 @@
 	"sk_storage_get(map, skb->sk, NULL, 0): value == NULL",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_4, 0),
@@ -484,11 +484,11 @@
 	"sk_storage_get(map, skb->sk, 1, 1): value == 1",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_4, 1),
@@ -510,11 +510,11 @@
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_2, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_4, 1),
@@ -536,11 +536,11 @@
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_2, -8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_EMIT_CALL(BPF_FUNC_sk_fullsock),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_4, 1),
@@ -581,7 +581,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_xdp_sock, queue_id)),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -631,7 +631,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)),
@@ -650,7 +650,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, offsetof(struct bpf_sock, type)),
@@ -710,7 +710,7 @@
 	"mark null check on return value of bpf_skc_to helpers",
 	.insns = {
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, offsetof(struct __sk_buff, sk)),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_1),
@@ -719,7 +719,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_EMIT_CALL(BPF_FUNC_skc_to_tcp_request_sock),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_8, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_8, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_7, 0),
diff --git a/tools/testing/selftests/bpf/verifier/spill_fill.c b/tools/testing/selftests/bpf/verifier/spill_fill.c
index 5b8d764..0c47475 100644
--- a/tools/testing/selftests/bpf/verifier/spill_fill.c
+++ b/tools/testing/selftests/bpf/verifier/spill_fill.c
@@ -40,7 +40,7 @@
 	/* store a pointer to the reserved memory in R6 */
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	/* check whether the reservation was successful */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	/* spill R6(mem) into the stack */
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_6, -8),
 	/* fill it back in R7 */
@@ -72,7 +72,7 @@
 	/* add invalid offset to memory or NULL */
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 1),
 	/* check whether the reservation was successful */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	/* should not be able to access *(R7) = 0 */
 	BPF_ST_MEM(BPF_W, BPF_REG_6, 0, 0),
 	/* submit the reserved ringbuf memory */
@@ -150,7 +150,7 @@
 	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=20 */
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
 	/* if (r0 > r3) R0=pkt,off=20 R2=pkt R3=pkt_end R4=20 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	/* r0 = *(u32 *)r2 R0=pkt,off=20,r=20 R2=pkt,r=20 R3=pkt_end R4=20 */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -193,7 +193,7 @@
 	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
 	/* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	/* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -225,7 +225,7 @@
 	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
 	/* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	/* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -253,7 +253,7 @@
 	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=65535 */
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
 	/* if (r0 > r3) R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=umax=65535 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	/* r0 = *(u32 *)r2 R0=pkt,umax=65535 R2=pkt R3=pkt_end R4=20 */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -283,7 +283,7 @@
 	/* r0 += r4 R0=pkt R2=pkt R3=pkt_end R4=umax=U32_MAX */
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_4),
 	/* if (r0 > r3) R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 1),
 	/* r0 = *(u32 *)r2 R0=pkt,umax=U32_MAX R2=pkt R3=pkt_end R4= */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -302,7 +302,7 @@
 		    offsetof(struct __sk_buff, data_end)),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_1,
 		    offsetof(struct __sk_buff, tstamp)),
-	BPF_JMP_IMM(BPF_JLE, BPF_REG_4, 40, 2),
+	BPF_JMP64_IMM(BPF_JLE, BPF_REG_4, 40, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	/* *(u32 *)(r10 -8) = r4 R4=umax=40 */
@@ -316,7 +316,7 @@
 	/* r2 += 20 R0=pkt,umax=40 R2=pkt,umax=40 */
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 20),
 	/* if (r2 > r3) R0=pkt,umax=40 R2=pkt,off=20,umax=40 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 1),
 	/* r0 = *(u32 *)r0 R0=pkt,r=20,umax=40 R2=pkt,off=20,r=20,umax=40 */
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
diff --git a/tools/testing/selftests/bpf/verifier/spin_lock.c b/tools/testing/selftests/bpf/verifier/spin_lock.c
index 8f24b17..016786e 100644
--- a/tools/testing/selftests/bpf/verifier/spin_lock.c
+++ b/tools/testing/selftests/bpf/verifier/spin_lock.c
@@ -7,7 +7,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -35,7 +35,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -64,7 +64,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -94,7 +94,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -124,7 +124,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -153,7 +153,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -162,7 +162,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_6, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_unlock),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -183,12 +183,12 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
@@ -213,7 +213,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -245,7 +245,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
@@ -253,7 +253,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
@@ -281,7 +281,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -313,7 +313,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -342,21 +342,21 @@
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_9),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -4),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_spin_lock),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
@@ -396,7 +396,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 24),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 24),
 	BPF_MOV64_REG(BPF_REG_9, BPF_REG_0),
 	/* r8 = map_lookup_elem(...) */
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
@@ -404,7 +404,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1,
 		      0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 18),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 18),
 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
 	/* r7 = ktime_get_ns() */
 	BPF_EMIT_CALL(BPF_FUNC_ktime_get_ns),
@@ -419,12 +419,12 @@
 	 * r9 = r8
 	 * goto unlock
 	 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 5),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 5),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
 	BPF_EMIT_CALL(BPF_FUNC_spin_lock),
 	BPF_MOV64_REG(BPF_REG_9, BPF_REG_8),
-	BPF_JMP_A(3),
+	BPF_JMP64_A(3),
 	/* spin_lock(r9) */
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_9),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 4),
diff --git a/tools/testing/selftests/bpf/verifier/stack_ptr.c b/tools/testing/selftests/bpf/verifier/stack_ptr.c
index 8ab94d6..fac4daa 100644
--- a/tools/testing/selftests/bpf/verifier/stack_ptr.c
+++ b/tools/testing/selftests/bpf/verifier/stack_ptr.c
@@ -302,7 +302,7 @@
 	"stack pointer arithmetic",
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 4),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 0),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 0),
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -10),
@@ -326,7 +326,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_0),
@@ -346,7 +346,7 @@
 	BPF_STX_MEM(BPF_B, BPF_REG_1, BPF_REG_9, 0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_3, BPF_REG_1, 0),
 	/* Should read back as same value. */
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_2, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_2, BPF_REG_3, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
diff --git a/tools/testing/selftests/bpf/verifier/uninit.c b/tools/testing/selftests/bpf/verifier/uninit.c
index 987a587..bde33e4 100644
--- a/tools/testing/selftests/bpf/verifier/uninit.c
+++ b/tools/testing/selftests/bpf/verifier/uninit.c
@@ -28,7 +28,7 @@
 {
 	"program doesn't init R0 before exit in all branches",
 	.insns = {
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, 0, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/unpriv.c b/tools/testing/selftests/bpf/verifier/unpriv.c
index e035e92..d14b292 100644
--- a/tools/testing/selftests/bpf/verifier/unpriv.c
+++ b/tools/testing/selftests/bpf/verifier/unpriv.c
@@ -42,7 +42,7 @@
 {
 	"unpriv: cmp pointer with const",
 	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -53,7 +53,7 @@
 {
 	"unpriv: cmp pointer with pointer",
 	.insns = {
-	BPF_JMP_REG(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
+	BPF_JMP64_REG(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -223,11 +223,11 @@
 	BPF_MOV64_IMM(BPF_REG_3, 42),
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16),
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
 	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3,
@@ -252,10 +252,10 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	/* if (skb == NULL) *target = sock; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
 	/* else *target = skb; */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	/* struct __sk_buff *skb = *target; */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
@@ -264,7 +264,7 @@
 	BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3,
 		    offsetof(struct __sk_buff, mark)),
 	/* if (sk) bpf_sk_release(sk) */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 		BPF_EMIT_CALL(BPF_FUNC_sk_release),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -286,10 +286,10 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	/* if (skb == NULL) *target = sock; */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
 	/* else *target = skb; */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	/* struct __sk_buff *skb = *target; */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
@@ -317,15 +317,15 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	/* if (skb) *target = skb */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	/* else *target = sock */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
 	/* struct bpf_sock *sk = *target; */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
 	/* if (sk) u32 foo = sk->mark; bpf_sk_release(sk); */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 2),
 		BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
 			    offsetof(struct bpf_sock, mark)),
 		BPF_EMIT_CALL(BPF_FUNC_sk_release),
@@ -349,15 +349,15 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	/* if (skb) *target = skb */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	/* else *target = sock */
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 		BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
 	/* struct bpf_sock *sk = *target; */
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
 	/* if (sk) sk->mark = 42; bpf_sk_release(sk); */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
 		BPF_MOV64_IMM(BPF_REG_3, 42),
 		BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_3,
 			    offsetof(struct bpf_sock, mark)),
@@ -375,13 +375,13 @@
 	.insns = {
 	BPF_ALU64_REG(BPF_MOV, BPF_REG_6, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2,
 		      -(__s32)offsetof(struct bpf_perf_event_data,
 				       sample_period) - 8),
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_2, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 	BPF_STX_MEM(BPF_DW, BPF_REG_6, BPF_REG_1, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_6, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1,
@@ -401,7 +401,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -416,7 +416,7 @@
 	BPF_MOV32_IMM(BPF_REG_7, 0),
 	BPF_ALU32_IMM(BPF_AND, BPF_REG_7, 1),
 	BPF_MOV32_REG(BPF_REG_0, BPF_REG_7),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_7, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -455,7 +455,7 @@
 	.insns = {
 	BPF_MOV64_IMM(BPF_REG_1, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -490,7 +490,7 @@
 {
 	"unpriv: cmp of frame pointer",
 	.insns = {
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_10, 0, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_10, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -529,7 +529,7 @@
 	.insns = {
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_2, 0, 0),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_2, 0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/value.c b/tools/testing/selftests/bpf/verifier/value.c
index 0e42592b..4e9e41d 100644
--- a/tools/testing/selftests/bpf/verifier/value.c
+++ b/tools/testing/selftests/bpf/verifier/value.c
@@ -6,7 +6,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_STX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -24,7 +24,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 17),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 17),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 3),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 2, 43),
@@ -58,9 +58,9 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES, 9),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES, 9),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 3),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_7, BPF_REG_0, 2),
@@ -86,7 +86,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, offsetof(struct test_val, foo)),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
diff --git a/tools/testing/selftests/bpf/verifier/value_adj_spill.c b/tools/testing/selftests/bpf/verifier/value_adj_spill.c
index 7135e80..32576ac 100644
--- a/tools/testing/selftests/bpf/verifier/value_adj_spill.c
+++ b/tools/testing/selftests/bpf/verifier/value_adj_spill.c
@@ -6,7 +6,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 42),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -184),
@@ -31,7 +31,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -152),
 	BPF_STX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_1, 0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_3, 0, 42),
 	BPF_EXIT_INSN(),
diff --git a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c b/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
index d6f29eb..0dc9c99 100644
--- a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
+++ b/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
@@ -6,7 +6,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_0, 8),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22),
 	BPF_EXIT_INSN(),
@@ -23,7 +23,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU32_IMM(BPF_ADD, BPF_REG_0, 0),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22),
 	BPF_EXIT_INSN(),
@@ -40,7 +40,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU64_IMM(BPF_DIV, BPF_REG_0, 42),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22),
 	BPF_EXIT_INSN(),
@@ -57,7 +57,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ENDIAN(BPF_FROM_BE, BPF_REG_0, 64),
 	BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 22),
 	BPF_EXIT_INSN(),
@@ -77,7 +77,7 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_2, 0, 0),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_IMM(BPF_REG_3, 4096),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
diff --git a/tools/testing/selftests/bpf/verifier/value_or_null.c b/tools/testing/selftests/bpf/verifier/value_or_null.c
index 1ea97759..6013cee 100644
--- a/tools/testing/selftests/bpf/verifier/value_or_null.c
+++ b/tools/testing/selftests/bpf/verifier/value_or_null.c
@@ -8,7 +8,7 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -28,7 +28,7 @@
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, -2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_4, 2),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -48,7 +48,7 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_4, -1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -68,7 +68,7 @@
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_4, 1),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -92,7 +92,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -113,12 +113,12 @@
 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_2),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_IMM(BPF_REG_2, 10),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 0, 3),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_8),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_4, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 1),
 	BPF_ST_MEM(BPF_DW, BPF_REG_4, 0, 0),
 	BPF_EXIT_INSN(),
 	},
@@ -134,9 +134,9 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES-1, 1),
+	BPF_JMP64_IMM(BPF_JGE, BPF_REG_1, MAX_ENTRIES-1, 1),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 1),
 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_1, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -160,8 +160,8 @@
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_6, 0, 2),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_6, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_6, 0, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_6, 0, 1),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_10, 10),
 	BPF_EXIT_INSN(),
 	},
@@ -198,10 +198,10 @@
 	 *                       ; only in 'insn_idx'
 	 * r9 = r8               ; optionally share ID between r9 and r8
 	 */
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_7, 1),
 	BPF_MOV64_REG(BPF_REG_9, BPF_REG_8),
 	/* if r9 == 0 goto <exit> */
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_9, 0, 1),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_9, 0, 1),
 	/* read map value via r8, this is not always
 	 * safe because r8 might be not equal to r9.
 	 */
diff --git a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
index af7a406..5ce7066 100644
--- a/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
+++ b/tools/testing/selftests/bpf/verifier/value_ptr_arith.c
@@ -6,18 +6,18 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_1, 3),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
@@ -39,16 +39,16 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 9),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
 	BPF_MOV64_IMM(BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
@@ -72,16 +72,16 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
 	BPF_MOV64_IMM(BPF_REG_1, 3),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_1, 5),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
@@ -103,16 +103,16 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 1, 2),
 	BPF_MOV64_IMM(BPF_REG_1, 5),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_MOV64_IMM(BPF_REG_1, 5),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
@@ -132,18 +132,18 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
@@ -165,18 +165,18 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x3),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
@@ -200,18 +200,18 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_LDX_MEM(BPF_B, BPF_REG_4, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_4, 1, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x7),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_ALU64_IMM(BPF_NEG, BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0x3),
@@ -235,12 +235,12 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
@@ -260,12 +260,12 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -288,12 +288,12 @@
 	BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 1, 3),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 1, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 1, 2),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
@@ -315,29 +315,29 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -16),
 	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	// load some number from the map into r1
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	// depending on r1, branch:
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 3),
 	// branch A
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
-	BPF_JMP_A(2),
+	BPF_JMP64_A(2),
 	// branch B
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 0x100000),
 	// common instruction
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
 	// depending on r1, branch:
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 	// branch A
-	BPF_JMP_A(4),
+	BPF_JMP64_A(4),
 	// branch B
 	BPF_MOV64_IMM(BPF_REG_0, 0x13371337),
 	// verifier follows fall-through
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0x100000, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 0x100000, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	// fake-dead code; targeted from branch A to
@@ -362,29 +362,29 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -16),
 	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	// load some number from the map into r1
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	// depending on r1, branch:
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
 	// branch A
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 0x100000),
-	BPF_JMP_A(2),
+	BPF_JMP64_A(2),
 	// branch B
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV64_IMM(BPF_REG_3, 0),
 	// common instruction
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
 	// depending on r1, branch:
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_1, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_1, 0, 1),
 	// branch A
-	BPF_JMP_A(4),
+	BPF_JMP64_A(4),
 	// branch B
 	BPF_MOV64_IMM(BPF_REG_0, 0x13371337),
 	// verifier follows fall-through
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_2, 0x100000, 2),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_2, 0x100000, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
 	// fake-dead code; targeted from branch A to
@@ -409,13 +409,13 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_ARG2, -16),
 	BPF_ST_MEM(BPF_DW, BPF_REG_FP, -16, 0),
 	BPF_EMIT_CALL(BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_1, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_2, 0),
 	BPF_MOV64_IMM(BPF_REG_3, 0x100000),
-	BPF_JMP_A(2),
+	BPF_JMP64_A(2),
 	BPF_MOV64_IMM(BPF_REG_2, 42),
 	BPF_MOV64_IMM(BPF_REG_3, 0x100001),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_3),
@@ -476,7 +476,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 48),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
@@ -498,7 +498,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 49),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
@@ -520,7 +520,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_MOV64_IMM(BPF_REG_1, 47),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
@@ -540,7 +540,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_IMM(BPF_REG_1, 47),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_IMM(BPF_REG_1, 48),
@@ -563,7 +563,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_IMM(BPF_REG_1, 47),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_IMM(BPF_REG_1, 48),
@@ -588,7 +588,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_IMM(BPF_REG_1, 47),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_IMM(BPF_REG_1, 47),
@@ -609,7 +609,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
@@ -628,7 +628,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -647,7 +647,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 49),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -666,7 +666,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, -1),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -685,7 +685,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 7),
 	BPF_MOV64_IMM(BPF_REG_1, 5),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_IMM(BPF_REG_1, -2),
@@ -708,7 +708,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, (6 + 1) * sizeof(int)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
@@ -726,7 +726,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_IMM(BPF_REG_1, (3 + 1) * sizeof(int)),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_IMM(BPF_REG_1, 3 * sizeof(int)),
@@ -746,7 +746,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV32_IMM(BPF_REG_1, 0x12345678),
 	BPF_STX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 0),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2),
@@ -767,7 +767,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
@@ -787,7 +787,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_0),
@@ -807,7 +807,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_MOV64_IMM(BPF_REG_1, -1),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_MOV64_IMM(BPF_REG_1, 1),
@@ -833,7 +833,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 6),
 	BPF_MOV64_IMM(BPF_REG_1, 19),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
@@ -856,7 +856,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -876,7 +876,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 31),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -896,20 +896,20 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 11),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_0, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_0, 8),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_3, BPF_REG_0, 16),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_3, 1),
 	BPF_ALU64_IMM(BPF_OR, BPF_REG_3, 1),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 4),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_2, BPF_REG_3, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_3),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 2),
-	BPF_JMP_IMM(BPF_JA, 0, 0, -3),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, -3),
 	},
 	.fixup_map_array_48b = { 3 },
 	.result = ACCEPT,
@@ -923,7 +923,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -941,7 +941,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_0, BPF_REG_1, 0),
@@ -960,7 +960,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 3),
 	BPF_MOV64_IMM(BPF_REG_1, 4),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
@@ -979,7 +979,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 5),
 	BPF_MOV64_IMM(BPF_REG_1, 6),
 	BPF_MOV64_IMM(BPF_REG_2, 4),
 	BPF_ALU64_REG(BPF_ADD, BPF_REG_0, BPF_REG_1),
@@ -1000,7 +1000,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_0),
@@ -1020,7 +1020,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 4),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_1),
@@ -1040,7 +1040,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 8),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_ALU64_IMM(BPF_AND, BPF_REG_1, 0xf),
 	BPF_ALU64_IMM(BPF_OR, BPF_REG_1, 0x7),
@@ -1066,7 +1066,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
+	BPF_JMP64_IMM(BPF_JEQ, BPF_REG_0, 0, 2),
 	BPF_ALU64_REG(BPF_SUB, BPF_REG_0, BPF_REG_0),
 	BPF_LDX_MEM(BPF_B, BPF_REG_1, BPF_REG_0, 0),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
@@ -1086,7 +1086,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_0),
 	BPF_MOV32_IMM(BPF_REG_1, 0xFFFFFFFF),
@@ -1109,7 +1109,7 @@
 		    offsetof(struct __sk_buff, data)),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 40),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_8, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_8, 2),
 	BPF_ALU32_REG(BPF_MOV, BPF_REG_4, BPF_REG_7),
 	BPF_ALU32_REG(BPF_SUB, BPF_REG_6, BPF_REG_4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
@@ -1128,7 +1128,7 @@
 		    offsetof(struct __sk_buff, data)),
 	BPF_MOV64_REG(BPF_REG_6, BPF_REG_7),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_6, 40),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_6, BPF_REG_8, 2),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_6, BPF_REG_8, 2),
 	BPF_ALU32_REG(BPF_MOV, BPF_REG_4, BPF_REG_6),
 	BPF_ALU32_REG(BPF_SUB, BPF_REG_4, BPF_REG_7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
diff --git a/tools/testing/selftests/bpf/verifier/var_off.c b/tools/testing/selftests/bpf/verifier/var_off.c
index 769b20f..97d7664 100644
--- a/tools/testing/selftests/bpf/verifier/var_off.c
+++ b/tools/testing/selftests/bpf/verifier/var_off.c
@@ -146,7 +146,7 @@
 	BPF_LDX_MEM(BPF_DW, BPF_REG_4, BPF_REG_1, offsetof(struct bpf_sock_ops,
 							   bytes_received)),
 	/* Check the lower bound but don't check the upper one. */
-	BPF_JMP_IMM(BPF_JSLT, BPF_REG_4, 0, 4),
+	BPF_JMP64_IMM(BPF_JSLT, BPF_REG_4, 0, 4),
 	/* Point the lower bound to initialized stack. Offset is now in range
 	 * from fp-16 to fp+0x7fffffffffffffef, i.e. max value is unbounded.
 	 */
diff --git a/tools/testing/selftests/bpf/verifier/xadd.c b/tools/testing/selftests/bpf/verifier/xadd.c
index 8ce0171..7c7a4f4 100644
--- a/tools/testing/selftests/bpf/verifier/xadd.c
+++ b/tools/testing/selftests/bpf/verifier/xadd.c
@@ -19,7 +19,7 @@
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
 	BPF_LD_MAP_FD(BPF_REG_1, 0),
 	BPF_RAW_INSN(BPF_JMP64 | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
-	BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
+	BPF_JMP64_IMM(BPF_JNE, BPF_REG_0, 0, 1),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_1, 1),
 	BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_0, BPF_REG_1, 3),
@@ -39,9 +39,9 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 2),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 2),
 	BPF_MOV64_IMM(BPF_REG_0, 99),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 6),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 6),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
 	BPF_ST_MEM(BPF_W, BPF_REG_2, 3, 0),
@@ -64,8 +64,8 @@
 	BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
 	BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_0, -8),
 	BPF_ATOMIC_OP(BPF_DW, BPF_ADD, BPF_REG_10, BPF_REG_0, -8),
-	BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_0, 3),
-	BPF_JMP_REG(BPF_JNE, BPF_REG_7, BPF_REG_10, 2),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_6, BPF_REG_0, 3),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_7, BPF_REG_10, 2),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
@@ -84,8 +84,8 @@
 	BPF_STX_MEM(BPF_W, BPF_REG_10, BPF_REG_0, -8),
 	BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_10, BPF_REG_0, -8),
 	BPF_ATOMIC_OP(BPF_W, BPF_ADD, BPF_REG_10, BPF_REG_0, -8),
-	BPF_JMP_REG(BPF_JNE, BPF_REG_6, BPF_REG_0, 3),
-	BPF_JMP_REG(BPF_JNE, BPF_REG_7, BPF_REG_10, 2),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_6, BPF_REG_0, 3),
+	BPF_JMP64_REG(BPF_JNE, BPF_REG_7, BPF_REG_10, 2),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_10, -8),
 	BPF_EXIT_INSN(),
 	BPF_MOV64_IMM(BPF_REG_0, 42),
diff --git a/tools/testing/selftests/bpf/verifier/xdp.c b/tools/testing/selftests/bpf/verifier/xdp.c
index 5ac3905..8d83b90 100644
--- a/tools/testing/selftests/bpf/verifier/xdp.c
+++ b/tools/testing/selftests/bpf/verifier/xdp.c
@@ -4,7 +4,7 @@
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
 		    offsetof(struct xdp_md, ingress_ifindex)),
-	BPF_JMP_IMM(BPF_JLT, BPF_REG_2, 1, 1),
+	BPF_JMP64_IMM(BPF_JLT, BPF_REG_2, 1, 1),
 	BPF_MOV64_IMM(BPF_REG_0, 1),
 	BPF_EXIT_INSN(),
 	},
diff --git a/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c b/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c
index b4ec228..404cc8b 100644
--- a/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c
+++ b/tools/testing/selftests/bpf/verifier/xdp_direct_packet_access.c
@@ -7,7 +7,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -25,7 +25,7 @@
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
 	BPF_ALU64_IMM(BPF_SUB, BPF_REG_3, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -42,7 +42,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -59,7 +59,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -77,7 +77,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 0),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -95,7 +95,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -112,7 +112,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -130,8 +130,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -148,8 +148,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -167,7 +167,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -185,8 +185,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -203,8 +203,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -221,8 +221,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -239,8 +239,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -258,7 +258,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -276,8 +276,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -294,8 +294,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -312,7 +312,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -329,7 +329,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -347,7 +347,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 0),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -365,7 +365,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -382,7 +382,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -400,7 +400,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -417,7 +417,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -435,7 +435,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 0),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -453,7 +453,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -470,7 +470,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -487,8 +487,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -505,8 +505,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -524,7 +524,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -542,8 +542,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -560,8 +560,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -579,8 +579,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -597,8 +597,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -616,7 +616,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -634,8 +634,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -652,8 +652,8 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -671,7 +671,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -688,7 +688,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -706,7 +706,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 0),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -724,7 +724,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -741,7 +741,7 @@
 		    offsetof(struct xdp_md, data_end)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -758,7 +758,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -775,7 +775,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -793,7 +793,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 0),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -811,7 +811,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -828,7 +828,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -846,8 +846,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -864,8 +864,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -883,7 +883,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -901,8 +901,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -919,8 +919,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -937,8 +937,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -955,8 +955,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -974,7 +974,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -992,8 +992,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1010,8 +1010,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1028,7 +1028,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1045,7 +1045,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1063,7 +1063,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 0),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 0),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1081,7 +1081,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1098,7 +1098,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLT, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1116,7 +1116,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1133,7 +1133,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1151,7 +1151,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 0),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1169,7 +1169,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1186,7 +1186,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1203,8 +1203,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1221,8 +1221,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1240,7 +1240,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1258,8 +1258,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1276,8 +1276,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1295,8 +1295,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1313,8 +1313,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -4),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1332,7 +1332,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1350,8 +1350,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 9),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -9),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1368,8 +1368,8 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
-	BPF_JMP_IMM(BPF_JA, 0, 0, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_1, BPF_REG_3, 1),
+	BPF_JMP64_IMM(BPF_JA, 0, 0, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1387,7 +1387,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1404,7 +1404,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -6),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1422,7 +1422,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 0),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 0),
 	BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -5),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1440,7 +1440,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 7),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -7),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
@@ -1457,7 +1457,7 @@
 	BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1, offsetof(struct xdp_md, data)),
 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
 	BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
-	BPF_JMP_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
+	BPF_JMP64_REG(BPF_JLE, BPF_REG_3, BPF_REG_1, 1),
 	BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, -8),
 	BPF_MOV64_IMM(BPF_REG_0, 0),
 	BPF_EXIT_INSN(),
-- 
2.1.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH bpf-next 4/4] bpf: Mark BPF_ALU and BPF_JMP as deprecated
  2023-02-01 12:36 [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Tiezhu Yang
                   ` (2 preceding siblings ...)
  2023-02-01 12:36 ` [PATCH bpf-next 3/4] bpf: treewide: Clean up BPF_ALU_* and BPF_JMP_* Tiezhu Yang
@ 2023-02-01 12:36 ` Tiezhu Yang
  2023-02-02 11:26 ` [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Alexei Starovoitov
  4 siblings, 0 replies; 9+ messages in thread
From: Tiezhu Yang @ 2023-02-01 12:36 UTC (permalink / raw)
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko; +Cc: bpf, linux-kernel

For now, BPF_ALU and BPF_JMP are not used by any kernel code, but we
cannot remove them outright because they are part of the uapi header
file, so just mark them as deprecated.
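
As a rough illustration of that uapi dependency, a classic BPF seccomp
filter of the kind that still exists in userspace might look like the
sketch below (a hypothetical, minimal example, not taken from any tree;
the syscall choice is arbitrary). Removing BPF_JMP from bpf_common.h
would break exactly this sort of long-standing code:

  #include <stddef.h>
  #include <sys/syscall.h>        /* __NR_exit */
  #include <linux/filter.h>       /* struct sock_filter, BPF_STMT, BPF_JUMP */
  #include <linux/seccomp.h>      /* struct seccomp_data, SECCOMP_RET_* */

  /* Allow only exit(2) and kill everything else.  The BPF_JMP class value
   * comes straight from the uapi header, which is why it cannot simply be
   * deleted even after the in-kernel users are gone.
   */
  static struct sock_filter filter[] = {
          BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                   offsetof(struct seccomp_data, nr)),
          BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit, 0, 1),
          BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
          BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
  };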

Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
---
 include/uapi/linux/bpf_common.h       | 4 ++--
 tools/include/uapi/linux/bpf_common.h | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/uapi/linux/bpf_common.h b/include/uapi/linux/bpf_common.h
index ee97668..75ae3dd 100644
--- a/include/uapi/linux/bpf_common.h
+++ b/include/uapi/linux/bpf_common.h
@@ -8,8 +8,8 @@
 #define		BPF_LDX		0x01
 #define		BPF_ST		0x02
 #define		BPF_STX		0x03
-#define		BPF_ALU		0x04
-#define		BPF_JMP		0x05
+#define		BPF_ALU		0x04 /* deprecated */
+#define		BPF_JMP		0x05 /* deprecated */
 #define		BPF_RET		0x06
 #define		BPF_MISC        0x07
 
diff --git a/tools/include/uapi/linux/bpf_common.h b/tools/include/uapi/linux/bpf_common.h
index ee97668..75ae3dd 100644
--- a/tools/include/uapi/linux/bpf_common.h
+++ b/tools/include/uapi/linux/bpf_common.h
@@ -8,8 +8,8 @@
 #define		BPF_LDX		0x01
 #define		BPF_ST		0x02
 #define		BPF_STX		0x03
-#define		BPF_ALU		0x04
-#define		BPF_JMP		0x05
+#define		BPF_ALU		0x04 /* deprecated */
+#define		BPF_JMP		0x05 /* deprecated */
 #define		BPF_RET		0x06
 #define		BPF_MISC        0x07
 
-- 
2.1.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* RE: [PATCH bpf-next 1/4] bpf: Add new macro BPF_ALU32 and BPF_JMP64
  2023-02-01 12:36 ` [PATCH bpf-next 1/4] bpf: Add new macro " Tiezhu Yang
@ 2023-02-01 14:59   ` Dave Thaler
  0 siblings, 0 replies; 9+ messages in thread
From: Dave Thaler @ 2023-02-01 14:59 UTC (permalink / raw)
  To: Tiezhu Yang, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: bpf, linux-kernel, bpf

Forwarding this to the bpf@ietf.org list for visibility, since the instruction-set.rst
change would affect the Internet Draft proposed for cross-platform standardization.

> -----Original Message-----
> From: Tiezhu Yang <yangtiezhu@loongson.cn>
> Sent: Wednesday, February 1, 2023 4:37 AM
> To: Alexei Starovoitov <ast@kernel.org>; Daniel Borkmann
> <daniel@iogearbox.net>; Andrii Nakryiko <andrii@kernel.org>
> Cc: bpf@vger.kernel.org; linux-kernel@vger.kernel.org
> Subject: [PATCH bpf-next 1/4] bpf: Add new macro BPF_ALU32 and
> BPF_JMP64
> 
> In the current code, BPF_ALU actually means BPF_ALU32 and BPF_JMP
> actually means BPF_JMP64, which is a little confusing at first glance.
> Add the new macros BPF_ALU32 and BPF_JMP64 so that the ambiguous
> macros BPF_ALU and BPF_JMP can be replaced with them step by step,
> and BPF_ALU and BPF_JMP can eventually be removed from the uapi
> header file some day.
> 
> Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
> ---
>  include/uapi/linux/bpf.h       | 2 ++
>  tools/include/uapi/linux/bpf.h | 2 ++
>  2 files changed, 4 insertions(+)
> 
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h index
> ba0f0cf..a118c43 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -15,6 +15,8 @@
> 
>  /* instruction classes */
>  #define BPF_JMP32	0x06	/* jmp mode in word width */
> +#define BPF_JMP64	0x05	/* jmp mode in double word width */
> +#define BPF_ALU32	0x04	/* alu mode in word width */
>  #define BPF_ALU64	0x07	/* alu mode in double word width */
> 
>  /* ld/ldx fields */
> diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
> index 7f024ac..014b449 100644
> --- a/tools/include/uapi/linux/bpf.h
> +++ b/tools/include/uapi/linux/bpf.h
> @@ -15,6 +15,8 @@
> 
>  /* instruction classes */
>  #define BPF_JMP32	0x06	/* jmp mode in word width */
> +#define BPF_JMP64	0x05	/* jmp mode in double word width */
> +#define BPF_ALU32	0x04	/* alu mode in word width */
>  #define BPF_ALU64	0x07	/* alu mode in double word width */
> 
>  /* ld/ldx fields */
> --
> 2.1.0
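
To make the aliasing concrete, a small hypothetical snippet (not part of
the patch, and assuming a tree with this change applied) shows that the
new spellings encode exactly the same opcode byte as the old ones:

  #include <linux/bpf.h>          /* also pulls in linux/bpf_common.h */

  /* BPF_JMP64 is just the old BPF_JMP class value (0x05) under a clearer
   * name, and BPF_ALU32 is the old BPF_ALU value (0x04), so opcodes built
   * either way are bit-for-bit identical.
   */
  static const unsigned char old_jeq = BPF_JMP   | BPF_JEQ | BPF_K;  /* 0x15 */
  static const unsigned char new_jeq = BPF_JMP64 | BPF_JEQ | BPF_K;  /* 0x15 */
  static const unsigned char old_add = BPF_ALU   | BPF_ADD | BPF_K;  /* 0x04 */
  static const unsigned char new_add = BPF_ALU32 | BPF_ADD | BPF_K;  /* 0x04 */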


^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next 2/4] bpf: treewide: Clean up BPF_ALU and BPF_JMP
  2023-02-01 12:36 ` [PATCH bpf-next 2/4] bpf: treewide: Clean up BPF_ALU and BPF_JMP Tiezhu Yang
@ 2023-02-02  4:01   ` kernel test robot
  0 siblings, 0 replies; 9+ messages in thread
From: kernel test robot @ 2023-02-02  4:01 UTC (permalink / raw)
  To: Tiezhu Yang, Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: llvm, oe-kbuild-all, bpf, linux-kernel

Hi Tiezhu,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on bpf-next/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Tiezhu-Yang/bpf-Add-new-macro-BPF_ALU32-and-BPF_JMP64/20230201-203836
base:   https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git master
patch link:    https://lore.kernel.org/r/1675254998-4951-3-git-send-email-yangtiezhu%40loongson.cn
patch subject: [PATCH bpf-next 2/4] bpf: treewide: Clean up BPF_ALU and BPF_JMP
config: i386-randconfig-a016-20230130 (https://download.01.org/0day-ci/archive/20230202/202302021105.HU4McFMJ-lkp@intel.com/config)
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/9955f957cb218f711161ac710656be1164eaa3a3
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Tiezhu-Yang/bpf-Add-new-macro-BPF_ALU32-and-BPF_JMP64/20230201-203836
        git checkout 9955f957cb218f711161ac710656be1164eaa3a3
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash samples/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> samples/seccomp/bpf-fancy.c:38:3: error: use of undeclared identifier 'BPF_JMP64'
                   SYSCALL(__NR_exit, ALLOW),
                   ^
   samples/seccomp/bpf-helper.h:56:11: note: expanded from macro 'SYSCALL'
           BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (nr), 0, 1), \
                    ^
   samples/seccomp/bpf-fancy.c:39:3: error: use of undeclared identifier 'BPF_JMP64'
                   SYSCALL(__NR_exit_group, ALLOW),
                   ^
   samples/seccomp/bpf-helper.h:56:11: note: expanded from macro 'SYSCALL'
           BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (nr), 0, 1), \
                    ^
   samples/seccomp/bpf-fancy.c:40:3: error: use of undeclared identifier 'BPF_JMP64'
                   SYSCALL(__NR_write, JUMP(&l, write_fd)),
                   ^
   samples/seccomp/bpf-helper.h:56:11: note: expanded from macro 'SYSCALL'
           BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (nr), 0, 1), \
                    ^
   samples/seccomp/bpf-fancy.c:40:23: error: use of undeclared identifier 'BPF_JMP64'
                   SYSCALL(__NR_write, JUMP(&l, write_fd)),
                                       ^
   samples/seccomp/bpf-helper.h:50:11: note: expanded from macro 'JUMP'
           BPF_JUMP(BPF_JMP64+BPF_JA, FIND_LABEL((labels), (label)), \
                    ^
   samples/seccomp/bpf-fancy.c:41:3: error: use of undeclared identifier 'BPF_JMP64'
                   SYSCALL(__NR_read, JUMP(&l, read)),
                   ^
   samples/seccomp/bpf-helper.h:56:11: note: expanded from macro 'SYSCALL'
           BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (nr), 0, 1), \
                    ^
   samples/seccomp/bpf-fancy.c:41:22: error: use of undeclared identifier 'BPF_JMP64'
                   SYSCALL(__NR_read, JUMP(&l, read)),
                                      ^
   samples/seccomp/bpf-helper.h:50:11: note: expanded from macro 'JUMP'
           BPF_JUMP(BPF_JMP64+BPF_JA, FIND_LABEL((labels), (label)), \
                    ^
   samples/seccomp/bpf-fancy.c:44:3: error: use of undeclared identifier 'BPF_JMP64'
                   LABEL(&l, read),
                   ^
   samples/seccomp/bpf-helper.h:53:11: note: expanded from macro 'LABEL'
           BPF_JUMP(BPF_JMP64+BPF_JA, FIND_LABEL((labels), (label)), \
                    ^
   samples/seccomp/bpf-fancy.c:46:3: error: use of undeclared identifier 'BPF_JMP64'
                   JNE(STDIN_FILENO, DENY),
                   ^
   samples/seccomp/bpf-helper.h:77:20: note: expanded from macro 'JNE'
   #define JNE(x, jt) JNE32(x, EXPAND(jt))
                      ^
   samples/seccomp/bpf-helper.h:154:11: note: expanded from macro 'JNE32'
           BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (value), 1, 0), \
                    ^
   samples/seccomp/bpf-fancy.c:48:3: error: use of undeclared identifier 'BPF_JMP64'
                   JNE((unsigned long)buf, DENY),
                   ^
   samples/seccomp/bpf-helper.h:77:20: note: expanded from macro 'JNE'
   #define JNE(x, jt) JNE32(x, EXPAND(jt))
                      ^
   samples/seccomp/bpf-helper.h:154:11: note: expanded from macro 'JNE32'
           BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (value), 1, 0), \
                    ^
   samples/seccomp/bpf-fancy.c:50:3: error: use of undeclared identifier 'BPF_JMP64'
                   JGE(sizeof(buf), DENY),
                   ^
   samples/seccomp/bpf-helper.h:80:20: note: expanded from macro 'JGE'
   #define JGE(x, jt) JGE32(x, EXPAND(jt))
                      ^
   samples/seccomp/bpf-helper.h:162:11: note: expanded from macro 'JGE32'
           BPF_JUMP(BPF_JMP64+BPF_JGE+BPF_K, (value), 0, 1), \
                    ^
   samples/seccomp/bpf-fancy.c:53:3: error: use of undeclared identifier 'BPF_JMP64'
                   LABEL(&l, write_fd),
                   ^
   samples/seccomp/bpf-helper.h:53:11: note: expanded from macro 'LABEL'
           BPF_JUMP(BPF_JMP64+BPF_JA, FIND_LABEL((labels), (label)), \
                    ^
   samples/seccomp/bpf-fancy.c:55:3: error: use of undeclared identifier 'BPF_JMP64'
                   JEQ(STDOUT_FILENO, JUMP(&l, write_buf)),
                   ^
   samples/seccomp/bpf-helper.h:76:20: note: expanded from macro 'JEQ'
   #define JEQ(x, jt) JEQ32(x, EXPAND(jt))
                      ^
   samples/seccomp/bpf-helper.h:150:11: note: expanded from macro 'JEQ32'
           BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (value), 0, 1), \
                    ^
   samples/seccomp/bpf-fancy.c:55:22: error: use of undeclared identifier 'BPF_JMP64'
                   JEQ(STDOUT_FILENO, JUMP(&l, write_buf)),
                                      ^
   samples/seccomp/bpf-helper.h:50:11: note: expanded from macro 'JUMP'
           BPF_JUMP(BPF_JMP64+BPF_JA, FIND_LABEL((labels), (label)), \
                    ^
   samples/seccomp/bpf-fancy.c:56:3: error: use of undeclared identifier 'BPF_JMP64'
                   JEQ(STDERR_FILENO, JUMP(&l, write_buf)),
                   ^
   samples/seccomp/bpf-helper.h:76:20: note: expanded from macro 'JEQ'
   #define JEQ(x, jt) JEQ32(x, EXPAND(jt))
                      ^
   samples/seccomp/bpf-helper.h:150:11: note: expanded from macro 'JEQ32'
           BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, (value), 0, 1), \
                    ^
   samples/seccomp/bpf-fancy.c:56:22: error: use of undeclared identifier 'BPF_JMP64'
                   JEQ(STDERR_FILENO, JUMP(&l, write_buf)),
--
>> samples/seccomp/bpf-helper.c:33:23: error: use of undeclared identifier 'BPF_JMP64'
                   if (instr->code != (BPF_JMP64+BPF_JA))
                                       ^
   1 error generated.
--
>> samples/seccomp/dropper.c:33:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, arch, 0, 3),
                            ^
   samples/seccomp/dropper.c:36:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, nr, 0, 1),
                            ^
>> samples/seccomp/dropper.c:42:33: error: invalid application of 'sizeof' to an incomplete type 'struct sock_filter[]'
                   .len = (unsigned short)(sizeof(filter)/sizeof(filter[0])),
                                                 ^~~~~~~~
   3 errors generated.
--
>> samples/seccomp/bpf-direct.c:117:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_rt_sigreturn, 0, 1),
                            ^
   samples/seccomp/bpf-direct.c:120:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_sigreturn, 0, 1),
                            ^
   samples/seccomp/bpf-direct.c:123:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_exit_group, 0, 1),
                            ^
   samples/seccomp/bpf-direct.c:125:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_exit, 0, 1),
                            ^
   samples/seccomp/bpf-direct.c:127:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_read, 1, 0),
                            ^
   samples/seccomp/bpf-direct.c:128:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, __NR_write, 3, 2),
                            ^
   samples/seccomp/bpf-direct.c:132:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, STDIN_FILENO, 4, 0),
                            ^
   samples/seccomp/bpf-direct.c:137:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, STDOUT_FILENO, 1, 0),
                            ^
   samples/seccomp/bpf-direct.c:139:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, STDERR_FILENO, 1, 2),
                            ^
>> samples/seccomp/bpf-direct.c:146:33: error: invalid application of 'sizeof' to an incomplete type 'struct sock_filter[]'
                   .len = (unsigned short)(sizeof(filter)/sizeof(filter[0])),
                                                 ^~~~~~~~
   10 errors generated.
--
>> samples/seccomp/user-trap.c:91:12: error: use of undeclared identifier 'BPF_JMP64'
                   BPF_JUMP(BPF_JMP64+BPF_JEQ+BPF_K, nr, 0, 1),
                            ^
>> samples/seccomp/user-trap.c:97:26: error: invalid application of 'sizeof' to an incomplete type 'struct sock_filter[]'
                   .len = (unsigned short)ARRAY_SIZE(filter),
                                          ^~~~~~~~~~~~~~~~~~
   samples/seccomp/user-trap.c:24:30: note: expanded from macro 'ARRAY_SIZE'
   #define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x)))
                                ^~~
   2 errors generated.


vim +/BPF_JMP64 +38 samples/seccomp/bpf-fancy.c

8ac270d1e29f042 Will Drewry 2012-04-12  26  
8ac270d1e29f042 Will Drewry 2012-04-12  27  int main(int argc, char **argv)
8ac270d1e29f042 Will Drewry 2012-04-12  28  {
3a9af0bd34410a2 Kees Cook   2015-02-17  29  	struct bpf_labels l = {
3a9af0bd34410a2 Kees Cook   2015-02-17  30  		.count = 0,
3a9af0bd34410a2 Kees Cook   2015-02-17  31  	};
8ac270d1e29f042 Will Drewry 2012-04-12  32  	static const char msg1[] = "Please type something: ";
8ac270d1e29f042 Will Drewry 2012-04-12  33  	static const char msg2[] = "You typed: ";
8ac270d1e29f042 Will Drewry 2012-04-12  34  	char buf[256];
8ac270d1e29f042 Will Drewry 2012-04-12  35  	struct sock_filter filter[] = {
8ac270d1e29f042 Will Drewry 2012-04-12  36  		/* TODO: LOAD_SYSCALL_NR(arch) and enforce an arch */
8ac270d1e29f042 Will Drewry 2012-04-12  37  		LOAD_SYSCALL_NR,
8ac270d1e29f042 Will Drewry 2012-04-12 @38  		SYSCALL(__NR_exit, ALLOW),

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests
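
One hedged reading of the failures above: the seccomp samples are classic
BPF userspace code that only sees linux/bpf_common.h (via linux/filter.h),
while patch 1/4 adds BPF_JMP64 to linux/bpf.h only, so the new name is
never declared there.  A purely hypothetical local shim that would let
such code build (not something this series proposes) could look like:

  /* Hypothetical compatibility define -- not part of the posted series. */
  #ifndef BPF_JMP64
  #define BPF_JMP64	0x05	/* same class value as the legacy BPF_JMP */
  #endif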

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64
  2023-02-01 12:36 [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Tiezhu Yang
                   ` (3 preceding siblings ...)
  2023-02-01 12:36 ` [PATCH bpf-next 4/4] bpf: Mark BPF_ALU and BPF_JMP as deprecated Tiezhu Yang
@ 2023-02-02 11:26 ` Alexei Starovoitov
  2023-02-02 12:55   ` Tiezhu Yang
  4 siblings, 1 reply; 9+ messages in thread
From: Alexei Starovoitov @ 2023-02-02 11:26 UTC (permalink / raw)
  To: Tiezhu Yang
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf, LKML

On Wed, Feb 1, 2023 at 4:36 AM Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>
> The intention of this patchset is to make the code more readable,
> no functional changes, based on bpf-next.
>
> If this patchset makes no sense, please ignore it and sorry for that.
...
>  157 files changed, 4299 insertions(+), 4295 deletions(-)

Are you trying to get to the top of lwn's "most active developers by
lines changed" ?
I'm sure you knew that it's most likely going to be rejected,
yet you still sent it. why?
Your developer's reputation suffers.
Think quality and not quantity.

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64
  2023-02-02 11:26 ` [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Alexei Starovoitov
@ 2023-02-02 12:55   ` Tiezhu Yang
  0 siblings, 0 replies; 9+ messages in thread
From: Tiezhu Yang @ 2023-02-02 12:55 UTC (permalink / raw)
  To: Alexei Starovoitov
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko, bpf, LKML



On 02/02/2023 07:26 PM, Alexei Starovoitov wrote:
> On Wed, Feb 1, 2023 at 4:36 AM Tiezhu Yang <yangtiezhu@loongson.cn> wrote:
>>
>> The intention of this patchset is to make the code more readable,
>> no functional changes, based on bpf-next.
>>
>> If this patchset makes no sense, please ignore it and sorry for that.
> ...
>>  157 files changed, 4299 insertions(+), 4295 deletions(-)
>
> Are you trying to get to the top of lwn's "most active developers by
> lines changed" ?

Oh, no, no intention of doing so.

> I'm sure you knew that it's most likely going to be rejected,
> yet you still sent it. why?

Sorry for the trouble.
Maybe sending just patches #1 and #4 as an RFC would be a proper way.

> Your developer's reputation suffers.
> Think quality and not quantity.
>

Anyway, thank you for your reply.


^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2023-02-02 12:55 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-02-01 12:36 [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Tiezhu Yang
2023-02-01 12:36 ` [PATCH bpf-next 1/4] bpf: Add new macro " Tiezhu Yang
2023-02-01 14:59   ` Dave Thaler
2023-02-01 12:36 ` [PATCH bpf-next 2/4] bpf: treewide: Clean up BPF_ALU and BPF_JMP Tiezhu Yang
2023-02-02  4:01   ` kernel test robot
2023-02-01 12:36 ` [PATCH bpf-next 3/4] bpf: treewide: Clean up BPF_ALU_* and BPF_JMP_* Tiezhu Yang
2023-02-01 12:36 ` [PATCH bpf-next 4/4] bpf: Mark BPF_ALU and BPF_JMP as deprecated Tiezhu Yang
2023-02-02 11:26 ` [PATCH bpf-next 0/4] bpf: Replace BPF_ALU and BPF_JMP with BPF_ALU32 and BPF_JMP64 Alexei Starovoitov
2023-02-02 12:55   ` Tiezhu Yang
