From: Alistair Francis <alistair23@gmail.com> To: LIU Zhiwei <zhiwei_liu@c-sky.com> Cc: guoren@linux.alibaba.com, "open list:RISC-V" <qemu-riscv@nongnu.org>, Richard Henderson <richard.henderson@linaro.org>, "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>, wxy194768@alibaba-inc.com, Chih-Min Chao <chihmin.chao@sifive.com>, wenmeng_zhang@c-sky.com, Palmer Dabbelt <palmer@dabbelt.com> Subject: Re: [PATCH v6 16/61] target/riscv: vector integer comparison instructions Date: Wed, 25 Mar 2020 10:32:10 -0700 [thread overview] Message-ID: <CAKmqyKNdZyTiBhW+uNbyaVxmpO4GsUFCXu7titVzAkx8OvAzOQ@mail.gmail.com> (raw) In-Reply-To: <20200317150653.9008-17-zhiwei_liu@c-sky.com> On Tue, Mar 17, 2020 at 8:39 AM LIU Zhiwei <zhiwei_liu@c-sky.com> wrote: > > Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com> > Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Alistair > --- > target/riscv/helper.h | 57 +++++++++++ > target/riscv/insn32.decode | 20 ++++ > target/riscv/insn_trans/trans_rvv.inc.c | 45 +++++++++ > target/riscv/vector_helper.c | 129 ++++++++++++++++++++++++ > 4 files changed, 251 insertions(+) > > diff --git a/target/riscv/helper.h b/target/riscv/helper.h > index 0f36a8ce43..4e6c47c2d2 100644 > --- a/target/riscv/helper.h > +++ b/target/riscv/helper.h > @@ -435,3 +435,60 @@ DEF_HELPER_6(vnsrl_vx_w, void, ptr, ptr, tl, ptr, env, i32) > DEF_HELPER_6(vnsra_vx_b, void, ptr, ptr, tl, ptr, env, i32) > DEF_HELPER_6(vnsra_vx_h, void, ptr, ptr, tl, ptr, env, i32) > DEF_HELPER_6(vnsra_vx_w, void, ptr, ptr, tl, ptr, env, i32) > + > +DEF_HELPER_6(vmseq_vv_b, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmseq_vv_h, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmseq_vv_w, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmseq_vv_d, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsne_vv_b, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsne_vv_h, void, ptr, ptr, ptr, ptr, env, i32) > 
+DEF_HELPER_6(vmsne_vv_w, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsne_vv_d, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsltu_vv_b, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsltu_vv_h, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsltu_vv_w, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsltu_vv_d, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmslt_vv_b, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmslt_vv_h, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmslt_vv_w, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmslt_vv_d, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsleu_vv_b, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsleu_vv_h, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsleu_vv_w, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsleu_vv_d, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsle_vv_b, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsle_vv_h, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsle_vv_w, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmsle_vv_d, void, ptr, ptr, ptr, ptr, env, i32) > +DEF_HELPER_6(vmseq_vx_b, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmseq_vx_h, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmseq_vx_w, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmseq_vx_d, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsne_vx_b, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsne_vx_h, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsne_vx_w, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsne_vx_d, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsltu_vx_b, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsltu_vx_h, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsltu_vx_w, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsltu_vx_d, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmslt_vx_b, void, ptr, ptr, tl, ptr, env, i32) > 
+DEF_HELPER_6(vmslt_vx_h, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmslt_vx_w, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmslt_vx_d, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsleu_vx_b, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsleu_vx_h, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsleu_vx_w, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsleu_vx_d, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsle_vx_b, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsle_vx_h, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsle_vx_w, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsle_vx_d, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsgtu_vx_b, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsgtu_vx_h, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsgtu_vx_w, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsgtu_vx_d, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsgt_vx_b, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsgt_vx_h, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsgt_vx_w, void, ptr, ptr, tl, ptr, env, i32) > +DEF_HELPER_6(vmsgt_vx_d, void, ptr, ptr, tl, ptr, env, i32) > diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode > index 89fd2aa4e2..df6181980d 100644 > --- a/target/riscv/insn32.decode > +++ b/target/riscv/insn32.decode > @@ -335,6 +335,26 @@ vnsrl_vi 101100 . ..... ..... 011 ..... 1010111 @r_vm > vnsra_vv 101101 . ..... ..... 000 ..... 1010111 @r_vm > vnsra_vx 101101 . ..... ..... 100 ..... 1010111 @r_vm > vnsra_vi 101101 . ..... ..... 011 ..... 1010111 @r_vm > +vmseq_vv 011000 . ..... ..... 000 ..... 1010111 @r_vm > +vmseq_vx 011000 . ..... ..... 100 ..... 1010111 @r_vm > +vmseq_vi 011000 . ..... ..... 011 ..... 1010111 @r_vm > +vmsne_vv 011001 . ..... ..... 000 ..... 1010111 @r_vm > +vmsne_vx 011001 . ..... ..... 100 ..... 1010111 @r_vm > +vmsne_vi 011001 . ..... ..... 011 ..... 1010111 @r_vm > +vmsltu_vv 011010 . ..... 
..... 000 ..... 1010111 @r_vm > +vmsltu_vx 011010 . ..... ..... 100 ..... 1010111 @r_vm > +vmslt_vv 011011 . ..... ..... 000 ..... 1010111 @r_vm > +vmslt_vx 011011 . ..... ..... 100 ..... 1010111 @r_vm > +vmsleu_vv 011100 . ..... ..... 000 ..... 1010111 @r_vm > +vmsleu_vx 011100 . ..... ..... 100 ..... 1010111 @r_vm > +vmsleu_vi 011100 . ..... ..... 011 ..... 1010111 @r_vm > +vmsle_vv 011101 . ..... ..... 000 ..... 1010111 @r_vm > +vmsle_vx 011101 . ..... ..... 100 ..... 1010111 @r_vm > +vmsle_vi 011101 . ..... ..... 011 ..... 1010111 @r_vm > +vmsgtu_vx 011110 . ..... ..... 100 ..... 1010111 @r_vm > +vmsgtu_vi 011110 . ..... ..... 011 ..... 1010111 @r_vm > +vmsgt_vx 011111 . ..... ..... 100 ..... 1010111 @r_vm > +vmsgt_vi 011111 . ..... ..... 011 ..... 1010111 @r_vm > > vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm > vsetvl 1000000 ..... ..... 111 ..... 1010111 @r > diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c > index a537b507a0..53c00d914f 100644 > --- a/target/riscv/insn_trans/trans_rvv.inc.c > +++ b/target/riscv/insn_trans/trans_rvv.inc.c > @@ -1397,3 +1397,48 @@ static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \ > > GEN_OPIVI_NARROW_TRANS(vnsra_vi, 1, vnsra_vx) > GEN_OPIVI_NARROW_TRANS(vnsrl_vi, 1, vnsrl_vx) > + > +/* Vector Integer Comparison Instructions */ > +/* > + * For all comparison instructions, an illegal instruction exception is raised > + * if the destination vector register overlaps a source vector register group > + * and LMUL > 1. 
> + */ > +static bool opivv_cmp_check(DisasContext *s, arg_rmrr *a) > +{ > + return (vext_check_isa_ill(s) && > + vext_check_reg(s, a->rs2, false) && > + vext_check_reg(s, a->rs1, false) && > + ((vext_check_overlap_group(a->rd, 1, a->rs1, 1 << s->lmul) && > + vext_check_overlap_group(a->rd, 1, a->rs2, 1 << s->lmul)) || > + (s->lmul == 0))); > +} > +GEN_OPIVV_TRANS(vmseq_vv, opivv_cmp_check) > +GEN_OPIVV_TRANS(vmsne_vv, opivv_cmp_check) > +GEN_OPIVV_TRANS(vmsltu_vv, opivv_cmp_check) > +GEN_OPIVV_TRANS(vmslt_vv, opivv_cmp_check) > +GEN_OPIVV_TRANS(vmsleu_vv, opivv_cmp_check) > +GEN_OPIVV_TRANS(vmsle_vv, opivv_cmp_check) > + > +static bool opivx_cmp_check(DisasContext *s, arg_rmrr *a) > +{ > + return (vext_check_isa_ill(s) && > + vext_check_reg(s, a->rs2, false) && > + (vext_check_overlap_group(a->rd, 1, a->rs2, 1 << s->lmul) || > + (s->lmul == 0))); > +} > +GEN_OPIVX_TRANS(vmseq_vx, opivx_cmp_check) > +GEN_OPIVX_TRANS(vmsne_vx, opivx_cmp_check) > +GEN_OPIVX_TRANS(vmsltu_vx, opivx_cmp_check) > +GEN_OPIVX_TRANS(vmslt_vx, opivx_cmp_check) > +GEN_OPIVX_TRANS(vmsleu_vx, opivx_cmp_check) > +GEN_OPIVX_TRANS(vmsle_vx, opivx_cmp_check) > +GEN_OPIVX_TRANS(vmsgtu_vx, opivx_cmp_check) > +GEN_OPIVX_TRANS(vmsgt_vx, opivx_cmp_check) > + > +GEN_OPIVI_TRANS(vmseq_vi, 0, vmseq_vx, opivx_cmp_check) > +GEN_OPIVI_TRANS(vmsne_vi, 0, vmsne_vx, opivx_cmp_check) > +GEN_OPIVI_TRANS(vmsleu_vi, 1, vmsleu_vx, opivx_cmp_check) > +GEN_OPIVI_TRANS(vmsle_vi, 0, vmsle_vx, opivx_cmp_check) > +GEN_OPIVI_TRANS(vmsgtu_vi, 1, vmsgtu_vx, opivx_cmp_check) > +GEN_OPIVI_TRANS(vmsgt_vi, 0, vmsgt_vx, opivx_cmp_check) > diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c > index 8d1f32a7ff..d1fc543e98 100644 > --- a/target/riscv/vector_helper.c > +++ b/target/riscv/vector_helper.c > @@ -1385,3 +1385,132 @@ GEN_VEXT_SHIFT_VX(vnsrl_vx_w, uint32_t, uint64_t, H4, H8, DO_SRL, 0x3f, clearl) > GEN_VEXT_SHIFT_VX(vnsra_vx_b, int8_t, int16_t, H1, H2, DO_SRL, 0xf, clearb) > 
GEN_VEXT_SHIFT_VX(vnsra_vx_h, int16_t, int32_t, H2, H4, DO_SRL, 0x1f, clearh) > GEN_VEXT_SHIFT_VX(vnsra_vx_w, int32_t, int64_t, H4, H8, DO_SRL, 0x3f, clearl) > + > +/* Vector Integer Comparison Instructions */ > +#define DO_MSEQ(N, M) (N == M) > +#define DO_MSNE(N, M) (N != M) > +#define DO_MSLT(N, M) (N < M) > +#define DO_MSLE(N, M) (N <= M) > +#define DO_MSGT(N, M) (N > M) > + > +#define GEN_VEXT_CMP_VV(NAME, ETYPE, H, DO_OP) \ > +void HELPER(NAME)(void *vd, void *v0, void *vs1, void *vs2, \ > + CPURISCVState *env, uint32_t desc) \ > +{ \ > + uint32_t mlen = vext_mlen(desc); \ > + uint32_t vm = vext_vm(desc); \ > + uint32_t vl = env->vl; \ > + uint32_t vlmax = vext_maxsz(desc) / sizeof(ETYPE); \ > + uint32_t i; \ > + \ > + if (vl == 0) { \ > + return; \ > + } \ > + for (i = 0; i < vl; i++) { \ > + ETYPE s1 = *((ETYPE *)vs1 + H(i)); \ > + ETYPE s2 = *((ETYPE *)vs2 + H(i)); \ > + if (!vm && !vext_elem_mask(v0, mlen, i)) { \ > + continue; \ > + } \ > + vext_set_elem_mask(vd, mlen, i, DO_OP(s2, s1)); \ > + } \ > + for (; i < vlmax; i++) { \ > + vext_set_elem_mask(vd, mlen, i, 0); \ > + } \ > +} > + > +GEN_VEXT_CMP_VV(vmseq_vv_b, uint8_t, H1, DO_MSEQ) > +GEN_VEXT_CMP_VV(vmseq_vv_h, uint16_t, H2, DO_MSEQ) > +GEN_VEXT_CMP_VV(vmseq_vv_w, uint32_t, H4, DO_MSEQ) > +GEN_VEXT_CMP_VV(vmseq_vv_d, uint64_t, H8, DO_MSEQ) > + > +GEN_VEXT_CMP_VV(vmsne_vv_b, uint8_t, H1, DO_MSNE) > +GEN_VEXT_CMP_VV(vmsne_vv_h, uint16_t, H2, DO_MSNE) > +GEN_VEXT_CMP_VV(vmsne_vv_w, uint32_t, H4, DO_MSNE) > +GEN_VEXT_CMP_VV(vmsne_vv_d, uint64_t, H8, DO_MSNE) > + > +GEN_VEXT_CMP_VV(vmsltu_vv_b, uint8_t, H1, DO_MSLT) > +GEN_VEXT_CMP_VV(vmsltu_vv_h, uint16_t, H2, DO_MSLT) > +GEN_VEXT_CMP_VV(vmsltu_vv_w, uint32_t, H4, DO_MSLT) > +GEN_VEXT_CMP_VV(vmsltu_vv_d, uint64_t, H8, DO_MSLT) > + > +GEN_VEXT_CMP_VV(vmslt_vv_b, int8_t, H1, DO_MSLT) > +GEN_VEXT_CMP_VV(vmslt_vv_h, int16_t, H2, DO_MSLT) > +GEN_VEXT_CMP_VV(vmslt_vv_w, int32_t, H4, DO_MSLT) > +GEN_VEXT_CMP_VV(vmslt_vv_d, int64_t, H8, DO_MSLT) > + > 
+GEN_VEXT_CMP_VV(vmsleu_vv_b, uint8_t, H1, DO_MSLE) > +GEN_VEXT_CMP_VV(vmsleu_vv_h, uint16_t, H2, DO_MSLE) > +GEN_VEXT_CMP_VV(vmsleu_vv_w, uint32_t, H4, DO_MSLE) > +GEN_VEXT_CMP_VV(vmsleu_vv_d, uint64_t, H8, DO_MSLE) > + > +GEN_VEXT_CMP_VV(vmsle_vv_b, int8_t, H1, DO_MSLE) > +GEN_VEXT_CMP_VV(vmsle_vv_h, int16_t, H2, DO_MSLE) > +GEN_VEXT_CMP_VV(vmsle_vv_w, int32_t, H4, DO_MSLE) > +GEN_VEXT_CMP_VV(vmsle_vv_d, int64_t, H8, DO_MSLE) > + > +#define GEN_VEXT_CMP_VX(NAME, ETYPE, H, DO_OP) \ > +void HELPER(NAME)(void *vd, void *v0, target_ulong s1, void *vs2, \ > + CPURISCVState *env, uint32_t desc) \ > +{ \ > + uint32_t mlen = vext_mlen(desc); \ > + uint32_t vm = vext_vm(desc); \ > + uint32_t vl = env->vl; \ > + uint32_t vlmax = vext_maxsz(desc) / sizeof(ETYPE); \ > + uint32_t i; \ > + \ > + if (vl == 0) { \ > + return; \ > + } \ > + for (i = 0; i < vl; i++) { \ > + ETYPE s2 = *((ETYPE *)vs2 + H(i)); \ > + if (!vm && !vext_elem_mask(v0, mlen, i)) { \ > + continue; \ > + } \ > + vext_set_elem_mask(vd, mlen, i, \ > + DO_OP(s2, (ETYPE)(target_long)s1)); \ > + } \ > + for (; i < vlmax; i++) { \ > + vext_set_elem_mask(vd, mlen, i, 0); \ > + } \ > +} > + > +GEN_VEXT_CMP_VX(vmseq_vx_b, uint8_t, H1, DO_MSEQ) > +GEN_VEXT_CMP_VX(vmseq_vx_h, uint16_t, H2, DO_MSEQ) > +GEN_VEXT_CMP_VX(vmseq_vx_w, uint32_t, H4, DO_MSEQ) > +GEN_VEXT_CMP_VX(vmseq_vx_d, uint64_t, H8, DO_MSEQ) > + > +GEN_VEXT_CMP_VX(vmsne_vx_b, uint8_t, H1, DO_MSNE) > +GEN_VEXT_CMP_VX(vmsne_vx_h, uint16_t, H2, DO_MSNE) > +GEN_VEXT_CMP_VX(vmsne_vx_w, uint32_t, H4, DO_MSNE) > +GEN_VEXT_CMP_VX(vmsne_vx_d, uint64_t, H8, DO_MSNE) > + > +GEN_VEXT_CMP_VX(vmsltu_vx_b, uint8_t, H1, DO_MSLT) > +GEN_VEXT_CMP_VX(vmsltu_vx_h, uint16_t, H2, DO_MSLT) > +GEN_VEXT_CMP_VX(vmsltu_vx_w, uint32_t, H4, DO_MSLT) > +GEN_VEXT_CMP_VX(vmsltu_vx_d, uint64_t, H8, DO_MSLT) > + > +GEN_VEXT_CMP_VX(vmslt_vx_b, int8_t, H1, DO_MSLT) > +GEN_VEXT_CMP_VX(vmslt_vx_h, int16_t, H2, DO_MSLT) > +GEN_VEXT_CMP_VX(vmslt_vx_w, int32_t, H4, DO_MSLT) > 
+GEN_VEXT_CMP_VX(vmslt_vx_d, int64_t, H8, DO_MSLT) > + > +GEN_VEXT_CMP_VX(vmsleu_vx_b, uint8_t, H1, DO_MSLE) > +GEN_VEXT_CMP_VX(vmsleu_vx_h, uint16_t, H2, DO_MSLE) > +GEN_VEXT_CMP_VX(vmsleu_vx_w, uint32_t, H4, DO_MSLE) > +GEN_VEXT_CMP_VX(vmsleu_vx_d, uint64_t, H8, DO_MSLE) > + > +GEN_VEXT_CMP_VX(vmsle_vx_b, int8_t, H1, DO_MSLE) > +GEN_VEXT_CMP_VX(vmsle_vx_h, int16_t, H2, DO_MSLE) > +GEN_VEXT_CMP_VX(vmsle_vx_w, int32_t, H4, DO_MSLE) > +GEN_VEXT_CMP_VX(vmsle_vx_d, int64_t, H8, DO_MSLE) > + > +GEN_VEXT_CMP_VX(vmsgtu_vx_b, uint8_t, H1, DO_MSGT) > +GEN_VEXT_CMP_VX(vmsgtu_vx_h, uint16_t, H2, DO_MSGT) > +GEN_VEXT_CMP_VX(vmsgtu_vx_w, uint32_t, H4, DO_MSGT) > +GEN_VEXT_CMP_VX(vmsgtu_vx_d, uint64_t, H8, DO_MSGT) > + > +GEN_VEXT_CMP_VX(vmsgt_vx_b, int8_t, H1, DO_MSGT) > +GEN_VEXT_CMP_VX(vmsgt_vx_h, int16_t, H2, DO_MSGT) > +GEN_VEXT_CMP_VX(vmsgt_vx_w, int32_t, H4, DO_MSGT) > +GEN_VEXT_CMP_VX(vmsgt_vx_d, int64_t, H8, DO_MSGT) > -- > 2.23.0 >
15:06 ` [PATCH v6 26/61] target/riscv: vector single-width fractional multiply with rounding and saturation LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 1:08 ` Richard Henderson 2020-03-28 1:08 ` Richard Henderson 2020-03-28 15:41 ` LIU Zhiwei 2020-03-28 15:41 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 27/61] target/riscv: vector widening saturating scaled multiply-add LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 1:23 ` Richard Henderson 2020-03-28 1:23 ` Richard Henderson 2020-03-17 15:06 ` [PATCH v6 28/61] target/riscv: vector single-width scaling shift instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 1:24 ` Richard Henderson 2020-03-28 1:24 ` Richard Henderson 2020-03-17 15:06 ` [PATCH v6 29/61] target/riscv: vector narrowing fixed-point clip instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 1:50 ` Richard Henderson 2020-03-28 1:50 ` Richard Henderson 2020-03-17 15:06 ` [PATCH v6 30/61] target/riscv: vector single-width floating-point add/subtract instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 31/61] target/riscv: vector widening " LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 32/61] target/riscv: vector single-width floating-point multiply/divide instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-25 17:46 ` Alistair Francis 2020-03-25 17:46 ` Alistair Francis 2020-03-17 15:06 ` [PATCH v6 33/61] target/riscv: vector widening floating-point multiply LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 34/61] target/riscv: vector single-width floating-point fused multiply-add instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 35/61] target/riscv: vector widening " LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 36/61] target/riscv: vector floating-point square-root instruction LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 37/61] target/riscv: vector 
floating-point min/max instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-25 17:47 ` Alistair Francis 2020-03-25 17:47 ` Alistair Francis 2020-03-17 15:06 ` [PATCH v6 38/61] target/riscv: vector floating-point sign-injection instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 39/61] target/riscv: vector floating-point compare instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 2:01 ` Richard Henderson 2020-03-28 2:01 ` Richard Henderson 2020-03-28 15:44 ` LIU Zhiwei 2020-03-28 15:44 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 40/61] target/riscv: vector floating-point classify instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 2:06 ` Richard Henderson 2020-03-28 2:06 ` Richard Henderson 2020-03-17 15:06 ` [PATCH v6 41/61] target/riscv: vector floating-point merge instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 3:23 ` Richard Henderson 2020-03-28 3:23 ` Richard Henderson 2020-03-28 15:47 ` LIU Zhiwei 2020-03-28 15:47 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 42/61] target/riscv: vector floating-point/integer type-convert instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 43/61] target/riscv: widening " LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 44/61] target/riscv: narrowing " LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 45/61] target/riscv: vector single-width integer reduction instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 46/61] target/riscv: vector wideing " LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 47/61] target/riscv: vector single-width floating-point " LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 48/61] target/riscv: vector widening " LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 49/61] target/riscv: vector mask-register logical instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 
2020-03-17 15:06 ` [PATCH v6 50/61] target/riscv: vector mask population count vmpopc LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 51/61] target/riscv: vmfirst find-first-set mask bit LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 52/61] target/riscv: set-X-first " LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 53/61] target/riscv: vector iota instruction LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 54/61] target/riscv: vector element index instruction LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 55/61] target/riscv: integer extract instruction LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 3:36 ` Richard Henderson 2020-03-28 3:36 ` Richard Henderson 2020-03-28 16:23 ` LIU Zhiwei 2020-03-28 16:23 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 56/61] target/riscv: integer scalar move instruction LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 57/61] target/riscv: floating-point scalar move instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 3:44 ` Richard Henderson 2020-03-28 3:44 ` Richard Henderson 2020-03-28 16:31 ` LIU Zhiwei 2020-03-28 16:31 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 58/61] target/riscv: vector slide instructions LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 3:50 ` Richard Henderson 2020-03-28 3:50 ` Richard Henderson 2020-03-28 13:40 ` LIU Zhiwei 2020-03-28 13:40 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 59/61] target/riscv: vector register gather instruction LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-28 3:57 ` Richard Henderson 2020-03-28 3:57 ` Richard Henderson 2020-03-17 15:06 ` [PATCH v6 60/61] target/riscv: vector compress instruction LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-17 15:06 ` [PATCH v6 61/61] target/riscv: configure and turn on vector extension from command line LIU Zhiwei 2020-03-17 15:06 ` LIU Zhiwei 2020-03-25 17:49 ` Alistair Francis 2020-03-25 17:49 ` 
Alistair Francis 2020-03-28 4:00 ` Richard Henderson 2020-03-28 4:00 ` Richard Henderson 2020-03-17 20:47 ` [PATCH v6 00/61] target/riscv: support vector extension v0.7.1 no-reply 2020-03-17 20:47 ` no-reply