From: <jerinj@marvell.com>
To: <dev@dpdk.org>, Jerin Jacob <jerinj@marvell.com>, Nithin Dabilpuram <ndabilpuram@marvell.com>, Vamsi Attunuru <vattunuru@marvell.com>
Cc: Pavan Nikhilesh <pbhagavatula@marvell.com>, Olivier Matz <olivier.matz@6wind.com>, Aaron Conole <aconole@redhat.com>
Subject: [dpdk-dev] [PATCH v3 25/27] mempool/octeontx2: add optimized dequeue operation for arm64
Date: Mon, 17 Jun 2019 21:25:35 +0530
Message-ID: <20190617155537.36144-26-jerinj@marvell.com> (raw)
In-Reply-To: <20190617155537.36144-1-jerinj@marvell.com>

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

This patch adds an optimized routine based on arm64 instructions to
leverage the CPU pipeline characteristics of octeontx2. The idea is to
fill the pipeline with as many CASP operations as the HW can handle, so
that the HW can service the alloc() ops at full throttle.

Cc: Olivier Matz <olivier.matz@6wind.com>
Cc: Aaron Conole <aconole@redhat.com>

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 291 +++++++++++++++++++
 1 file changed, 291 insertions(+)

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index c59bd73c0..e6737abda 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -37,6 +37,293 @@ npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr,
 	return -ENOENT;
 }
 
+#if defined(RTE_ARCH_ARM64)
+static __rte_noinline int
+npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr,
+			    void **obj_table, unsigned int n)
+{
+	uint8_t i;
+
+	for (i = 0; i < n; i++) {
+		if (obj_table[i] != NULL)
+			continue;
+		if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i))
+			return -ENOENT;
+	}
+
+	return 0;
+}
+
+static __attribute__((optimize("-O3"))) __rte_noinline int __hot
+npa_lf_aura_op_alloc_bulk(const int64_t wdata, int64_t * const addr,
+			  unsigned int n, void **obj_table)
+{
+	const __uint128_t wdata128 = ((__uint128_t)wdata << 64) | wdata;
+	uint64x2_t failed = vdupq_n_u64(~0);
+
+	switch (n) {
+	case 32:
+	{
+		__uint128_t t0, t1, t2, t3, t4, t5, t6, t7, t8, t9;
+		__uint128_t t10, t11;
+
+		asm volatile (
+		".cpu generic+lse\n"
+		"casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t4], %H[t4], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t5], %H[t5], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t6], %H[t6], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t7], %H[t7], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t8], %H[t8], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t9], %H[t9], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t10], %H[t10], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t11], %H[t11], %[wdata], %H[wdata], [%[loc]]\n"
+		"fmov d16, %[t0]\n"
+		"fmov v16.D[1], %H[t0]\n"
+		"casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n"
+		"fmov d17, %[t1]\n"
+		"fmov v17.D[1], %H[t1]\n"
+		"casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n"
+		"fmov d18, %[t2]\n"
+		"fmov v18.D[1], %H[t2]\n"
+		"casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n"
+		"fmov d19, %[t3]\n"
+		"fmov v19.D[1], %H[t3]\n"
+		"casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n"
+		"and %[failed].16B, %[failed].16B, v16.16B\n"
+		"and %[failed].16B, %[failed].16B, v17.16B\n"
+		"and %[failed].16B, %[failed].16B, v18.16B\n"
+		"and %[failed].16B, %[failed].16B, v19.16B\n"
+		"fmov d20, %[t4]\n"
+		"fmov v20.D[1], %H[t4]\n"
+		"fmov d21, %[t5]\n"
+		"fmov v21.D[1], %H[t5]\n"
+		"fmov d22, %[t6]\n"
+		"fmov v22.D[1], %H[t6]\n"
+		"fmov d23, %[t7]\n"
+		"fmov v23.D[1], %H[t7]\n"
+		"and %[failed].16B, %[failed].16B, v20.16B\n"
+		"and %[failed].16B, %[failed].16B, v21.16B\n"
+		"and %[failed].16B, %[failed].16B, v22.16B\n"
+		"and %[failed].16B, %[failed].16B, v23.16B\n"
+		"st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
+		"st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
+		"fmov d16, %[t8]\n"
+		"fmov v16.D[1], %H[t8]\n"
+		"fmov d17, %[t9]\n"
+		"fmov v17.D[1], %H[t9]\n"
+		"fmov d18, %[t10]\n"
+		"fmov v18.D[1], %H[t10]\n"
+		"fmov d19, %[t11]\n"
+		"fmov v19.D[1], %H[t11]\n"
+		"and %[failed].16B, %[failed].16B, v16.16B\n"
+		"and %[failed].16B, %[failed].16B, v17.16B\n"
+		"and %[failed].16B, %[failed].16B, v18.16B\n"
+		"and %[failed].16B, %[failed].16B, v19.16B\n"
+		"fmov d20, %[t0]\n"
+		"fmov v20.D[1], %H[t0]\n"
+		"fmov d21, %[t1]\n"
+		"fmov v21.D[1], %H[t1]\n"
+		"fmov d22, %[t2]\n"
+		"fmov v22.D[1], %H[t2]\n"
+		"fmov d23, %[t3]\n"
+		"fmov v23.D[1], %H[t3]\n"
+		"and %[failed].16B, %[failed].16B, v20.16B\n"
+		"and %[failed].16B, %[failed].16B, v21.16B\n"
+		"and %[failed].16B, %[failed].16B, v22.16B\n"
+		"and %[failed].16B, %[failed].16B, v23.16B\n"
+		"st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
+		"st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
+		: "+Q" (*addr), [failed] "=&w" (failed),
+		[t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2),
+		[t3] "=&r" (t3), [t4] "=&r" (t4), [t5] "=&r" (t5),
+		[t6] "=&r" (t6), [t7] "=&r" (t7), [t8] "=&r" (t8),
+		[t9] "=&r" (t9), [t10] "=&r" (t10), [t11] "=&r" (t11)
+		: [wdata] "r" (wdata128), [dst] "r" (obj_table),
+		[loc] "r" (addr)
+		: "memory", "v16", "v17", "v18",
+		"v19", "v20", "v21", "v22", "v23"
+		);
+		break;
+	}
+	case 16:
+	{
+		__uint128_t t0, t1, t2, t3, t4, t5, t6, t7;
+
+		asm volatile (
+		".cpu generic+lse\n"
+		"casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t4], %H[t4], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t5], %H[t5], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t6], %H[t6], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t7], %H[t7], %[wdata], %H[wdata], [%[loc]]\n"
+		"fmov d16, %[t0]\n"
+		"fmov v16.D[1], %H[t0]\n"
+		"fmov d17, %[t1]\n"
+		"fmov v17.D[1], %H[t1]\n"
+		"fmov d18, %[t2]\n"
+		"fmov v18.D[1], %H[t2]\n"
+		"fmov d19, %[t3]\n"
+		"fmov v19.D[1], %H[t3]\n"
+		"and %[failed].16B, %[failed].16B, v16.16B\n"
+		"and %[failed].16B, %[failed].16B, v17.16B\n"
+		"and %[failed].16B, %[failed].16B, v18.16B\n"
+		"and %[failed].16B, %[failed].16B, v19.16B\n"
+		"fmov d20, %[t4]\n"
+		"fmov v20.D[1], %H[t4]\n"
+		"fmov d21, %[t5]\n"
+		"fmov v21.D[1], %H[t5]\n"
+		"fmov d22, %[t6]\n"
+		"fmov v22.D[1], %H[t6]\n"
+		"fmov d23, %[t7]\n"
+		"fmov v23.D[1], %H[t7]\n"
+		"and %[failed].16B, %[failed].16B, v20.16B\n"
+		"and %[failed].16B, %[failed].16B, v21.16B\n"
+		"and %[failed].16B, %[failed].16B, v22.16B\n"
+		"and %[failed].16B, %[failed].16B, v23.16B\n"
+		"st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
+		"st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
+		: "+Q" (*addr), [failed] "=&w" (failed),
+		[t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2),
+		[t3] "=&r" (t3), [t4] "=&r" (t4), [t5] "=&r" (t5),
+		[t6] "=&r" (t6), [t7] "=&r" (t7)
+		: [wdata] "r" (wdata128), [dst] "r" (obj_table),
+		[loc] "r" (addr)
+		: "memory", "v16", "v17", "v18", "v19",
+		"v20", "v21", "v22", "v23"
+		);
+		break;
+	}
+	case 8:
+	{
+		__uint128_t t0, t1, t2, t3;
+
+		asm volatile (
+		".cpu generic+lse\n"
+		"casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t2], %H[t2], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t3], %H[t3], %[wdata], %H[wdata], [%[loc]]\n"
+		"fmov d16, %[t0]\n"
+		"fmov v16.D[1], %H[t0]\n"
+		"fmov d17, %[t1]\n"
+		"fmov v17.D[1], %H[t1]\n"
+		"fmov d18, %[t2]\n"
+		"fmov v18.D[1], %H[t2]\n"
+		"fmov d19, %[t3]\n"
+		"fmov v19.D[1], %H[t3]\n"
+		"and %[failed].16B, %[failed].16B, v16.16B\n"
+		"and %[failed].16B, %[failed].16B, v17.16B\n"
+		"and %[failed].16B, %[failed].16B, v18.16B\n"
+		"and %[failed].16B, %[failed].16B, v19.16B\n"
+		"st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
+		: "+Q" (*addr), [failed] "=&w" (failed),
+		[t0] "=&r" (t0), [t1] "=&r" (t1), [t2] "=&r" (t2),
+		[t3] "=&r" (t3)
+		: [wdata] "r" (wdata128), [dst] "r" (obj_table),
+		[loc] "r" (addr)
+		: "memory", "v16", "v17", "v18", "v19"
+		);
+		break;
+	}
+	case 4:
+	{
+		__uint128_t t0, t1;
+
+		asm volatile (
+		".cpu generic+lse\n"
+		"casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n"
+		"casp %[t1], %H[t1], %[wdata], %H[wdata], [%[loc]]\n"
+		"fmov d16, %[t0]\n"
+		"fmov v16.D[1], %H[t0]\n"
+		"fmov d17, %[t1]\n"
+		"fmov v17.D[1], %H[t1]\n"
+		"and %[failed].16B, %[failed].16B, v16.16B\n"
+		"and %[failed].16B, %[failed].16B, v17.16B\n"
+		"st1 { v16.2d, v17.2d}, [%[dst]], 32\n"
+		: "+Q" (*addr), [failed] "=&w" (failed),
+		[t0] "=&r" (t0), [t1] "=&r" (t1)
+		: [wdata] "r" (wdata128), [dst] "r" (obj_table),
+		[loc] "r" (addr)
+		: "memory", "v16", "v17"
+		);
+		break;
+	}
+	case 2:
+	{
+		__uint128_t t0;
+
+		asm volatile (
+		".cpu generic+lse\n"
+		"casp %[t0], %H[t0], %[wdata], %H[wdata], [%[loc]]\n"
+		"fmov d16, %[t0]\n"
+		"fmov v16.D[1], %H[t0]\n"
+		"and %[failed].16B, %[failed].16B, v16.16B\n"
+		"st1 { v16.2d}, [%[dst]], 16\n"
+		: "+Q" (*addr), [failed] "=&w" (failed),
+		[t0] "=&r" (t0)
+		: [wdata] "r" (wdata128), [dst] "r" (obj_table),
+		[loc] "r" (addr)
+		: "memory", "v16"
+		);
+		break;
+	}
+	case 1:
+		return npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0);
+	}
+
+	if (unlikely(!(vgetq_lane_u64(failed, 0) & vgetq_lane_u64(failed, 1))))
+		return npa_lf_aura_op_search_alloc(wdata, addr, (void **)
+			((char *)obj_table - (sizeof(uint64_t) * n)), n);
+
+	return 0;
+}
+
+static __rte_noinline void
+otx2_npa_clear_alloc(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	unsigned int i;
+
+	for (i = 0; i < n; i++) {
+		if (obj_table[i] != NULL) {
+			otx2_npa_enq(mp, &obj_table[i], 1);
+			obj_table[i] = NULL;
+		}
+	}
+}
+
+static inline int __hot
+otx2_npa_deq_arm64(struct rte_mempool *mp, void **obj_table, unsigned int n)
+{
+	const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id);
+	void **obj_table_bak = obj_table;
+	const unsigned int nfree = n;
+	unsigned int parts;
+
+	int64_t * const addr = (int64_t * const)
+			(npa_lf_aura_handle_to_base(mp->pool_id) +
+				NPA_LF_AURA_OP_ALLOCX(0));
+	while (n) {
+		parts = n > 31 ? 32 : rte_align32prevpow2(n);
+		n -= parts;
+		if (unlikely(npa_lf_aura_op_alloc_bulk(wdata, addr,
+				parts, obj_table))) {
+			otx2_npa_clear_alloc(mp, obj_table_bak, nfree - n);
+			return -ENOENT;
+		}
+		obj_table += parts;
+	}
+
+	return 0;
+}
+#endif
+
 static inline int __hot
 otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
 {
@@ -463,7 +750,11 @@ static struct rte_mempool_ops otx2_npa_ops = {
 	.get_count = otx2_npa_get_count,
 	.calc_mem_size = otx2_npa_calc_mem_size,
 	.populate = otx2_npa_populate,
+#if defined(RTE_ARCH_ARM64)
+	.dequeue = otx2_npa_deq_arm64,
+#else
 	.dequeue = otx2_npa_deq,
+#endif
 };
 
 MEMPOOL_REGISTER_OPS(otx2_npa_ops);
-- 
2.21.0
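[Editor's illustration, not part of the patch] The dequeue loop in otx2_npa_deq_arm64() only ever requests the bulk sizes the asm switch implements (32, 16, 8, 4, 2, 1): it caps each request at 32 and otherwise rounds down to a power of two. A minimal portable sketch of that split, with align32prevpow2() as a stand-in for DPDK's rte_align32prevpow2():

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for DPDK's rte_align32prevpow2(): round v down to the
 * previous power of two (v is assumed nonzero). */
static unsigned int
align32prevpow2(unsigned int v)
{
	v |= v >> 1;
	v |= v >> 2;
	v |= v >> 4;
	v |= v >> 8;
	v |= v >> 16;
	return v - (v >> 1);
}

/* Split a request of n objects into the chunk sizes handled by the
 * asm switch in npa_lf_aura_op_alloc_bulk(): 32, 16, 8, 4, 2, 1.
 * Writes the chunk sizes to 'chunks' and returns how many there are. */
static size_t
split_chunks(unsigned int n, unsigned int *chunks)
{
	size_t cnt = 0;

	while (n) {
		chunks[cnt] = n > 31 ? 32 : align32prevpow2(n);
		n -= chunks[cnt++];
	}
	return cnt;
}
```

For example, a request for 45 objects is issued as bulk allocations of 32, 8, 4 and 1.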
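[Editor's illustration, inferred from the patch rather than stated in it] The asm accumulates every CASP result into an all-ones `failed` vector with AND, then tests it once at the end; this works on the assumption that the HW returns 0 for an exhausted aura. A scalar C model of that check, mimicking the two 64-bit lanes of the uint64x2_t accumulator:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Model of the 'failed' accumulator: start from all-ones and AND in
 * every returned 64-bit word.  Returns nonzero iff a failure is
 * suspected.  Like the real check, this can report a false positive
 * when valid pointers happen to AND to zero; the driver tolerates
 * that by rescanning the table (npa_lf_aura_op_search_alloc). */
static int
bulk_alloc_suspect(const uint64_t *results, size_t n)
{
	uint64_t lane0 = ~UINT64_C(0);
	uint64_t lane1 = ~UINT64_C(0);
	size_t i;

	for (i = 0; i + 1 < n; i += 2) {
		lane0 &= results[i];
		lane1 &= results[i + 1];
	}
	if (i < n)
		lane0 &= results[i];

	return !(lane0 & lane1);
}
```

Object pointers from one pool typically share their high bits, so the accumulated AND stays nonzero for a fully successful bulk allocation.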