From: Nelio Laranjeiro
Subject: [DPDK 18.08 v1 10/12] net/mlx5: add flow TCP item
Date: Mon, 28 May 2018 13:21:43 +0200
Message-ID: <1905b9674c6f3988aad149e86d85ecd809e78ece.1527506071.git.nelio.laranjeiro@6wind.com>
To: dev@dpdk.org, Adrien Mazarguil, Yongseok Koh

Signed-off-by: Nelio Laranjeiro
---
 drivers/net/mlx5/mlx5_flow.c | 55 ++++++++++++++++++++++++++++++++++++
 1 file changed, 55 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 66ebe6d36..ce1b4e94b 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -706,6 +706,58 @@ mlx5_flow_item_udp(const struct rte_flow_item *item, struct rte_flow *flow,
 	return 0;
 }
 
+/**
+ * Validate TCP layer and possibly create the Verbs specification.
+ *
+ * @param item[in]
+ *   Item specification.
+ * @param flow[in, out]
+ *   Pointer to flow structure.
+ * @param error
+ *   Pointer to error structure.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_item_tcp(const struct rte_flow_item *item, struct rte_flow *flow,
+		   struct rte_flow_error *error)
+{
+	const struct rte_flow_item_tcp *spec = item->spec;
+	const struct rte_flow_item_tcp *mask = item->mask;
+	unsigned int size = sizeof(struct ibv_flow_spec_tcp_udp);
+	struct ibv_flow_spec_tcp_udp tcp = {
+		.type = IBV_FLOW_SPEC_TCP,
+		.size = size,
+	};
+	int ret;
+
+	if (flow->verbs.layers & MLX5_FLOW_LAYER_L4)
+		return rte_flow_error_set(error, ENOTSUP,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  item,
+					  "L4 layer is already present");
+	if (!mask)
+		mask = &rte_flow_item_tcp_mask;
+	ret = mlx5_flow_item_validate(item, (const uint8_t *)mask,
+				      (const uint8_t *)&rte_flow_item_tcp_mask,
+				      sizeof(struct rte_flow_item_tcp), error);
+	if (ret < 0)
+		return ret;
+	if (spec) {
+		tcp.val.dst_port = spec->hdr.dst_port;
+		tcp.val.src_port = spec->hdr.src_port;
+		tcp.mask.dst_port = mask->hdr.dst_port;
+		tcp.mask.src_port = mask->hdr.src_port;
+		/* Remove unwanted bits from values. */
+		tcp.val.src_port &= tcp.mask.src_port;
+		tcp.val.dst_port &= tcp.mask.dst_port;
+	}
+	mlx5_flow_spec_verbs_add(flow, &tcp, size);
+	flow->verbs.layers |= MLX5_FLOW_LAYER_L4_TCP;
+	return 0;
+}
+
 /**
  * Validate items provided by the user.
  *
@@ -745,6 +797,9 @@ mlx5_flow_items(const struct rte_flow_item items[],
 		case RTE_FLOW_ITEM_TYPE_UDP:
 			ret = mlx5_flow_item_udp(items, flow, error);
 			break;
+		case RTE_FLOW_ITEM_TYPE_TCP:
+			ret = mlx5_flow_item_tcp(items, flow, error);
+			break;
 		default:
 			return rte_flow_error_set(error, ENOTSUP,
 						  RTE_FLOW_ERROR_TYPE_ITEM,
-- 
2.17.0
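
Not part of the patch, just for context while reviewing: a minimal application-side
sketch of an rte_flow rule that exercises the new RTE_FLOW_ITEM_TYPE_TCP case added
above (ETH / IPV4 / TCP pattern steered to an RX queue). The helper name
example_tcp_rule, the port id, the queue index and the matched destination port are
made-up example values; error handling is left to whatever the caller does with the
returned pointer.

#include <rte_ethdev.h>
#include <rte_flow.h>
#include <rte_byteorder.h>

/* Hypothetical helper: match TCP destination port 80 over IPv4 on the given
 * ethdev port and steer matching packets to RX queue 1.
 */
static struct rte_flow *
example_tcp_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_tcp tcp_spec = {
		.hdr = { .dst_port = RTE_BE16(80) },
	};
	struct rte_flow_item_tcp tcp_mask = {
		.hdr = { .dst_port = RTE_BE16(0xffff) },
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{
			.type = RTE_FLOW_ITEM_TYPE_TCP,
			.spec = &tcp_spec,
			.mask = &tcp_mask,
		},
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	/* On mlx5, validation/translation walks mlx5_flow_items(), which now
	 * reaches mlx5_flow_item_tcp() for the TCP item.
	 */
	return rte_flow_create(port_id, &attr, pattern, actions, error);
}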