From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bing Zhao
To: ,
CC: , ,
Date: Tue, 27 Apr 2021 18:38:00 +0300
Message-ID: <20210427153811.11554-7-bingz@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210427153811.11554-1-bingz@nvidia.com>
References: <20210427153811.11554-1-bingz@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH 06/17] net/mlx5: add modify support for CT
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

After the connection tracking objects bulk is allocated, the content of
all the objects is zero-initialized by default. An object must be
modified via a WQE operation before it can be used. To reduce the flow
creation latency, the modification is done asynchronously instead of
busy-waiting for the CQE to be generated.

Signed-off-by: Bing Zhao
---
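Notes (not part of the commit): the intended calling pattern, as I read
the code below, is "post the modify WQE at flow creation time, block only
if the object is needed immediately". A minimal caller-side sketch of
that contract follows; example_ct_setup() is a hypothetical helper, not
part of this patch, and error handling is trimmed:

/* Hypothetical usage sketch -- not in this patch. */
static int
example_ct_setup(struct mlx5_dev_ctx_shared *sh,
		 struct mlx5_aso_ct_action *ct,
		 const struct rte_flow_action_conntrack *profile)
{
	/*
	 * Post the modify WQE and return without waiting. The object
	 * state becomes ASO_CONNTRACK_WAIT here and is flipped to
	 * ASO_CONNTRACK_READY by the completion handler on CQE arrival.
	 */
	if (mlx5_aso_ct_update_by_wqe(sh, ct, profile))
		return -1;
	/* ... create the flow rule that references the CT object ... */
	/* Block only when the context must be usable right away. */
	return mlx5_aso_ct_wait_ready(sh, ct);
}
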
 drivers/net/mlx5/mlx5.h          |   6 +
 drivers/net/mlx5/mlx5_flow.h     |   3 +
 drivers/net/mlx5/mlx5_flow_aso.c | 288 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 297 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 0a7e03e..1d31813 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -498,6 +498,7 @@ struct mlx5_aso_sq_elem {
 			uint16_t burst_size;
 		};
 		struct mlx5_aso_mtr *mtr;
+		struct mlx5_aso_ct_action *ct;
 	};
 };
 
@@ -1700,5 +1701,10 @@ int mlx5_aso_meter_update_by_wqe(struct mlx5_dev_ctx_shared *sh,
 			      struct mlx5_aso_mtr *mtr);
 int mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh,
 		      struct mlx5_aso_mtr *mtr);
+int mlx5_aso_ct_update_by_wqe(struct mlx5_dev_ctx_shared *sh,
+			      struct mlx5_aso_ct_action *ct,
+			      const struct rte_flow_action_conntrack *profile);
+int mlx5_aso_ct_wait_ready(struct mlx5_dev_ctx_shared *sh,
+			   struct mlx5_aso_ct_action *ct);
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 59769e9..c3e7bf8 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -45,6 +45,7 @@ enum mlx5_rte_flow_action_type {
 enum {
 	MLX5_INDIRECT_ACTION_TYPE_RSS,
 	MLX5_INDIRECT_ACTION_TYPE_AGE,
+	MLX5_INDIRECT_ACTION_TYPE_CT,
 };
 
 /* Matches on selected register. */
@@ -828,6 +829,8 @@ struct mlx5_flow {
 #define MLX5_ASO_WQE_CQE_RESPONSE_DELAY 10u
 #define MLX5_MTR_POLL_WQE_CQE_TIMES 100000u
 
+#define MLX5_CT_POLL_WQE_CQE_TIMES MLX5_MTR_POLL_WQE_CQE_TIMES
+
 #define MLX5_MAN_WIDTH 8
 /* Legacy Meter parameter structure. */
 struct mlx5_legacy_flow_meter {
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index d0aa09f..bb3221a 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -897,3 +897,291 @@ mlx5_aso_mtr_wait(struct mlx5_dev_ctx_shared *sh,
 		  mtr->offset);
 	return -1;
 }
+
+/*
+ * Post a WQE to the ASO CT SQ to modify the context.
+ *
+ * @param[in] mng
+ *   Pointer to the CT pools management structure.
+ * @param[in] ct
+ *   Pointer to the generic CT structure related to the context.
+ * @param[in] profile
+ *   Pointer to configuration profile.
+ *
+ * @return
+ *   1 on success (WQE number), 0 on failure.
+ */
+static uint16_t
+mlx5_aso_ct_sq_enqueue_single(struct mlx5_aso_ct_pools_mng *mng,
+			      struct mlx5_aso_ct_action *ct,
+			      const struct rte_flow_action_conntrack *profile)
+{
+	volatile struct mlx5_aso_wqe *wqe = NULL;
+	struct mlx5_aso_sq *sq = &mng->aso_sq;
+	uint16_t size = 1 << sq->log_desc_n;
+	uint16_t mask = size - 1;
+	uint16_t res;
+	struct mlx5_aso_ct_pool *pool;
+	void *desg;
+	void *orig_dir;
+	void *reply_dir;
+
+	rte_spinlock_lock(&sq->sqsl);
+	/* Prevent other threads from updating the index. */
+	res = size - (uint16_t)(sq->head - sq->tail);
+	if (unlikely(!res)) {
+		rte_spinlock_unlock(&sq->sqsl);
+		DRV_LOG(ERR, "Fail: SQ is full and no free WQE to send");
+		return 0;
+	}
+	wqe = &sq->sq_obj.aso_wqes[sq->head & mask];
+	rte_prefetch0(&sq->sq_obj.aso_wqes[(sq->head + 1) & mask]);
+	/* Fill next WQE. */
+	__atomic_store_n(&ct->state, ASO_CONNTRACK_WAIT, __ATOMIC_RELAXED);
+	sq->elts[sq->head & mask].ct = ct;
+	pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]);
+	/* Each WQE will have a single CT object. */
+	wqe->general_cseg.misc = rte_cpu_to_be_32(pool->devx_obj->id +
+						  ct->offset);
+	wqe->general_cseg.opcode = rte_cpu_to_be_32(MLX5_OPCODE_ACCESS_ASO |
+			(ASO_OPC_MOD_CONNECTION_TRACKING <<
+			 WQE_CSEG_OPC_MOD_OFFSET) |
+			sq->pi << WQE_CSEG_WQE_INDEX_OFFSET);
+	wqe->aso_cseg.operand_masks = rte_cpu_to_be_32
+			(0u |
+			 (ASO_OPER_LOGICAL_OR << ASO_CSEG_COND_OPER_OFFSET) |
+			 (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_1_OPER_OFFSET) |
+			 (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_0_OPER_OFFSET) |
+			 (BYTEWISE_64BYTE << ASO_CSEG_DATA_MASK_MODE_OFFSET));
+	wqe->aso_cseg.data_mask = UINT64_MAX;
+	/* To make compiler happy. */
+	desg = (void *)(uintptr_t)wqe->aso_dseg.data;
+	MLX5_SET(conn_track_aso, desg, valid, 1);
+	MLX5_SET(conn_track_aso, desg, state, profile->state);
+	MLX5_SET(conn_track_aso, desg, freeze_track, !profile->enable);
+	MLX5_SET(conn_track_aso, desg, connection_assured,
+		 profile->live_connection);
+	MLX5_SET(conn_track_aso, desg, sack_permitted, profile->selective_ack);
+	MLX5_SET(conn_track_aso, desg, challenged_acked,
+		 profile->challenge_ack_passed);
+	/* Heartbeat, retransmission_counter, retranmission_limit_exceeded: 0 */
+	MLX5_SET(conn_track_aso, desg, heartbeat, 0);
+	MLX5_SET(conn_track_aso, desg, max_ack_window,
+		 profile->max_ack_window);
+	MLX5_SET(conn_track_aso, desg, retransmission_counter, 0);
+	MLX5_SET(conn_track_aso, desg, retranmission_limit_exceeded, 0);
+	MLX5_SET(conn_track_aso, desg, retranmission_limit,
+		 profile->retransmission_limit);
+	MLX5_SET(conn_track_aso, desg, reply_dircetion_tcp_scale,
+		 profile->reply_dir.scale);
+	MLX5_SET(conn_track_aso, desg, reply_dircetion_tcp_close_initiated,
+		 profile->reply_dir.close_initiated);
+	/* Both directions will use the same liberal mode. */
+	MLX5_SET(conn_track_aso, desg, reply_dircetion_tcp_liberal_enabled,
+		 profile->liberal_mode);
+	MLX5_SET(conn_track_aso, desg, reply_dircetion_tcp_data_unacked,
+		 profile->reply_dir.data_unacked);
+	MLX5_SET(conn_track_aso, desg, reply_dircetion_tcp_max_ack,
+		 profile->reply_dir.last_ack_seen);
+	MLX5_SET(conn_track_aso, desg, original_dircetion_tcp_scale,
+		 profile->original_dir.scale);
+	MLX5_SET(conn_track_aso, desg, original_dircetion_tcp_close_initiated,
+		 profile->original_dir.close_initiated);
+	MLX5_SET(conn_track_aso, desg, original_dircetion_tcp_liberal_enabled,
+		 profile->liberal_mode);
+	MLX5_SET(conn_track_aso, desg, original_dircetion_tcp_data_unacked,
+		 profile->original_dir.data_unacked);
+	MLX5_SET(conn_track_aso, desg, original_dircetion_tcp_max_ack,
+		 profile->original_dir.last_ack_seen);
+	MLX5_SET(conn_track_aso, desg, last_win, profile->last_window);
+	MLX5_SET(conn_track_aso, desg, last_dir, profile->last_direction);
+	MLX5_SET(conn_track_aso, desg, last_index, profile->last_index);
+	MLX5_SET(conn_track_aso, desg, last_seq, profile->last_seq);
+	MLX5_SET(conn_track_aso, desg, last_ack, profile->last_ack);
+	MLX5_SET(conn_track_aso, desg, last_end, profile->last_end);
+	orig_dir = MLX5_ADDR_OF(conn_track_aso, desg, original_dir);
+	MLX5_SET(tcp_window_params, orig_dir, sent_end,
+		 profile->original_dir.sent_end);
+	MLX5_SET(tcp_window_params, orig_dir, reply_end,
+		 profile->original_dir.reply_end);
+	MLX5_SET(tcp_window_params, orig_dir, max_win,
+		 profile->original_dir.max_win);
+	MLX5_SET(tcp_window_params, orig_dir, max_ack,
+		 profile->original_dir.max_ack);
+	reply_dir = MLX5_ADDR_OF(conn_track_aso, desg, reply_dir);
+	MLX5_SET(tcp_window_params, reply_dir, sent_end,
+		 profile->reply_dir.sent_end);
+	MLX5_SET(tcp_window_params, reply_dir, reply_end,
+		 profile->reply_dir.reply_end);
+	MLX5_SET(tcp_window_params, reply_dir, max_win,
+		 profile->reply_dir.max_win);
+	MLX5_SET(tcp_window_params, reply_dir, max_ack,
+		 profile->reply_dir.max_ack);
+	sq->head++;
+	sq->pi += 2; /* Each WQE contains 2 WQEBB's. */
+	rte_io_wmb();
+	sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);
+	rte_wmb();
+	*sq->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH. */
+	rte_wmb();
+	rte_spinlock_unlock(&sq->sqsl);
+	return 1;
+}
+
+/*
+ * Update the status field of CTs to indicate that they are ready to be
+ * used by flows. The CTs are the ones handled consecutively since the
+ * last update.
+ *
+ * @param[in] sq
+ *   Pointer to ASO CT SQ.
+ * @param[in] num
+ *   Number of CT structures to be updated.
+ */
+static void
+mlx5_aso_ct_status_update(struct mlx5_aso_sq *sq, uint16_t num)
+{
+	uint16_t size = 1 << sq->log_desc_n;
+	uint16_t mask = size - 1;
+	uint16_t i;
+	struct mlx5_aso_ct_action *ct = NULL;
+	uint16_t idx;
+
+	for (i = 0; i < num; i++) {
+		idx = (uint16_t)((sq->tail + i) & mask);
+		ct = sq->elts[idx].ct;
+		MLX5_ASSERT(ct);
+		__atomic_store_n(&ct->state, ASO_CONNTRACK_READY,
+				 __ATOMIC_RELAXED);
+	}
+}
+
+/*
+ * Handle completions from WQEs sent to ASO CT.
+ *
+ * @param[in] mng
+ *   Pointer to the CT pools management structure.
+ */
+static void
+mlx5_aso_ct_completion_handle(struct mlx5_aso_ct_pools_mng *mng)
+{
+	struct mlx5_aso_sq *sq = &mng->aso_sq;
+	struct mlx5_aso_cq *cq = &sq->cq;
+	volatile struct mlx5_cqe *restrict cqe;
+	const uint32_t cq_size = 1 << cq->log_desc_n;
+	const uint32_t mask = cq_size - 1;
+	uint32_t idx;
+	uint32_t next_idx = cq->cq_ci & mask;
+	uint16_t max;
+	uint16_t n = 0;
+	int ret;
+
+	rte_spinlock_lock(&sq->sqsl);
+	max = (uint16_t)(sq->head - sq->tail);
+	if (unlikely(!max)) {
+		rte_spinlock_unlock(&sq->sqsl);
+		return;
+	}
+	do {
+		idx = next_idx;
+		next_idx = (cq->cq_ci + 1) & mask;
+		/* Need to confirm the position of the prefetch. */
+		rte_prefetch0(&cq->cq_obj.cqes[next_idx]);
+		cqe = &cq->cq_obj.cqes[idx];
+		ret = check_cqe(cqe, cq_size, cq->cq_ci);
+		/*
+		 * Be sure the owner read is done before any other cookie
+		 * field or opaque field.
+		 */
+		rte_io_rmb();
+		if (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {
+			if (likely(ret == MLX5_CQE_STATUS_HW_OWN))
+				break;
+			mlx5_aso_cqe_err_handle(sq);
+		} else {
+			n++;
+		}
+		cq->cq_ci++;
+	} while (1);
+	if (likely(n)) {
+		mlx5_aso_ct_status_update(sq, n);
+		sq->tail += n;
+		rte_io_wmb();
+		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);
+	}
+	rte_spinlock_unlock(&sq->sqsl);
+}
+
+/*
+ * Update the connection tracking parameters by sending a WQE.
+ *
+ * @param[in] sh
+ *   Pointer to mlx5_dev_ctx_shared object.
+ * @param[in] ct
+ *   Pointer to connection tracking offload object.
+ * @param[in] profile
+ *   Pointer to connection tracking TCP parameter.
+ *
+ * @return
+ *   0 on success, -1 on failure.
+ */
+int
+mlx5_aso_ct_update_by_wqe(struct mlx5_dev_ctx_shared *sh,
+			  struct mlx5_aso_ct_action *ct,
+			  const struct rte_flow_action_conntrack *profile)
+{
+	struct mlx5_aso_ct_pools_mng *mng = sh->ct_mng;
+	uint32_t poll_wqe_times = MLX5_CT_POLL_WQE_CQE_TIMES;
+	struct mlx5_aso_ct_pool *pool;
+
+	/* Assertion here. */
+	do {
+		mlx5_aso_ct_completion_handle(mng);
+		if (mlx5_aso_ct_sq_enqueue_single(mng, ct, profile))
+			return 0;
+		/* Waiting for wqe resource. */
+		rte_delay_us_sleep(10u);
+	} while (--poll_wqe_times);
+	pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]);
+	DRV_LOG(ERR, "Fail to send WQE for ASO CT %d in pool %d",
+		ct->offset, pool->index);
+	return -1;
+}
+
+/*
+ * Wait for the conntrack context in the HW to be ready to use.
+ *
+ * @param[in] sh
+ *   Pointer to mlx5_dev_ctx_shared object.
+ * @param[in] ct
+ *   Pointer to connection tracking offload object.
+ *
+ * @return
+ *   0 on success, -1 on failure.
+ */
+int
+mlx5_aso_ct_wait_ready(struct mlx5_dev_ctx_shared *sh,
+		       struct mlx5_aso_ct_action *ct)
+{
+	struct mlx5_aso_ct_pools_mng *mng = sh->ct_mng;
+	uint32_t poll_cqe_times = MLX5_CT_POLL_WQE_CQE_TIMES;
+	struct mlx5_aso_ct_pool *pool;
+
+	if (__atomic_load_n(&ct->state, __ATOMIC_RELAXED) ==
+	    ASO_CONNTRACK_READY)
+		return 0;
+	do {
+		mlx5_aso_ct_completion_handle(mng);
+		if (__atomic_load_n(&ct->state, __ATOMIC_RELAXED) ==
+		    ASO_CONNTRACK_READY)
+			return 0;
+		/* Waiting for CQE ready, consider should block or sleep. */
+		rte_delay_us_sleep(MLX5_ASO_WQE_CQE_RESPONSE_DELAY);
+	} while (--poll_cqe_times);
+	pool = container_of(ct, struct mlx5_aso_ct_pool, actions[ct->offset]);
+	DRV_LOG(ERR, "Fail to poll CQE for ASO CT %d in pool %d",
+		ct->offset, pool->index);
+	return -1;
+}
-- 
2.5.5
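
P.S. The free-slot check in mlx5_aso_ct_sq_enqueue_single() above relies
on wrap-safe unsigned arithmetic: sq->head and sq->tail only ever grow,
and their truncated 16-bit difference stays correct across counter
wrap-around. A standalone sketch with made-up values (illustration only,
not driver code):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	const uint16_t size = 1 << 4;	/* e.g. a 16-entry SQ */
	uint16_t head = 4;		/* head has wrapped past zero */
	uint16_t tail = 65530;

	/* Outstanding WQEs: correct modulo 2^16 even after the wrap. */
	uint16_t used = (uint16_t)(head - tail);
	uint16_t res = size - used;	/* free slots, 0 => SQ full */

	/* Prints "used=10 free=6 slot=4". */
	printf("used=%u free=%u slot=%u\n", used, res, head & (size - 1));
	return 0;
}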