From: Wenjun Wu
To: dev@dpdk.org, jingjing.wu@intel.com, beilei.xing@intel.com, qi.z.zhang@intel.com
Subject: [PATCH v6 2/3] net/iavf: support queue rate limit configuration
Date: Fri, 22 Apr 2022 09:42:59 +0800
Message-Id: <20220422014300.2380259-3-wenjun1.wu@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220422014300.2380259-1-wenjun1.wu@intel.com>
References: <20220329020717.1101263-1-wenjun1.wu@intel.com>
 <20220422014300.2380259-1-wenjun1.wu@intel.com>

This patch adds queue rate limit configuration support.
Only max bandwidth is supported.

Signed-off-by: Ting Xu
Signed-off-by: Wenjun Wu
---
 doc/guides/rel_notes/release_22_07.rst |   3 +
 drivers/net/iavf/iavf.h                |  13 ++
 drivers/net/iavf/iavf_tm.c             | 190 +++++++++++++++++++++++--
 drivers/net/iavf/iavf_vchnl.c          |  23 +++
 4 files changed, 221 insertions(+), 8 deletions(-)

diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 42a5f2d990..ff379ace67 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -55,6 +55,9 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated Intel iavf driver.**
+
+  * Added Tx QoS queue rate limitation support.
 
 Removed Items
 -------------
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index a01d18e61b..96515a3ee9 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -170,11 +170,21 @@ struct iavf_tm_node {
 	uint32_t weight;
 	uint32_t reference_count;
 	struct iavf_tm_node *parent;
+	struct iavf_tm_shaper_profile *shaper_profile;
 	struct rte_tm_node_params params;
 };
 
 TAILQ_HEAD(iavf_tm_node_list, iavf_tm_node);
 
+struct iavf_tm_shaper_profile {
+	TAILQ_ENTRY(iavf_tm_shaper_profile) node;
+	uint32_t shaper_profile_id;
+	uint32_t reference_count;
+	struct rte_tm_shaper_params profile;
+};
+
+TAILQ_HEAD(iavf_shaper_profile_list, iavf_tm_shaper_profile);
+
 /* node type of Traffic Manager */
 enum iavf_tm_node_type {
 	IAVF_TM_NODE_TYPE_PORT,
@@ -188,6 +198,7 @@ struct iavf_tm_conf {
 	struct iavf_tm_node *root; /* root node - vf vsi */
 	struct iavf_tm_node_list tc_list; /* node list for all the TCs */
 	struct iavf_tm_node_list queue_list; /* node list for all the queues */
+	struct iavf_shaper_profile_list shaper_profile_list;
 	uint32_t nb_tc_node;
 	uint32_t nb_queue_node;
 	bool committed;
@@ -451,6 +462,8 @@ int iavf_add_del_mc_addr_list(struct iavf_adapter *adapter,
 int iavf_request_queues(struct rte_eth_dev *dev, uint16_t num);
 int iavf_get_max_rss_queue_region(struct iavf_adapter *adapter);
 int iavf_get_qos_cap(struct iavf_adapter *adapter);
+int iavf_set_q_bw(struct rte_eth_dev *dev,
+		  struct virtchnl_queues_bw_cfg *q_bw, uint16_t size);
 int iavf_set_q_tc_map(struct rte_eth_dev *dev,
 		      struct virtchnl_queue_tc_mapping *q_tc_mapping,
 		      uint16_t size);
diff --git a/drivers/net/iavf/iavf_tm.c b/drivers/net/iavf/iavf_tm.c
index 8d92062c7f..32bb3be45e 100644
--- a/drivers/net/iavf/iavf_tm.c
+++ b/drivers/net/iavf/iavf_tm.c
@@ -8,6 +8,13 @@
 static int iavf_hierarchy_commit(struct rte_eth_dev *dev,
 				 __rte_unused int clear_on_fail,
 				 __rte_unused struct rte_tm_error *error);
+static int iavf_shaper_profile_add(struct rte_eth_dev *dev,
+				   uint32_t shaper_profile_id,
+				   struct rte_tm_shaper_params *profile,
+				   struct rte_tm_error *error);
+static int iavf_shaper_profile_del(struct rte_eth_dev *dev,
+				   uint32_t shaper_profile_id,
+				   struct rte_tm_error *error);
 static int iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	      uint32_t parent_node_id, uint32_t priority,
 	      uint32_t weight, uint32_t level_id,
@@ -30,6 +37,8 @@ static int iavf_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 			      int *is_leaf, struct rte_tm_error *error);
 
 const struct rte_tm_ops iavf_tm_ops = {
+	.shaper_profile_add = iavf_shaper_profile_add,
+	.shaper_profile_delete = iavf_shaper_profile_del,
 	.node_add = iavf_tm_node_add,
 	.node_delete = iavf_tm_node_delete,
 	.capabilities_get = iavf_tm_capabilities_get,
@@ -44,6 +53,9 @@ iavf_tm_conf_init(struct rte_eth_dev *dev)
 {
 	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
 
+	/* initialize shaper profile list */
+	TAILQ_INIT(&vf->tm_conf.shaper_profile_list);
+
 	/* initialize node configuration */
 	vf->tm_conf.root = NULL;
 	TAILQ_INIT(&vf->tm_conf.tc_list);
@@ -57,6 +69,7 @@ void
 iavf_tm_conf_uninit(struct rte_eth_dev *dev)
 {
 	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_tm_shaper_profile *shaper_profile;
 	struct iavf_tm_node *tm_node;
 
 	/* clear node configuration */
@@ -74,6 +87,14 @@ iavf_tm_conf_uninit(struct rte_eth_dev *dev)
 		rte_free(vf->tm_conf.root);
 		vf->tm_conf.root = NULL;
 	}
+
+	/* Remove all shaper profiles */
+	while ((shaper_profile =
+	       TAILQ_FIRST(&vf->tm_conf.shaper_profile_list))) {
+		TAILQ_REMOVE(&vf->tm_conf.shaper_profile_list,
+			     shaper_profile, node);
+		rte_free(shaper_profile);
+	}
 }
 
 static inline struct iavf_tm_node *
@@ -132,13 +153,6 @@ iavf_node_param_check(struct iavf_info *vf, uint32_t node_id,
 		return -EINVAL;
 	}
 
-	/* not support shaper profile */
-	if (params->shaper_profile_id) {
-		error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
-		error->message = "shaper profile not supported";
-		return -EINVAL;
-	}
-
 	/* not support shared shaper */
 	if (params->shared_shaper_id) {
 		error->type = RTE_TM_ERROR_TYPE_NODE_PARAMS_SHARED_SHAPER_ID;
@@ -236,6 +250,23 @@ iavf_node_type_get(struct rte_eth_dev *dev, uint32_t node_id,
 	return 0;
 }
 
+static inline struct iavf_tm_shaper_profile *
+iavf_shaper_profile_search(struct rte_eth_dev *dev,
+			   uint32_t shaper_profile_id)
+{
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_shaper_profile_list *shaper_profile_list =
+		&vf->tm_conf.shaper_profile_list;
+	struct iavf_tm_shaper_profile *shaper_profile;
+
+	TAILQ_FOREACH(shaper_profile, shaper_profile_list, node) {
+		if (shaper_profile_id == shaper_profile->shaper_profile_id)
+			return shaper_profile;
+	}
+
+	return NULL;
+}
+
 static int
 iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	      uint32_t parent_node_id, uint32_t priority,
@@ -246,6 +277,7 @@ iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
 	enum iavf_tm_node_type node_type = IAVF_TM_NODE_TYPE_MAX;
 	enum iavf_tm_node_type parent_node_type = IAVF_TM_NODE_TYPE_MAX;
+	struct iavf_tm_shaper_profile *shaper_profile = NULL;
 	struct iavf_tm_node *tm_node;
 	struct iavf_tm_node *parent_node;
 	uint16_t tc_nb = vf->qos_cap->num_elem;
@@ -273,6 +305,18 @@ iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 		return -EINVAL;
 	}
 
+	/* check the shaper profile id */
+	if (params->shaper_profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
+		shaper_profile = iavf_shaper_profile_search(dev,
+			params->shaper_profile_id);
+		if (!shaper_profile) {
+			error->type =
+				RTE_TM_ERROR_TYPE_NODE_PARAMS_SHAPER_PROFILE_ID;
+			error->message = "shaper profile not exist";
+			return -EINVAL;
+		}
+	}
+
 	/* root node if not have a parent */
 	if (parent_node_id == RTE_TM_NODE_ID_NULL) {
 		/* check level */
@@ -358,6 +402,7 @@ iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	tm_node->id = node_id;
 	tm_node->reference_count = 0;
 	tm_node->parent = parent_node;
+	tm_node->shaper_profile = shaper_profile;
 	rte_memcpy(&tm_node->params, params,
 		   sizeof(struct rte_tm_node_params));
 	if (parent_node_type == IAVF_TM_NODE_TYPE_PORT) {
@@ -373,6 +418,10 @@ iavf_tm_node_add(struct rte_eth_dev *dev, uint32_t node_id,
 	}
 	tm_node->parent->reference_count++;
 
+	/* increase the reference counter of the shaper profile */
+	if (shaper_profile)
+		shaper_profile->reference_count++;
+
 	return 0;
 }
 
@@ -437,6 +486,103 @@ iavf_tm_node_delete(struct rte_eth_dev *dev, uint32_t node_id,
 	return 0;
 }
 
+static int
+iavf_shaper_profile_param_check(struct rte_tm_shaper_params *profile,
+				struct rte_tm_error *error)
+{
+	/* min bucket size not supported */
+	if (profile->committed.size) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
+		error->message = "committed bucket size not supported";
+		return -EINVAL;
+	}
+	/* max bucket size not supported */
+	if (profile->peak.size) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
+		error->message = "peak bucket size not supported";
+		return -EINVAL;
+	}
+	/* length adjustment not supported */
+	if (profile->pkt_length_adjust) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
+		error->message = "packet length adjustment not supported";
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+iavf_shaper_profile_add(struct rte_eth_dev *dev,
+			uint32_t shaper_profile_id,
+			struct rte_tm_shaper_params *profile,
+			struct rte_tm_error *error)
+{
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_tm_shaper_profile *shaper_profile;
+	int ret;
+
+	if (!profile || !error)
+		return -EINVAL;
+
+	ret = iavf_shaper_profile_param_check(profile, error);
+	if (ret)
+		return ret;
+
+	shaper_profile = iavf_shaper_profile_search(dev, shaper_profile_id);
+
+	if (shaper_profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "profile ID exist";
+		return -EINVAL;
+	}
+
+	shaper_profile = rte_zmalloc("iavf_tm_shaper_profile",
+				     sizeof(struct iavf_tm_shaper_profile),
+				     0);
+	if (!shaper_profile)
+		return -ENOMEM;
+	shaper_profile->shaper_profile_id = shaper_profile_id;
+	rte_memcpy(&shaper_profile->profile, profile,
+		   sizeof(struct rte_tm_shaper_params));
+	TAILQ_INSERT_TAIL(&vf->tm_conf.shaper_profile_list,
+			  shaper_profile, node);
+
+	return 0;
+}
+
+static int
+iavf_shaper_profile_del(struct rte_eth_dev *dev,
+			uint32_t shaper_profile_id,
+			struct rte_tm_error *error)
+{
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_tm_shaper_profile *shaper_profile;
+
+	if (!error)
+		return -EINVAL;
+
+	shaper_profile = iavf_shaper_profile_search(dev, shaper_profile_id);
+
+	if (!shaper_profile) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
+		error->message = "profile ID not exist";
+		return -EINVAL;
+	}
+
+	/* don't delete a profile if it's used by one or several nodes */
+	if (shaper_profile->reference_count) {
+		error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
+		error->message = "profile in use";
+		return -EINVAL;
+	}
+
+	TAILQ_REMOVE(&vf->tm_conf.shaper_profile_list, shaper_profile, node);
+	rte_free(shaper_profile);
+
+	return 0;
+}
+
 static int
 iavf_tm_capabilities_get(struct rte_eth_dev *dev,
 			 struct rte_tm_capabilities *cap,
@@ -656,10 +802,11 @@ static int iavf_hierarchy_commit(struct rte_eth_dev *dev,
 	struct iavf_adapter *adapter =
 		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct virtchnl_queue_tc_mapping *q_tc_mapping;
+	struct virtchnl_queues_bw_cfg *q_bw;
 	struct iavf_tm_node_list *queue_list = &vf->tm_conf.queue_list;
 	struct iavf_tm_node *tm_node;
 	struct iavf_qtc_map *qtc_map;
-	uint16_t size;
+	uint16_t size, size_q;
 	int index = 0, node_committed = 0;
 	int i, ret_val = IAVF_SUCCESS;
 
@@ -691,10 +838,21 @@ static int iavf_hierarchy_commit(struct rte_eth_dev *dev,
 		goto fail_clear;
 	}
 
+	size_q = sizeof(*q_bw) + sizeof(q_bw->cfg[0]) *
+		(vf->num_queue_pairs - 1);
+	q_bw = rte_zmalloc("q_bw", size_q, 0);
+	if (!q_bw) {
+		ret_val = IAVF_ERR_NO_MEMORY;
+		goto fail_clear;
+	}
+
 	q_tc_mapping->vsi_id = vf->vsi.vsi_id;
 	q_tc_mapping->num_tc = vf->qos_cap->num_elem;
 	q_tc_mapping->num_queue_pairs = vf->num_queue_pairs;
+	q_bw->vsi_id = vf->vsi.vsi_id;
+	q_bw->num_queues = vf->num_queue_pairs;
+
 	TAILQ_FOREACH(tm_node, queue_list, node) {
 		if (tm_node->tc >= q_tc_mapping->num_tc) {
 			PMD_DRV_LOG(ERR, "TC%d is not enabled", tm_node->tc);
@@ -702,6 +860,18 @@ static int iavf_hierarchy_commit(struct rte_eth_dev *dev,
 			goto fail_clear;
 		}
 		q_tc_mapping->tc[tm_node->tc].req.queue_count++;
+
+		if (tm_node->shaper_profile) {
+			q_bw->cfg[node_committed].queue_id = node_committed;
+			q_bw->cfg[node_committed].shaper.peak =
+				tm_node->shaper_profile->profile.peak.rate /
+				1000 * IAVF_BITS_PER_BYTE;
+			q_bw->cfg[node_committed].shaper.committed =
+				tm_node->shaper_profile->profile.committed.rate /
+				1000 * IAVF_BITS_PER_BYTE;
+			q_bw->cfg[node_committed].tc = tm_node->tc;
+		}
+
 		node_committed++;
 	}
 
@@ -712,6 +882,10 @@ static int iavf_hierarchy_commit(struct rte_eth_dev *dev,
 		goto fail_clear;
 	}
 
+	ret_val = iavf_set_q_bw(dev, q_bw, size_q);
+	if (ret_val)
+		goto fail_clear;
+
 	/* store the queue TC mapping info */
 	qtc_map = rte_zmalloc("qtc_map",
 			      sizeof(struct iavf_qtc_map) * q_tc_mapping->num_tc, 0);
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 169e1f2012..537369f736 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -1636,6 +1636,29 @@ int iavf_set_q_tc_map(struct rte_eth_dev *dev,
 	return err;
 }
 
+int iavf_set_q_bw(struct rte_eth_dev *dev,
+		  struct virtchnl_queues_bw_cfg *q_bw, uint16_t size)
+{
+	struct iavf_adapter *adapter =
+		IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
+	struct iavf_cmd_info args;
+	int err;
+
+	memset(&args, 0, sizeof(args));
+	args.ops = VIRTCHNL_OP_CONFIG_QUEUE_BW;
+	args.in_args = (uint8_t *)q_bw;
+	args.in_args_size = size;
+	args.out_buffer = vf->aq_resp;
+	args.out_size = IAVF_AQ_BUF_SZ;
+
+	err = iavf_execute_vf_cmd(adapter, &args, 0);
+	if (err)
+		PMD_DRV_LOG(ERR, "Failed to execute command of"
+			    " VIRTCHNL_OP_CONFIG_QUEUE_BW");
+	return err;
+}
+
 int
 iavf_add_del_mc_addr_list(struct iavf_adapter *adapter,
 			  struct rte_ether_addr *mc_addrs,
-- 
2.25.1
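
For context, below is a minimal, illustrative sketch (not part of this patch) of how an application could exercise the new per-queue rate limit through the generic rte_tm API once the series is applied. It assumes an iavf VF ethdev port, that the port-level and TC nodes of the hierarchy have already been added through the same API, and that PORT_ID, PROFILE_ID and the node IDs are hypothetical, application-chosen placeholders.

/* Illustrative usage sketch, not part of the patch. */
#include <string.h>
#include <stdio.h>
#include <rte_tm.h>

#define PORT_ID         0        /* assumed iavf VF ethdev port */
#define PROFILE_ID      1        /* application-chosen shaper profile id */
#define TC0_NODE_ID     1000     /* hypothetical id of an existing TC node */
#define QUEUE0_NODE_ID  0        /* leaf node id, here Tx queue 0 */

static int
limit_queue0_to_100mbit(void)
{
	struct rte_tm_shaper_params sp;
	struct rte_tm_node_params np;
	struct rte_tm_error err;

	/* Only a max (peak) rate is honoured by this driver; bucket sizes
	 * and packet length adjustment are rejected. The rte_tm rate is
	 * expressed in bytes per second. */
	memset(&sp, 0, sizeof(sp));
	sp.peak.rate = 100000000 / 8;	/* 100 Mbit/s */
	if (rte_tm_shaper_profile_add(PORT_ID, PROFILE_ID, &sp, &err)) {
		printf("shaper profile add failed: %s\n", err.message);
		return -1;
	}

	/* Attach the profile to the queue (leaf) node when adding it.
	 * iavf accepts only priority 0 and weight 1; WRED is not used. */
	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = PROFILE_ID;
	np.leaf.wred.wred_profile_id = RTE_TM_WRED_PROFILE_ID_NONE;
	if (rte_tm_node_add(PORT_ID, QUEUE0_NODE_ID, TC0_NODE_ID,
			    0, 1, RTE_TM_NODE_LEVEL_ID_ANY, &np, &err)) {
		printf("node add failed: %s\n", err.message);
		return -1;
	}

	/* The limit only reaches the PF at commit time, when
	 * iavf_hierarchy_commit() sends VIRTCHNL_OP_CONFIG_QUEUE_BW. */
	return rte_tm_hierarchy_commit(PORT_ID, 1, &err);
}

Note that at commit time the driver converts the byte-per-second rte_tm rate into kilobits per second (rate / 1000 * IAVF_BITS_PER_BYTE) and pushes the limits for all configured queues to the PF in a single VIRTCHNL_OP_CONFIG_QUEUE_BW message.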