From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jasvinder Singh
To: dev@dpdk.org
Cc: cristian.dumitrescu@intel.com, Abraham Tovar, Lukasz Krakowiak
Date: Tue, 25 Jun 2019 16:32:13 +0100
Message-Id: <20190625153217.24301-25-jasvinder.singh@intel.com>
In-Reply-To: <20190625153217.24301-1-jasvinder.singh@intel.com>
References: <20190528120553.2992-2-lukaszx.krakowiak@intel.com>
 <20190625153217.24301-1-jasvinder.singh@intel.com>
Subject: [dpdk-dev] [PATCH v2 24/28] net/softnic: update softnic tm function
List-Id: DPDK patches and discussions

Update softnic tm function to allow configuration flexibility for pipe
traffic classes and queues, and subport level configuration of the pipe
parameters.

Signed-off-by: Jasvinder Singh
Signed-off-by: Abraham Tovar
Signed-off-by: Lukasz Krakowiak
---
 drivers/net/softnic/rte_eth_softnic.c         | 131 ++++++++
 drivers/net/softnic/rte_eth_softnic_cli.c     | 286 ++++++++++++++++--
 .../net/softnic/rte_eth_softnic_internals.h   |   8 +-
 drivers/net/softnic/rte_eth_softnic_tm.c      |  89 +++---
 4 files changed, 445 insertions(+), 69 deletions(-)

diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index 4bda2f2b0..50a48e90b 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -28,6 +28,19 @@
 #define PMD_PARAM_TM_QSIZE1 "tm_qsize1"
 #define PMD_PARAM_TM_QSIZE2 "tm_qsize2"
 #define PMD_PARAM_TM_QSIZE3 "tm_qsize3"
+#define PMD_PARAM_TM_QSIZE4 "tm_qsize4"
+#define PMD_PARAM_TM_QSIZE5 "tm_qsize5"
+#define PMD_PARAM_TM_QSIZE6 "tm_qsize6"
+#define PMD_PARAM_TM_QSIZE7 "tm_qsize7"
+#define PMD_PARAM_TM_QSIZE8 "tm_qsize8"
+#define PMD_PARAM_TM_QSIZE9 "tm_qsize9"
+#define PMD_PARAM_TM_QSIZE10 "tm_qsize10"
+#define PMD_PARAM_TM_QSIZE11 "tm_qsize11"
+#define PMD_PARAM_TM_QSIZE12 "tm_qsize12"
+#define PMD_PARAM_TM_QSIZE13 "tm_qsize13"
+#define PMD_PARAM_TM_QSIZE14 "tm_qsize14"
+#define PMD_PARAM_TM_QSIZE15 "tm_qsize15"
+
 
 static const char * const pmd_valid_args[] = {
 	PMD_PARAM_FIRMWARE,
@@ -39,6 +52,18 @@ static const char * const pmd_valid_args[] = {
 	PMD_PARAM_TM_QSIZE1,
 	PMD_PARAM_TM_QSIZE2,
 	PMD_PARAM_TM_QSIZE3,
+	PMD_PARAM_TM_QSIZE4,
+	PMD_PARAM_TM_QSIZE5,
+	PMD_PARAM_TM_QSIZE6,
+	PMD_PARAM_TM_QSIZE7,
+	PMD_PARAM_TM_QSIZE8,
+	PMD_PARAM_TM_QSIZE9,
+	PMD_PARAM_TM_QSIZE10,
+	PMD_PARAM_TM_QSIZE11,
+	PMD_PARAM_TM_QSIZE12,
+	PMD_PARAM_TM_QSIZE13,
+	PMD_PARAM_TM_QSIZE14,
+	PMD_PARAM_TM_QSIZE15,
 	NULL
 };
 
@@ -434,6 +459,18 @@ pmd_parse_args(struct pmd_params *p, const char *params)
 	p->tm.qsize[1] = SOFTNIC_TM_QUEUE_SIZE;
 	p->tm.qsize[2] = SOFTNIC_TM_QUEUE_SIZE;
 	p->tm.qsize[3] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[4] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[5] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[6] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[7] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[8] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[9] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[10] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[11] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[12] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[13] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[14] = SOFTNIC_TM_QUEUE_SIZE;
+	p->tm.qsize[15] = SOFTNIC_TM_QUEUE_SIZE;
 
 	/* Firmware script (optional) */
 	if (rte_kvargs_count(kvlist, PMD_PARAM_FIRMWARE) == 1) {
@@ -504,6 +541,88 @@ pmd_parse_args(struct pmd_params *p, const char *params)
 			goto out_free;
 	}
 
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE4) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE4,
+			&get_uint32, &p->tm.qsize[4]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE5) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE5,
+			&get_uint32, &p->tm.qsize[5]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE6) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE6,
+			&get_uint32, &p->tm.qsize[6]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE7) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE7,
+			&get_uint32, &p->tm.qsize[7]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE8) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE8,
+			&get_uint32, &p->tm.qsize[8]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE9) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE9,
+			&get_uint32, &p->tm.qsize[9]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE10) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE10,
+			&get_uint32, &p->tm.qsize[10]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE11) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE11,
+			&get_uint32, &p->tm.qsize[11]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE12) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE12,
+			&get_uint32, &p->tm.qsize[12]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE13) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE13,
+			&get_uint32, &p->tm.qsize[13]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE14) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE14,
+			&get_uint32, &p->tm.qsize[14]);
+		if (ret < 0)
+			goto out_free;
+	}
+
+	if (rte_kvargs_count(kvlist, PMD_PARAM_TM_QSIZE15) == 1) {
+		ret = rte_kvargs_process(kvlist, PMD_PARAM_TM_QSIZE15,
+			&get_uint32, &p->tm.qsize[15]);
+		if (ret < 0)
+			goto out_free;
+	}
+
 out_free:
 	rte_kvargs_free(kvlist);
 	return ret;
@@ -588,6 +707,18 @@ RTE_PMD_REGISTER_PARAM_STRING(net_softnic,
 	PMD_PARAM_TM_QSIZE1 "= "
 	PMD_PARAM_TM_QSIZE2 "= "
 	PMD_PARAM_TM_QSIZE3 "="
+	PMD_PARAM_TM_QSIZE4 "= "
+	PMD_PARAM_TM_QSIZE5 "= "
+	PMD_PARAM_TM_QSIZE6 "= "
+	PMD_PARAM_TM_QSIZE7 "="
+	PMD_PARAM_TM_QSIZE8 "= "
+	PMD_PARAM_TM_QSIZE9 "= "
+	PMD_PARAM_TM_QSIZE10 "= "
+	PMD_PARAM_TM_QSIZE11 "="
+	PMD_PARAM_TM_QSIZE12 "= "
+	PMD_PARAM_TM_QSIZE13 "= "
+	PMD_PARAM_TM_QSIZE14 "= "
+	PMD_PARAM_TM_QSIZE15 "="
 );
diff --git a/drivers/net/softnic/rte_eth_softnic_cli.c b/drivers/net/softnic/rte_eth_softnic_cli.c
index 56fc92ba2..63325623f 100644
--- a/drivers/net/softnic/rte_eth_softnic_cli.c
+++ b/drivers/net/softnic/rte_eth_softnic_cli.c
@@ -566,9 +566,13 @@ queue_node_id(uint32_t n_spp __rte_unused,
 	uint32_t tc_id,
 	uint32_t queue_id)
 {
-	return queue_id +
-		tc_id * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE +
-		(pipe_id + subport_id * n_pps) * RTE_SCHED_QUEUES_PER_PIPE;
+	if (tc_id < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE)
+		return queue_id + tc_id +
+			(pipe_id + subport_id * n_pps) * RTE_SCHED_QUEUES_PER_PIPE;
+	else
+		return queue_id +
+			tc_id * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE +
+			(pipe_id + subport_id * n_pps) * RTE_SCHED_QUEUES_PER_PIPE;
 }
 
 struct tmgr_hierarchy_default_params {
@@ -617,10 +621,19 @@ tmgr_hierarchy_default(struct pmd_internals *softnic,
 		},
 	};
 
+	uint32_t *shared_shaper_id =
+		(uint32_t *) calloc(RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE,
+			sizeof(uint32_t));
+	if (shared_shaper_id == NULL)
+		return -1;
+
+	memcpy(shared_shaper_id, params->shared_shaper_id.tc,
+		sizeof(params->shared_shaper_id.tc));
+
 	struct rte_tm_node_params tc_node_params[] = {
 		[0] = {
 			.shaper_profile_id = params->shaper_profile_id.tc[0],
-			.shared_shaper_id = &params->shared_shaper_id.tc[0],
+			.shared_shaper_id = &shared_shaper_id[0],
 			.n_shared_shapers =
 				(&params->shared_shaper_id.tc_valid[0]) ? 1 : 0,
 			.nonleaf = {
@@ -630,7 +643,7 @@ tmgr_hierarchy_default(struct pmd_internals *softnic,
 
 		[1] = {
 			.shaper_profile_id = params->shaper_profile_id.tc[1],
-			.shared_shaper_id = &params->shared_shaper_id.tc[1],
+			.shared_shaper_id = &shared_shaper_id[1],
 			.n_shared_shapers =
 				(&params->shared_shaper_id.tc_valid[1]) ? 1 : 0,
 			.nonleaf = {
@@ -640,7 +653,7 @@ tmgr_hierarchy_default(struct pmd_internals *softnic,
 
 		[2] = {
 			.shaper_profile_id = params->shaper_profile_id.tc[2],
-			.shared_shaper_id = &params->shared_shaper_id.tc[2],
+			.shared_shaper_id = &shared_shaper_id[2],
 			.n_shared_shapers =
 				(&params->shared_shaper_id.tc_valid[2]) ? 1 : 0,
 			.nonleaf = {
@@ -650,13 +663,63 @@ tmgr_hierarchy_default(struct pmd_internals *softnic,
 
 		[3] = {
 			.shaper_profile_id = params->shaper_profile_id.tc[3],
-			.shared_shaper_id = &params->shared_shaper_id.tc[3],
+			.shared_shaper_id = &shared_shaper_id[3],
 			.n_shared_shapers =
 				(&params->shared_shaper_id.tc_valid[3]) ? 1 : 0,
 			.nonleaf = {
 				.n_sp_priorities = 1,
 			},
 		},
+
+		[4] = {
+			.shaper_profile_id = params->shaper_profile_id.tc[4],
+			.shared_shaper_id = &shared_shaper_id[4],
+			.n_shared_shapers =
+				(&params->shared_shaper_id.tc_valid[4]) ? 1 : 0,
+			.nonleaf = {
+				.n_sp_priorities = 1,
+			},
+		},
+
+		[5] = {
+			.shaper_profile_id = params->shaper_profile_id.tc[5],
+			.shared_shaper_id = &shared_shaper_id[5],
+			.n_shared_shapers =
+				(&params->shared_shaper_id.tc_valid[5]) ? 1 : 0,
+			.nonleaf = {
+				.n_sp_priorities = 1,
+			},
+		},
+
+		[6] = {
+			.shaper_profile_id = params->shaper_profile_id.tc[6],
+			.shared_shaper_id = &shared_shaper_id[6],
+			.n_shared_shapers =
+				(&params->shared_shaper_id.tc_valid[6]) ? 1 : 0,
+			.nonleaf = {
+				.n_sp_priorities = 1,
+			},
+		},
+
+		[7] = {
+			.shaper_profile_id = params->shaper_profile_id.tc[7],
+			.shared_shaper_id = &shared_shaper_id[7],
+			.n_shared_shapers =
+				(&params->shared_shaper_id.tc_valid[7]) ? 1 : 0,
+			.nonleaf = {
+				.n_sp_priorities = 1,
+			},
+		},
+
+		[8] = {
+			.shaper_profile_id = params->shaper_profile_id.tc[8],
+			.shared_shaper_id = &shared_shaper_id[8],
+			.n_shared_shapers =
+				(&params->shared_shaper_id.tc_valid[8]) ? 1 : 0,
+			.nonleaf = {
+				.n_sp_priorities = 1,
+			},
+		},
 	};
 
 	struct rte_tm_node_params queue_node_params = {
@@ -730,7 +793,21 @@ tmgr_hierarchy_default(struct pmd_internals *softnic,
 					return -1;
 
 				/* Hierarchy level 4: Queue nodes */
-				for (q = 0; q < RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS; q++) {
+				if (t == RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE - 1) { /* BE Traffic Class */
+					for (q = 0; q < RTE_SCHED_BE_QUEUES_PER_PIPE; q++) {
+						status = rte_tm_node_add(port_id,
+							queue_node_id(n_spp, n_pps, s, p, t, q),
+							tc_node_id(n_spp, n_pps, s, p, t),
+							0,
+							params->weight.queue[q],
+							RTE_TM_NODE_LEVEL_ID_ANY,
+							&queue_node_params,
+							&error);
+						if (status)
+							return -1;
+					} /* Queues (BE Traffic Class) */
+				} else { /* SP Traffic Class */
+					q = 0;
 					status = rte_tm_node_add(port_id,
 						queue_node_id(n_spp, n_pps, s, p, t, q),
 						tc_node_id(n_spp, n_pps, s, p, t),
@@ -741,7 +818,7 @@ tmgr_hierarchy_default(struct pmd_internals *softnic,
 						&error);
 					if (status)
 						return -1;
-				} /* Queue */
+				} /* Queue (SP Traffic Class) */
 			} /* TC */
 		} /* Pipe */
 	} /* Subport */
@@ -762,13 +839,23 @@ tmgr_hierarchy_default(struct pmd_internals *softnic,
  *   tc1
  *   tc2
  *   tc3
+ *   tc4
+ *   tc5
+ *   tc6
+ *   tc7
+ *   tc8
  *  shared shaper
  *   tc0
  *   tc1
  *   tc2
  *   tc3
+ *   tc4
+ *   tc5
+ *   tc6
+ *   tc7
+ *   tc8
  *  weight
- *   queue ...
+ *   queue ...
 */
 static void
 cmd_tmgr_hierarchy_default(struct pmd_internals *softnic,
@@ -778,11 +865,11 @@ cmd_tmgr_hierarchy_default(struct pmd_internals *softnic,
 	size_t out_size)
 {
 	struct tmgr_hierarchy_default_params p;
-	int i, status;
+	int i, j, status;
 
 	memset(&p, 0, sizeof(p));
 
-	if (n_tokens != 50) {
+	if (n_tokens != 62) {
 		snprintf(out, out_size, MSG_ARG_MISMATCH, tokens[0]);
 		return;
 	}
@@ -894,27 +981,77 @@ cmd_tmgr_hierarchy_default(struct pmd_internals *softnic,
 		return;
 	}
 
+	if (strcmp(tokens[22], "tc4") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc4");
+		return;
+	}
+
+	if (softnic_parser_read_uint32(&p.shaper_profile_id.tc[4], tokens[23]) != 0) {
+		snprintf(out, out_size, MSG_ARG_INVALID, "tc4 profile id");
+		return;
+	}
+
+	if (strcmp(tokens[24], "tc5") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc5");
+		return;
+	}
+
+	if (softnic_parser_read_uint32(&p.shaper_profile_id.tc[5], tokens[25]) != 0) {
+		snprintf(out, out_size, MSG_ARG_INVALID, "tc5 profile id");
+		return;
+	}
+
+	if (strcmp(tokens[26], "tc6") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc6");
+		return;
+	}
+
+	if (softnic_parser_read_uint32(&p.shaper_profile_id.tc[6], tokens[27]) != 0) {
+		snprintf(out, out_size, MSG_ARG_INVALID, "tc6 profile id");
+		return;
+	}
+
+	if (strcmp(tokens[28], "tc7") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc7");
+		return;
+	}
+
+	if (softnic_parser_read_uint32(&p.shaper_profile_id.tc[7], tokens[29]) != 0) {
+		snprintf(out, out_size, MSG_ARG_INVALID, "tc7 profile id");
+		return;
+	}
+
+	if (strcmp(tokens[30], "tc8") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc8");
+		return;
+	}
+
+	if (softnic_parser_read_uint32(&p.shaper_profile_id.tc[8], tokens[31]) != 0) {
+		snprintf(out, out_size, MSG_ARG_INVALID, "tc8 profile id");
+		return;
+	}
+
 	/* Shared shaper */
-	if (strcmp(tokens[22], "shared") != 0) {
+	if (strcmp(tokens[32], "shared") != 0) {
 		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "shared");
 		return;
 	}
 
-	if (strcmp(tokens[23], "shaper") != 0) {
+	if (strcmp(tokens[33], "shaper") != 0) {
 		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "shaper");
 		return;
 	}
 
-	if (strcmp(tokens[24], "tc0") != 0) {
+	if (strcmp(tokens[34], "tc0") != 0) {
 		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc0");
 		return;
 	}
 
-	if (strcmp(tokens[25], "none") == 0)
+	if (strcmp(tokens[35], "none") == 0)
 		p.shared_shaper_id.tc_valid[0] = 0;
 	else {
-		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[0], tokens[25]) != 0) {
+		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[0], tokens[35]) != 0) {
 			snprintf(out, out_size, MSG_ARG_INVALID, "shared shaper tc0");
 			return;
 		}
@@ -922,15 +1059,15 @@ cmd_tmgr_hierarchy_default(struct pmd_internals *softnic,
 		p.shared_shaper_id.tc_valid[0] = 1;
 	}
 
-	if (strcmp(tokens[26], "tc1") != 0) {
+	if (strcmp(tokens[36], "tc1") != 0) {
 		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc1");
 		return;
 	}
 
-	if (strcmp(tokens[27], "none") == 0)
+	if (strcmp(tokens[37], "none") == 0)
 		p.shared_shaper_id.tc_valid[1] = 0;
 	else {
-		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[1], tokens[27]) != 0) {
+		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[1], tokens[37]) != 0) {
 			snprintf(out, out_size, MSG_ARG_INVALID, "shared shaper tc1");
 			return;
 		}
@@ -938,15 +1075,15 @@ cmd_tmgr_hierarchy_default(struct pmd_internals *softnic,
 		p.shared_shaper_id.tc_valid[1] = 1;
 	}
 
-	if (strcmp(tokens[28], "tc2") != 0) {
+	if (strcmp(tokens[38], "tc2") != 0) {
 		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc2");
 		return;
 	}
 
-	if (strcmp(tokens[29], "none") == 0)
+	if (strcmp(tokens[39], "none") == 0)
 		p.shared_shaper_id.tc_valid[2] = 0;
 	else {
-		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[2], tokens[29]) != 0) {
+		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[2], tokens[39]) != 0) {
 			snprintf(out, out_size, MSG_ARG_INVALID, "shared shaper tc2");
 			return;
 		}
@@ -954,15 +1091,15 @@ cmd_tmgr_hierarchy_default(struct pmd_internals *softnic,
 		p.shared_shaper_id.tc_valid[2] = 1;
 	}
 
-	if (strcmp(tokens[30], "tc3") != 0) {
+	if (strcmp(tokens[40], "tc3") != 0) {
 		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc3");
 		return;
 	}
 
-	if (strcmp(tokens[31], "none") == 0)
+	if (strcmp(tokens[41], "none") == 0)
 		p.shared_shaper_id.tc_valid[3] = 0;
 	else {
-		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[3], tokens[31]) != 0) {
+		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[3], tokens[41]) != 0) {
 			snprintf(out, out_size, MSG_ARG_INVALID, "shared shaper tc3");
 			return;
 		}
@@ -970,22 +1107,107 @@ cmd_tmgr_hierarchy_default(struct pmd_internals *softnic,
 		p.shared_shaper_id.tc_valid[3] = 1;
 	}
 
+	if (strcmp(tokens[42], "tc4") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc4");
+		return;
+	}
+
+	if (strcmp(tokens[43], "none") == 0)
+		p.shared_shaper_id.tc_valid[4] = 0;
+	else {
+		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[4], tokens[43]) != 0) {
+			snprintf(out, out_size, MSG_ARG_INVALID, "shared shaper tc4");
+			return;
+		}
+
+		p.shared_shaper_id.tc_valid[4] = 1;
+	}
+
+	if (strcmp(tokens[44], "tc5") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc5");
+		return;
+	}
+
+	if (strcmp(tokens[45], "none") == 0)
+		p.shared_shaper_id.tc_valid[5] = 0;
+	else {
+		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[5], tokens[45]) != 0) {
+			snprintf(out, out_size, MSG_ARG_INVALID, "shared shaper tc5");
+			return;
+		}
+
+		p.shared_shaper_id.tc_valid[5] = 1;
+	}
+
+	if (strcmp(tokens[46], "tc6") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc6");
+		return;
+	}
+
+	if (strcmp(tokens[47], "none") == 0)
+		p.shared_shaper_id.tc_valid[6] = 0;
+	else {
+		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[6], tokens[47]) != 0) {
+			snprintf(out, out_size, MSG_ARG_INVALID, "shared shaper tc6");
+			return;
+		}
+
+		p.shared_shaper_id.tc_valid[6] = 1;
+	}
+
+	if (strcmp(tokens[48], "tc7") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc7");
+		return;
+	}
+
+	if (strcmp(tokens[49], "none") == 0)
+		p.shared_shaper_id.tc_valid[7] = 0;
+	else {
+		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[7], tokens[49]) != 0) {
+			snprintf(out, out_size, MSG_ARG_INVALID, "shared shaper tc7");
+			return;
+		}
+
+		p.shared_shaper_id.tc_valid[7] = 1;
+	}
+
+	if (strcmp(tokens[50], "tc8") != 0) {
+		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "tc8");
+		return;
+	}
+
+	if (strcmp(tokens[51], "none") == 0)
+		p.shared_shaper_id.tc_valid[8] = 0;
+	else {
+		if (softnic_parser_read_uint32(&p.shared_shaper_id.tc[8], tokens[51]) != 0) {
+			snprintf(out, out_size, MSG_ARG_INVALID, "shared shaper tc8");
+			return;
+		}
+
+		p.shared_shaper_id.tc_valid[8] = 1;
+	}
+
 	/* Weight */
-	if (strcmp(tokens[32], "weight") != 0) {
+	if (strcmp(tokens[52], "weight") != 0) {
 		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "weight");
 		return;
 	}
 
-	if (strcmp(tokens[33], "queue") != 0) {
+	if (strcmp(tokens[53], "queue") != 0) {
 		snprintf(out, out_size, MSG_ARG_NOT_FOUND, "queue");
 		return;
 	}
 
-	for (i = 0; i < 16; i++) {
-		if (softnic_parser_read_uint32(&p.weight.queue[i], tokens[34 + i]) != 0) {
-			snprintf(out, out_size, MSG_ARG_INVALID, "weight queue");
-			return;
+	for (i = 0, j = 0; i < 16; i++) {
+		if (i < RTE_SCHED_BE_QUEUES_PER_PIPE) {
+			p.weight.queue[i] = 1;
+		} else {
+			if (softnic_parser_read_uint32(&p.weight.queue[i], tokens[54 + j]) != 0) {
+				snprintf(out, out_size, MSG_ARG_INVALID, "weight queue");
+				return;
+			}
+			j++;
 		}
 	}
diff --git a/drivers/net/softnic/rte_eth_softnic_internals.h b/drivers/net/softnic/rte_eth_softnic_internals.h
index 415434d0d..5525dff98 100644
--- a/drivers/net/softnic/rte_eth_softnic_internals.h
+++ b/drivers/net/softnic/rte_eth_softnic_internals.h
@@ -43,7 +43,7 @@ struct pmd_params {
 	/** Traffic Management (TM) */
 	struct {
 		uint32_t n_queues; /**< Number of queues */
-		uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+		uint16_t qsize[RTE_SCHED_QUEUES_PER_PIPE];
 	} tm;
 };
 
@@ -161,13 +161,15 @@ TAILQ_HEAD(softnic_link_list, softnic_link);
 #define TM_MAX_PIPES_PER_SUBPORT 4096
 #endif
 
+#ifndef TM_MAX_PIPE_PROFILE
+#define TM_MAX_PIPE_PROFILE 256
+#endif
+
 struct tm_params {
 	struct rte_sched_port_params port_params;
 
 	struct rte_sched_subport_params subport_params[TM_MAX_SUBPORTS];
 
-	struct rte_sched_pipe_params
-		pipe_profiles[RTE_SCHED_PIPE_PROFILES_PER_PORT];
+	struct rte_sched_pipe_params pipe_profiles[TM_MAX_PIPE_PROFILE];
 	uint32_t n_pipe_profiles;
 	uint32_t pipe_to_profile[TM_MAX_SUBPORTS * TM_MAX_PIPES_PER_SUBPORT];
 };
diff --git a/drivers/net/softnic/rte_eth_softnic_tm.c b/drivers/net/softnic/rte_eth_softnic_tm.c
index 58744a9eb..6ba993147 100644
--- a/drivers/net/softnic/rte_eth_softnic_tm.c
+++ b/drivers/net/softnic/rte_eth_softnic_tm.c
@@ -85,7 +85,8 @@ softnic_tmgr_port_create(struct pmd_internals *p,
 	/* Subport */
 	n_subports = t->port_params.n_subports_per_port;
 	for (subport_id = 0; subport_id < n_subports; subport_id++) {
-		uint32_t n_pipes_per_subport = t->port_params.n_pipes_per_subport;
+		uint32_t n_pipes_per_subport =
+			t->subport_params[subport_id].n_subport_pipes;
 		uint32_t pipe_id;
 		int status;
 
@@ -367,7 +368,8 @@ tm_level_get_max_nodes(struct rte_eth_dev *dev, enum tm_node_level level)
 {
 	struct pmd_internals *p = dev->data->dev_private;
 	uint32_t n_queues_max = p->params.tm.n_queues;
-	uint32_t n_tc_max = n_queues_max / RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS;
+	uint32_t n_tc_max =
+		(n_queues_max * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE) / RTE_SCHED_QUEUES_PER_PIPE;
 	uint32_t n_pipes_max = n_tc_max / RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE;
 	uint32_t n_subports_max = n_pipes_max;
 	uint32_t n_root_max = 1;
@@ -625,10 +627,10 @@ static const struct rte_tm_level_capabilities tm_level_cap[] = {
 			.shaper_shared_n_max = 1,
 
 			.sched_n_children_max =
-				RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS,
+				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_sp_n_priorities_max = 1,
 			.sched_wfq_n_children_per_group_max =
-				RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS,
+				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
 
@@ -793,10 +795,10 @@ static const struct rte_tm_node_capabilities tm_node_cap[] = {
 		{.nonleaf = {
 			.sched_n_children_max =
-				RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS,
+				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_sp_n_priorities_max = 1,
 			.sched_wfq_n_children_per_group_max =
-				RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS,
+				RTE_SCHED_BE_QUEUES_PER_PIPE,
 			.sched_wfq_n_groups_max = 1,
 			.sched_wfq_weight_max = UINT32_MAX,
 		} },
@@ -2043,15 +2045,13 @@ pipe_profile_build(struct rte_eth_dev *dev,
 
 		/* Queue */
 		TAILQ_FOREACH(nq, nl, node) {
-			uint32_t pipe_queue_id;
 
 			if (nq->level != TM_NODE_LEVEL_QUEUE ||
 				nq->parent_node_id != nt->node_id)
 				continue;
 
-			pipe_queue_id = nt->priority *
-				RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue_id;
-			pp->wrr_weights[pipe_queue_id] = nq->weight;
+			if (nt->priority == RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE - 1)
+				pp->wrr_weights[queue_id] = nq->weight;
 
 			queue_id++;
 		}
@@ -2065,7 +2065,7 @@ pipe_profile_free_exists(struct rte_eth_dev *dev,
 	struct pmd_internals *p = dev->data->dev_private;
 	struct tm_params *t = &p->soft.tm.params;
 
-	if (t->n_pipe_profiles < RTE_SCHED_PIPE_PROFILES_PER_PORT) {
+	if (t->n_pipe_profiles < TM_MAX_PIPE_PROFILE) {
 		*pipe_profile_id = t->n_pipe_profiles;
 		return 1;
 	}
@@ -2213,10 +2213,11 @@ tm_tc_wred_profile_get(struct rte_eth_dev *dev, uint32_t tc_id)
 #ifdef RTE_SCHED_RED
 
 static void
-wred_profiles_set(struct rte_eth_dev *dev)
+wred_profiles_set(struct rte_eth_dev *dev, uint32_t subport_id)
 {
 	struct pmd_internals *p = dev->data->dev_private;
-	struct rte_sched_port_params *pp = &p->soft.tm.params.port_params;
+	struct rte_sched_subport_params *pp =
+		&p->soft.tm.params.subport_params[subport_id];
 
 	uint32_t tc_id;
 	enum rte_color color;
@@ -2235,7 +2236,7 @@ wred_profiles_set(struct rte_eth_dev *dev)
 
 #else
 
-#define wred_profiles_set(dev)
+#define wred_profiles_set(dev, subport_id)
 
 #endif
 
@@ -2332,7 +2333,7 @@ hierarchy_commit_check(struct rte_eth_dev *dev, struct rte_tm_error *error)
 				rte_strerror(EINVAL));
 	}
 
-	/* Each pipe has exactly 4 TCs, with exactly one TC for each priority */
+	/* Each pipe has exactly 9 TCs, with exactly one TC for each priority */
 	TAILQ_FOREACH(np, nl, node) {
 		uint32_t mask = 0, mask_expected =
 			RTE_LEN2MASK(RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE,
@@ -2369,7 +2370,7 @@ hierarchy_commit_check(struct rte_eth_dev *dev, struct rte_tm_error *error)
 		if (nt->level != TM_NODE_LEVEL_TC)
 			continue;
 
-		if (nt->n_children != RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS)
+		if (nt->n_children != 1 && nt->n_children != RTE_SCHED_BE_QUEUES_PER_PIPE)
 			return -rte_tm_error_set(error,
 				EINVAL,
 				RTE_TM_ERROR_TYPE_UNSPECIFIED,
@@ -2525,19 +2526,8 @@ hierarchy_blueprints_create(struct rte_eth_dev *dev)
 		.frame_overhead =
 			root->shaper_profile->params.pkt_length_adjust,
 		.n_subports_per_port = root->n_children,
-		.n_pipes_per_subport = h->n_tm_nodes[TM_NODE_LEVEL_PIPE] /
-			h->n_tm_nodes[TM_NODE_LEVEL_SUBPORT],
-		.qsize = {p->params.tm.qsize[0],
-			p->params.tm.qsize[1],
-			p->params.tm.qsize[2],
-			p->params.tm.qsize[3],
-		},
-		.pipe_profiles = t->pipe_profiles,
-		.n_pipe_profiles = t->n_pipe_profiles,
 	};
 
-	wred_profiles_set(dev);
-
 	subport_id = 0;
 	TAILQ_FOREACH(n, nl, node) {
 		uint64_t tc_rate[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
@@ -2566,10 +2556,41 @@ hierarchy_blueprints_create(struct rte_eth_dev *dev)
 				tc_rate[1],
 				tc_rate[2],
 				tc_rate[3],
-			},
-			.tc_period = SUBPORT_TC_PERIOD,
+				tc_rate[4],
+				tc_rate[5],
+				tc_rate[6],
+				tc_rate[7],
+				tc_rate[8],
+			},
+			.tc_period = SUBPORT_TC_PERIOD,
+
+			.n_subport_pipes = h->n_tm_nodes[TM_NODE_LEVEL_PIPE] /
+				h->n_tm_nodes[TM_NODE_LEVEL_SUBPORT],
+
+			.qsize = {p->params.tm.qsize[0],
+				p->params.tm.qsize[1],
+				p->params.tm.qsize[2],
+				p->params.tm.qsize[3],
+				p->params.tm.qsize[4],
+				p->params.tm.qsize[5],
+				p->params.tm.qsize[6],
+				p->params.tm.qsize[7],
+				p->params.tm.qsize[8],
+				p->params.tm.qsize[9],
+				p->params.tm.qsize[10],
+				p->params.tm.qsize[11],
+				p->params.tm.qsize[12],
+				p->params.tm.qsize[13],
+				p->params.tm.qsize[14],
+				p->params.tm.qsize[15],
+			},
+
+			.pipe_profiles = t->pipe_profiles,
+			.n_pipe_profiles = t->n_pipe_profiles,
+			.n_max_pipe_profiles = TM_MAX_PIPE_PROFILE,
 		};
 
+		wred_profiles_set(dev, subport_id);
 		subport_id++;
 	}
 }
@@ -2666,7 +2687,7 @@ update_queue_weight(struct rte_eth_dev *dev,
 	uint32_t subport_id = tm_node_subport_id(dev, ns);
 
 	uint32_t pipe_queue_id =
-		tc_id * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue_id;
+		tc_id * RTE_SCHED_QUEUES_PER_PIPE + queue_id;
 
 	struct rte_sched_pipe_params *profile0 = pipe_profile_get(dev, np);
 	struct rte_sched_pipe_params profile1;
@@ -3023,7 +3044,7 @@ tm_port_queue_id(struct rte_eth_dev *dev,
 	uint32_t port_tc_id =
 		port_pipe_id * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE + pipe_tc_id;
 	uint32_t port_queue_id =
-		port_tc_id * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + tc_queue_id;
+		port_tc_id * RTE_SCHED_QUEUES_PER_PIPE + tc_queue_id;
 
 	return port_queue_id;
 }
@@ -3149,8 +3170,8 @@ read_pipe_stats(struct rte_eth_dev *dev,
 		uint32_t qid = tm_port_queue_id(dev,
 			subport_id,
 			pipe_id,
-			i / RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS,
-			i % RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS);
+			i / RTE_SCHED_QUEUES_PER_PIPE,
+			i % RTE_SCHED_QUEUES_PER_PIPE);
 
 		int status = rte_sched_queue_read_stats(SCHED(p),
 			qid,
@@ -3202,7 +3223,7 @@ read_tc_stats(struct rte_eth_dev *dev,
 	uint32_t i;
 
 	/* Stats read */
-	for (i = 0; i < RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS; i++) {
+	for (i = 0; i < RTE_SCHED_QUEUES_PER_PIPE; i++) {
 		struct rte_sched_queue_stats s;
 		uint16_t qlen;
-- 
2.21.0
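
For context, the twelve devargs added by this patch (tm_qsize4 through tm_qsize15) map one-to-one onto p->tm.qsize[4]..[15], so all 16 pipe queues become individually sizable from the vdev argument string. A hypothetical invocation is sketched below as a config fragment; the vdev name, firmware path, and queue-size values are illustrative only and not taken from this patch:

```shell
# Illustrative only: softnic vdev string with per-queue sizes.
# tm_qsize0..3 predate this patch; tm_qsize4..15 are the new parameters.
./testpmd --vdev 'net_softnic0,firmware=firmware.cli,\
tm_qsize4=64,tm_qsize5=64,tm_qsize6=64,tm_qsize7=64,\
tm_qsize8=64,tm_qsize9=64,tm_qsize10=64,tm_qsize11=64,\
tm_qsize12=64,tm_qsize13=64,tm_qsize14=64,tm_qsize15=64' -- -i
```

Any tm_qsizeN left unspecified keeps the SOFTNIC_TM_QUEUE_SIZE default set in pmd_parse_args().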