From: Chaoyong He <chaoyong.he@corigine.com>
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, niklas.soderlund@corigine.com,
	Chaoyong He <chaoyong.he@corigine.com>
Subject: [PATCH v10 06/13] net/nfp: add flower ctrl VNIC related logics
Date: Mon, 26 Sep 2022 14:59:50 +0800
Message-Id: <1664175597-37248-7-git-send-email-chaoyong.he@corigine.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1664175597-37248-1-git-send-email-chaoyong.he@corigine.com>
References: <1664175597-37248-1-git-send-email-chaoyong.he@corigine.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Add the setup/start logic for the ctrl vNIC. This vNIC is used by the
PMD and the flower firmware application as a communication channel
between driver and firmware. In the case of OVS it is also used to
communicate flow statistics from the hardware to the driver.

A rte_eth device is not exposed to DPDK for this vNIC, as it is used
strictly internally by the flower logic.

Because of the addition of the ctrl vNIC, a new PCItoCPP BAR is needed;
modify the related logic.
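For reviewers, the bring-up order this patch introduces can be summarized
with the following simplified sketch (illustrative only, not part of the
diff; it reuses only the helpers and fields added below and omits the full
error-unwind labels):

	/* Inside nfp_init_app_fw_flower(), after the PF vNIC is set up. */
	ctrl_hw = pf_hw + 1;                        /* ctrl vNIC struct follows the PF one */
	ctrl_hw->ctrl_bar = nfp_rtsym_map(pf_dev->sym_tbl, "_pf0_net_ctrl_bar",
			32768, &ctrl_hw->ctrl_area);        /* map its config BAR */

	ret = nfp_flower_init_ctrl_vnic(ctrl_hw);   /* mbuf pool + Rx/Tx rings; no rte_eth device exposed */
	if (ret != 0)
		return ret;

	ret = nfp_flower_start_ctrl_vnic(ctrl_hw);  /* enable queues, write config, reconfig firmware */
	if (ret != 0)
		nfp_flower_cleanup_ctrl_vnic(ctrl_hw);  /* undo the init on failure */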
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
 drivers/net/nfp/flower/nfp_flower.c        | 376 +++++++++++++++++++++++++++++
 drivers/net/nfp/flower/nfp_flower.h        |   6 +
 drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c |  31 ++-
 3 files changed, 401 insertions(+), 12 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index aec9e28..efa8c6b 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -20,6 +20,7 @@
 #include "../nfpcore/nfp_nsp.h"
 #include "nfp_flower.h"
 
+#define CTRL_VNIC_NB_DESC 512
 #define DEFAULT_FLBUF_SIZE 9216
 
 static const struct eth_dev_ops nfp_flower_pf_vnic_ops = {
@@ -98,12 +99,353 @@
 	return 0;
 }
 
+static int
+nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw)
+{
+	uint32_t i;
+	int ret = 0;
+	uint16_t n_txq;
+	uint16_t n_rxq;
+	unsigned int numa_node;
+	struct rte_mempool *mp;
+	struct nfp_net_rxq *rxq;
+	struct nfp_net_txq *txq;
+	struct nfp_pf_dev *pf_dev;
+	struct rte_eth_dev *eth_dev;
+	const struct rte_memzone *tz;
+	struct nfp_app_fw_flower *app_fw_flower;
+
+	/* Set up some pointers here for ease of use */
+	pf_dev = hw->pf_dev;
+	app_fw_flower = NFP_PRIV_TO_APP_FW_FLOWER(pf_dev->app_fw_priv);
+
+	ret = nfp_flower_init_vnic_common(hw, "ctrl_vnic");
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Could not init ctrl vnic");
+		return -EINVAL;
+	}
+
+	/* Allocate memory for the eth_dev of the vNIC */
+	hw->eth_dev = rte_zmalloc("nfp_ctrl_vnic",
+			sizeof(struct rte_eth_dev), RTE_CACHE_LINE_SIZE);
+	if (hw->eth_dev == NULL) {
+		PMD_INIT_LOG(ERR, "Could not allocate ctrl vnic");
+		return -ENOMEM;
+	}
+
+	/* Grab the pointer to the newly created rte_eth_dev here */
+	eth_dev = hw->eth_dev;
+
+	/* Also allocate memory for the data part of the eth_dev */
+	eth_dev->data = rte_zmalloc("nfp_ctrl_vnic_data",
+			sizeof(struct rte_eth_dev_data), RTE_CACHE_LINE_SIZE);
+	if (eth_dev->data == NULL) {
+		PMD_INIT_LOG(ERR, "Could not allocate ctrl vnic data");
+		ret = -ENOMEM;
+		goto eth_dev_cleanup;
+	}
+
+	/* Create a mbuf pool for the ctrl vNIC */
+	numa_node = rte_socket_id();
+	app_fw_flower->ctrl_pktmbuf_pool = rte_pktmbuf_pool_create("ctrl_mbuf_pool",
+			4 * CTRL_VNIC_NB_DESC, 64, 0, 9216, numa_node);
+	if (app_fw_flower->ctrl_pktmbuf_pool == NULL) {
+		PMD_INIT_LOG(ERR, "Create mbuf pool for ctrl vnic failed");
+		ret = -ENOMEM;
+		goto dev_data_cleanup;
+	}
+
+	mp = app_fw_flower->ctrl_pktmbuf_pool;
+
+	/* Configure the ctrl vNIC device */
+	n_rxq = hw->max_rx_queues;
+	n_txq = hw->max_tx_queues;
+	eth_dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
+			sizeof(eth_dev->data->rx_queues[0]) * n_rxq,
+			RTE_CACHE_LINE_SIZE);
+	if (eth_dev->data->rx_queues == NULL) {
+		PMD_INIT_LOG(ERR, "rte_zmalloc failed for ctrl vNIC rx queues");
+		ret = -ENOMEM;
+		goto mempool_cleanup;
+	}
+
+	eth_dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
+			sizeof(eth_dev->data->tx_queues[0]) * n_txq,
+			RTE_CACHE_LINE_SIZE);
+	if (eth_dev->data->tx_queues == NULL) {
+		PMD_INIT_LOG(ERR, "rte_zmalloc failed for ctrl vNIC tx queues");
+		ret = -ENOMEM;
+		goto rx_queue_free;
+	}
+
+	/* Fill in some of the eth_dev fields */
+	eth_dev->device = &pf_dev->pci_dev->device;
+	eth_dev->data->nb_tx_queues = n_txq;
+	eth_dev->data->nb_rx_queues = n_rxq;
+	eth_dev->data->dev_private = hw;
+
+	/* Set up the Rx queues */
+	for (i = 0; i < n_rxq; i++) {
+		rxq = rte_zmalloc_socket("ethdev RX queue",
+				sizeof(struct nfp_net_rxq), RTE_CACHE_LINE_SIZE,
+				numa_node);
+		if (rxq == NULL) {
+			PMD_DRV_LOG(ERR, "Error allocating rxq");
+			ret = -ENOMEM;
+			goto rx_queue_setup_cleanup;
+		}
+
+		eth_dev->data->rx_queues[i] = rxq;
+
+		/* Hw queues mapping based on firmware configuration */
+		rxq->qidx = i;
+		rxq->fl_qcidx = i * hw->stride_rx;
+		rxq->rx_qcidx = rxq->fl_qcidx + (hw->stride_rx - 1);
+		rxq->qcp_fl = hw->rx_bar + NFP_QCP_QUEUE_OFF(rxq->fl_qcidx);
+		rxq->qcp_rx = hw->rx_bar + NFP_QCP_QUEUE_OFF(rxq->rx_qcidx);
+
+		/*
+		 * Tracking mbuf size for detecting a potential mbuf overflow due to
+		 * RX offset
+		 */
+		rxq->mem_pool = mp;
+		rxq->mbuf_size = rxq->mem_pool->elt_size;
+		rxq->mbuf_size -= (sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM);
+		hw->flbufsz = rxq->mbuf_size;
+
+		rxq->rx_count = CTRL_VNIC_NB_DESC;
+		rxq->rx_free_thresh = DEFAULT_RX_FREE_THRESH;
+		rxq->drop_en = 1;
+
+		/*
+		 * Allocate RX ring hardware descriptors. A memzone large enough to
+		 * handle the maximum ring size is allocated in order to allow for
+		 * resizing in later calls to the queue setup function.
+		 */
+		tz = rte_eth_dma_zone_reserve(eth_dev, "ctrl_rx_ring", i,
+				sizeof(struct nfp_net_rx_desc) * NFP_NET_MAX_RX_DESC,
+				NFP_MEMZONE_ALIGN, numa_node);
+		if (tz == NULL) {
+			PMD_DRV_LOG(ERR, "Error allocating rx dma");
+			rte_free(rxq);
+			ret = -ENOMEM;
+			goto rx_queue_setup_cleanup;
+		}
+
+		/* Saving physical and virtual addresses for the RX ring */
+		rxq->dma = (uint64_t)tz->iova;
+		rxq->rxds = (struct nfp_net_rx_desc *)tz->addr;
+
+		/* Mbuf pointers array for referencing mbufs linked to RX descriptors */
+		rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs",
+				sizeof(*rxq->rxbufs) * CTRL_VNIC_NB_DESC,
+				RTE_CACHE_LINE_SIZE, numa_node);
+		if (rxq->rxbufs == NULL) {
+			rte_eth_dma_zone_free(eth_dev, "ctrl_rx_ring", i);
+			rte_free(rxq);
+			ret = -ENOMEM;
+			goto rx_queue_setup_cleanup;
+		}
+
+		nfp_net_reset_rx_queue(rxq);
+
+		rxq->hw = hw;
+
+		/*
+		 * Telling the HW about the physical address of the RX ring and number
+		 * of descriptors in log2 format
+		 */
+		nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(i), rxq->dma);
+		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(i), rte_log2_u32(CTRL_VNIC_NB_DESC));
+	}
+
+	/* Set up the Tx queues */
+	for (i = 0; i < n_txq; i++) {
+		txq = rte_zmalloc_socket("ethdev TX queue",
+				sizeof(struct nfp_net_txq), RTE_CACHE_LINE_SIZE,
+				numa_node);
+		if (txq == NULL) {
+			PMD_DRV_LOG(ERR, "Error allocating txq");
+			ret = -ENOMEM;
+			goto tx_queue_setup_cleanup;
+		}
+
+		eth_dev->data->tx_queues[i] = txq;
+
+		/*
+		 * Allocate TX ring hardware descriptors. A memzone large enough to
+		 * handle the maximum ring size is allocated in order to allow for
+		 * resizing in later calls to the queue setup function.
+		 */
+		tz = rte_eth_dma_zone_reserve(eth_dev, "ctrl_tx_ring", i,
+				sizeof(struct nfp_net_nfd3_tx_desc) * NFP_NET_MAX_TX_DESC,
+				NFP_MEMZONE_ALIGN, numa_node);
+		if (tz == NULL) {
+			PMD_DRV_LOG(ERR, "Error allocating tx dma");
+			rte_free(txq);
+			ret = -ENOMEM;
+			goto tx_queue_setup_cleanup;
+		}
+
+		txq->tx_count = CTRL_VNIC_NB_DESC;
+		txq->tx_free_thresh = DEFAULT_RX_FREE_THRESH;
+		txq->tx_pthresh = DEFAULT_TX_PTHRESH;
+		txq->tx_hthresh = DEFAULT_TX_HTHRESH;
+		txq->tx_wthresh = DEFAULT_TX_WTHRESH;
+
+		/* Queue mapping based on firmware configuration */
+		txq->qidx = i;
+		txq->tx_qcidx = i * hw->stride_tx;
+		txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);
+
+		/* Saving physical and virtual addresses for the TX ring */
+		txq->dma = (uint64_t)tz->iova;
+		txq->txds = (struct nfp_net_nfd3_tx_desc *)tz->addr;
+
+		/* Mbuf pointers array for referencing mbufs linked to TX descriptors */
+		txq->txbufs = rte_zmalloc_socket("txq->txbufs",
+				sizeof(*txq->txbufs) * CTRL_VNIC_NB_DESC,
+				RTE_CACHE_LINE_SIZE, numa_node);
+		if (txq->txbufs == NULL) {
+			rte_eth_dma_zone_free(eth_dev, "ctrl_tx_ring", i);
+			rte_free(txq);
+			ret = -ENOMEM;
+			goto tx_queue_setup_cleanup;
+		}
+
+		nfp_net_reset_tx_queue(txq);
+
+		txq->hw = hw;
+
+		/*
+		 * Telling the HW about the physical address of the TX ring and number
+		 * of descriptors in log2 format
+		 */
+		nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(i), txq->dma);
+		nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(i), rte_log2_u32(CTRL_VNIC_NB_DESC));
+	}
+
+	return 0;
+
+tx_queue_setup_cleanup:
+	for (i = 0; i < hw->max_tx_queues; i++) {
+		txq = eth_dev->data->tx_queues[i];
+		if (txq != NULL) {
+			rte_free(txq->txbufs);
+			rte_eth_dma_zone_free(eth_dev, "ctrl_tx_ring", i);
+			rte_free(txq);
+		}
+	}
+rx_queue_setup_cleanup:
+	for (i = 0; i < hw->max_rx_queues; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		if (rxq != NULL) {
+			rte_free(rxq->rxbufs);
+			rte_eth_dma_zone_free(eth_dev, "ctrl_rx_ring", i);
+			rte_free(rxq);
+		}
+	}
+	rte_free(eth_dev->data->tx_queues);
+rx_queue_free:
+	rte_free(eth_dev->data->rx_queues);
+mempool_cleanup:
+	rte_mempool_free(mp);
+dev_data_cleanup:
+	rte_free(eth_dev->data);
+eth_dev_cleanup:
+	rte_free(eth_dev);
+
+	return ret;
+}
+
+static void
+nfp_flower_cleanup_ctrl_vnic(struct nfp_net_hw *hw)
+{
+	uint32_t i;
+	struct nfp_net_rxq *rxq;
+	struct nfp_net_txq *txq;
+	struct rte_eth_dev *eth_dev;
+	struct nfp_app_fw_flower *app_fw_flower;
+
+	eth_dev = hw->eth_dev;
+	app_fw_flower = NFP_PRIV_TO_APP_FW_FLOWER(hw->pf_dev->app_fw_priv);
+
+	for (i = 0; i < hw->max_tx_queues; i++) {
+		txq = eth_dev->data->tx_queues[i];
+		if (txq != NULL) {
+			rte_free(txq->txbufs);
+			rte_eth_dma_zone_free(eth_dev, "ctrl_tx_ring", i);
+			rte_free(txq);
+		}
+	}
+
+	for (i = 0; i < hw->max_rx_queues; i++) {
+		rxq = eth_dev->data->rx_queues[i];
+		if (rxq != NULL) {
+			rte_free(rxq->rxbufs);
+			rte_eth_dma_zone_free(eth_dev, "ctrl_rx_ring", i);
+			rte_free(rxq);
+		}
+	}
+
+	rte_free(eth_dev->data->tx_queues);
+	rte_free(eth_dev->data->rx_queues);
+	rte_mempool_free(app_fw_flower->ctrl_pktmbuf_pool);
+	rte_free(eth_dev->data);
+	rte_free(eth_dev);
+}
+
+static int
+nfp_flower_start_ctrl_vnic(struct nfp_net_hw *hw)
+{
+	int ret;
+	uint32_t update;
+	uint32_t new_ctrl;
+	struct rte_eth_dev *dev;
+
+	dev = hw->eth_dev;
+
+	/* Disabling queues just in case... */
+	nfp_net_disable_queues(dev);
+
+	/* Enabling the required queues in the device */
+	nfp_net_enable_queues(dev);
+
+	/* Writing configuration parameters in the device */
+	nfp_net_params_setup(hw);
+
+	new_ctrl = NFP_NET_CFG_CTRL_ENABLE;
+	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
+			NFP_NET_CFG_UPDATE_MSIX;
+
+	rte_wmb();
+
+	/* If an error occurs during reconfig we avoid changing the hw state */
+	ret = nfp_net_reconfig(hw, new_ctrl, update);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Failed to reconfig ctrl vnic");
+		return -EIO;
+	}
+
+	hw->ctrl = new_ctrl;
+
+	/* Setup the freelist ring */
+	ret = nfp_net_rx_freelist_setup(dev);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Error with flower ctrl vNIC freelist setup");
+		return -EIO;
+	}
+
+	return 0;
+}
+
 int
 nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev)
 {
 	int ret;
 	unsigned int numa_node;
 	struct nfp_net_hw *pf_hw;
+	struct nfp_net_hw *ctrl_hw;
 	struct nfp_app_fw_flower *app_fw_flower;
 
 	numa_node = rte_socket_id();
@@ -148,8 +490,42 @@
 		goto pf_cpp_area_cleanup;
 	}
 
+	/* The ctrl vNIC struct comes directly after the PF one */
+	app_fw_flower->ctrl_hw = pf_hw + 1;
+	ctrl_hw = app_fw_flower->ctrl_hw;
+
+	/* Map the ctrl vNIC ctrl bar */
+	ctrl_hw->ctrl_bar = nfp_rtsym_map(pf_dev->sym_tbl, "_pf0_net_ctrl_bar",
+			32768, &ctrl_hw->ctrl_area);
+	if (ctrl_hw->ctrl_bar == NULL) {
+		PMD_INIT_LOG(ERR, "Could not map the ctrl vNIC ctrl bar");
+		ret = -ENODEV;
+		goto pf_cpp_area_cleanup;
+	}
+
+	/* Now populate the ctrl vNIC */
+	ctrl_hw->pf_dev = pf_dev;
+	ctrl_hw->cpp = pf_dev->cpp;
+
+	ret = nfp_flower_init_ctrl_vnic(app_fw_flower->ctrl_hw);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Could not initialize flower ctrl vNIC");
+		goto ctrl_cpp_area_cleanup;
+	}
+
+	/* Start the ctrl vNIC */
+	ret = nfp_flower_start_ctrl_vnic(app_fw_flower->ctrl_hw);
+	if (ret != 0) {
+		PMD_INIT_LOG(ERR, "Could not start flower ctrl vNIC");
+		goto ctrl_vnic_cleanup;
+	}
+
 	return 0;
 
+ctrl_vnic_cleanup:
+	nfp_flower_cleanup_ctrl_vnic(app_fw_flower->ctrl_hw);
+ctrl_cpp_area_cleanup:
+	nfp_cpp_area_free(ctrl_hw->ctrl_area);
 pf_cpp_area_cleanup:
 	nfp_cpp_area_free(pf_dev->ctrl_area);
 vnic_cleanup:
diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index 51e05e8..7b138d8 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -10,6 +10,12 @@
 struct nfp_app_fw_flower {
 	/* Pointer to the PF vNIC */
 	struct nfp_net_hw *pf_hw;
+
+	/* Pointer to a mempool for the ctrl vNIC */
+	struct rte_mempool *ctrl_pktmbuf_pool;
+
+	/* Pointer to the ctrl vNIC */
+	struct nfp_net_hw *ctrl_hw;
 };
 
 int nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev);
diff --git a/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c b/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
index 08bc4e8..22c8bc4 100644
--- a/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
+++ b/drivers/net/nfp/nfpcore/nfp_cpp_pcie_ops.c
@@ -91,7 +91,10 @@
  * @refcnt: number of current users
  * @iomem: mapped IO memory
  */
+#define NFP_BAR_MIN 1
+#define NFP_BAR_MID 5
 #define NFP_BAR_MAX 7
+
 struct nfp_bar {
 	struct nfp_pcie_user *nfp;
 	uint32_t barcfg;
@@ -292,6 +295,7 @@ struct nfp_pcie_user {
  * BAR0.0: Reserved for General Mapping (for MSI-X access to PCIe SRAM)
  *
  * Halving PCItoCPPBars for primary and secondary processes.
+ * For CoreNIC firmware:
  * NFP PMD just requires two fixed slots, one for configuration BAR,
  * and another for accessing the hw queues. Another slot is needed
  * for setting the link up or down. Secondary processes do not need
@@ -301,6 +305,9 @@ struct nfp_pcie_user {
  * supported. Due to this requirement and future extensions requiring
  * new slots per process, only one secondary process is supported by
  * now.
+ * For Flower firmware:
+ * NFP PMD needs another fixed slot, used as the configuration BAR
+ * for the ctrl vNIC.
  */
 static int
 nfp_enable_bars(struct nfp_pcie_user *nfp)
@@ -309,11 +316,11 @@ struct nfp_pcie_user {
 	int x, start, end;
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		start = 4;
-		end = 1;
+		start = NFP_BAR_MID;
+		end = NFP_BAR_MIN;
 	} else {
-		start = 7;
-		end = 4;
+		start = NFP_BAR_MAX;
+		end = NFP_BAR_MID;
 	}
 
 	for (x = start; x > end; x--) {
 		bar = &nfp->bar[x - 1];
@@ -341,11 +348,11 @@ struct nfp_pcie_user {
 	int x, start, end;
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		start = 4;
-		end = 1;
+		start = NFP_BAR_MID;
+		end = NFP_BAR_MIN;
 	} else {
-		start = 7;
-		end = 4;
+		start = NFP_BAR_MAX;
+		end = NFP_BAR_MID;
 	}
 
 	for (x = start; x > end; x--) {
 		bar = &nfp->bar[x - 1];
@@ -364,11 +371,11 @@ struct nfp_pcie_user {
 	int x, start, end;
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		start = 4;
-		end = 1;
+		start = NFP_BAR_MID;
+		end = NFP_BAR_MIN;
 	} else {
-		start = 7;
-		end = 4;
+		start = NFP_BAR_MAX;
+		end = NFP_BAR_MID;
 	}
 
 	for (x = start; x > end; x--) {
-- 
1.8.3.1
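For context, the effect of the new BAR bounds can be shown with a small
worked example (illustrative only, not part of the patch; the values are
the defines and loop bounds from the hunks above):

	/*
	 * Primary process:   x = NFP_BAR_MID (5) down to NFP_BAR_MIN + 1 (2)
	 *                    -> bar[4], bar[3], bar[2], bar[1]  (4 slots,
	 *                       one more than before, for the flower ctrl
	 *                       vNIC configuration BAR)
	 * Secondary process: x = NFP_BAR_MAX (7) down to NFP_BAR_MID + 1 (6)
	 *                    -> bar[6], bar[5]  (2 slots)
	 */
	for (x = start; x > end; x--)
		bar = &nfp->bar[x - 1];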