From: Anoob Joseph <anoobj@marvell.com>
To: Jerin Jacob, Nikhil Rao, Erik Gabriel Carrillo, Abhinandan Gujjar,
 Bruce Richardson, Pablo de Lara
Cc: Narayana Prasad, Lukasz Bartosik, Pavan Nikhilesh, Hemant Agrawal,
 Nipun Gupta, Harry van Haaren, Mattias Rönnblom, Liang Ma, Anoob Joseph
Date: Mon, 3 Jun 2019 22:19:36 +0530
Message-ID: <1559580584-5728-32-git-send-email-anoobj@marvell.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1559580584-5728-1-git-send-email-anoobj@marvell.com>
References: <1559580584-5728-1-git-send-email-anoobj@marvell.com>
Subject: [dpdk-dev] [PATCH 31/39] eventdev: add routine to access event queue
 for eth Tx

When the application is drafted for single-stage eventmode, it is more
efficient to keep the event loop in the application space rather than
delegating it to the helper. When the application's stage is in ORDERED
sched mode, the application has to change the sched type of the event to
ATOMIC before submitting it for Tx, so that ingress ordering is
maintained. Since it is the application that performs the Tx, this
information is required in the application space (a brief usage sketch is
appended after the patch).

Signed-off-by: Anoob Joseph <anoobj@marvell.com>
Signed-off-by: Lukasz Bartosik
---
 lib/librte_eventdev/rte_eventdev_version.map |  1 +
 lib/librte_eventdev/rte_eventmode_helper.c   | 53 ++++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventmode_helper.h   | 21 +++++++++++
 3 files changed, 75 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 8137cb5..3cf926a 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -134,4 +134,5 @@ EXPERIMENTAL {
 	rte_eventmode_helper_initialize_devs;
 	rte_eventmode_helper_display_conf;
 	rte_eventmode_helper_get_event_lcore_links;
+	rte_eventmode_helper_get_tx_queue;
 };
diff --git a/lib/librte_eventdev/rte_eventmode_helper.c b/lib/librte_eventdev/rte_eventmode_helper.c
index 6c853f6..e7670e0 100644
--- a/lib/librte_eventdev/rte_eventmode_helper.c
+++ b/lib/librte_eventdev/rte_eventmode_helper.c
@@ -93,6 +93,24 @@ internal_get_next_active_core(struct eventmode_conf *em_conf,
 	return next_core;
 }
 
+static struct eventdev_params *
+internal_get_eventdev_params(struct eventmode_conf *em_conf,
+		uint8_t eventdev_id)
+{
+	int i;
+
+	for (i = 0; i < em_conf->nb_eventdev; i++) {
+		if (em_conf->eventdev_config[i].eventdev_id == eventdev_id)
+			break;
+	}
+
+	/* No match */
+	if (i == em_conf->nb_eventdev)
+		return NULL;
+
+	return &(em_conf->eventdev_config[i]);
+}
+
 /* Global functions */
 
 void __rte_experimental
@@ -927,3 +945,38 @@ rte_eventmode_helper_get_event_lcore_links(uint32_t lcore_id,
 	return lcore_nb_link;
 }
 
+uint8_t __rte_experimental
+rte_eventmode_helper_get_tx_queue(struct rte_eventmode_helper_conf *mode_conf,
+		uint8_t eventdev_id)
+{
+	struct eventdev_params *eventdev_config;
+	struct eventmode_conf *em_conf;
+
+	if (mode_conf == NULL) {
+		RTE_EM_HLPR_LOG_ERR("Invalid conf");
+		return (uint8_t)(-1);
+	}
+
+	if (mode_conf->mode_params == NULL) {
+		RTE_EM_HLPR_LOG_ERR("Invalid mode params");
+		return (uint8_t)(-1);
+	}
+
+	/* Get eventmode conf */
+	em_conf = (struct eventmode_conf *)(mode_conf->mode_params);
+
+	/* Get event device conf */
+	eventdev_config = internal_get_eventdev_params(em_conf, eventdev_id);
+
+	if (eventdev_config == NULL) {
+		RTE_EM_HLPR_LOG_ERR("Error reading eventdev conf");
+		return (uint8_t)(-1);
+	}
+
+	/*
+	 * The last queue is reserved to be used as the atomic queue for the
+	 * last stage (eth packet Tx stage)
+	 */
+	return eventdev_config->nb_eventqueue - 1;
+}
+
diff --git a/lib/librte_eventdev/rte_eventmode_helper.h b/lib/librte_eventdev/rte_eventmode_helper.h
index 925b660..cd6d708 100644
--- a/lib/librte_eventdev/rte_eventmode_helper.h
+++ b/lib/librte_eventdev/rte_eventmode_helper.h
@@ -136,6 +136,27 @@ rte_eventmode_helper_get_event_lcore_links(uint32_t lcore_id,
 		struct rte_eventmode_helper_conf *mode_conf,
 		struct rte_eventmode_helper_event_link_info **links);
 
+/**
+ * Get eventdev Tx queue
+ *
+ * If the application uses an event device which does not support the
+ * internal port capability, it needs to submit the events to an atomic Tx
+ * queue before final transmission. The Tx queue is atomic to make sure that
+ * the ingress order of the packets is maintained. This Tx queue is created
+ * internally by the eventmode helper subsystem, and the application needs
+ * its queue ID when running the execution loop.
+ *
+ * @param mode_conf
+ *   Configuration of the mode in which the app is doing packet handling
+ * @param eventdev_id
+ *   Event device ID
+ * @return
+ *   Tx queue ID
+ */
+uint8_t __rte_experimental
+rte_eventmode_helper_get_tx_queue(struct rte_eventmode_helper_conf *mode_conf,
+		uint8_t eventdev_id);
+
 #ifdef __cplusplus
 }
 #endif
-- 
2.7.4
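
A brief usage sketch, for illustration only: the loop below shows how a
single-stage application could use the new routine, assuming an event
device without the internal port capability. The setup values (mode_conf,
eventdev_id, port_id) and the name worker_loop are hypothetical
placeholders; the rte_event_* calls are the standard eventdev API.

#include <rte_eventdev.h>
#include <rte_eventmode_helper.h>

/* Hypothetical single-stage worker loop; a sketch, not part of the patch */
static void
worker_loop(struct rte_eventmode_helper_conf *mode_conf,
		uint8_t eventdev_id, uint8_t port_id)
{
	struct rte_event ev;
	uint8_t tx_queue;

	/* Queue reserved by the helper as the atomic Tx stage */
	tx_queue = rte_eventmode_helper_get_tx_queue(mode_conf, eventdev_id);

	while (1) {
		/* Non-blocking dequeue of a single event */
		if (rte_event_dequeue_burst(eventdev_id, port_id,
				&ev, 1, 0) == 0)
			continue;

		/* ... single-stage processing of ev.mbuf goes here ... */

		/*
		 * If this stage runs in ORDERED sched mode, switch the
		 * event to ATOMIC before submitting it to the Tx queue,
		 * so that ingress ordering is restored on transmission.
		 */
		ev.queue_id = tx_queue;
		ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
		ev.op = RTE_EVENT_OP_FORWARD;

		/* Retry until the event is accepted */
		while (rte_event_enqueue_burst(eventdev_id, port_id,
				&ev, 1) != 1)
			;
	}
}

Having the application fetch the reserved queue ID (nb_eventqueue - 1)
through this routine, rather than hard-coding it, keeps the queue layout
an internal detail of the eventmode helper.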