From mboxrd@z Thu Jan  1 00:00:00 1970
From: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Subject: [PATCH v5 5/7] examples/eventdev: update sample app to use service
Date: Wed, 25 Oct 2017 20:20:31 +0530
Message-ID: <1508943033-15574-5-git-send-email-pbhagavatula@caviumnetworks.com>
References: <1507712990-13064-1-git-send-email-pbhagavatula@caviumnetworks.com>
 <1508943033-15574-1-git-send-email-pbhagavatula@caviumnetworks.com>
Mime-Version: 1.0
Content-Type: text/plain
To: jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com,
 harry.van.haaren@intel.com
Cc: dev@dpdk.org, Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Received: from NAM03-BY2-obe.outbound.protection.outlook.com
 (mail-by2nam03on0085.outbound.protection.outlook.com [104.47.42.85])
 by dpdk.org (Postfix) with ESMTP id 835C01B9D5
 for ; Wed, 25 Oct 2017 16:51:14 +0200 (CEST)
In-Reply-To: <1508943033-15574-1-git-send-email-pbhagavatula@caviumnetworks.com>
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev" <dev-bounces@dpdk.org>

Update the eventdev_pipeline_sw_pmd sample app to use the service
run-iter API for event scheduling when the sw eventdev is used.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 v5 changes:
  - fix minor checkpatch issue

 v4 changes:
  - rebase patchset on top of http://dpdk.org/dev/patchwork/patch/30732/
    for controlled event scheduling in case of event_sw

 examples/eventdev_pipeline_sw_pmd/main.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 2e6787b..948be67 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -46,6 +46,7 @@
 #include
 #include
 #include
+#include <rte_service.h>
 
 #define MAX_NUM_STAGES 8
 #define BATCH_SIZE 16
@@ -76,6 +77,7 @@ struct fastpath_data {
 	uint32_t rx_lock;
 	uint32_t tx_lock;
 	uint32_t sched_lock;
+	uint32_t evdev_service_id;
 	bool rx_single;
 	bool tx_single;
 	bool sched_single;
@@ -233,7 +235,7 @@ producer(void)
 }
 
 static inline void
-schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+schedule_devices(unsigned int lcore_id)
 {
 	if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
 	    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
@@ -243,7 +245,7 @@ schedule_devices(uint8_t dev_id, unsigned int lcore_id)
 	if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
 	    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
-		rte_event_schedule(dev_id);
+		rte_service_run_iter_on_app_lcore(fdata->evdev_service_id);
 		if (cdata.dump_dev_signal) {
 			rte_event_dev_dump(0, stdout);
 			cdata.dump_dev_signal = 0;
 		}
@@ -294,7 +296,7 @@ worker(void *arg)
 	while (!fdata->done) {
 		uint16_t i;
 
-		schedule_devices(dev_id, lcore_id);
+		schedule_devices(lcore_id);
 
 		if (!fdata->worker_core[lcore_id]) {
 			rte_pause();
@@ -839,6 +841,14 @@ setup_eventdev(struct prod_data *prod_data,
 	*cons_data = (struct cons_data){.dev_id = dev_id,
 					.port_id = i };
 
+	ret = rte_event_dev_service_id_get(dev_id,
+				&fdata->evdev_service_id);
+	if (ret != -ESRCH && ret != 0) {
+		printf("Error getting the service ID for sw eventdev\n");
+		return -1;
+	}
+	rte_service_runstate_set(fdata->evdev_service_id, 1);
+	rte_service_set_runstate_mapped_check(fdata->evdev_service_id, 0);
 	if (rte_event_dev_start(dev_id) < 0) {
 		printf("Error starting eventdev\n");
 		return -1;
--
2.7.4
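
For readers new to the service-core API used here, the scheduling pattern this
patch introduces can be summarised in a small standalone sketch. It is
illustrative only and not part of the patch: the helper names, the done flag
and the error handling are assumptions, and the single-argument
rte_service_run_iter_on_app_lcore() call follows the form used in the hunks
above.

/* Minimal sketch (not part of the patch): driving a software eventdev's
 * scheduler through the service API, as the hunks above do.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <errno.h>

#include <rte_eventdev.h>
#include <rte_service.h>

static uint32_t evdev_service_id;
static volatile bool done; /* assumed to be set elsewhere, e.g. by a signal handler */

static int
sw_evdev_service_setup(uint8_t dev_id)
{
	/* The sw PMD exposes its scheduler as a service; eventdevs that
	 * schedule in hardware return -ESRCH and need no such service.
	 */
	int ret = rte_event_dev_service_id_get(dev_id, &evdev_service_id);
	if (ret != 0 && ret != -ESRCH) {
		printf("failed to get eventdev service id\n");
		return -1;
	}
	if (ret == -ESRCH)
		return 0;

	/* Mark the service runnable and allow it to be run from an
	 * application lcore that is not registered as a service core.
	 */
	rte_service_runstate_set(evdev_service_id, 1);
	rte_service_set_runstate_mapped_check(evdev_service_id, 0);
	return 0;
}

static void
sw_evdev_service_loop(void)
{
	/* Replaces the former rte_event_schedule(dev_id) call: each
	 * iteration runs one scheduling pass of the sw eventdev.
	 */
	while (!done)
		rte_service_run_iter_on_app_lcore(evdev_service_id);
}

The -ESRCH case mirrors the patch: an eventdev with no scheduling service
simply skips the setup, so the same application code works for both the sw
PMD and hardware PMDs.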