* [PATCH 0/7] eventdev: remove event schedule API for SW driver
@ 2017-10-11  9:09 Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 1/7] eventdev: add API to get service id Pavan Nikhilesh
                   ` (10 more replies)
  0 siblings, 11 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-11  9:09 UTC (permalink / raw)
  To: jerin.jacob, harry.van.haaren, hemant.agrawal, santosh.shukla
  Cc: dev, Pavan Nikhilesh

The software event dev is a centralized software scheduler and needs
`rte_event_schedule` to be called repeatedly in order to distribute
events. In most cases, this requires a dedicated lcore.

With the introduction of the rte_service concept, the software eventdev
driver can register event distribution as a service component and
offload it to a service core. This removes the requirement of calling
`rte_event_schedule` explicitly and abstracts the differences between
HW and SW PMDs, providing a single interface to the application.

Pavan Nikhilesh (7):
  eventdev: add API to get service id
  event/sw: extend service capability
  app/test-eventdev: update app to use service cores
  test/eventdev: update test to use service core
  examples/eventdev: update sample app to use service
  eventdev: remove eventdev schedule API
  doc: update software event device

 app/test-eventdev/evt_common.h             |  41 ++++++++++
 app/test-eventdev/evt_options.c            |  10 ---
 app/test-eventdev/test_order_atq.c         |   6 ++
 app/test-eventdev/test_order_common.c      |   3 -
 app/test-eventdev/test_order_queue.c       |   6 ++
 app/test-eventdev/test_perf_atq.c          |   6 ++
 app/test-eventdev/test_perf_common.c       |  21 -----
 app/test-eventdev/test_perf_common.h       |   1 +
 app/test-eventdev/test_perf_queue.c        |   6 ++
 doc/guides/eventdevs/sw.rst                |  13 ++--
 drivers/event/octeontx/ssovf_evdev.c       |   1 -
 drivers/event/skeleton/skeleton_eventdev.c |   2 -
 drivers/event/sw/sw_evdev.c                |  10 ++-
 examples/eventdev_pipeline_sw_pmd/main.c   |  51 +++++++-----
 lib/librte_eventdev/rte_eventdev.c         |  17 ++++
 lib/librte_eventdev/rte_eventdev.h         |  53 +++++++------
 test/test/test_eventdev_sw.c               | 120 ++++++++++++++++-------------
 17 files changed, 223 insertions(+), 144 deletions(-)

--
2.7.4

^ permalink raw reply	[flat|nested] 47+ messages in thread

* [PATCH 1/7] eventdev: add API to get service id
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
@ 2017-10-11  9:09 ` Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 2/7] event/sw: extend service capability Pavan Nikhilesh
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-11  9:09 UTC (permalink / raw)
  To: jerin.jacob, harry.van.haaren, hemant.agrawal, santosh.shukla
  Cc: dev, Pavan Nikhilesh

In the case of the sw event device, scheduling can be done on a service
core using the service registered at probe time.
This patch adds a helper function to get the service id, which the
application can use to assign an lcore for the service to run on.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 lib/librte_eventdev/rte_eventdev.c | 17 +++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h | 22 ++++++++++++++++++++++
 2 files changed, 39 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 378ccb5..f179aa4 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -961,6 +961,23 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 }
 
 int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (service_id == NULL)
+		return -EINVAL;
+
+	if (dev->data->service_inited)
+		*service_id = dev->data->service_id;
+
+	return dev->data->service_inited ? 0 : -ESRCH;
+}
+
+int
 rte_event_dev_dump(uint8_t dev_id, FILE *f)
 {
 	struct rte_eventdev *dev;
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 1dbc872..1c1ff6b 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1116,6 +1116,10 @@ struct rte_eventdev_data {
 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
 	struct rte_event_dev_config dev_conf;
 	/**< Configuration applied to device. */
+	uint8_t service_inited;
+	/**< Service initialization state */
+	uint32_t service_id;
+	/**< Service ID */
 
 	RTE_STD_C11
 	uint8_t dev_started : 1;
@@ -1619,6 +1623,24 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 			 uint8_t queues[], uint8_t priorities[]);
 
 /**
+ * Retrieve the service ID of the event dev. If the event dev doesn't use
+ * an rte_service function, this function returns -ESRCH.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param [out] service_id
+ *   A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ *   - 0: Success
+ *   - -ESRCH: the event dev doesn't use an rte_service function.
+ *   - <0: Error code on other failure.
+ */
+int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id);
+
+/**
  * Dump internal information about *dev_id* to the FILE* provided in *f*.
  *
  * @param dev_id
-- 
2.7.4


* [PATCH 2/7] event/sw: extend service capability
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 1/7] eventdev: add API to get service id Pavan Nikhilesh
@ 2017-10-11  9:09 ` Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-11  9:09 UTC (permalink / raw)
  To: jerin.jacob, harry.van.haaren, hemant.agrawal, santosh.shukla
  Cc: dev, Pavan Nikhilesh

Extend the service capability of the sw event device by exposing the
service id to the application.
The application can use the service id to configure service cores to run
event scheduling.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 drivers/event/sw/sw_evdev.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index aed8b72..9b7f4d4 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -875,6 +875,15 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
+	ret = rte_service_component_runstate_set(sw->service_id, 1);
+	if (ret) {
+		SW_LOG_ERR("Unable to enable service component");
+		return -ENOEXEC;
+	}
+
+	dev->data->service_inited = 1;
+	dev->data->service_id = sw->service_id;
+
 	return 0;
 }
 
-- 
2.7.4


* [PATCH 3/7] app/test-eventdev: update app to use service cores
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 1/7] eventdev: add API to get service id Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 2/7] event/sw: extend service capability Pavan Nikhilesh
@ 2017-10-11  9:09 ` Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 4/7] test/eventdev: update test to use service core Pavan Nikhilesh
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-11  9:09 UTC (permalink / raw)
  To: jerin.jacob, harry.van.haaren, hemant.agrawal, santosh.shukla
  Cc: dev, Pavan Nikhilesh

Use service cores for offloading event scheduling in the case of
centralized scheduling, instead of calling the schedule API directly.
This removes the dependency on a dedicated scheduler core specified via
the command line option --slcore.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 app/test-eventdev/evt_common.h        | 41 +++++++++++++++++++++++++++++++++++
 app/test-eventdev/evt_options.c       | 10 ---------
 app/test-eventdev/test_order_atq.c    |  6 +++++
 app/test-eventdev/test_order_common.c |  3 ---
 app/test-eventdev/test_order_queue.c  |  6 +++++
 app/test-eventdev/test_perf_atq.c     |  6 +++++
 app/test-eventdev/test_perf_common.c  | 21 ------------------
 app/test-eventdev/test_perf_common.h  |  1 +
 app/test-eventdev/test_perf_queue.c   |  6 +++++
 9 files changed, 66 insertions(+), 34 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index 4102076..0300453 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -36,6 +36,7 @@
 #include <rte_common.h>
 #include <rte_debug.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>
 
 #define CLNRM  "\x1b[0m"
 #define CLRED  "\x1b[31m"
@@ -113,4 +114,44 @@ evt_sched_type2queue_cfg(uint8_t sched_type)
 	return ret;
 }
 
+
+static inline int
+evt_service_setup(uint8_t dev_id)
+{
+	uint32_t service_id;
+	int32_t core_cnt;
+	unsigned lcore = 0;
+	uint32_t core_array[RTE_MAX_LCORE];
+	uint8_t cnt;
+	uint8_t min_cnt = UINT8_MAX;
+
+	if (evt_has_distributed_sched(dev_id))
+		return 0;
+
+	if (!rte_service_lcore_count())
+		return -ENOENT;
+
+	if (!rte_event_dev_service_id_get(dev_id, &service_id)) {
+		core_cnt = rte_service_lcore_list(core_array,
+				RTE_MAX_LCORE);
+		if (core_cnt < 0)
+			return -ENOENT;
+		/* Get the core which has least number of services running. */
+		while (core_cnt--) {
+			/* Reset default mapping */
+			rte_service_map_lcore_set(service_id,
+					core_array[core_cnt], 0);
+			cnt = rte_service_lcore_count_services(
+					core_array[core_cnt]);
+			if (cnt < min_cnt) {
+				lcore = core_array[core_cnt];
+				min_cnt = cnt;
+			}
+		}
+		if (rte_service_map_lcore_set(service_id, lcore, 1))
+			return -ENOENT;
+	}
+	return 0;
+}
+
 #endif /*  _EVT_COMMON_*/
diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
index 65e22f8..e2187df 100644
--- a/app/test-eventdev/evt_options.c
+++ b/app/test-eventdev/evt_options.c
@@ -114,13 +114,6 @@ evt_parse_test_name(struct evt_options *opt, const char *arg)
 }
 
 static int
-evt_parse_slcore(struct evt_options *opt, const char *arg)
-{
-	opt->slcore = atoi(arg);
-	return 0;
-}
-
-static int
 evt_parse_socket_id(struct evt_options *opt, const char *arg)
 {
 	opt->socket_id = atoi(arg);
@@ -188,7 +181,6 @@ usage(char *program)
 		"\t--test             : name of the test application to run\n"
 		"\t--socket_id        : socket_id of application resources\n"
 		"\t--pool_sz          : pool size of the mempool\n"
-		"\t--slcore           : lcore id of the scheduler\n"
 		"\t--plcores          : list of lcore ids for producers\n"
 		"\t--wlcores          : list of lcore ids for workers\n"
 		"\t--stlist           : list of scheduled types of the stages\n"
@@ -254,7 +246,6 @@ static struct option lgopts[] = {
 	{ EVT_POOL_SZ,          1, 0, 0 },
 	{ EVT_NB_PKTS,          1, 0, 0 },
 	{ EVT_WKR_DEQ_DEP,      1, 0, 0 },
-	{ EVT_SCHED_LCORE,      1, 0, 0 },
 	{ EVT_SCHED_TYPE_LIST,  1, 0, 0 },
 	{ EVT_FWD_LATENCY,      0, 0, 0 },
 	{ EVT_QUEUE_PRIORITY,   0, 0, 0 },
@@ -278,7 +269,6 @@ evt_opts_parse_long(int opt_idx, struct evt_options *opt)
 		{ EVT_POOL_SZ, evt_parse_pool_sz},
 		{ EVT_NB_PKTS, evt_parse_nb_pkts},
 		{ EVT_WKR_DEQ_DEP, evt_parse_wkr_deq_dep},
-		{ EVT_SCHED_LCORE, evt_parse_slcore},
 		{ EVT_SCHED_TYPE_LIST, evt_parse_sched_type_list},
 		{ EVT_FWD_LATENCY, evt_parse_fwd_latency},
 		{ EVT_QUEUE_PRIORITY, evt_parse_queue_priority},
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 7e6c67d..4ee0dea 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -179,6 +179,12 @@ order_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 80e14c0..7cfe7fa 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -292,9 +292,6 @@ order_launch_lcores(struct evt_test *test, struct evt_options *opt,
 	int64_t old_remaining  = -1;
 
 	while (t->err == false) {
-
-		rte_event_schedule(opt->dev_id);
-
 		uint64_t new_cycles = rte_get_timer_cycles();
 		int64_t remaining = rte_atomic64_read(&t->outstand_pkts);
 
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index beadd9c..a14e0b0 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -192,6 +192,12 @@ order_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
index 9c3efa3..0e9f2db 100644
--- a/app/test-eventdev/test_perf_atq.c
+++ b/app/test-eventdev/test_perf_atq.c
@@ -221,6 +221,12 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 7b09299..770e365 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -88,18 +88,6 @@ perf_producer(void *arg)
 	return 0;
 }
 
-static inline int
-scheduler(void *arg)
-{
-	struct test_perf *t = arg;
-	const uint8_t dev_id = t->opt->dev_id;
-
-	while (t->done == false)
-		rte_event_schedule(dev_id);
-
-	return 0;
-}
-
 static inline uint64_t
 processed_pkts(struct test_perf *t)
 {
@@ -163,15 +151,6 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		port_idx++;
 	}
 
-	/* launch scheduler */
-	if (!evt_has_distributed_sched(opt->dev_id)) {
-		ret = rte_eal_remote_launch(scheduler, t, opt->slcore);
-		if (ret) {
-			evt_err("failed to launch sched %d", opt->slcore);
-			return ret;
-		}
-	}
-
 	const uint64_t total_pkts = opt->nb_pkts *
 			evt_nr_active_lcores(opt->plcores);
 
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index 4956586..c6fc70c 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -159,6 +159,7 @@ int perf_test_setup(struct evt_test *test, struct evt_options *opt);
 int perf_mempool_setup(struct evt_test *test, struct evt_options *opt);
 int perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
 				uint8_t stride, uint8_t nb_queues);
+int perf_event_dev_service_setup(uint8_t dev_id);
 int perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		int (*worker)(void *));
 void perf_opt_dump(struct evt_options *opt, uint8_t nb_queues);
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index 658c08a..78f43b5 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -232,6 +232,12 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
-- 
2.7.4


* [PATCH 4/7] test/eventdev: update test to use service core
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
                   ` (2 preceding siblings ...)
  2017-10-11  9:09 ` [PATCH 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
@ 2017-10-11  9:09 ` Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-11  9:09 UTC (permalink / raw)
  To: jerin.jacob, harry.van.haaren, hemant.agrawal, santosh.shukla
  Cc: dev, Pavan Nikhilesh

Use a service core for event scheduling instead of calling the event
schedule API directly.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 test/test/test_eventdev_sw.c | 120 ++++++++++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 53 deletions(-)

diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index 7219886..81954dc 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -49,6 +49,7 @@
 #include <rte_cycles.h>
 #include <rte_eventdev.h>
 #include <rte_pause.h>
+#include <rte_service_component.h>
 
 #include "test.h"
 
@@ -320,6 +321,19 @@ struct test_event_dev_stats {
 	uint64_t qid_tx_pkts[MAX_QIDS];
 };
 
+static inline void
+wait_schedule(int evdev)
+{
+	static const char * const dev_names[] = {"dev_sched_calls"};
+	uint64_t val;
+
+	val = rte_event_dev_xstats_by_name_get(evdev, dev_names[0],
+			0);
+	while ((rte_event_dev_xstats_by_name_get(evdev, dev_names[0], 0) - val)
+			< 2)
+		;
+}
+
 static inline int
 test_event_dev_stats_get(int dev_id, struct test_event_dev_stats *stats)
 {
@@ -392,9 +406,9 @@ run_prio_packet_test(struct test *t)
 		RTE_EVENT_DEV_PRIORITY_HIGHEST
 	};
 	unsigned int i;
+	struct rte_event ev_arr[2];
 	for (i = 0; i < RTE_DIM(MAGIC_SEQN); i++) {
 		/* generate pkt and enqueue */
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
@@ -402,20 +416,20 @@ run_prio_packet_test(struct test *t)
 		}
 		arp->seqn = MAGIC_SEQN[i];
 
-		ev = (struct rte_event){
+		ev_arr[i] = (struct rte_event){
 			.priority = PRIORITY[i],
 			.op = RTE_EVENT_OP_NEW,
 			.queue_id = t->qid[0],
 			.mbuf = arp
 		};
-		err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err < 0) {
-			printf("%d: error failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 2);
+	if (err < 0) {
+		printf("%d: error failed to enqueue\n", __LINE__);
+		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -425,8 +439,8 @@ run_prio_packet_test(struct test *t)
 	}
 
 	if (stats.port_rx_pkts[t->port[0]] != 2) {
-		printf("%d: error stats incorrect for directed port\n",
-				__LINE__);
+		printf("%d: error stats incorrect for directed port %"PRIu64"\n",
+				__LINE__, stats.port_rx_pkts[t->port[0]]);
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
@@ -439,6 +453,7 @@ run_prio_packet_test(struct test *t)
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
+
 	if (ev.mbuf->seqn != MAGIC_SEQN[1]) {
 		printf("%d: first packet out not highest priority\n",
 				__LINE__);
@@ -507,7 +522,7 @@ test_single_directed_packet(struct test *t)
 	}
 
 	/* Run schedule() as dir packets may need to be re-ordered */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -574,7 +589,7 @@ test_directed_forward_credits(struct test *t)
 			printf("%d: error failed to enqueue\n", __LINE__);
 			return -1;
 		}
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 
 		uint32_t deq_pkts;
 		deq_pkts = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
@@ -736,7 +751,7 @@ burst_packets(struct test *t)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* Check stats for all NUM_PKTS arrived to sched core */
 	struct test_event_dev_stats stats;
@@ -825,7 +840,7 @@ abuse_inflights(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 
@@ -963,7 +978,7 @@ xstats_tests(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* Device names / values */
 	int num_stats = rte_event_dev_xstats_names_get(evdev,
@@ -974,8 +989,8 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev,
 					RTE_EVENT_DEV_XSTATS_DEVICE,
 					0, ids, values, num_stats);
-	static const uint64_t expected[] = {3, 3, 0, 1, 0, 0};
-	for (i = 0; (signed int)i < ret; i++) {
+	static const uint64_t expected[] = {3, 3, 0};
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected[i] != values[i]) {
 			printf(
 				"%d Error xstat %d (id %d) %s : %"PRIu64
@@ -994,7 +1009,7 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev,
 					RTE_EVENT_DEV_XSTATS_DEVICE,
 					0, ids, values, num_stats);
-	for (i = 0; (signed int)i < ret; i++) {
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected_zero[i] != values[i]) {
 			printf(
 				"%d Error, xstat %d (id %d) %s : %"PRIu64
@@ -1290,7 +1305,7 @@ port_reconfig_credits(struct test *t)
 			}
 		}
 
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 
 		struct rte_event ev[NPKTS];
 		int deq = rte_event_dequeue_burst(evdev, t->port[0], ev,
@@ -1516,14 +1531,12 @@ xstats_id_reset_tests(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	static const char * const dev_names[] = {
-		"dev_rx", "dev_tx", "dev_drop", "dev_sched_calls",
-		"dev_sched_no_iq_enq", "dev_sched_no_cq_enq",
-	};
+		"dev_rx", "dev_tx", "dev_drop"};
 	uint64_t dev_expected[] = {NPKTS, NPKTS, 0, 1, 0, 0};
-	for (i = 0; (int)i < ret; i++) {
+	for (i = 0; (int)i < 3; i++) {
 		unsigned int id;
 		uint64_t val = rte_event_dev_xstats_by_name_get(evdev,
 								dev_names[i],
@@ -1888,26 +1901,26 @@ qid_priorities(struct test *t)
 	}
 
 	/* enqueue 3 packets, setting seqn and QID to check priority */
+	struct rte_event ev_arr[3];
 	for (i = 0; i < 3; i++) {
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
 			return -1;
 		}
-		ev.queue_id = t->qid[i];
-		ev.op = RTE_EVENT_OP_NEW;
-		ev.mbuf = arp;
+		ev_arr[i].queue_id = t->qid[i];
+		ev_arr[i].op = RTE_EVENT_OP_NEW;
+		ev_arr[i].mbuf = arp;
 		arp->seqn = i;
 
-		int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err != 1) {
-			printf("%d: Failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	int err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 3);
+	if (err != 3) {
+		printf("%d: Failed to enqueue\n", __LINE__);
+		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* dequeue packets, verify priority was upheld */
 	struct rte_event ev[32];
@@ -1988,7 +2001,7 @@ load_balancing(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -2088,7 +2101,7 @@ load_balancing_history(struct test *t)
 	}
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* Dequeue the flow 0 packet from port 1, so that we can then drop */
 	struct rte_event ev;
@@ -2105,7 +2118,7 @@ load_balancing_history(struct test *t)
 	rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/*
 	 * Set up the next set of flows, first a new flow to fill up
@@ -2138,7 +2151,7 @@ load_balancing_history(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2182,7 +2195,7 @@ load_balancing_history(struct test *t)
 		while (rte_event_dequeue_burst(evdev, i, &ev, 1, 0))
 			rte_event_enqueue_burst(evdev, i, &release_ev, 1);
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	cleanup(t);
 	return 0;
@@ -2248,7 +2261,7 @@ invalid_qid(struct test *t)
 	}
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2333,7 +2346,7 @@ single_packet(struct test *t)
 		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2376,7 +2389,7 @@ single_packet(struct test *t)
 		printf("%d: Failed to enqueue\n", __LINE__);
 		return -1;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[wrk_enq] != 0) {
@@ -2464,7 +2477,7 @@ inflight_counts(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2520,7 +2533,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p1] != 0) {
@@ -2555,7 +2568,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p2] != 0) {
@@ -2649,7 +2662,7 @@ parallel_basic(struct test *t, int check_order)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* use extra slot to make logic in loops easier */
 	struct rte_event deq_ev[w3_port + 1];
@@ -2676,7 +2689,7 @@ parallel_basic(struct test *t, int check_order)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* dequeue from the tx ports, we should get 3 packets */
 	deq_pkts = rte_event_dequeue_burst(evdev, t->port[tx_port], deq_ev,
@@ -2754,7 +2767,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error doing first enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	if (rte_event_dev_xstats_by_name_get(evdev, "port_0_cq_ring_used", NULL)
 			!= 1)
@@ -2779,7 +2792,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 			printf("%d: Error with enqueue\n", __LINE__);
 			goto err;
 		}
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 	} while (rte_event_dev_xstats_by_name_get(evdev,
 				rx_port_free_stat, NULL) != 0);
 
@@ -2789,7 +2802,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* check that the other port still has an empty CQ */
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
@@ -2812,7 +2825,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
 			!= 1) {
@@ -3002,7 +3015,7 @@ worker_loopback(struct test *t)
 	while (rte_eal_get_lcore_state(p_lcore) != FINISHED ||
 			rte_eal_get_lcore_state(w_lcore) != FINISHED) {
 
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 
 		uint64_t new_cycles = rte_get_timer_cycles();
 
@@ -3029,7 +3042,7 @@ worker_loopback(struct test *t)
 			cycles = new_cycles;
 		}
 	}
-	rte_event_schedule(evdev); /* ensure all completions are flushed */
+	wait_schedule(evdev); /* ensure all completions are flushed */
 
 	rte_eal_mp_wait_lcore();
 
@@ -3064,6 +3077,7 @@ test_sw_eventdev(void)
 			printf("Error finding newly created eventdev\n");
 			return -1;
 		}
+		rte_service_start_with_defaults();
 	}
 
 	/* Only create mbuf pool once, reuse for each test run */
-- 
2.7.4


* [PATCH 5/7] examples/eventdev: update sample app to use service
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
                   ` (3 preceding siblings ...)
  2017-10-11  9:09 ` [PATCH 4/7] test/eventdev: update test to use service core Pavan Nikhilesh
@ 2017-10-11  9:09 ` Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-11  9:09 UTC (permalink / raw)
  To: jerin.jacob, harry.van.haaren, hemant.agrawal, santosh.shukla
  Cc: dev, Pavan Nikhilesh

Update the sample app eventdev_pipeline_sw_pmd to use service cores for
event scheduling in the case of the sw eventdev.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 examples/eventdev_pipeline_sw_pmd/main.c | 51 +++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 18 deletions(-)

diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 09b90c3..f02fe84 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -46,6 +46,7 @@
 #include <rte_cycles.h>
 #include <rte_ethdev.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>
 
 #define MAX_NUM_STAGES 8
 #define BATCH_SIZE 16
@@ -233,7 +234,7 @@ producer(void)
 }
 
 static inline void
-schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+schedule_devices(unsigned int lcore_id)
 {
 	if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
 	    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
@@ -241,16 +242,6 @@ schedule_devices(uint8_t dev_id, unsigned int lcore_id)
 		rte_atomic32_clear((rte_atomic32_t *)&(fdata->rx_lock));
 	}
 
-	if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
-	    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
-		rte_event_schedule(dev_id);
-		if (cdata.dump_dev_signal) {
-			rte_event_dev_dump(0, stdout);
-			cdata.dump_dev_signal = 0;
-		}
-		rte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));
-	}
-
 	if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
 	    rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
 		consumer();
@@ -294,7 +285,7 @@ worker(void *arg)
 	while (!fdata->done) {
 		uint16_t i;
 
-		schedule_devices(dev_id, lcore_id);
+		schedule_devices(lcore_id);
 
 		if (!fdata->worker_core[lcore_id]) {
 			rte_pause();
@@ -661,6 +652,27 @@ struct port_link {
 };
 
 static int
+setup_scheduling_service(unsigned lcore, uint8_t dev_id)
+{
+	int ret;
+	uint32_t service_id;
+	ret = rte_event_dev_service_id_get(dev_id, &service_id);
+	if (ret == -ESRCH) {
+		printf("Event device [%d] doesn't need scheduling service\n",
+				dev_id);
+		return 0;
+	}
+	if (!ret) {
+		rte_service_runstate_set(service_id, 1);
+		rte_service_lcore_add(lcore);
+		rte_service_map_lcore_set(service_id, lcore, 1);
+		rte_service_lcore_start(lcore);
+	}
+
+	return ret;
+}
+
+static int
 setup_eventdev(struct prod_data *prod_data,
 		struct cons_data *cons_data,
 		struct worker_data *worker_data)
@@ -839,6 +851,14 @@ setup_eventdev(struct prod_data *prod_data,
 	*cons_data = (struct cons_data){.dev_id = dev_id,
 					.port_id = i };
 
+	for (i = 0; i < MAX_NUM_CORE; i++) {
+		if (fdata->sched_core[i]
+				&& setup_scheduling_service(i, dev_id)) {
+			printf("Error setting up scheduling service on %d\n", i);
+			return -1;
+		}
+	}
+
 	if (rte_event_dev_start(dev_id) < 0) {
 		printf("Error starting eventdev\n");
 		return -1;
@@ -944,8 +964,7 @@ main(int argc, char **argv)
 
 		if (!fdata->rx_core[lcore_id] &&
 			!fdata->worker_core[lcore_id] &&
-			!fdata->tx_core[lcore_id] &&
-			!fdata->sched_core[lcore_id])
+			!fdata->tx_core[lcore_id])
 			continue;
 
 		if (fdata->rx_core[lcore_id])
@@ -958,10 +977,6 @@ main(int argc, char **argv)
 				"[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
 				__func__, lcore_id, cons_data.port_id);
 
-		if (fdata->sched_core[lcore_id])
-			printf("[%s()] lcore %d executing scheduler\n",
-					__func__, lcore_id);
-
 		if (fdata->worker_core[lcore_id])
 			printf(
 				"[%s()] lcore %d executing worker, using eventdev port %u\n",
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH 6/7] eventdev: remove eventdev schedule API
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
                   ` (4 preceding siblings ...)
  2017-10-11  9:09 ` [PATCH 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
@ 2017-10-11  9:09 ` Pavan Nikhilesh
  2017-10-11  9:09 ` [PATCH 7/7] doc: update software event device Pavan Nikhilesh
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-11  9:09 UTC (permalink / raw)
  To: jerin.jacob, harry.van.haaren, hemant.agrawal, santosh.shukla
  Cc: dev, Pavan Nikhilesh

Remove the eventdev schedule API and make the SW driver use the service
core feature for event scheduling.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 drivers/event/octeontx/ssovf_evdev.c       |  1 -
 drivers/event/skeleton/skeleton_eventdev.c |  2 --
 drivers/event/sw/sw_evdev.c                | 13 +++++--------
 lib/librte_eventdev/rte_eventdev.h         | 31 ++++--------------------------
 4 files changed, 9 insertions(+), 38 deletions(-)

diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index d829b49..1127db0 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -155,7 +155,6 @@ ssovf_fastpath_fns_set(struct rte_eventdev *dev)
 {
 	struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
 
-	dev->schedule      = NULL;
 	dev->enqueue       = ssows_enq;
 	dev->enqueue_burst = ssows_enq_burst;
 	dev->enqueue_new_burst = ssows_enq_new_burst;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index bcd2055..4d1a1da 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -375,7 +375,6 @@ skeleton_eventdev_init(struct rte_eventdev *eventdev)
 	PMD_DRV_FUNC_TRACE();
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
@@ -466,7 +465,6 @@ skeleton_eventdev_create(const char *name, int socket_id)
 	}
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 9b7f4d4..086fd96 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -629,10 +629,14 @@ sw_start(struct rte_eventdev *dev)
 	unsigned int i, j;
 	struct sw_evdev *sw = sw_pmd_priv(dev);
 
+	rte_service_component_runstate_set(sw->service_id, 1);
+
 	/* check a service core is mapped to this service */
-	if (!rte_service_runstate_get(sw->service_id))
+	if (!rte_service_runstate_get(sw->service_id)) {
 		SW_LOG_ERR("Warning: No Service core enabled on service %s\n",
 				sw->service_name);
+		return -ENOENT;
+	}
 
 	/* check all ports are set up */
 	for (i = 0; i < sw->port_count; i++)
@@ -847,7 +851,6 @@ sw_probe(struct rte_vdev_device *vdev)
 	dev->enqueue_forward_burst = sw_event_enqueue_burst;
 	dev->dequeue = sw_event_dequeue;
 	dev->dequeue_burst = sw_event_dequeue_burst;
-	dev->schedule = sw_event_schedule;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -875,12 +878,6 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
-	ret = rte_service_component_runstate_set(sw->service_id, 1);
-	if (ret) {
-		SW_LOG_ERR("Unable to enable service component");
-		return -ENOEXEC;
-	}
-
 	dev->data->service_inited = 1;
 	dev->data->service_id = sw->service_id;
 
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 1c1ff6b..ee0c4c3 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -218,10 +218,10 @@
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
  * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic is located in rte_event_schedule().
+ * scheduler logic needs a dedicated service core for scheduling.
  * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
  * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls rte_event_schedule().
+ * thread that repeatedly calls the software-specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
@@ -263,9 +263,9 @@ struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
  * In distributed scheduling mode, event scheduling happens in HW or
  * rte_event_dequeue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
- * dedicated scheduling thread that repeatedly calls rte_event_schedule().
+ * dedicated service core that acts as a scheduling thread.
  *
- * @see rte_event_schedule(), rte_event_dequeue_burst()
+ * @see rte_event_dequeue_burst()
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
@@ -1065,9 +1065,6 @@ struct rte_eventdev_driver;
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
-typedef void (*event_schedule_t)(struct rte_eventdev *dev);
-/**< @internal Schedule one or more events in the event dev. */
-
 typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
 /**< @internal Enqueue event on port of a device */
 
@@ -1131,8 +1128,6 @@ struct rte_eventdev_data {
 
 /** @internal The data structure associated with each event device. */
 struct rte_eventdev {
-	event_schedule_t schedule;
-	/**< Pointer to PMD schedule function. */
 	event_enqueue_t enqueue;
 	/**< Pointer to PMD enqueue function. */
 	event_enqueue_burst_t enqueue_burst;
@@ -1161,24 +1156,6 @@ struct rte_eventdev {
 extern struct rte_eventdev *rte_eventdevs;
 /** @internal The pool of rte_eventdev structures. */
 
-
-/**
- * Schedule one or more events in the event dev.
- *
- * An event dev implementation may define this is a NOOP, for instance if
- * the event dev performs its scheduling in hardware.
- *
- * @param dev_id
- *   The identifier of the device.
- */
-static inline void
-rte_event_schedule(uint8_t dev_id)
-{
-	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-	if (*dev->schedule)
-		(*dev->schedule)(dev);
-}
-
 static __rte_always_inline uint16_t
 __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
 			const struct rte_event ev[], uint16_t nb_events,
-- 
2.7.4


* [PATCH 7/7] doc: update software event device
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
                   ` (5 preceding siblings ...)
  2017-10-11  9:09 ` [PATCH 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
@ 2017-10-11  9:09 ` Pavan Nikhilesh
  2017-10-12 12:29   ` Mcnamara, John
  2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-11  9:09 UTC (permalink / raw)
  To: jerin.jacob, harry.van.haaren, hemant.agrawal, santosh.shukla
  Cc: dev, Pavan Nikhilesh

Update the software event device documentation to cover the use of
service cores for event distribution.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 doc/guides/eventdevs/sw.rst | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/doc/guides/eventdevs/sw.rst b/doc/guides/eventdevs/sw.rst
index a3e6624..ec49b3b 100644
--- a/doc/guides/eventdevs/sw.rst
+++ b/doc/guides/eventdevs/sw.rst
@@ -78,9 +78,9 @@ Scheduling Quanta
 ~~~~~~~~~~~~~~~~~
 
 The scheduling quanta sets the number of events that the device attempts to
-schedule before returning to the application from the ``rte_event_schedule()``
-function. Note that is a *hint* only, and that fewer or more events may be
-scheduled in a given iteration.
+schedule in a single schedule call performed by the service core. Note that
+this is a *hint* only, and fewer or more events may be scheduled in a given
+iteration.
 
 The scheduling quanta can be set using a string argument to the vdev
 create call:
@@ -140,10 +140,9 @@ eventdev.
 Distributed Scheduler
 ~~~~~~~~~~~~~~~~~~~~~
 
-The software eventdev is a centralized scheduler, requiring the
-``rte_event_schedule()`` function to be called by a CPU core to perform the
-required event distribution. This is not really a limitation but rather a
-design decision.
+The software eventdev is a centralized scheduler, requiring a service core to
+perform the required event distribution. This is not really a limitation but
+rather a design decision.
 
 The ``RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED`` flag is not set in the
 ``event_dev_cap`` field of the ``rte_event_dev_info`` struct for the software
-- 
2.7.4


* Re: [PATCH 7/7] doc: update software event device
  2017-10-11  9:09 ` [PATCH 7/7] doc: update software event device Pavan Nikhilesh
@ 2017-10-12 12:29   ` Mcnamara, John
  0 siblings, 0 replies; 47+ messages in thread
From: Mcnamara, John @ 2017-10-12 12:29 UTC (permalink / raw)
  To: Pavan Nikhilesh, jerin.jacob, Van Haaren, Harry, hemant.agrawal,
	santosh.shukla
  Cc: dev



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pavan Nikhilesh
> Sent: Wednesday, October 11, 2017 10:10 AM
> To: jerin.jacob@caviumnetworks.com; Van Haaren, Harry
> <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com;
> santosh.shukla@caviumnetworks.com
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH 7/7] doc: update software event device
> 
> Update software event device documentation to include use of service cores
> for event distribution.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>


Acked-by: John McNamara <john.mcnamara@intel.com>


* [PATCH v2 1/7] eventdev: add API to get service id
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
                   ` (6 preceding siblings ...)
  2017-10-11  9:09 ` [PATCH 7/7] doc: update software event device Pavan Nikhilesh
@ 2017-10-13 16:36 ` Pavan Nikhilesh
  2017-10-13 16:36   ` [PATCH v2 2/7] event/sw: extend service capability Pavan Nikhilesh
                     ` (6 more replies)
  2017-10-22  9:16 ` [PATCH v3 " Pavan Nikhilesh
                   ` (2 subsequent siblings)
  10 siblings, 7 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-13 16:36 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

In the case of the sw event device, scheduling can be done on a service
core using the service registered at probe time.
This patch adds a helper function to get the service id, which the
application can use to assign an lcore for the service to run on.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---

v2 changes:
 - fix checkpatch issues
 - update eventdev version map
 - fix --slcore option not removed in app/test-event-dev

 lib/librte_eventdev/rte_eventdev.c           | 17 +++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h           | 22 ++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_version.map |  1 +
 3 files changed, 40 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 378ccb5..f179aa4 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -961,6 +961,23 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 }

 int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (service_id == NULL)
+		return -EINVAL;
+
+	if (dev->data->service_inited)
+		*service_id = dev->data->service_id;
+
+	return dev->data->service_inited ? 0 : -ESRCH;
+}
+
+int
 rte_event_dev_dump(uint8_t dev_id, FILE *f)
 {
 	struct rte_eventdev *dev;
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 1dbc872..1c1ff6b 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1116,6 +1116,10 @@ struct rte_eventdev_data {
 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
 	struct rte_event_dev_config dev_conf;
 	/**< Configuration applied to device. */
+	uint8_t service_inited;
+	/* Service initialization state */
+	uint32_t service_id;
+	/* Service ID */

 	RTE_STD_C11
 	uint8_t dev_started : 1;
@@ -1619,6 +1623,24 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 			 uint8_t queues[], uint8_t priorities[]);

 /**
+ * Retrieve the service ID of the event dev. If the event dev doesn't use
+ * a rte_service function, this function returns -ESRCH.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param [out] service_id
+ *   A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure; if the event dev doesn't use a rte_service
+ *   function, -ESRCH is returned.
+ */
+int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id);
+
+/**
  * Dump internal information about *dev_id* to the FILE* provided in *f*.
  *
  * @param dev_id
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index d555b19..59c36a0 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -53,6 +53,7 @@ DPDK_17.11 {
 	rte_event_dev_attr_get;
 	rte_event_port_attr_get;
 	rte_event_queue_attr_get;
+	rte_event_dev_service_id_get;

 	rte_event_eth_rx_adapter_create;
 	rte_event_eth_rx_adapter_free;
--
2.7.4


* [PATCH v2 2/7] event/sw: extend service capability
  2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
@ 2017-10-13 16:36   ` Pavan Nikhilesh
  2017-10-20 10:30     ` Van Haaren, Harry
  2017-10-13 16:36   ` [PATCH v2 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
                     ` (5 subsequent siblings)
  6 siblings, 1 reply; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-13 16:36 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Extend the service capability of the sw event device by exposing the
service id to the application.
The application can use the service id to configure service cores to run
event scheduling.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 drivers/event/sw/sw_evdev.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index aed8b72..9b7f4d4 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -875,6 +875,15 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
+	ret = rte_service_component_runstate_set(sw->service_id, 1);
+	if (ret) {
+		SW_LOG_ERR("Unable to enable service component");
+		return -ENOEXEC;
+	}
+
+	dev->data->service_inited = 1;
+	dev->data->service_id = sw->service_id;
+
 	return 0;
 }
 
-- 
2.7.4


* [PATCH v2 3/7] app/test-eventdev: update app to use service cores
  2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
  2017-10-13 16:36   ` [PATCH v2 2/7] event/sw: extend service capability Pavan Nikhilesh
@ 2017-10-13 16:36   ` Pavan Nikhilesh
  2017-10-21 17:01     ` Jerin Jacob
  2017-10-13 16:36   ` [PATCH v2 4/7] test/eventdev: update test to use service core Pavan Nikhilesh
                     ` (4 subsequent siblings)
  6 siblings, 1 reply; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-13 16:36 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Use service cores to offload event scheduling in the case of centralized
scheduling, instead of calling the schedule API directly. This removes
the dependency on the dedicated scheduler core specified by the --slcore
command line option.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 app/test-eventdev/evt_common.h        | 41 ++++++++++++++++++++++++++++++
 app/test-eventdev/evt_options.c       | 10 --------
 app/test-eventdev/evt_options.h       |  8 ------
 app/test-eventdev/test_order_atq.c    |  6 +++++
 app/test-eventdev/test_order_common.c |  3 ---
 app/test-eventdev/test_order_queue.c  |  6 +++++
 app/test-eventdev/test_perf_atq.c     |  6 +++++
 app/test-eventdev/test_perf_common.c  | 47 ++---------------------------------
 app/test-eventdev/test_perf_common.h  |  1 +
 app/test-eventdev/test_perf_queue.c   |  6 +++++
 10 files changed, 68 insertions(+), 66 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index 4102076..1589190 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -36,6 +36,7 @@
 #include <rte_common.h>
 #include <rte_debug.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>
 
 #define CLNRM  "\x1b[0m"
 #define CLRED  "\x1b[31m"
@@ -113,4 +114,44 @@ evt_sched_type2queue_cfg(uint8_t sched_type)
 	return ret;
 }
 
+
+static inline int
+evt_service_setup(uint8_t dev_id)
+{
+	uint32_t service_id;
+	int32_t core_cnt;
+	unsigned int lcore = 0;
+	uint32_t core_array[RTE_MAX_LCORE];
+	uint8_t cnt;
+	uint8_t min_cnt = UINT8_MAX;
+
+	if (evt_has_distributed_sched(dev_id))
+		return 0;
+
+	if (!rte_service_lcore_count())
+		return -ENOENT;
+
+	if (!rte_event_dev_service_id_get(dev_id, &service_id)) {
+		core_cnt = rte_service_lcore_list(core_array,
+				RTE_MAX_LCORE);
+		if (core_cnt < 0)
+			return -ENOENT;
+		/* Get the core which has least number of services running. */
+		while (core_cnt--) {
+			/* Reset default mapping */
+			rte_service_map_lcore_set(service_id,
+					core_array[core_cnt], 0);
+			cnt = rte_service_lcore_count_services(
+					core_array[core_cnt]);
+			if (cnt < min_cnt) {
+				lcore = core_array[core_cnt];
+				min_cnt = cnt;
+			}
+		}
+		if (rte_service_map_lcore_set(service_id, lcore, 1))
+			return -ENOENT;
+	}
+	return 0;
+}
+
 #endif /*  _EVT_COMMON_*/
diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
index 65e22f8..e2187df 100644
--- a/app/test-eventdev/evt_options.c
+++ b/app/test-eventdev/evt_options.c
@@ -114,13 +114,6 @@ evt_parse_test_name(struct evt_options *opt, const char *arg)
 }
 
 static int
-evt_parse_slcore(struct evt_options *opt, const char *arg)
-{
-	opt->slcore = atoi(arg);
-	return 0;
-}
-
-static int
 evt_parse_socket_id(struct evt_options *opt, const char *arg)
 {
 	opt->socket_id = atoi(arg);
@@ -188,7 +181,6 @@ usage(char *program)
 		"\t--test             : name of the test application to run\n"
 		"\t--socket_id        : socket_id of application resources\n"
 		"\t--pool_sz          : pool size of the mempool\n"
-		"\t--slcore           : lcore id of the scheduler\n"
 		"\t--plcores          : list of lcore ids for producers\n"
 		"\t--wlcores          : list of lcore ids for workers\n"
 		"\t--stlist           : list of scheduled types of the stages\n"
@@ -254,7 +246,6 @@ static struct option lgopts[] = {
 	{ EVT_POOL_SZ,          1, 0, 0 },
 	{ EVT_NB_PKTS,          1, 0, 0 },
 	{ EVT_WKR_DEQ_DEP,      1, 0, 0 },
-	{ EVT_SCHED_LCORE,      1, 0, 0 },
 	{ EVT_SCHED_TYPE_LIST,  1, 0, 0 },
 	{ EVT_FWD_LATENCY,      0, 0, 0 },
 	{ EVT_QUEUE_PRIORITY,   0, 0, 0 },
@@ -278,7 +269,6 @@ evt_opts_parse_long(int opt_idx, struct evt_options *opt)
 		{ EVT_POOL_SZ, evt_parse_pool_sz},
 		{ EVT_NB_PKTS, evt_parse_nb_pkts},
 		{ EVT_WKR_DEQ_DEP, evt_parse_wkr_deq_dep},
-		{ EVT_SCHED_LCORE, evt_parse_slcore},
 		{ EVT_SCHED_TYPE_LIST, evt_parse_sched_type_list},
 		{ EVT_FWD_LATENCY, evt_parse_fwd_latency},
 		{ EVT_QUEUE_PRIORITY, evt_parse_queue_priority},
diff --git a/app/test-eventdev/evt_options.h b/app/test-eventdev/evt_options.h
index d8a9fdc..a9a9125 100644
--- a/app/test-eventdev/evt_options.h
+++ b/app/test-eventdev/evt_options.h
@@ -47,7 +47,6 @@
 #define EVT_VERBOSE              ("verbose")
 #define EVT_DEVICE               ("dev")
 #define EVT_TEST                 ("test")
-#define EVT_SCHED_LCORE          ("slcore")
 #define EVT_PROD_LCORES          ("plcores")
 #define EVT_WORK_LCORES          ("wlcores")
 #define EVT_NB_FLOWS             ("nb_flows")
@@ -67,7 +66,6 @@ struct evt_options {
 	bool plcores[RTE_MAX_LCORE];
 	bool wlcores[RTE_MAX_LCORE];
 	uint8_t sched_type_list[EVT_MAX_STAGES];
-	int slcore;
 	uint32_t nb_flows;
 	int socket_id;
 	int pool_sz;
@@ -219,12 +217,6 @@ evt_dump_nb_flows(struct evt_options *opt)
 }
 
 static inline void
-evt_dump_scheduler_lcore(struct evt_options *opt)
-{
-	evt_dump("scheduler lcore", "%d", opt->slcore);
-}
-
-static inline void
 evt_dump_worker_dequeue_depth(struct evt_options *opt)
 {
 	evt_dump("worker deq depth", "%d", opt->wkr_deq_dep);
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 7e6c67d..4ee0dea 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -179,6 +179,12 @@ order_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 80e14c0..7cfe7fa 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -292,9 +292,6 @@ order_launch_lcores(struct evt_test *test, struct evt_options *opt,
 	int64_t old_remaining  = -1;
 
 	while (t->err == false) {
-
-		rte_event_schedule(opt->dev_id);
-
 		uint64_t new_cycles = rte_get_timer_cycles();
 		int64_t remaining = rte_atomic64_read(&t->outstand_pkts);
 
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index beadd9c..a14e0b0 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -192,6 +192,12 @@ order_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
index 9c3efa3..0e9f2db 100644
--- a/app/test-eventdev/test_perf_atq.c
+++ b/app/test-eventdev/test_perf_atq.c
@@ -221,6 +221,12 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 7b09299..e77b472 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -88,18 +88,6 @@ perf_producer(void *arg)
 	return 0;
 }
 
-static inline int
-scheduler(void *arg)
-{
-	struct test_perf *t = arg;
-	const uint8_t dev_id = t->opt->dev_id;
-
-	while (t->done == false)
-		rte_event_schedule(dev_id);
-
-	return 0;
-}
-
 static inline uint64_t
 processed_pkts(struct test_perf *t)
 {
@@ -163,15 +151,6 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		port_idx++;
 	}
 
-	/* launch scheduler */
-	if (!evt_has_distributed_sched(opt->dev_id)) {
-		ret = rte_eal_remote_launch(scheduler, t, opt->slcore);
-		if (ret) {
-			evt_err("failed to launch sched %d", opt->slcore);
-			return ret;
-		}
-	}
-
 	const uint64_t total_pkts = opt->nb_pkts *
 			evt_nr_active_lcores(opt->plcores);
 
@@ -307,10 +286,9 @@ int
 perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 {
 	unsigned int lcores;
-	bool need_slcore = !evt_has_distributed_sched(opt->dev_id);
 
-	/* N producer + N worker + 1 scheduler(based on dev capa) + 1 master */
-	lcores = need_slcore ? 4 : 3;
+	/* N producer + N worker + 1 master */
+	lcores = 3;
 
 	if (rte_lcore_count() < lcores) {
 		evt_err("test need minimum %d lcores", lcores);
@@ -322,10 +300,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		evt_err("worker lcores overlaps with master lcore");
 		return -1;
 	}
-	if (need_slcore && evt_lcores_has_overlap(opt->wlcores, opt->slcore)) {
-		evt_err("worker lcores overlaps with scheduler lcore");
-		return -1;
-	}
 	if (evt_lcores_has_overlap_multi(opt->wlcores, opt->plcores)) {
 		evt_err("worker lcores overlaps producer lcores");
 		return -1;
@@ -344,10 +318,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		evt_err("producer lcores overlaps with master lcore");
 		return -1;
 	}
-	if (need_slcore && evt_lcores_has_overlap(opt->plcores, opt->slcore)) {
-		evt_err("producer lcores overlaps with scheduler lcore");
-		return -1;
-	}
 	if (evt_has_disabled_lcore(opt->plcores)) {
 		evt_err("one or more producer lcores are not enabled");
 		return -1;
@@ -357,17 +327,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		return -1;
 	}
 
-	/* Validate scheduler lcore */
-	if (!evt_has_distributed_sched(opt->dev_id) &&
-			opt->slcore == (int)rte_get_master_lcore()) {
-		evt_err("scheduler lcore and master lcore should be different");
-		return -1;
-	}
-	if (need_slcore && !rte_lcore_is_enabled(opt->slcore)) {
-		evt_err("scheduler lcore is not enabled");
-		return -1;
-	}
-
 	if (evt_has_invalid_stage(opt))
 		return -1;
 
@@ -405,8 +364,6 @@ perf_opt_dump(struct evt_options *opt, uint8_t nb_queues)
 	evt_dump_producer_lcores(opt);
 	evt_dump("nb_worker_lcores", "%d", evt_nr_active_lcores(opt->wlcores));
 	evt_dump_worker_lcores(opt);
-	if (!evt_has_distributed_sched(opt->dev_id))
-		evt_dump_scheduler_lcore(opt);
 	evt_dump_nb_stages(opt);
 	evt_dump("nb_evdev_ports", "%d", perf_nb_event_ports(opt));
 	evt_dump("nb_evdev_queues", "%d", nb_queues);
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index 4956586..c6fc70c 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -159,6 +159,7 @@ int perf_test_setup(struct evt_test *test, struct evt_options *opt);
 int perf_mempool_setup(struct evt_test *test, struct evt_options *opt);
 int perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
 				uint8_t stride, uint8_t nb_queues);
+int perf_event_dev_service_setup(uint8_t dev_id);
 int perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		int (*worker)(void *));
 void perf_opt_dump(struct evt_options *opt, uint8_t nb_queues);
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index 658c08a..78f43b5 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -232,6 +232,12 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
-- 
2.7.4


* [PATCH v2 4/7] test/eventdev: update test to use service core
  2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
  2017-10-13 16:36   ` [PATCH v2 2/7] event/sw: extend service capability Pavan Nikhilesh
  2017-10-13 16:36   ` [PATCH v2 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
@ 2017-10-13 16:36   ` Pavan Nikhilesh
  2017-10-13 16:36   ` [PATCH v2 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-13 16:36 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Use a service core for event scheduling instead of calling the event
schedule API directly.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 test/test/test_eventdev_sw.c | 120 ++++++++++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 53 deletions(-)

diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index 7219886..81954dc 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -49,6 +49,7 @@
 #include <rte_cycles.h>
 #include <rte_eventdev.h>
 #include <rte_pause.h>
+#include <rte_service_component.h>
 
 #include "test.h"
 
@@ -320,6 +321,19 @@ struct test_event_dev_stats {
 	uint64_t qid_tx_pkts[MAX_QIDS];
 };
 
+static inline void
+wait_schedule(int evdev)
+{
+	static const char * const dev_names[] = {"dev_sched_calls"};
+	uint64_t val;
+
+	val = rte_event_dev_xstats_by_name_get(evdev, dev_names[0],
+			0);
+	while ((rte_event_dev_xstats_by_name_get(evdev, dev_names[0], 0) - val)
+			< 2)
+		;
+}
+
 static inline int
 test_event_dev_stats_get(int dev_id, struct test_event_dev_stats *stats)
 {
@@ -392,9 +406,9 @@ run_prio_packet_test(struct test *t)
 		RTE_EVENT_DEV_PRIORITY_HIGHEST
 	};
 	unsigned int i;
+	struct rte_event ev_arr[2];
 	for (i = 0; i < RTE_DIM(MAGIC_SEQN); i++) {
 		/* generate pkt and enqueue */
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
@@ -402,20 +416,20 @@ run_prio_packet_test(struct test *t)
 		}
 		arp->seqn = MAGIC_SEQN[i];
 
-		ev = (struct rte_event){
+		ev_arr[i] = (struct rte_event){
 			.priority = PRIORITY[i],
 			.op = RTE_EVENT_OP_NEW,
 			.queue_id = t->qid[0],
 			.mbuf = arp
 		};
-		err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err < 0) {
-			printf("%d: error failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 2);
+	if (err < 0) {
+		printf("%d: error failed to enqueue\n", __LINE__);
+		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -425,8 +439,8 @@ run_prio_packet_test(struct test *t)
 	}
 
 	if (stats.port_rx_pkts[t->port[0]] != 2) {
-		printf("%d: error stats incorrect for directed port\n",
-				__LINE__);
+		printf("%d: error stats incorrect for directed port %"PRIu64"\n",
+				__LINE__, stats.port_rx_pkts[t->port[0]]);
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
@@ -439,6 +453,7 @@ run_prio_packet_test(struct test *t)
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
+
 	if (ev.mbuf->seqn != MAGIC_SEQN[1]) {
 		printf("%d: first packet out not highest priority\n",
 				__LINE__);
@@ -507,7 +522,7 @@ test_single_directed_packet(struct test *t)
 	}
 
 	/* Run schedule() as dir packets may need to be re-ordered */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -574,7 +589,7 @@ test_directed_forward_credits(struct test *t)
 			printf("%d: error failed to enqueue\n", __LINE__);
 			return -1;
 		}
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 
 		uint32_t deq_pkts;
 		deq_pkts = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
@@ -736,7 +751,7 @@ burst_packets(struct test *t)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* Check stats for all NUM_PKTS arrived to sched core */
 	struct test_event_dev_stats stats;
@@ -825,7 +840,7 @@ abuse_inflights(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 
@@ -963,7 +978,7 @@ xstats_tests(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* Device names / values */
 	int num_stats = rte_event_dev_xstats_names_get(evdev,
@@ -974,8 +989,8 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev,
 					RTE_EVENT_DEV_XSTATS_DEVICE,
 					0, ids, values, num_stats);
-	static const uint64_t expected[] = {3, 3, 0, 1, 0, 0};
-	for (i = 0; (signed int)i < ret; i++) {
+	static const uint64_t expected[] = {3, 3, 0};
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected[i] != values[i]) {
 			printf(
 				"%d Error xstat %d (id %d) %s : %"PRIu64
@@ -994,7 +1009,7 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev,
 					RTE_EVENT_DEV_XSTATS_DEVICE,
 					0, ids, values, num_stats);
-	for (i = 0; (signed int)i < ret; i++) {
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected_zero[i] != values[i]) {
 			printf(
 				"%d Error, xstat %d (id %d) %s : %"PRIu64
@@ -1290,7 +1305,7 @@ port_reconfig_credits(struct test *t)
 			}
 		}
 
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 
 		struct rte_event ev[NPKTS];
 		int deq = rte_event_dequeue_burst(evdev, t->port[0], ev,
@@ -1516,14 +1531,12 @@ xstats_id_reset_tests(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	static const char * const dev_names[] = {
-		"dev_rx", "dev_tx", "dev_drop", "dev_sched_calls",
-		"dev_sched_no_iq_enq", "dev_sched_no_cq_enq",
-	};
+		"dev_rx", "dev_tx", "dev_drop"};
 	uint64_t dev_expected[] = {NPKTS, NPKTS, 0, 1, 0, 0};
-	for (i = 0; (int)i < ret; i++) {
+	for (i = 0; (int)i < 3; i++) {
 		unsigned int id;
 		uint64_t val = rte_event_dev_xstats_by_name_get(evdev,
 								dev_names[i],
@@ -1888,26 +1901,26 @@ qid_priorities(struct test *t)
 	}
 
 	/* enqueue 3 packets, setting seqn and QID to check priority */
+	struct rte_event ev_arr[3];
 	for (i = 0; i < 3; i++) {
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
 			return -1;
 		}
-		ev.queue_id = t->qid[i];
-		ev.op = RTE_EVENT_OP_NEW;
-		ev.mbuf = arp;
+		ev_arr[i].queue_id = t->qid[i];
+		ev_arr[i].op = RTE_EVENT_OP_NEW;
+		ev_arr[i].mbuf = arp;
 		arp->seqn = i;
 
-		int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err != 1) {
-			printf("%d: Failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	int err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 3);
+	if (err != 3) {
+		printf("%d: Failed to enqueue\n", __LINE__);
+		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* dequeue packets, verify priority was upheld */
 	struct rte_event ev[32];
@@ -1988,7 +2001,7 @@ load_balancing(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -2088,7 +2101,7 @@ load_balancing_history(struct test *t)
 	}
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* Dequeue the flow 0 packet from port 1, so that we can then drop */
 	struct rte_event ev;
@@ -2105,7 +2118,7 @@ load_balancing_history(struct test *t)
 	rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/*
 	 * Set up the next set of flows, first a new flow to fill up
@@ -2138,7 +2151,7 @@ load_balancing_history(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2182,7 +2195,7 @@ load_balancing_history(struct test *t)
 		while (rte_event_dequeue_burst(evdev, i, &ev, 1, 0))
 			rte_event_enqueue_burst(evdev, i, &release_ev, 1);
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	cleanup(t);
 	return 0;
@@ -2248,7 +2261,7 @@ invalid_qid(struct test *t)
 	}
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2333,7 +2346,7 @@ single_packet(struct test *t)
 		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2376,7 +2389,7 @@ single_packet(struct test *t)
 		printf("%d: Failed to enqueue\n", __LINE__);
 		return -1;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[wrk_enq] != 0) {
@@ -2464,7 +2477,7 @@ inflight_counts(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2520,7 +2533,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p1] != 0) {
@@ -2555,7 +2568,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p2] != 0) {
@@ -2649,7 +2662,7 @@ parallel_basic(struct test *t, int check_order)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* use extra slot to make logic in loops easier */
 	struct rte_event deq_ev[w3_port + 1];
@@ -2676,7 +2689,7 @@ parallel_basic(struct test *t, int check_order)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* dequeue from the tx ports, we should get 3 packets */
 	deq_pkts = rte_event_dequeue_burst(evdev, t->port[tx_port], deq_ev,
@@ -2754,7 +2767,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error doing first enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	if (rte_event_dev_xstats_by_name_get(evdev, "port_0_cq_ring_used", NULL)
 			!= 1)
@@ -2779,7 +2792,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 			printf("%d: Error with enqueue\n", __LINE__);
 			goto err;
 		}
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 	} while (rte_event_dev_xstats_by_name_get(evdev,
 				rx_port_free_stat, NULL) != 0);
 
@@ -2789,7 +2802,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* check that the other port still has an empty CQ */
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
@@ -2812,7 +2825,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
 			!= 1) {
@@ -3002,7 +3015,7 @@ worker_loopback(struct test *t)
 	while (rte_eal_get_lcore_state(p_lcore) != FINISHED ||
 			rte_eal_get_lcore_state(w_lcore) != FINISHED) {
 
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 
 		uint64_t new_cycles = rte_get_timer_cycles();
 
@@ -3029,7 +3042,7 @@ worker_loopback(struct test *t)
 			cycles = new_cycles;
 		}
 	}
-	rte_event_schedule(evdev); /* ensure all completions are flushed */
+	wait_schedule(evdev); /* ensure all completions are flushed */
 
 	rte_eal_mp_wait_lcore();
 
@@ -3064,6 +3077,7 @@ test_sw_eventdev(void)
 			printf("Error finding newly created eventdev\n");
 			return -1;
 		}
+		rte_service_start_with_defaults();
 	}
 
 	/* Only create mbuf pool once, reuse for each test run */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v2 5/7] examples/eventdev: update sample app to use service
  2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (2 preceding siblings ...)
  2017-10-13 16:36   ` [PATCH v2 4/7] test/eventdev: update test to use service core Pavan Nikhilesh
@ 2017-10-13 16:36   ` Pavan Nikhilesh
  2017-10-23 17:17     ` Van Haaren, Harry
  2017-10-13 16:36   ` [PATCH v2 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
                     ` (2 subsequent siblings)
  6 siblings, 1 reply; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-13 16:36 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Update the sample app eventdev_pipeline_sw_pmd to use service cores for
event scheduling in the case of the sw eventdev.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 examples/eventdev_pipeline_sw_pmd/main.c | 51 +++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 18 deletions(-)

diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 09b90c3..d5068d2 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -46,6 +46,7 @@
 #include <rte_cycles.h>
 #include <rte_ethdev.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>
 
 #define MAX_NUM_STAGES 8
 #define BATCH_SIZE 16
@@ -233,7 +234,7 @@ producer(void)
 }
 
 static inline void
-schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+schedule_devices(unsigned int lcore_id)
 {
 	if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
 	    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
@@ -241,16 +242,6 @@ schedule_devices(uint8_t dev_id, unsigned int lcore_id)
 		rte_atomic32_clear((rte_atomic32_t *)&(fdata->rx_lock));
 	}
 
-	if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
-	    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
-		rte_event_schedule(dev_id);
-		if (cdata.dump_dev_signal) {
-			rte_event_dev_dump(0, stdout);
-			cdata.dump_dev_signal = 0;
-		}
-		rte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));
-	}
-
 	if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
 	    rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
 		consumer();
@@ -294,7 +285,7 @@ worker(void *arg)
 	while (!fdata->done) {
 		uint16_t i;
 
-		schedule_devices(dev_id, lcore_id);
+		schedule_devices(lcore_id);
 
 		if (!fdata->worker_core[lcore_id]) {
 			rte_pause();
@@ -661,6 +652,27 @@ struct port_link {
 };
 
 static int
+setup_scheduling_service(unsigned int lcore, uint8_t dev_id)
+{
+	int ret;
+	uint32_t service_id;
+	ret = rte_event_dev_service_id_get(dev_id, &service_id);
+	if (ret == -ESRCH) {
+		printf("Event device [%d] doesn't need scheduling service\n",
+				dev_id);
+		return 0;
+	}
+	if (!ret) {
+		rte_service_runstate_set(service_id, 1);
+		rte_service_lcore_add(lcore);
+		rte_service_map_lcore_set(service_id, lcore, 1);
+		rte_service_lcore_start(lcore);
+	}
+
+	return ret;
+}
+
+static int
 setup_eventdev(struct prod_data *prod_data,
 		struct cons_data *cons_data,
 		struct worker_data *worker_data)
@@ -839,6 +851,14 @@ setup_eventdev(struct prod_data *prod_data,
 	*cons_data = (struct cons_data){.dev_id = dev_id,
 					.port_id = i };
 
+	for (i = 0; i < MAX_NUM_CORE; i++) {
+		if (fdata->sched_core[i]
+				&& setup_scheduling_service(i, dev_id)) {
+			printf("Error setting up scheduling service on %d\n", i);

+			return -1;
+		}
+	}
+
 	if (rte_event_dev_start(dev_id) < 0) {
 		printf("Error starting eventdev\n");
 		return -1;
@@ -944,8 +964,7 @@ main(int argc, char **argv)
 
 		if (!fdata->rx_core[lcore_id] &&
 			!fdata->worker_core[lcore_id] &&
-			!fdata->tx_core[lcore_id] &&
-			!fdata->sched_core[lcore_id])
+			!fdata->tx_core[lcore_id])
 			continue;
 
 		if (fdata->rx_core[lcore_id])
@@ -958,10 +977,6 @@ main(int argc, char **argv)
 				"[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
 				__func__, lcore_id, cons_data.port_id);
 
-		if (fdata->sched_core[lcore_id])
-			printf("[%s()] lcore %d executing scheduler\n",
-					__func__, lcore_id);
-
 		if (fdata->worker_core[lcore_id])
 			printf(
 				"[%s()] lcore %d executing worker, using eventdev port %u\n",
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v2 6/7] eventdev: remove eventdev schedule API
  2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (3 preceding siblings ...)
  2017-10-13 16:36   ` [PATCH v2 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
@ 2017-10-13 16:36   ` Pavan Nikhilesh
  2017-10-21 17:07     ` Jerin Jacob
  2017-10-13 16:36   ` [PATCH v2 7/7] doc: update software event device Pavan Nikhilesh
  2017-10-20 10:21   ` [PATCH v2 1/7] eventdev: add API to get service id Van Haaren, Harry
  6 siblings, 1 reply; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-13 16:36 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Remove the eventdev schedule API and enforce the sw driver to use the
service core feature for event scheduling.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 drivers/event/octeontx/ssovf_evdev.c       |  1 -
 drivers/event/skeleton/skeleton_eventdev.c |  2 --
 drivers/event/sw/sw_evdev.c                | 13 +++++--------
 lib/librte_eventdev/rte_eventdev.h         | 31 ++++--------------------------
 4 files changed, 9 insertions(+), 38 deletions(-)

diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index d829b49..1127db0 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -155,7 +155,6 @@ ssovf_fastpath_fns_set(struct rte_eventdev *dev)
 {
 	struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
 
-	dev->schedule      = NULL;
 	dev->enqueue       = ssows_enq;
 	dev->enqueue_burst = ssows_enq_burst;
 	dev->enqueue_new_burst = ssows_enq_new_burst;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index bcd2055..4d1a1da 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -375,7 +375,6 @@ skeleton_eventdev_init(struct rte_eventdev *eventdev)
 	PMD_DRV_FUNC_TRACE();
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
@@ -466,7 +465,6 @@ skeleton_eventdev_create(const char *name, int socket_id)
 	}
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 9b7f4d4..086fd96 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -629,10 +629,14 @@ sw_start(struct rte_eventdev *dev)
 	unsigned int i, j;
 	struct sw_evdev *sw = sw_pmd_priv(dev);
 
+	rte_service_component_runstate_set(sw->service_id, 1);
+
 	/* check a service core is mapped to this service */
-	if (!rte_service_runstate_get(sw->service_id))
+	if (!rte_service_runstate_get(sw->service_id)) {
 		SW_LOG_ERR("Warning: No Service core enabled on service %s\n",
 				sw->service_name);
+		return -ENOENT;
+	}
 
 	/* check all ports are set up */
 	for (i = 0; i < sw->port_count; i++)
@@ -847,7 +851,6 @@ sw_probe(struct rte_vdev_device *vdev)
 	dev->enqueue_forward_burst = sw_event_enqueue_burst;
 	dev->dequeue = sw_event_dequeue;
 	dev->dequeue_burst = sw_event_dequeue_burst;
-	dev->schedule = sw_event_schedule;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -875,12 +878,6 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
-	ret = rte_service_component_runstate_set(sw->service_id, 1);
-	if (ret) {
-		SW_LOG_ERR("Unable to enable service component");
-		return -ENOEXEC;
-	}
-
 	dev->data->service_inited = 1;
 	dev->data->service_id = sw->service_id;
 
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 1c1ff6b..ee0c4c3 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -218,10 +218,10 @@
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
  * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic is located in rte_event_schedule().
+ * scheduler logic need a dedicated service core for scheduling.
  * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
  * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls rte_event_schedule().
+ * thread that repeatedly calls software specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
@@ -263,9 +263,9 @@ struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
  * In distributed scheduling mode, event scheduling happens in HW or
  * rte_event_dequeue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
- * dedicated scheduling thread that repeatedly calls rte_event_schedule().
+ * dedicated service core that acts as a scheduling thread.
  *
- * @see rte_event_schedule(), rte_event_dequeue_burst()
+ * @see rte_event_dequeue_burst()
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
@@ -1065,9 +1065,6 @@ struct rte_eventdev_driver;
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
-typedef void (*event_schedule_t)(struct rte_eventdev *dev);
-/**< @internal Schedule one or more events in the event dev. */
-
 typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
 /**< @internal Enqueue event on port of a device */
 
@@ -1131,8 +1128,6 @@ struct rte_eventdev_data {
 
 /** @internal The data structure associated with each event device. */
 struct rte_eventdev {
-	event_schedule_t schedule;
-	/**< Pointer to PMD schedule function. */
 	event_enqueue_t enqueue;
 	/**< Pointer to PMD enqueue function. */
 	event_enqueue_burst_t enqueue_burst;
@@ -1161,24 +1156,6 @@ struct rte_eventdev {
 extern struct rte_eventdev *rte_eventdevs;
 /** @internal The pool of rte_eventdev structures. */
 
-
-/**
- * Schedule one or more events in the event dev.
- *
- * An event dev implementation may define this is a NOOP, for instance if
- * the event dev performs its scheduling in hardware.
- *
- * @param dev_id
- *   The identifier of the device.
- */
-static inline void
-rte_event_schedule(uint8_t dev_id)
-{
-	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-	if (*dev->schedule)
-		(*dev->schedule)(dev);
-}
-
 static __rte_always_inline uint16_t
 __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
 			const struct rte_event ev[], uint16_t nb_events,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v2 7/7] doc: update software event device
  2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (4 preceding siblings ...)
  2017-10-13 16:36   ` [PATCH v2 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
@ 2017-10-13 16:36   ` Pavan Nikhilesh
  2017-10-20 10:21   ` [PATCH v2 1/7] eventdev: add API to get service id Van Haaren, Harry
  6 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-13 16:36 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Update software event device documentation to include use of service
cores for event distribution.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/eventdevs/sw.rst | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/doc/guides/eventdevs/sw.rst b/doc/guides/eventdevs/sw.rst
index a3e6624..ec49b3b 100644
--- a/doc/guides/eventdevs/sw.rst
+++ b/doc/guides/eventdevs/sw.rst
@@ -78,9 +78,9 @@ Scheduling Quanta
 ~~~~~~~~~~~~~~~~~
 
 The scheduling quanta sets the number of events that the device attempts to
-schedule before returning to the application from the ``rte_event_schedule()``
-function. Note that is a *hint* only, and that fewer or more events may be
-scheduled in a given iteration.
+schedule in a single schedule call performed by the service core. Note that
+this is a *hint* only, and that fewer or more events may be scheduled in a
+given iteration.
 
 The scheduling quanta can be set using a string argument to the vdev
 create call:
@@ -140,10 +140,9 @@ eventdev.
 Distributed Scheduler
 ~~~~~~~~~~~~~~~~~~~~~
 
-The software eventdev is a centralized scheduler, requiring the
-``rte_event_schedule()`` function to be called by a CPU core to perform the
-required event distribution. This is not really a limitation but rather a
-design decision.
+The software eventdev is a centralized scheduler, requiring a service core to
+perform the required event distribution. This is not really a limitation but
+rather a design decision.
 
 The ``RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED`` flag is not set in the
 ``event_dev_cap`` field of the ``rte_event_dev_info`` struct for the software
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH v2 1/7] eventdev: add API to get service id
  2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (5 preceding siblings ...)
  2017-10-13 16:36   ` [PATCH v2 7/7] doc: update software event device Pavan Nikhilesh
@ 2017-10-20 10:21   ` Van Haaren, Harry
  2017-10-20 11:11     ` Pavan Nikhilesh Bhagavatula
  6 siblings, 1 reply; 47+ messages in thread
From: Van Haaren, Harry @ 2017-10-20 10:21 UTC (permalink / raw)
  To: Pavan Nikhilesh, jerin.jacob, hemant.agrawal; +Cc: dev

> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Friday, October 13, 2017 5:37 PM
> To: jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; Van Haaren,
> Harry <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v2 1/7] eventdev: add API to get service id
> 
> From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> 
> In case of sw event device the scheduling can be done on a service core
> using the service registered at the time of probe.
> This patch adds a helper function to get the service id that can be used
> by the application to assign a lcore for the service to run on.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>

<snip>

>   * Dump internal information about *dev_id* to the FILE* provided in *f*.
>   *
>   * @param dev_id
> diff --git a/lib/librte_eventdev/rte_eventdev_version.map
> b/lib/librte_eventdev/rte_eventdev_version.map
> index d555b19..59c36a0 100644
> --- a/lib/librte_eventdev/rte_eventdev_version.map
> +++ b/lib/librte_eventdev/rte_eventdev_version.map
> @@ -53,6 +53,7 @@ DPDK_17.11 {
>  	rte_event_dev_attr_get;
>  	rte_event_port_attr_get;
>  	rte_event_queue_attr_get;
> +	rte_event_dev_service_id_get;


Version-map diff didn't apply cleanly - probably better fixed on apply;
Also, I think the functions are supposed to be in alphabetical order.

Acked-by: Harry van Haaren <harry.van.haaren@intel.com>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v2 2/7] event/sw: extend service capability
  2017-10-13 16:36   ` [PATCH v2 2/7] event/sw: extend service capability Pavan Nikhilesh
@ 2017-10-20 10:30     ` Van Haaren, Harry
  0 siblings, 0 replies; 47+ messages in thread
From: Van Haaren, Harry @ 2017-10-20 10:30 UTC (permalink / raw)
  To: Pavan Nikhilesh, jerin.jacob, hemant.agrawal; +Cc: dev

> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Friday, October 13, 2017 5:37 PM
> To: jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; Van Haaren,
> Harry <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v2 2/7] event/sw: extend service capability
> 
> From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> 
> Extend the service capability of the sw event device by exposing service id
> to the application.
> The application can use service id to configure service cores to run event
> scheduling.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>

Acked-by: Harry van Haaren <harry.van.haaren@intel.com>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v2 1/7] eventdev: add API to get service id
  2017-10-20 10:21   ` [PATCH v2 1/7] eventdev: add API to get service id Van Haaren, Harry
@ 2017-10-20 11:11     ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2017-10-20 11:11 UTC (permalink / raw)
  To: Van Haaren, Harry; +Cc: dev

On Fri, Oct 20, 2017 at 10:21:57AM +0000, Van Haaren, Harry wrote:
> > From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> > Sent: Friday, October 13, 2017 5:37 PM
> > To: jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; Van Haaren,
> > Harry <harry.van.haaren@intel.com>
> > Cc: dev@dpdk.org; Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> > Subject: [dpdk-dev] [PATCH v2 1/7] eventdev: add API to get service id
> >
> > From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> >
> > In case of sw event device the scheduling can be done on a service core
> > using the service registered at the time of probe.
> > This patch adds a helper function to get the service id that can be used
> > by the application to assign a lcore for the service to run on.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
>
> <snip>
>
> >   * Dump internal information about *dev_id* to the FILE* provided in *f*.
> >   *
> >   * @param dev_id
> > diff --git a/lib/librte_eventdev/rte_eventdev_version.map
> > b/lib/librte_eventdev/rte_eventdev_version.map
> > index d555b19..59c36a0 100644
> > --- a/lib/librte_eventdev/rte_eventdev_version.map
> > +++ b/lib/librte_eventdev/rte_eventdev_version.map
> > @@ -53,6 +53,7 @@ DPDK_17.11 {
> >  	rte_event_dev_attr_get;
> >  	rte_event_port_attr_get;
> >  	rte_event_queue_attr_get;
> > +	rte_event_dev_service_id_get;
>
>
> Version-map diff didn't apply cleanly - probably better fixed on apply;
> Also, I think the functions are supposed to be in alphabetical order.
>

Yep, will fix in v3.

> Acked-by: Harry van Haaren <harry.van.haaren@intel.com>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v2 3/7] app/test-eventdev: update app to use service cores
  2017-10-13 16:36   ` [PATCH v2 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
@ 2017-10-21 17:01     ` Jerin Jacob
  0 siblings, 0 replies; 47+ messages in thread
From: Jerin Jacob @ 2017-10-21 17:01 UTC (permalink / raw)
  To: Pavan Nikhilesh; +Cc: hemant.agrawal, harry.van.haaren, dev

-----Original Message-----
> Date: Fri, 13 Oct 2017 22:06:46 +0530
> From: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> To: jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com,
>  harry.van.haaren@intel.com
> Cc: dev@dpdk.org, Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v2 3/7] app/test-eventdev: update app to use
>  service cores
> X-Mailer: git-send-email 2.7.4
> 
> From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> 
> Use service cores for offloading event scheduling in case of
> centralized scheduling instead of calling the schedule api directly.
> This removes the dependency on dedicated scheduler core specified by
> giving command line option --slcore.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> ---
>  app/test-eventdev/evt_common.h        | 41 ++++++++++++++++++++++++++++++
>  app/test-eventdev/evt_options.c       | 10 --------
>  app/test-eventdev/evt_options.h       |  8 ------
>  app/test-eventdev/test_order_atq.c    |  6 +++++
>  app/test-eventdev/test_order_common.c |  3 ---
>  app/test-eventdev/test_order_queue.c  |  6 +++++
>  app/test-eventdev/test_perf_atq.c     |  6 +++++
>  app/test-eventdev/test_perf_common.c  | 47 ++---------------------------------
>  app/test-eventdev/test_perf_common.h  |  1 +
>  app/test-eventdev/test_perf_queue.c   |  6 +++++


There are references to "slcore" in documentation(doc/guides/tools/testeventdev.rst).
Please remove the slcore references and update example command to use service
cores in the documentation.

With above change:
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v2 6/7] eventdev: remove eventdev schedule API
  2017-10-13 16:36   ` [PATCH v2 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
@ 2017-10-21 17:07     ` Jerin Jacob
  0 siblings, 0 replies; 47+ messages in thread
From: Jerin Jacob @ 2017-10-21 17:07 UTC (permalink / raw)
  To: Pavan Nikhilesh; +Cc: hemant.agrawal, harry.van.haaren, dev

-----Original Message-----
> Date: Fri, 13 Oct 2017 22:06:49 +0530
> From: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> To: jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com,
>  harry.van.haaren@intel.com
> Cc: dev@dpdk.org, Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v2 6/7] eventdev: remove eventdev schedule API
> X-Mailer: git-send-email 2.7.4
> 
> From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> 
> Remove the eventdev schedule API and require the sw driver to use the
> service core feature for event scheduling.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> ---
>  drivers/event/octeontx/ssovf_evdev.c       |  1 -
>  drivers/event/skeleton/skeleton_eventdev.c |  2 --
>  drivers/event/sw/sw_evdev.c                | 13 +++++--------
>  lib/librte_eventdev/rte_eventdev.h         | 31 ++++--------------------------

1) Missed removing the dpaa2 driver schedule() function (drivers/event/dpaa2/dpaa2_eventdev.c)
2) Found following check-patch issues
### eventdev: remove eventdev schedule API

WARNING:LEADING_SPACE: please, no spaces at the start of a line
#93: FILE: lib/librte_eventdev/rte_eventdev.h:221:
+ * scheduler logic need a dedicated service core for scheduling.$

WARNING:LEADING_SPACE: please, no spaces at the start of a line
#97: FILE: lib/librte_eventdev/rte_eventdev.h:224:
+ * thread that repeatedly calls software specific scheduling function.$

With above fixes:
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

>  4 files changed, 9 insertions(+), 38 deletions(-)
> 

^ permalink raw reply	[flat|nested] 47+ messages in thread

* [PATCH v3 1/7] eventdev: add API to get service id
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
                   ` (7 preceding siblings ...)
  2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
@ 2017-10-22  9:16 ` Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 2/7] event/sw: extend service capability Pavan Nikhilesh
                     ` (5 more replies)
  2017-10-25 11:59 ` [PATCH v4 1/7] eventdev: add API to get service id Pavan Nikhilesh
  2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
  10 siblings, 6 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-22  9:16 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

In the case of the sw event device, scheduling can be done on a service core
using the service registered at probe time.
This patch adds a helper function to get the service id, which the
application can use to assign an lcore for the service to run on.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---

v3 changes:
 - removed stale slcore option from documentation
 - fix version map
 - remove schedule api from dpaa2 event dev driver

v2 changes:
 - fix checkpatch issues
 - update eventdev version map
 - fix --slcore option not removed in app/test-eventdev

 lib/librte_eventdev/rte_eventdev.c           | 17 +++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h           | 22 ++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_version.map |  1 +
 3 files changed, 40 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 378ccb5..f179aa4 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -961,6 +961,23 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 }

 int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (service_id == NULL)
+		return -EINVAL;
+
+	if (dev->data->service_inited)
+		*service_id = dev->data->service_id;
+
+	return dev->data->service_inited ? 0 : -ESRCH;
+}
+
+int
 rte_event_dev_dump(uint8_t dev_id, FILE *f)
 {
 	struct rte_eventdev *dev;
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 1dbc872..1c1ff6b 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1116,6 +1116,10 @@ struct rte_eventdev_data {
 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
 	struct rte_event_dev_config dev_conf;
 	/**< Configuration applied to device. */
+	uint8_t service_inited;
+	/* Service initialization state */
+	uint32_t service_id;
+	/* Service ID*/

 	RTE_STD_C11
 	uint8_t dev_started : 1;
@@ -1619,6 +1623,24 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 			 uint8_t queues[], uint8_t priorities[]);

 /**
+ * Retrieve the service ID of the event dev. If the adapter doesn't use
+ * a rte_service function, this function returns -ESRCH.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param [out] service_id
+ *   A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure, if the event dev doesn't use a rte_service
+ *   function, this function returns -ESRCH.
+ */
+int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id);
+
+/**
  * Dump internal information about *dev_id* to the FILE* provided in *f*.
  *
  * @param dev_id
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 800ca6e..108ae61 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -51,6 +51,7 @@ DPDK_17.11 {
 	global:

 	rte_event_dev_attr_get;
+	rte_event_dev_service_id_get;
 	rte_event_port_attr_get;
 	rte_event_queue_attr_get;

--
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v3 2/7] event/sw: extend service capability
  2017-10-22  9:16 ` [PATCH v3 " Pavan Nikhilesh
@ 2017-10-22  9:16   ` Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-22  9:16 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Extend the service capability of the sw event device by exposing the
service id to the application.
The application can use the service id to configure service cores to run
event scheduling.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 drivers/event/sw/sw_evdev.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index aed8b72..9b7f4d4 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -875,6 +875,15 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
+	ret = rte_service_component_runstate_set(sw->service_id, 1);
+	if (ret) {
+		SW_LOG_ERR("Unable to enable service component");
+		return -ENOEXEC;
+	}
+
+	dev->data->service_inited = 1;
+	dev->data->service_id = sw->service_id;
+
 	return 0;
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v3 3/7] app/test-eventdev: update app to use service cores
  2017-10-22  9:16 ` [PATCH v3 " Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 2/7] event/sw: extend service capability Pavan Nikhilesh
@ 2017-10-22  9:16   ` Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 4/7] test/eventdev: update test to use service core Pavan Nikhilesh
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-22  9:16 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Use service cores to offload event scheduling in the case of
centralized scheduling, instead of calling the schedule API directly.
This removes the dependency on a dedicated scheduler core specified
via the --slcore command line option.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 app/test-eventdev/evt_common.h        | 41 ++++++++++++++++++++++++++++++
 app/test-eventdev/evt_options.c       | 10 --------
 app/test-eventdev/evt_options.h       |  8 ------
 app/test-eventdev/test_order_atq.c    |  6 +++++
 app/test-eventdev/test_order_common.c |  3 ---
 app/test-eventdev/test_order_queue.c  |  6 +++++
 app/test-eventdev/test_perf_atq.c     |  6 +++++
 app/test-eventdev/test_perf_common.c  | 47 ++---------------------------------
 app/test-eventdev/test_perf_common.h  |  1 +
 app/test-eventdev/test_perf_queue.c   |  6 +++++
 10 files changed, 68 insertions(+), 66 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index 4102076..1589190 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -36,6 +36,7 @@
 #include <rte_common.h>
 #include <rte_debug.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>
 
 #define CLNRM  "\x1b[0m"
 #define CLRED  "\x1b[31m"
@@ -113,4 +114,44 @@ evt_sched_type2queue_cfg(uint8_t sched_type)
 	return ret;
 }
 
+
+static inline int
+evt_service_setup(uint8_t dev_id)
+{
+	uint32_t service_id;
+	int32_t core_cnt;
+	unsigned int lcore = 0;
+	uint32_t core_array[RTE_MAX_LCORE];
+	uint8_t cnt;
+	uint8_t min_cnt = UINT8_MAX;
+
+	if (evt_has_distributed_sched(dev_id))
+		return 0;
+
+	if (!rte_service_lcore_count())
+		return -ENOENT;
+
+	if (!rte_event_dev_service_id_get(dev_id, &service_id)) {
+		core_cnt = rte_service_lcore_list(core_array,
+				RTE_MAX_LCORE);
+		if (core_cnt < 0)
+			return -ENOENT;
+		/* Get the core which has least number of services running. */
+		while (core_cnt--) {
+			/* Reset default mapping */
+			rte_service_map_lcore_set(service_id,
+					core_array[core_cnt], 0);
+			cnt = rte_service_lcore_count_services(
+					core_array[core_cnt]);
+			if (cnt < min_cnt) {
+				lcore = core_array[core_cnt];
+				min_cnt = cnt;
+			}
+		}
+		if (rte_service_map_lcore_set(service_id, lcore, 1))
+			return -ENOENT;
+	}
+	return 0;
+}
+
 #endif /*  _EVT_COMMON_*/
diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
index 65e22f8..e2187df 100644
--- a/app/test-eventdev/evt_options.c
+++ b/app/test-eventdev/evt_options.c
@@ -114,13 +114,6 @@ evt_parse_test_name(struct evt_options *opt, const char *arg)
 }
 
 static int
-evt_parse_slcore(struct evt_options *opt, const char *arg)
-{
-	opt->slcore = atoi(arg);
-	return 0;
-}
-
-static int
 evt_parse_socket_id(struct evt_options *opt, const char *arg)
 {
 	opt->socket_id = atoi(arg);
@@ -188,7 +181,6 @@ usage(char *program)
 		"\t--test             : name of the test application to run\n"
 		"\t--socket_id        : socket_id of application resources\n"
 		"\t--pool_sz          : pool size of the mempool\n"
-		"\t--slcore           : lcore id of the scheduler\n"
 		"\t--plcores          : list of lcore ids for producers\n"
 		"\t--wlcores          : list of lcore ids for workers\n"
 		"\t--stlist           : list of scheduled types of the stages\n"
@@ -254,7 +246,6 @@ static struct option lgopts[] = {
 	{ EVT_POOL_SZ,          1, 0, 0 },
 	{ EVT_NB_PKTS,          1, 0, 0 },
 	{ EVT_WKR_DEQ_DEP,      1, 0, 0 },
-	{ EVT_SCHED_LCORE,      1, 0, 0 },
 	{ EVT_SCHED_TYPE_LIST,  1, 0, 0 },
 	{ EVT_FWD_LATENCY,      0, 0, 0 },
 	{ EVT_QUEUE_PRIORITY,   0, 0, 0 },
@@ -278,7 +269,6 @@ evt_opts_parse_long(int opt_idx, struct evt_options *opt)
 		{ EVT_POOL_SZ, evt_parse_pool_sz},
 		{ EVT_NB_PKTS, evt_parse_nb_pkts},
 		{ EVT_WKR_DEQ_DEP, evt_parse_wkr_deq_dep},
-		{ EVT_SCHED_LCORE, evt_parse_slcore},
 		{ EVT_SCHED_TYPE_LIST, evt_parse_sched_type_list},
 		{ EVT_FWD_LATENCY, evt_parse_fwd_latency},
 		{ EVT_QUEUE_PRIORITY, evt_parse_queue_priority},
diff --git a/app/test-eventdev/evt_options.h b/app/test-eventdev/evt_options.h
index d8a9fdc..a9a9125 100644
--- a/app/test-eventdev/evt_options.h
+++ b/app/test-eventdev/evt_options.h
@@ -47,7 +47,6 @@
 #define EVT_VERBOSE              ("verbose")
 #define EVT_DEVICE               ("dev")
 #define EVT_TEST                 ("test")
-#define EVT_SCHED_LCORE          ("slcore")
 #define EVT_PROD_LCORES          ("plcores")
 #define EVT_WORK_LCORES          ("wlcores")
 #define EVT_NB_FLOWS             ("nb_flows")
@@ -67,7 +66,6 @@ struct evt_options {
 	bool plcores[RTE_MAX_LCORE];
 	bool wlcores[RTE_MAX_LCORE];
 	uint8_t sched_type_list[EVT_MAX_STAGES];
-	int slcore;
 	uint32_t nb_flows;
 	int socket_id;
 	int pool_sz;
@@ -219,12 +217,6 @@ evt_dump_nb_flows(struct evt_options *opt)
 }
 
 static inline void
-evt_dump_scheduler_lcore(struct evt_options *opt)
-{
-	evt_dump("scheduler lcore", "%d", opt->slcore);
-}
-
-static inline void
 evt_dump_worker_dequeue_depth(struct evt_options *opt)
 {
 	evt_dump("worker deq depth", "%d", opt->wkr_deq_dep);
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 7e6c67d..4ee0dea 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -179,6 +179,12 @@ order_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 80e14c0..7cfe7fa 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -292,9 +292,6 @@ order_launch_lcores(struct evt_test *test, struct evt_options *opt,
 	int64_t old_remaining  = -1;
 
 	while (t->err == false) {
-
-		rte_event_schedule(opt->dev_id);
-
 		uint64_t new_cycles = rte_get_timer_cycles();
 		int64_t remaining = rte_atomic64_read(&t->outstand_pkts);
 
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index beadd9c..a14e0b0 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -192,6 +192,12 @@ order_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
index 9c3efa3..0e9f2db 100644
--- a/app/test-eventdev/test_perf_atq.c
+++ b/app/test-eventdev/test_perf_atq.c
@@ -221,6 +221,12 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 7b09299..e77b472 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -88,18 +88,6 @@ perf_producer(void *arg)
 	return 0;
 }
 
-static inline int
-scheduler(void *arg)
-{
-	struct test_perf *t = arg;
-	const uint8_t dev_id = t->opt->dev_id;
-
-	while (t->done == false)
-		rte_event_schedule(dev_id);
-
-	return 0;
-}
-
 static inline uint64_t
 processed_pkts(struct test_perf *t)
 {
@@ -163,15 +151,6 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		port_idx++;
 	}
 
-	/* launch scheduler */
-	if (!evt_has_distributed_sched(opt->dev_id)) {
-		ret = rte_eal_remote_launch(scheduler, t, opt->slcore);
-		if (ret) {
-			evt_err("failed to launch sched %d", opt->slcore);
-			return ret;
-		}
-	}
-
 	const uint64_t total_pkts = opt->nb_pkts *
 			evt_nr_active_lcores(opt->plcores);
 
@@ -307,10 +286,9 @@ int
 perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 {
 	unsigned int lcores;
-	bool need_slcore = !evt_has_distributed_sched(opt->dev_id);
 
-	/* N producer + N worker + 1 scheduler(based on dev capa) + 1 master */
-	lcores = need_slcore ? 4 : 3;
+	/* N producer + N worker + 1 master */
+	lcores = 3;
 
 	if (rte_lcore_count() < lcores) {
 		evt_err("test need minimum %d lcores", lcores);
@@ -322,10 +300,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		evt_err("worker lcores overlaps with master lcore");
 		return -1;
 	}
-	if (need_slcore && evt_lcores_has_overlap(opt->wlcores, opt->slcore)) {
-		evt_err("worker lcores overlaps with scheduler lcore");
-		return -1;
-	}
 	if (evt_lcores_has_overlap_multi(opt->wlcores, opt->plcores)) {
 		evt_err("worker lcores overlaps producer lcores");
 		return -1;
@@ -344,10 +318,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		evt_err("producer lcores overlaps with master lcore");
 		return -1;
 	}
-	if (need_slcore && evt_lcores_has_overlap(opt->plcores, opt->slcore)) {
-		evt_err("producer lcores overlaps with scheduler lcore");
-		return -1;
-	}
 	if (evt_has_disabled_lcore(opt->plcores)) {
 		evt_err("one or more producer lcores are not enabled");
 		return -1;
@@ -357,17 +327,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		return -1;
 	}
 
-	/* Validate scheduler lcore */
-	if (!evt_has_distributed_sched(opt->dev_id) &&
-			opt->slcore == (int)rte_get_master_lcore()) {
-		evt_err("scheduler lcore and master lcore should be different");
-		return -1;
-	}
-	if (need_slcore && !rte_lcore_is_enabled(opt->slcore)) {
-		evt_err("scheduler lcore is not enabled");
-		return -1;
-	}
-
 	if (evt_has_invalid_stage(opt))
 		return -1;
 
@@ -405,8 +364,6 @@ perf_opt_dump(struct evt_options *opt, uint8_t nb_queues)
 	evt_dump_producer_lcores(opt);
 	evt_dump("nb_worker_lcores", "%d", evt_nr_active_lcores(opt->wlcores));
 	evt_dump_worker_lcores(opt);
-	if (!evt_has_distributed_sched(opt->dev_id))
-		evt_dump_scheduler_lcore(opt);
 	evt_dump_nb_stages(opt);
 	evt_dump("nb_evdev_ports", "%d", perf_nb_event_ports(opt));
 	evt_dump("nb_evdev_queues", "%d", nb_queues);
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index 4956586..c6fc70c 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -159,6 +159,7 @@ int perf_test_setup(struct evt_test *test, struct evt_options *opt);
 int perf_mempool_setup(struct evt_test *test, struct evt_options *opt);
 int perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
 				uint8_t stride, uint8_t nb_queues);
+int perf_event_dev_service_setup(uint8_t dev_id);
 int perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		int (*worker)(void *));
 void perf_opt_dump(struct evt_options *opt, uint8_t nb_queues);
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index 658c08a..78f43b5 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -232,6 +232,12 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v3 4/7] test/eventdev: update test to use service core
  2017-10-22  9:16 ` [PATCH v3 " Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 2/7] event/sw: extend service capability Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
@ 2017-10-22  9:16   ` Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
                     ` (2 subsequent siblings)
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-22  9:16 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Use a service core for event scheduling instead of calling the event
schedule API directly.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 test/test/test_eventdev_sw.c | 120 ++++++++++++++++++++++++-------------------
 1 file changed, 67 insertions(+), 53 deletions(-)

diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index 7219886..81954dc 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -49,6 +49,7 @@
 #include <rte_cycles.h>
 #include <rte_eventdev.h>
 #include <rte_pause.h>
+#include <rte_service_component.h>
 
 #include "test.h"
 
@@ -320,6 +321,19 @@ struct test_event_dev_stats {
 	uint64_t qid_tx_pkts[MAX_QIDS];
 };
 
+static inline void
+wait_schedule(int evdev)
+{
+	static const char * const dev_names[] = {"dev_sched_calls"};
+	uint64_t val;
+
+	val = rte_event_dev_xstats_by_name_get(evdev, dev_names[0],
+			0);
+	while ((rte_event_dev_xstats_by_name_get(evdev, dev_names[0], 0) - val)
+			< 2)
+		;
+}
+
 static inline int
 test_event_dev_stats_get(int dev_id, struct test_event_dev_stats *stats)
 {
@@ -392,9 +406,9 @@ run_prio_packet_test(struct test *t)
 		RTE_EVENT_DEV_PRIORITY_HIGHEST
 	};
 	unsigned int i;
+	struct rte_event ev_arr[2];
 	for (i = 0; i < RTE_DIM(MAGIC_SEQN); i++) {
 		/* generate pkt and enqueue */
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
@@ -402,20 +416,20 @@ run_prio_packet_test(struct test *t)
 		}
 		arp->seqn = MAGIC_SEQN[i];
 
-		ev = (struct rte_event){
+		ev_arr[i] = (struct rte_event){
 			.priority = PRIORITY[i],
 			.op = RTE_EVENT_OP_NEW,
 			.queue_id = t->qid[0],
 			.mbuf = arp
 		};
-		err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err < 0) {
-			printf("%d: error failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 2);
+	if (err < 0) {
+		printf("%d: error failed to enqueue\n", __LINE__);
+		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -425,8 +439,8 @@ run_prio_packet_test(struct test *t)
 	}
 
 	if (stats.port_rx_pkts[t->port[0]] != 2) {
-		printf("%d: error stats incorrect for directed port\n",
-				__LINE__);
+		printf("%d: error stats incorrect for directed port %"PRIu64"\n",
+				__LINE__, stats.port_rx_pkts[t->port[0]]);
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
@@ -439,6 +453,7 @@ run_prio_packet_test(struct test *t)
 		rte_event_dev_dump(evdev, stdout);
 		return -1;
 	}
+
 	if (ev.mbuf->seqn != MAGIC_SEQN[1]) {
 		printf("%d: first packet out not highest priority\n",
 				__LINE__);
@@ -507,7 +522,7 @@ test_single_directed_packet(struct test *t)
 	}
 
 	/* Run schedule() as dir packets may need to be re-ordered */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -574,7 +589,7 @@ test_directed_forward_credits(struct test *t)
 			printf("%d: error failed to enqueue\n", __LINE__);
 			return -1;
 		}
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 
 		uint32_t deq_pkts;
 		deq_pkts = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
@@ -736,7 +751,7 @@ burst_packets(struct test *t)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* Check stats for all NUM_PKTS arrived to sched core */
 	struct test_event_dev_stats stats;
@@ -825,7 +840,7 @@ abuse_inflights(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 
@@ -963,7 +978,7 @@ xstats_tests(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* Device names / values */
 	int num_stats = rte_event_dev_xstats_names_get(evdev,
@@ -974,8 +989,8 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev,
 					RTE_EVENT_DEV_XSTATS_DEVICE,
 					0, ids, values, num_stats);
-	static const uint64_t expected[] = {3, 3, 0, 1, 0, 0};
-	for (i = 0; (signed int)i < ret; i++) {
+	static const uint64_t expected[] = {3, 3, 0};
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected[i] != values[i]) {
 			printf(
 				"%d Error xstat %d (id %d) %s : %"PRIu64
@@ -994,7 +1009,7 @@ xstats_tests(struct test *t)
 	ret = rte_event_dev_xstats_get(evdev,
 					RTE_EVENT_DEV_XSTATS_DEVICE,
 					0, ids, values, num_stats);
-	for (i = 0; (signed int)i < ret; i++) {
+	for (i = 0; (signed int)i < 3; i++) {
 		if (expected_zero[i] != values[i]) {
 			printf(
 				"%d Error, xstat %d (id %d) %s : %"PRIu64
@@ -1290,7 +1305,7 @@ port_reconfig_credits(struct test *t)
 			}
 		}
 
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 
 		struct rte_event ev[NPKTS];
 		int deq = rte_event_dequeue_burst(evdev, t->port[0], ev,
@@ -1516,14 +1531,12 @@ xstats_id_reset_tests(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	static const char * const dev_names[] = {
-		"dev_rx", "dev_tx", "dev_drop", "dev_sched_calls",
-		"dev_sched_no_iq_enq", "dev_sched_no_cq_enq",
-	};
+		"dev_rx", "dev_tx", "dev_drop"};
 	uint64_t dev_expected[] = {NPKTS, NPKTS, 0, 1, 0, 0};
-	for (i = 0; (int)i < ret; i++) {
+	for (i = 0; (int)i < 3; i++) {
 		unsigned int id;
 		uint64_t val = rte_event_dev_xstats_by_name_get(evdev,
 								dev_names[i],
@@ -1888,26 +1901,26 @@ qid_priorities(struct test *t)
 	}
 
 	/* enqueue 3 packets, setting seqn and QID to check priority */
+	struct rte_event ev_arr[3];
 	for (i = 0; i < 3; i++) {
-		struct rte_event ev;
 		struct rte_mbuf *arp = rte_gen_arp(0, t->mbuf_pool);
 		if (!arp) {
 			printf("%d: gen of pkt failed\n", __LINE__);
 			return -1;
 		}
-		ev.queue_id = t->qid[i];
-		ev.op = RTE_EVENT_OP_NEW;
-		ev.mbuf = arp;
+		ev_arr[i].queue_id = t->qid[i];
+		ev_arr[i].op = RTE_EVENT_OP_NEW;
+		ev_arr[i].mbuf = arp;
 		arp->seqn = i;
 
-		int err = rte_event_enqueue_burst(evdev, t->port[0], &ev, 1);
-		if (err != 1) {
-			printf("%d: Failed to enqueue\n", __LINE__);
-			return -1;
-		}
+	}
+	int err = rte_event_enqueue_burst(evdev, t->port[0], ev_arr, 3);
+	if (err != 3) {
+		printf("%d: Failed to enqueue\n", __LINE__);
+		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* dequeue packets, verify priority was upheld */
 	struct rte_event ev[32];
@@ -1988,7 +2001,7 @@ load_balancing(struct test *t)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -2088,7 +2101,7 @@ load_balancing_history(struct test *t)
 	}
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* Dequeue the flow 0 packet from port 1, so that we can then drop */
 	struct rte_event ev;
@@ -2105,7 +2118,7 @@ load_balancing_history(struct test *t)
 	rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/*
 	 * Set up the next set of flows, first a new flow to fill up
@@ -2138,7 +2151,7 @@ load_balancing_history(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2182,7 +2195,7 @@ load_balancing_history(struct test *t)
 		while (rte_event_dequeue_burst(evdev, i, &ev, 1, 0))
 			rte_event_enqueue_burst(evdev, i, &release_ev, 1);
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	cleanup(t);
 	return 0;
@@ -2248,7 +2261,7 @@ invalid_qid(struct test *t)
 	}
 
 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2333,7 +2346,7 @@ single_packet(struct test *t)
 		return -1;
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2376,7 +2389,7 @@ single_packet(struct test *t)
 		printf("%d: Failed to enqueue\n", __LINE__);
 		return -1;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[wrk_enq] != 0) {
@@ -2464,7 +2477,7 @@ inflight_counts(struct test *t)
 	}
 
 	/* schedule */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2520,7 +2533,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p1] != 0) {
@@ -2555,7 +2568,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p2] != 0) {
@@ -2649,7 +2662,7 @@ parallel_basic(struct test *t, int check_order)
 		}
 	}
 
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* use extra slot to make logic in loops easier */
 	struct rte_event deq_ev[w3_port + 1];
@@ -2676,7 +2689,7 @@ parallel_basic(struct test *t, int check_order)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* dequeue from the tx ports, we should get 3 packets */
 	deq_pkts = rte_event_dequeue_burst(evdev, t->port[tx_port], deq_ev,
@@ -2754,7 +2767,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error doing first enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	if (rte_event_dev_xstats_by_name_get(evdev, "port_0_cq_ring_used", NULL)
 			!= 1)
@@ -2779,7 +2792,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 			printf("%d: Error with enqueue\n", __LINE__);
 			goto err;
 		}
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 	} while (rte_event_dev_xstats_by_name_get(evdev,
 				rx_port_free_stat, NULL) != 0);
 
@@ -2789,7 +2802,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	/* check that the other port still has an empty CQ */
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
@@ -2812,7 +2825,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	wait_schedule(evdev);
 
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
 			!= 1) {
@@ -3002,7 +3015,7 @@ worker_loopback(struct test *t)
 	while (rte_eal_get_lcore_state(p_lcore) != FINISHED ||
 			rte_eal_get_lcore_state(w_lcore) != FINISHED) {
 
-		rte_event_schedule(evdev);
+		wait_schedule(evdev);
 
 		uint64_t new_cycles = rte_get_timer_cycles();
 
@@ -3029,7 +3042,7 @@ worker_loopback(struct test *t)
 			cycles = new_cycles;
 		}
 	}
-	rte_event_schedule(evdev); /* ensure all completions are flushed */
+	wait_schedule(evdev); /* ensure all completions are flushed */
 
 	rte_eal_mp_wait_lcore();
 
@@ -3064,6 +3077,7 @@ test_sw_eventdev(void)
 			printf("Error finding newly created eventdev\n");
 			return -1;
 		}
+		rte_service_start_with_defaults();
 	}
 
 	/* Only create mbuf pool once, reuse for each test run */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v3 5/7] examples/eventdev: update sample app to use service
  2017-10-22  9:16 ` [PATCH v3 " Pavan Nikhilesh
                     ` (2 preceding siblings ...)
  2017-10-22  9:16   ` [PATCH v3 4/7] test/eventdev: update test to use service core Pavan Nikhilesh
@ 2017-10-22  9:16   ` Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 7/7] doc: update software event device Pavan Nikhilesh
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-22  9:16 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Update the sample app eventdev_pipeline_sw_pmd to use service cores for
event scheduling in the case of the SW eventdev.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---
 examples/eventdev_pipeline_sw_pmd/main.c | 51 +++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 18 deletions(-)

diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 09b90c3..d5068d2 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -46,6 +46,7 @@
 #include <rte_cycles.h>
 #include <rte_ethdev.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>
 
 #define MAX_NUM_STAGES 8
 #define BATCH_SIZE 16
@@ -233,7 +234,7 @@ producer(void)
 }
 
 static inline void
-schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+schedule_devices(unsigned int lcore_id)
 {
 	if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
 	    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
@@ -241,16 +242,6 @@ schedule_devices(uint8_t dev_id, unsigned int lcore_id)
 		rte_atomic32_clear((rte_atomic32_t *)&(fdata->rx_lock));
 	}
 
-	if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
-	    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
-		rte_event_schedule(dev_id);
-		if (cdata.dump_dev_signal) {
-			rte_event_dev_dump(0, stdout);
-			cdata.dump_dev_signal = 0;
-		}
-		rte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));
-	}
-
 	if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
 	    rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
 		consumer();
@@ -294,7 +285,7 @@ worker(void *arg)
 	while (!fdata->done) {
 		uint16_t i;
 
-		schedule_devices(dev_id, lcore_id);
+		schedule_devices(lcore_id);
 
 		if (!fdata->worker_core[lcore_id]) {
 			rte_pause();
@@ -661,6 +652,27 @@ struct port_link {
 };
 
 static int
+setup_scheduling_service(unsigned int lcore, uint8_t dev_id)
+{
+	int ret;
+	uint32_t service_id;
+	ret = rte_event_dev_service_id_get(dev_id, &service_id);
+	if (ret == -ESRCH) {
+		printf("Event device [%d] doesn't need scheduling service\n",
+				dev_id);
+		return 0;
+	}
+	if (!ret) {
+		rte_service_runstate_set(service_id, 1);
+		rte_service_lcore_add(lcore);
+		rte_service_map_lcore_set(service_id, lcore, 1);
+		rte_service_lcore_start(lcore);
+	}
+
+	return ret;
+}
+
+static int
 setup_eventdev(struct prod_data *prod_data,
 		struct cons_data *cons_data,
 		struct worker_data *worker_data)
@@ -839,6 +851,14 @@ setup_eventdev(struct prod_data *prod_data,
 	*cons_data = (struct cons_data){.dev_id = dev_id,
 					.port_id = i };
 
+	for (i = 0; i < MAX_NUM_CORE; i++) {
+		if (fdata->sched_core[i]
+				&& setup_scheduling_service(i, dev_id)) {
+			printf("Error setting up scheduling service on %d", i);
+			return -1;
+		}
+	}
+
 	if (rte_event_dev_start(dev_id) < 0) {
 		printf("Error starting eventdev\n");
 		return -1;
@@ -944,8 +964,7 @@ main(int argc, char **argv)
 
 		if (!fdata->rx_core[lcore_id] &&
 			!fdata->worker_core[lcore_id] &&
-			!fdata->tx_core[lcore_id] &&
-			!fdata->sched_core[lcore_id])
+			!fdata->tx_core[lcore_id])
 			continue;
 
 		if (fdata->rx_core[lcore_id])
@@ -958,10 +977,6 @@ main(int argc, char **argv)
 				"[%s()] lcore %d executing NIC Tx, and using eventdev port %u\n",
 				__func__, lcore_id, cons_data.port_id);
 
-		if (fdata->sched_core[lcore_id])
-			printf("[%s()] lcore %d executing scheduler\n",
-					__func__, lcore_id);
-
 		if (fdata->worker_core[lcore_id])
 			printf(
 				"[%s()] lcore %d executing worker, using eventdev port %u\n",
-- 
2.7.4


* [PATCH v3 6/7] eventdev: remove eventdev schedule API
  2017-10-22  9:16 ` [PATCH v3 " Pavan Nikhilesh
                     ` (3 preceding siblings ...)
  2017-10-22  9:16   ` [PATCH v3 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
@ 2017-10-22  9:16   ` Pavan Nikhilesh
  2017-10-22  9:16   ` [PATCH v3 7/7] doc: update software event device Pavan Nikhilesh
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-22  9:16 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Remove the eventdev schedule API and enforce the SW driver to use the
service core feature for event scheduling.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 drivers/event/dpaa2/dpaa2_eventdev.c       |  1 -
 drivers/event/octeontx/ssovf_evdev.c       |  1 -
 drivers/event/skeleton/skeleton_eventdev.c |  2 --
 drivers/event/sw/sw_evdev.c                | 13 +++++--------
 lib/librte_eventdev/rte_eventdev.h         | 31 ++++--------------------------
 5 files changed, 9 insertions(+), 39 deletions(-)

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 81286a8..28f09a7 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -621,7 +621,6 @@ dpaa2_eventdev_create(const char *name)
 	}
 
 	eventdev->dev_ops       = &dpaa2_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = dpaa2_eventdev_enqueue;
 	eventdev->enqueue_burst = dpaa2_eventdev_enqueue_burst;
 	eventdev->enqueue_new_burst = dpaa2_eventdev_enqueue_burst;
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index d829b49..1127db0 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -155,7 +155,6 @@ ssovf_fastpath_fns_set(struct rte_eventdev *dev)
 {
 	struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
 
-	dev->schedule      = NULL;
 	dev->enqueue       = ssows_enq;
 	dev->enqueue_burst = ssows_enq_burst;
 	dev->enqueue_new_burst = ssows_enq_new_burst;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index bcd2055..4d1a1da 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -375,7 +375,6 @@ skeleton_eventdev_init(struct rte_eventdev *eventdev)
 	PMD_DRV_FUNC_TRACE();
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
@@ -466,7 +465,6 @@ skeleton_eventdev_create(const char *name, int socket_id)
 	}
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 9b7f4d4..086fd96 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -629,10 +629,14 @@ sw_start(struct rte_eventdev *dev)
 	unsigned int i, j;
 	struct sw_evdev *sw = sw_pmd_priv(dev);
 
+	rte_service_component_runstate_set(sw->service_id, 1);
+
 	/* check a service core is mapped to this service */
-	if (!rte_service_runstate_get(sw->service_id))
+	if (!rte_service_runstate_get(sw->service_id)) {
 		SW_LOG_ERR("Warning: No Service core enabled on service %s\n",
 				sw->service_name);
+		return -ENOENT;
+	}
 
 	/* check all ports are set up */
 	for (i = 0; i < sw->port_count; i++)
@@ -847,7 +851,6 @@ sw_probe(struct rte_vdev_device *vdev)
 	dev->enqueue_forward_burst = sw_event_enqueue_burst;
 	dev->dequeue = sw_event_dequeue;
 	dev->dequeue_burst = sw_event_dequeue_burst;
-	dev->schedule = sw_event_schedule;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -875,12 +878,6 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
-	ret = rte_service_component_runstate_set(sw->service_id, 1);
-	if (ret) {
-		SW_LOG_ERR("Unable to enable service component");
-		return -ENOEXEC;
-	}
-
 	dev->data->service_inited = 1;
 	dev->data->service_id = sw->service_id;
 
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 1c1ff6b..ee0c4c3 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -218,10 +218,10 @@
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
  * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic is located in rte_event_schedule().
+ * scheduler logic needs a dedicated service core for scheduling.
  * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
  * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls rte_event_schedule().
+ * thread that repeatedly calls the software-specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
@@ -263,9 +263,9 @@ struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
  * In distributed scheduling mode, event scheduling happens in HW or
  * rte_event_dequeue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
- * dedicated scheduling thread that repeatedly calls rte_event_schedule().
+ * dedicated service core that acts as a scheduling thread.
  *
- * @see rte_event_schedule(), rte_event_dequeue_burst()
+ * @see rte_event_dequeue_burst()
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
@@ -1065,9 +1065,6 @@ struct rte_eventdev_driver;
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
-typedef void (*event_schedule_t)(struct rte_eventdev *dev);
-/**< @internal Schedule one or more events in the event dev. */
-
 typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
 /**< @internal Enqueue event on port of a device */
 
@@ -1131,8 +1128,6 @@ struct rte_eventdev_data {
 
 /** @internal The data structure associated with each event device. */
 struct rte_eventdev {
-	event_schedule_t schedule;
-	/**< Pointer to PMD schedule function. */
 	event_enqueue_t enqueue;
 	/**< Pointer to PMD enqueue function. */
 	event_enqueue_burst_t enqueue_burst;
@@ -1161,24 +1156,6 @@ struct rte_eventdev {
 extern struct rte_eventdev *rte_eventdevs;
 /** @internal The pool of rte_eventdev structures. */
 
-
-/**
- * Schedule one or more events in the event dev.
- *
- * An event dev implementation may define this is a NOOP, for instance if
- * the event dev performs its scheduling in hardware.
- *
- * @param dev_id
- *   The identifier of the device.
- */
-static inline void
-rte_event_schedule(uint8_t dev_id)
-{
-	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-	if (*dev->schedule)
-		(*dev->schedule)(dev);
-}
-
 static __rte_always_inline uint16_t
 __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
 			const struct rte_event ev[], uint16_t nb_events,
-- 
2.7.4


* [PATCH v3 7/7] doc: update software event device
  2017-10-22  9:16 ` [PATCH v3 " Pavan Nikhilesh
                     ` (4 preceding siblings ...)
  2017-10-22  9:16   ` [PATCH v3 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
@ 2017-10-22  9:16   ` Pavan Nikhilesh
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-22  9:16 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Update software event device documentation to include use of service
cores for event distribution.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/eventdevs/sw.rst       | 13 ++++++-------
 doc/guides/tools/testeventdev.rst |  8 +-------
 2 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/doc/guides/eventdevs/sw.rst b/doc/guides/eventdevs/sw.rst
index a3e6624..ec49b3b 100644
--- a/doc/guides/eventdevs/sw.rst
+++ b/doc/guides/eventdevs/sw.rst
@@ -78,9 +78,9 @@ Scheduling Quanta
 ~~~~~~~~~~~~~~~~~
 
 The scheduling quanta sets the number of events that the device attempts to
-schedule before returning to the application from the ``rte_event_schedule()``
-function. Note that is a *hint* only, and that fewer or more events may be
-scheduled in a given iteration.
+schedule in a single schedule call performed by the service core. Note that
+this is a *hint* only, and that fewer or more events may be scheduled in a
+given iteration.
 
 The scheduling quanta can be set using a string argument to the vdev
 create call:
@@ -140,10 +140,9 @@ eventdev.
 Distributed Scheduler
 ~~~~~~~~~~~~~~~~~~~~~
 
-The software eventdev is a centralized scheduler, requiring the
-``rte_event_schedule()`` function to be called by a CPU core to perform the
-required event distribution. This is not really a limitation but rather a
-design decision.
+The software eventdev is a centralized scheduler, requiring a service core to
+perform the required event distribution. This is not really a limitation but
+rather a design decision.
 
 The ``RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED`` flag is not set in the
 ``event_dev_cap`` field of the ``rte_event_dev_info`` struct for the software
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index 34b1c31..2d045ec 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -106,10 +106,6 @@ The following are the application command-line options:
 
         Set the number of mbufs to be allocated from the mempool.
 
-* ``--slcore <n>``
-
-        Set the scheduler lcore id.(Valid when eventdev is not RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capable)
-
 * ``--plcores <CORELIST>``
 
         Set the list of cores to be used as producers.
@@ -362,7 +358,6 @@ Supported application command line options are following::
         --test
         --socket_id
         --pool_sz
-        --slcore (Valid when eventdev is not RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capable)
         --plcores
         --wlcores
         --stlist
@@ -380,7 +375,7 @@ Example command to run perf queue test:
 .. code-block:: console
 
    sudo build/app/dpdk-test-eventdev --vdev=event_sw0 -- \
-        --test=perf_queue --slcore=1 --plcores=2 --wlcore=3 --stlist=p --nb_pkts=0
+        --test=perf_queue --plcores=2 --wlcore=3 --stlist=p --nb_pkts=0
 
 
 PERF_ATQ Test
@@ -441,7 +436,6 @@ Supported application command line options are following::
         --test
         --socket_id
         --pool_sz
-        --slcore (Valid when eventdev is not RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capable)
         --plcores
         --wlcores
         --stlist
-- 
2.7.4


* Re: [PATCH v2 5/7] examples/eventdev: update sample app to use service
  2017-10-13 16:36   ` [PATCH v2 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
@ 2017-10-23 17:17     ` Van Haaren, Harry
  2017-10-23 17:51       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 1 reply; 47+ messages in thread
From: Van Haaren, Harry @ 2017-10-23 17:17 UTC (permalink / raw)
  To: Pavan Nikhilesh, jerin.jacob, hemant.agrawal; +Cc: dev

> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Friday, October 13, 2017 5:37 PM
> To: jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; Van Haaren,
> Harry <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v2 5/7] examples/eventdev: update sample app to
> use service
> 
> From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> 
> Update the sample app eventdev_pipeline_sw_pmd to use service cores for
> event scheduling in case of sw eventdev.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>


Comments inline - I think there are some side-effect changes in the application.


> ---
>  examples/eventdev_pipeline_sw_pmd/main.c | 51 +++++++++++++++++++++--------
> ---
>  1 file changed, 33 insertions(+), 18 deletions(-)
> 
> diff --git a/examples/eventdev_pipeline_sw_pmd/main.c
> b/examples/eventdev_pipeline_sw_pmd/main.c
> index 09b90c3..d5068d2 100644
> --- a/examples/eventdev_pipeline_sw_pmd/main.c
> +++ b/examples/eventdev_pipeline_sw_pmd/main.c
> @@ -46,6 +46,7 @@
>  #include <rte_cycles.h>
>  #include <rte_ethdev.h>
>  #include <rte_eventdev.h>
> +#include <rte_service.h>
> 
>  #define MAX_NUM_STAGES 8
>  #define BATCH_SIZE 16
> @@ -233,7 +234,7 @@ producer(void)
>  }
> 
>  static inline void
> -schedule_devices(uint8_t dev_id, unsigned int lcore_id)
> +schedule_devices(unsigned int lcore_id)
>  {
>  	if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
>  	    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
> @@ -241,16 +242,6 @@ schedule_devices(uint8_t dev_id, unsigned int lcore_id)
>  		rte_atomic32_clear((rte_atomic32_t *)&(fdata->rx_lock));
>  	}
> 
> -	if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
> -	    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
> -		rte_event_schedule(dev_id);
> -		if (cdata.dump_dev_signal) {
> -			rte_event_dev_dump(0, stdout);
> -			cdata.dump_dev_signal = 0;
> -		}
> -		rte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));
> -	}

See note below, about keeping the functionality provided by
fdata->sched_core[] intact.


>  	if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
>  	    rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
>  		consumer();
> @@ -294,7 +285,7 @@ worker(void *arg)
>  	while (!fdata->done) {
>  		uint16_t i;
> 
> -		schedule_devices(dev_id, lcore_id);
> +		schedule_devices(lcore_id);
> 
>  		if (!fdata->worker_core[lcore_id]) {
>  			rte_pause();
> @@ -661,6 +652,27 @@ struct port_link {
>  };
> 
>  static int
> +setup_scheduling_service(unsigned int lcore, uint8_t dev_id)
> +{
> +	int ret;
> +	uint32_t service_id;
> +	ret = rte_event_dev_service_id_get(dev_id, &service_id);
> +	if (ret == -ESRCH) {
> +		printf("Event device [%d] doesn't need scheduling service\n",
> +				dev_id);
> +		return 0;
> +	}
> +	if (!ret) {
> +		rte_service_runstate_set(service_id, 1);
> +		rte_service_lcore_add(lcore);
> +		rte_service_map_lcore_set(service_id, lcore, 1);
> +		rte_service_lcore_start(lcore);
> +	}
> +
> +	return ret;
> +}
> +
> +static int
>  setup_eventdev(struct prod_data *prod_data,
>  		struct cons_data *cons_data,
>  		struct worker_data *worker_data)
> @@ -839,6 +851,14 @@ setup_eventdev(struct prod_data *prod_data,
>  	*cons_data = (struct cons_data){.dev_id = dev_id,
>  					.port_id = i };
> 
> +	for (i = 0; i < MAX_NUM_CORE; i++) {
> +		if (fdata->sched_core[i]
> +				&& setup_scheduling_service(i, dev_id)) {
> +			printf("Error setting up schedulig service on %d", i);
> +			return -1;
> +		}
> +	}


Previously,  the fdata->sched_core[] array contained a "coremask" for scheduling.
A core running the scheduling could *also* perform other work. AKA: a single core
could perform all of RX, Sched, Worker, and TX.

Due to the service-core requiring to "take" the full core, there is no option to
have a core "split" its work into schedule() and RX,TX,Worker. This is a service core
implementation limitation - however it should be resolved for this sample app too.

The solution is to enable an ordinary DPDK (non-service-core) thread to run
a service. This MUST be enabled at the service-cores library level, to keep atomics
behavior of services etc), and hence removing rte_event_schedule() is still required.

The changes should become simpler than proposed here, instead of the wait_schedule() hack,
we can just run an iteration of the SW PMD using the newly-added service core iter function.

I have (just) sent a patch for service-cores to enable running a service on an ordinary
DPDK lcore, see here: http://dpdk.org/ml/archives/dev/2017-October/080022.html

Hope you can rework patches 4/7 and 5/7 to use the newly provided functionality!
Let me know if the intended usage of the new function is unclear in any way.


Regards, -Harry


> +
>  	if (rte_event_dev_start(dev_id) < 0) {
>  		printf("Error starting eventdev\n");
>  		return -1;
> @@ -944,8 +964,7 @@ main(int argc, char **argv)
> 
>  		if (!fdata->rx_core[lcore_id] &&
>  			!fdata->worker_core[lcore_id] &&
> -			!fdata->tx_core[lcore_id] &&
> -			!fdata->sched_core[lcore_id])
> +			!fdata->tx_core[lcore_id])
>  			continue;
> 
>  		if (fdata->rx_core[lcore_id])
> @@ -958,10 +977,6 @@ main(int argc, char **argv)
>  				"[%s()] lcore %d executing NIC Tx, and using eventdev
> port %u\n",
>  				__func__, lcore_id, cons_data.port_id);
> 
> -		if (fdata->sched_core[lcore_id])
> -			printf("[%s()] lcore %d executing scheduler\n",
> -					__func__, lcore_id);
> -
>  		if (fdata->worker_core[lcore_id])
>  			printf(
>  				"[%s()] lcore %d executing worker, using eventdev port
> %u\n",
> --
> 2.7.4


* Re: [PATCH v2 5/7] examples/eventdev: update sample app to use service
  2017-10-23 17:17     ` Van Haaren, Harry
@ 2017-10-23 17:51       ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2017-10-23 17:51 UTC (permalink / raw)
  To: Van Haaren, Harry; +Cc: dev

On Mon, Oct 23, 2017 at 05:17:48PM +0000, Van Haaren, Harry wrote:
> > From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> > Sent: Friday, October 13, 2017 5:37 PM
> > To: jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; Van Haaren,
> > Harry <harry.van.haaren@intel.com>
> > Cc: dev@dpdk.org; Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> > Subject: [dpdk-dev] [PATCH v2 5/7] examples/eventdev: update sample app to
> > use service
> >
> > From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>
> >
> > Update the sample app eventdev_pipeline_sw_pmd to use service cores for
> > event scheduling in case of sw eventdev.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
>
>
> Comments inline - I think there are some side-effect changes in the application.
>
>
> > ---
> >  examples/eventdev_pipeline_sw_pmd/main.c | 51 +++++++++++++++++++++--------
> > ---
> >  1 file changed, 33 insertions(+), 18 deletions(-)
> >
> > diff --git a/examples/eventdev_pipeline_sw_pmd/main.c
> > b/examples/eventdev_pipeline_sw_pmd/main.c
> > index 09b90c3..d5068d2 100644
> > --- a/examples/eventdev_pipeline_sw_pmd/main.c
> > +++ b/examples/eventdev_pipeline_sw_pmd/main.c
> > @@ -46,6 +46,7 @@
> >  #include <rte_cycles.h>
> >  #include <rte_ethdev.h>
> >  #include <rte_eventdev.h>
> > +#include <rte_service.h>
> >
> >  #define MAX_NUM_STAGES 8
> >  #define BATCH_SIZE 16
> > @@ -233,7 +234,7 @@ producer(void)
> >  }
> >
> >  static inline void
> > -schedule_devices(uint8_t dev_id, unsigned int lcore_id)
> > +schedule_devices(unsigned int lcore_id)
> >  {
> >  	if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
> >  	    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
> > @@ -241,16 +242,6 @@ schedule_devices(uint8_t dev_id, unsigned int lcore_id)
> >  		rte_atomic32_clear((rte_atomic32_t *)&(fdata->rx_lock));
> >  	}
> >
> > -	if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
> > -	    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
> > -		rte_event_schedule(dev_id);
> > -		if (cdata.dump_dev_signal) {
> > -			rte_event_dev_dump(0, stdout);
> > -			cdata.dump_dev_signal = 0;
> > -		}
> > -		rte_atomic32_clear((rte_atomic32_t *)&(fdata->sched_lock));
> > -	}
>
> See note below, about keeping the functionality provided by
> fdata->sched_core[] intact.
>
>
> >  	if (fdata->tx_core[lcore_id] && (fdata->tx_single ||
> >  	    rte_atomic32_cmpset(&(fdata->tx_lock), 0, 1))) {
> >  		consumer();
> > @@ -294,7 +285,7 @@ worker(void *arg)
> >  	while (!fdata->done) {
> >  		uint16_t i;
> >
> > -		schedule_devices(dev_id, lcore_id);
> > +		schedule_devices(lcore_id);
> >
> >  		if (!fdata->worker_core[lcore_id]) {
> >  			rte_pause();
> > @@ -661,6 +652,27 @@ struct port_link {
> >  };
> >
> >  static int
> > +setup_scheduling_service(unsigned int lcore, uint8_t dev_id)
> > +{
> > +	int ret;
> > +	uint32_t service_id;
> > +	ret = rte_event_dev_service_id_get(dev_id, &service_id);
> > +	if (ret == -ESRCH) {
> > +		printf("Event device [%d] doesn't need scheduling service\n",
> > +				dev_id);
> > +		return 0;
> > +	}
> > +	if (!ret) {
> > +		rte_service_runstate_set(service_id, 1);
> > +		rte_service_lcore_add(lcore);
> > +		rte_service_map_lcore_set(service_id, lcore, 1);
> > +		rte_service_lcore_start(lcore);
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +static int
> >  setup_eventdev(struct prod_data *prod_data,
> >  		struct cons_data *cons_data,
> >  		struct worker_data *worker_data)
> > @@ -839,6 +851,14 @@ setup_eventdev(struct prod_data *prod_data,
> >  	*cons_data = (struct cons_data){.dev_id = dev_id,
> >  					.port_id = i };
> >
> > +	for (i = 0; i < MAX_NUM_CORE; i++) {
> > +		if (fdata->sched_core[i]
> > +				&& setup_scheduling_service(i, dev_id)) {
> > +			printf("Error setting up schedulig service on %d", i);
> > +			return -1;
> > +		}
> > +	}
>
>
> Previously,  the fdata->sched_core[] array contained a "coremask" for scheduling.
> A core running the scheduling could *also* perform other work. AKA: a single core
> could perform all of RX, Sched, Worker, and TX.
>
> Due to the service-core requiring to "take" the full core, there is no option to
> have a core "split" its work into schedule() and RX,TX,Worker. This is a service core
> implementation limitation - however it should be resolved for this sample app too.
>
> The solution is to enable an ordinary DPDK (non-service-core) thread to run
> a service. This MUST be enabled at the service-cores library level, to keep atomics
> behavior of services etc), and hence removing rte_event_schedule() is still required.
>
> The changes should become simpler than proposed here, instead of the wait_schedule() hack,
> we can just run an iteration of the SW PMD using the newly-added service core iter function.
>
> I have (just) sent a patch for service-cores to enable running a service on an ordinary
> DPDK lcore, see here: http://dpdk.org/ml/archives/dev/2017-October/080022.html
>
> Hope you can rework patches 4/7 and 5/7 to use the newly provided functionality!
> Let me know if the intended usage of the new function is unclear in any way.
>

Agreed, the current solution for controlled scheduling of event_sw is a bit
hacky; the added flexibility of the service core API helps a lot. Will rebase
my patchset on top of the service core patches and spin up a v4.

Thanks,
Pavan

>
> Regards, -Harry
>
>
> > +
> >  	if (rte_event_dev_start(dev_id) < 0) {
> >  		printf("Error starting eventdev\n");
> >  		return -1;
> > @@ -944,8 +964,7 @@ main(int argc, char **argv)
> >
> >  		if (!fdata->rx_core[lcore_id] &&
> >  			!fdata->worker_core[lcore_id] &&
> > -			!fdata->tx_core[lcore_id] &&
> > -			!fdata->sched_core[lcore_id])
> > +			!fdata->tx_core[lcore_id])
> >  			continue;
> >
> >  		if (fdata->rx_core[lcore_id])
> > @@ -958,10 +977,6 @@ main(int argc, char **argv)
> >  				"[%s()] lcore %d executing NIC Tx, and using eventdev
> > port %u\n",
> >  				__func__, lcore_id, cons_data.port_id);
> >
> > -		if (fdata->sched_core[lcore_id])
> > -			printf("[%s()] lcore %d executing scheduler\n",
> > -					__func__, lcore_id);
> > -
> >  		if (fdata->worker_core[lcore_id])
> >  			printf(
> >  				"[%s()] lcore %d executing worker, using eventdev port
> > %u\n",
> > --
> > 2.7.4
>


* [PATCH v4 1/7] eventdev: add API to get service id
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
                   ` (8 preceding siblings ...)
  2017-10-22  9:16 ` [PATCH v3 " Pavan Nikhilesh
@ 2017-10-25 11:59 ` Pavan Nikhilesh
  2017-10-25 11:59   ` [PATCH v4 2/7] event/sw: extend service capability Pavan Nikhilesh
                     ` (5 more replies)
  2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
  10 siblings, 6 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 11:59 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

In the case of the SW event device, scheduling can be done on a service core
using the service registered at the time of probe.
This patch adds a helper function to get the service id, which the
application can use to assign an lcore for the service to run on.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---

v3 changes:
 - removed stale slcore option from documentation
 - fix version map
 - remove schedule api from dpaa2 event dev driver

v2 changes:
 - fix checkpatch issues
 - update eventdev version map
 - fix --slcore option not removed in app/test-eventdev

 This patch series depends on http://dpdk.org/dev/patchwork/patch/30734/
 and http://dpdk.org/dev/patchwork/patch/30732/

 lib/librte_eventdev/rte_eventdev.c           | 17 +++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h           | 22 ++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_version.map |  1 +
 3 files changed, 40 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index fa18422..df01f4b 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -963,6 +963,23 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 }

 int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (service_id == NULL)
+		return -EINVAL;
+
+	if (dev->data->service_inited)
+		*service_id = dev->data->service_id;
+
+	return dev->data->service_inited ? 0 : -ESRCH;
+}
+
+int
 rte_event_dev_dump(uint8_t dev_id, FILE *f)
 {
 	struct rte_eventdev *dev;
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index b9d1b98..a7973a9 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1103,6 +1103,10 @@ struct rte_eventdev_data {
 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
 	struct rte_event_dev_config dev_conf;
 	/**< Configuration applied to device. */
+	uint8_t service_inited;
+	/* Service initialization state */
+	uint32_t service_id;
+	/* Service ID */

 	RTE_STD_C11
 	uint8_t dev_started : 1;
@@ -1606,6 +1610,24 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 			 uint8_t queues[], uint8_t priorities[]);

 /**
+ * Retrieve the service ID of the event dev. If the event dev doesn't use
+ * a rte_service function, this function returns -ESRCH.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param [out] service_id
+ *   A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure; -ESRCH is returned if the event dev
+ *   does not use a rte_service function.
+ */
+int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id);
+
+/**
  * Dump internal information about *dev_id* to the FILE* provided in *f*.
  *
  * @param dev_id
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 800ca6e..108ae61 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -51,6 +51,7 @@ DPDK_17.11 {
 	global:

 	rte_event_dev_attr_get;
+	rte_event_dev_service_id_get;
 	rte_event_port_attr_get;
 	rte_event_queue_attr_get;

--
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 2/7] event/sw: extend service capability
  2017-10-25 11:59 ` [PATCH v4 1/7] eventdev: add API to get service id Pavan Nikhilesh
@ 2017-10-25 11:59   ` Pavan Nikhilesh
  2017-10-25 11:59   ` [PATCH v4 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
                     ` (4 subsequent siblings)
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 11:59 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Extend the service capability of the sw event device by exposing its
service id to the application.
The application can use the service id to configure service cores to run
the event scheduling.
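A minimal sketch of how an application might consume the exposed service id; the helper name, error handling, and the choice of a single service lcore are illustrative assumptions, not part of this patch:

```c
#include <errno.h>
#include <rte_eventdev.h>
#include <rte_service.h>

/* Hypothetical helper: map the event dev's scheduling service (if any)
 * to one dedicated service lcore and start it. */
static int
setup_evdev_service(uint8_t dev_id, uint32_t service_lcore)
{
	uint32_t service_id;
	int ret;

	ret = rte_event_dev_service_id_get(dev_id, &service_id);
	if (ret == -ESRCH)
		return 0; /* distributed/HW scheduler, no service needed */
	if (ret < 0)
		return ret;

	ret = rte_service_lcore_add(service_lcore);
	if (ret < 0 && ret != -EALREADY)
		return ret;
	if (rte_service_map_lcore_set(service_id, service_lcore, 1) < 0)
		return -ENOENT;
	return rte_service_lcore_start(service_lcore);
}
```

With this in place the scheduling service runs on the chosen service lcore and the application no longer drives the scheduler itself.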

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 drivers/event/sw/sw_evdev.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 522cd71..92fd07b 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -861,6 +861,15 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
+	ret = rte_service_component_runstate_set(sw->service_id, 1);
+	if (ret) {
+		SW_LOG_ERR("Unable to enable service component");
+		return -ENOEXEC;
+	}
+
+	dev->data->service_inited = 1;
+	dev->data->service_id = sw->service_id;
+
 	return 0;
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 3/7] app/test-eventdev: update app to use service cores
  2017-10-25 11:59 ` [PATCH v4 1/7] eventdev: add API to get service id Pavan Nikhilesh
  2017-10-25 11:59   ` [PATCH v4 2/7] event/sw: extend service capability Pavan Nikhilesh
@ 2017-10-25 11:59   ` Pavan Nikhilesh
  2017-10-25 11:59   ` [PATCH v4 4/7] test/eventdev: update test to use service iter Pavan Nikhilesh
                     ` (3 subsequent siblings)
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 11:59 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Use service cores to offload event scheduling in case of centralized
scheduling, instead of calling the schedule API directly. This removes
the dependency on a dedicated scheduler core specified via the command
line option --slcore.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 app/test-eventdev/evt_common.h        | 40 +++++++++++++++++++++++++++++
 app/test-eventdev/evt_options.c       | 10 --------
 app/test-eventdev/evt_options.h       |  8 ------
 app/test-eventdev/test_order_atq.c    |  6 +++++
 app/test-eventdev/test_order_common.c |  3 ---
 app/test-eventdev/test_order_queue.c  |  6 +++++
 app/test-eventdev/test_perf_atq.c     |  6 +++++
 app/test-eventdev/test_perf_common.c  | 47 ++---------------------------------
 app/test-eventdev/test_perf_common.h  |  1 +
 app/test-eventdev/test_perf_queue.c   |  6 +++++
 10 files changed, 67 insertions(+), 66 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index ee896a2..0fadab4 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -36,6 +36,7 @@
 #include <rte_common.h>
 #include <rte_debug.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>
 
 #define CLNRM  "\x1b[0m"
 #define CLRED  "\x1b[31m"
@@ -92,4 +93,43 @@ evt_has_all_types_queue(uint8_t dev_id)
 			true : false;
 }
 
+static inline int
+evt_service_setup(uint8_t dev_id)
+{
+	uint32_t service_id;
+	int32_t core_cnt;
+	unsigned int lcore = 0;
+	uint32_t core_array[RTE_MAX_LCORE];
+	uint8_t cnt;
+	uint8_t min_cnt = UINT8_MAX;
+
+	if (evt_has_distributed_sched(dev_id))
+		return 0;
+
+	if (!rte_service_lcore_count())
+		return -ENOENT;
+
+	if (!rte_event_dev_service_id_get(dev_id, &service_id)) {
+		core_cnt = rte_service_lcore_list(core_array,
+				RTE_MAX_LCORE);
+		if (core_cnt < 0)
+			return -ENOENT;
+		/* Get the core which has least number of services running. */
+		while (core_cnt--) {
+			/* Reset default mapping */
+			rte_service_map_lcore_set(service_id,
+					core_array[core_cnt], 0);
+			cnt = rte_service_lcore_count_services(
+					core_array[core_cnt]);
+			if (cnt < min_cnt) {
+				lcore = core_array[core_cnt];
+				min_cnt = cnt;
+			}
+		}
+		if (rte_service_map_lcore_set(service_id, lcore, 1))
+			return -ENOENT;
+	}
+	return 0;
+}
+
 #endif /*  _EVT_COMMON_*/
diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
index 65e22f8..e2187df 100644
--- a/app/test-eventdev/evt_options.c
+++ b/app/test-eventdev/evt_options.c
@@ -114,13 +114,6 @@ evt_parse_test_name(struct evt_options *opt, const char *arg)
 }
 
 static int
-evt_parse_slcore(struct evt_options *opt, const char *arg)
-{
-	opt->slcore = atoi(arg);
-	return 0;
-}
-
-static int
 evt_parse_socket_id(struct evt_options *opt, const char *arg)
 {
 	opt->socket_id = atoi(arg);
@@ -188,7 +181,6 @@ usage(char *program)
 		"\t--test             : name of the test application to run\n"
 		"\t--socket_id        : socket_id of application resources\n"
 		"\t--pool_sz          : pool size of the mempool\n"
-		"\t--slcore           : lcore id of the scheduler\n"
 		"\t--plcores          : list of lcore ids for producers\n"
 		"\t--wlcores          : list of lcore ids for workers\n"
 		"\t--stlist           : list of scheduled types of the stages\n"
@@ -254,7 +246,6 @@ static struct option lgopts[] = {
 	{ EVT_POOL_SZ,          1, 0, 0 },
 	{ EVT_NB_PKTS,          1, 0, 0 },
 	{ EVT_WKR_DEQ_DEP,      1, 0, 0 },
-	{ EVT_SCHED_LCORE,      1, 0, 0 },
 	{ EVT_SCHED_TYPE_LIST,  1, 0, 0 },
 	{ EVT_FWD_LATENCY,      0, 0, 0 },
 	{ EVT_QUEUE_PRIORITY,   0, 0, 0 },
@@ -278,7 +269,6 @@ evt_opts_parse_long(int opt_idx, struct evt_options *opt)
 		{ EVT_POOL_SZ, evt_parse_pool_sz},
 		{ EVT_NB_PKTS, evt_parse_nb_pkts},
 		{ EVT_WKR_DEQ_DEP, evt_parse_wkr_deq_dep},
-		{ EVT_SCHED_LCORE, evt_parse_slcore},
 		{ EVT_SCHED_TYPE_LIST, evt_parse_sched_type_list},
 		{ EVT_FWD_LATENCY, evt_parse_fwd_latency},
 		{ EVT_QUEUE_PRIORITY, evt_parse_queue_priority},
diff --git a/app/test-eventdev/evt_options.h b/app/test-eventdev/evt_options.h
index d8a9fdc..a9a9125 100644
--- a/app/test-eventdev/evt_options.h
+++ b/app/test-eventdev/evt_options.h
@@ -47,7 +47,6 @@
 #define EVT_VERBOSE              ("verbose")
 #define EVT_DEVICE               ("dev")
 #define EVT_TEST                 ("test")
-#define EVT_SCHED_LCORE          ("slcore")
 #define EVT_PROD_LCORES          ("plcores")
 #define EVT_WORK_LCORES          ("wlcores")
 #define EVT_NB_FLOWS             ("nb_flows")
@@ -67,7 +66,6 @@ struct evt_options {
 	bool plcores[RTE_MAX_LCORE];
 	bool wlcores[RTE_MAX_LCORE];
 	uint8_t sched_type_list[EVT_MAX_STAGES];
-	int slcore;
 	uint32_t nb_flows;
 	int socket_id;
 	int pool_sz;
@@ -219,12 +217,6 @@ evt_dump_nb_flows(struct evt_options *opt)
 }
 
 static inline void
-evt_dump_scheduler_lcore(struct evt_options *opt)
-{
-	evt_dump("scheduler lcore", "%d", opt->slcore);
-}
-
-static inline void
 evt_dump_worker_dequeue_depth(struct evt_options *opt)
 {
 	evt_dump("worker deq depth", "%d", opt->wkr_deq_dep);
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 7e6c67d..4ee0dea 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -179,6 +179,12 @@ order_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 80e14c0..7cfe7fa 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -292,9 +292,6 @@ order_launch_lcores(struct evt_test *test, struct evt_options *opt,
 	int64_t old_remaining  = -1;
 
 	while (t->err == false) {
-
-		rte_event_schedule(opt->dev_id);
-
 		uint64_t new_cycles = rte_get_timer_cycles();
 		int64_t remaining = rte_atomic64_read(&t->outstand_pkts);
 
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 1fa4082..eef69a4 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -192,6 +192,12 @@ order_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
index 9c3efa3..0e9f2db 100644
--- a/app/test-eventdev/test_perf_atq.c
+++ b/app/test-eventdev/test_perf_atq.c
@@ -221,6 +221,12 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 7b09299..e77b472 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -88,18 +88,6 @@ perf_producer(void *arg)
 	return 0;
 }
 
-static inline int
-scheduler(void *arg)
-{
-	struct test_perf *t = arg;
-	const uint8_t dev_id = t->opt->dev_id;
-
-	while (t->done == false)
-		rte_event_schedule(dev_id);
-
-	return 0;
-}
-
 static inline uint64_t
 processed_pkts(struct test_perf *t)
 {
@@ -163,15 +151,6 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		port_idx++;
 	}
 
-	/* launch scheduler */
-	if (!evt_has_distributed_sched(opt->dev_id)) {
-		ret = rte_eal_remote_launch(scheduler, t, opt->slcore);
-		if (ret) {
-			evt_err("failed to launch sched %d", opt->slcore);
-			return ret;
-		}
-	}
-
 	const uint64_t total_pkts = opt->nb_pkts *
 			evt_nr_active_lcores(opt->plcores);
 
@@ -307,10 +286,9 @@ int
 perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 {
 	unsigned int lcores;
-	bool need_slcore = !evt_has_distributed_sched(opt->dev_id);
 
-	/* N producer + N worker + 1 scheduler(based on dev capa) + 1 master */
-	lcores = need_slcore ? 4 : 3;
+	/* N producer + N worker + 1 master */
+	lcores = 3;
 
 	if (rte_lcore_count() < lcores) {
 		evt_err("test need minimum %d lcores", lcores);
@@ -322,10 +300,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		evt_err("worker lcores overlaps with master lcore");
 		return -1;
 	}
-	if (need_slcore && evt_lcores_has_overlap(opt->wlcores, opt->slcore)) {
-		evt_err("worker lcores overlaps with scheduler lcore");
-		return -1;
-	}
 	if (evt_lcores_has_overlap_multi(opt->wlcores, opt->plcores)) {
 		evt_err("worker lcores overlaps producer lcores");
 		return -1;
@@ -344,10 +318,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		evt_err("producer lcores overlaps with master lcore");
 		return -1;
 	}
-	if (need_slcore && evt_lcores_has_overlap(opt->plcores, opt->slcore)) {
-		evt_err("producer lcores overlaps with scheduler lcore");
-		return -1;
-	}
 	if (evt_has_disabled_lcore(opt->plcores)) {
 		evt_err("one or more producer lcores are not enabled");
 		return -1;
@@ -357,17 +327,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		return -1;
 	}
 
-	/* Validate scheduler lcore */
-	if (!evt_has_distributed_sched(opt->dev_id) &&
-			opt->slcore == (int)rte_get_master_lcore()) {
-		evt_err("scheduler lcore and master lcore should be different");
-		return -1;
-	}
-	if (need_slcore && !rte_lcore_is_enabled(opt->slcore)) {
-		evt_err("scheduler lcore is not enabled");
-		return -1;
-	}
-
 	if (evt_has_invalid_stage(opt))
 		return -1;
 
@@ -405,8 +364,6 @@ perf_opt_dump(struct evt_options *opt, uint8_t nb_queues)
 	evt_dump_producer_lcores(opt);
 	evt_dump("nb_worker_lcores", "%d", evt_nr_active_lcores(opt->wlcores));
 	evt_dump_worker_lcores(opt);
-	if (!evt_has_distributed_sched(opt->dev_id))
-		evt_dump_scheduler_lcore(opt);
 	evt_dump_nb_stages(opt);
 	evt_dump("nb_evdev_ports", "%d", perf_nb_event_ports(opt));
 	evt_dump("nb_evdev_queues", "%d", nb_queues);
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index 4956586..c6fc70c 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -159,6 +159,7 @@ int perf_test_setup(struct evt_test *test, struct evt_options *opt);
 int perf_mempool_setup(struct evt_test *test, struct evt_options *opt);
 int perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
 				uint8_t stride, uint8_t nb_queues);
+int perf_event_dev_service_setup(uint8_t dev_id);
 int perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		int (*worker)(void *));
 void perf_opt_dump(struct evt_options *opt, uint8_t nb_queues);
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index a7a2b1f..d843eea 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -232,6 +232,12 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 4/7] test/eventdev: update test to use service iter
  2017-10-25 11:59 ` [PATCH v4 1/7] eventdev: add API to get service id Pavan Nikhilesh
  2017-10-25 11:59   ` [PATCH v4 2/7] event/sw: extend service capability Pavan Nikhilesh
  2017-10-25 11:59   ` [PATCH v4 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
@ 2017-10-25 11:59   ` Pavan Nikhilesh
  2017-10-25 14:24     ` Van Haaren, Harry
  2017-10-25 11:59   ` [PATCH v4 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
                     ` (2 subsequent siblings)
  5 siblings, 1 reply; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 11:59 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Nikhilesh

Use the service run iteration for event scheduling instead of calling
the event schedule API directly.
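The conversion the test makes can be sketched as follows: the setup calls run once at test init, and the per-iteration call replaces each former rte_event_schedule() call (a sketch of the pattern with assumed function names, not the exact test code):

```c
#include <rte_eventdev.h>
#include <rte_service.h>
#include <rte_service_component.h>

/* One-time setup at test init: fetch the sw eventdev's service id and
 * allow running it from the application lcore. */
static int
sw_test_service_init(uint8_t evdev, uint32_t *service_id)
{
	if (rte_event_dev_service_id_get(evdev, service_id) < 0)
		return -1;
	rte_service_runstate_set(*service_id, 1);
	/* permit run_iter_on_app_lcore without a service-lcore mapping */
	rte_service_set_runstate_mapped_check(*service_id, 0);
	return 0;
}

/* Wherever the test previously called rte_event_schedule(evdev),
 * it now runs one iteration of the scheduling service: */
static void
sw_test_schedule_once(uint32_t service_id)
{
	rte_service_run_iter_on_app_lcore(service_id);
}
```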

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---

 v4 changes:
  - rebase patchset on top of http://dpdk.org/dev/patchwork/patch/30732/
  for controlled event scheduling in case of event_sw

 test/test/test_eventdev_sw.c | 68 ++++++++++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 28 deletions(-)

diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index dea302f..5c7751b 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -49,6 +49,8 @@
 #include <rte_cycles.h>
 #include <rte_eventdev.h>
 #include <rte_pause.h>
+#include <rte_service.h>
+#include <rte_service_component.h>

 #include "test.h"

@@ -63,6 +65,7 @@ struct test {
 	uint8_t port[MAX_PORTS];
 	uint8_t qid[MAX_QIDS];
 	int nb_qids;
+	uint32_t service_id;
 };

 static struct rte_event release_ev;
@@ -415,7 +418,7 @@ run_prio_packet_test(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -507,7 +510,7 @@ test_single_directed_packet(struct test *t)
 	}

 	/* Run schedule() as dir packets may need to be re-ordered */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -574,7 +577,7 @@ test_directed_forward_credits(struct test *t)
 			printf("%d: error failed to enqueue\n", __LINE__);
 			return -1;
 		}
-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);

 		uint32_t deq_pkts;
 		deq_pkts = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
@@ -736,7 +739,7 @@ burst_packets(struct test *t)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* Check stats for all NUM_PKTS arrived to sched core */
 	struct test_event_dev_stats stats;
@@ -825,7 +828,7 @@ abuse_inflights(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	struct test_event_dev_stats stats;

@@ -963,7 +966,7 @@ xstats_tests(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* Device names / values */
 	int num_stats = rte_event_dev_xstats_names_get(evdev,
@@ -1290,7 +1293,7 @@ port_reconfig_credits(struct test *t)
 			}
 		}

-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);

 		struct rte_event ev[NPKTS];
 		int deq = rte_event_dequeue_burst(evdev, t->port[0], ev,
@@ -1516,7 +1519,7 @@ xstats_id_reset_tests(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	static const char * const dev_names[] = {
 		"dev_rx", "dev_tx", "dev_drop", "dev_sched_calls",
@@ -1907,7 +1910,7 @@ qid_priorities(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* dequeue packets, verify priority was upheld */
 	struct rte_event ev[32];
@@ -1988,7 +1991,7 @@ load_balancing(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -2088,7 +2091,7 @@ load_balancing_history(struct test *t)
 	}

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* Dequeue the flow 0 packet from port 1, so that we can then drop */
 	struct rte_event ev;
@@ -2105,7 +2108,7 @@ load_balancing_history(struct test *t)
 	rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/*
 	 * Set up the next set of flows, first a new flow to fill up
@@ -2138,7 +2141,7 @@ load_balancing_history(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2182,7 +2185,7 @@ load_balancing_history(struct test *t)
 		while (rte_event_dequeue_burst(evdev, i, &ev, 1, 0))
 			rte_event_enqueue_burst(evdev, i, &release_ev, 1);
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	cleanup(t);
 	return 0;
@@ -2248,7 +2251,7 @@ invalid_qid(struct test *t)
 	}

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2333,7 +2336,7 @@ single_packet(struct test *t)
 		return -1;
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2376,7 +2379,7 @@ single_packet(struct test *t)
 		printf("%d: Failed to enqueue\n", __LINE__);
 		return -1;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[wrk_enq] != 0) {
@@ -2464,7 +2467,7 @@ inflight_counts(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2520,7 +2523,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p1] != 0) {
@@ -2555,7 +2558,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p2] != 0) {
@@ -2649,7 +2652,7 @@ parallel_basic(struct test *t, int check_order)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* use extra slot to make logic in loops easier */
 	struct rte_event deq_ev[w3_port + 1];
@@ -2676,7 +2679,7 @@ parallel_basic(struct test *t, int check_order)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* dequeue from the tx ports, we should get 3 packets */
 	deq_pkts = rte_event_dequeue_burst(evdev, t->port[tx_port], deq_ev,
@@ -2754,7 +2757,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error doing first enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	if (rte_event_dev_xstats_by_name_get(evdev, "port_0_cq_ring_used", NULL)
 			!= 1)
@@ -2779,7 +2782,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 			printf("%d: Error with enqueue\n", __LINE__);
 			goto err;
 		}
-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);
 	} while (rte_event_dev_xstats_by_name_get(evdev,
 				rx_port_free_stat, NULL) != 0);

@@ -2789,7 +2792,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* check that the other port still has an empty CQ */
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
@@ -2812,7 +2815,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
 			!= 1) {
@@ -3002,7 +3005,7 @@ worker_loopback(struct test *t)
 	while (rte_eal_get_lcore_state(p_lcore) != FINISHED ||
 			rte_eal_get_lcore_state(w_lcore) != FINISHED) {

-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);

 		uint64_t new_cycles = rte_get_timer_cycles();

@@ -3029,7 +3032,8 @@ worker_loopback(struct test *t)
 			cycles = new_cycles;
 		}
 	}
-	rte_event_schedule(evdev); /* ensure all completions are flushed */
+	rte_service_run_iter_on_app_lcore(t->service_id);
+	/* ensure all completions are flushed */

 	rte_eal_mp_wait_lcore();

@@ -3066,6 +3070,14 @@ test_sw_eventdev(void)
 		}
 	}

+	if (rte_event_dev_service_id_get(evdev, &t->service_id) < 0) {
+		printf("Failed to get service ID for software event dev\n");
+		return -1;
+	}
+
+	rte_service_runstate_set(t->service_id, 1);
+	rte_service_set_runstate_mapped_check(t->service_id, 0);
+
 	/* Only create mbuf pool once, reuse for each test run */
 	if (!eventdev_func_mempool) {
 		eventdev_func_mempool = rte_pktmbuf_pool_create(
--
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 5/7] examples/eventdev: update sample app to use service
  2017-10-25 11:59 ` [PATCH v4 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (2 preceding siblings ...)
  2017-10-25 11:59   ` [PATCH v4 4/7] test/eventdev: update test to use service iter Pavan Nikhilesh
@ 2017-10-25 11:59   ` Pavan Nikhilesh
  2017-10-25 14:24     ` Van Haaren, Harry
  2017-10-25 11:59   ` [PATCH v4 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
  2017-10-25 11:59   ` [PATCH v4 7/7] doc: update software event device Pavan Nikhilesh
  5 siblings, 1 reply; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 11:59 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Nikhilesh

Update the sample app eventdev_pipeline_sw_pmd to use the service run
iteration for event scheduling in case of the sw eventdev.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
---

 v4 changes:
  - rebase patchset on top of http://dpdk.org/dev/patchwork/patch/30732/
  for controlled event scheduling in case of event_sw

 examples/eventdev_pipeline_sw_pmd/main.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 2e6787b..f77744b 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -46,6 +46,7 @@
 #include <rte_cycles.h>
 #include <rte_ethdev.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>

 #define MAX_NUM_STAGES 8
 #define BATCH_SIZE 16
@@ -76,6 +77,7 @@ struct fastpath_data {
 	uint32_t rx_lock;
 	uint32_t tx_lock;
 	uint32_t sched_lock;
+	uint32_t evdev_service_id;
 	bool rx_single;
 	bool tx_single;
 	bool sched_single;
@@ -233,7 +235,7 @@ producer(void)
 }

 static inline void
-schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+schedule_devices(unsigned int lcore_id)
 {
 	if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
 	    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
@@ -243,7 +245,7 @@ schedule_devices(uint8_t dev_id, unsigned int lcore_id)

 	if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
 	    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
-		rte_event_schedule(dev_id);
+		rte_service_run_iter_on_app_lcore(fdata->evdev_service_id);
 		if (cdata.dump_dev_signal) {
 			rte_event_dev_dump(0, stdout);
 			cdata.dump_dev_signal = 0;
@@ -294,7 +296,7 @@ worker(void *arg)
 	while (!fdata->done) {
 		uint16_t i;

-		schedule_devices(dev_id, lcore_id);
+		schedule_devices(lcore_id);

 		if (!fdata->worker_core[lcore_id]) {
 			rte_pause();
@@ -839,6 +841,14 @@ setup_eventdev(struct prod_data *prod_data,
 	*cons_data = (struct cons_data){.dev_id = dev_id,
 					.port_id = i };

+	ret = rte_event_dev_service_id_get(dev_id,
+				&fdata->evdev_service_id);
+	if (ret != -ESRCH && ret != 0) {
+		printf("Error getting the service ID for sw eventdev\n");
+		return -1;
+	}
+	rte_service_runstate_set(fdata->evdev_service_id, 1);
+	rte_service_set_runstate_mapped_check(fdata->evdev_service_id, 0);
 	if (rte_event_dev_start(dev_id) < 0) {
 		printf("Error starting eventdev\n");
 		return -1;
--
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 6/7] eventdev: remove eventdev schedule API
  2017-10-25 11:59 ` [PATCH v4 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (3 preceding siblings ...)
  2017-10-25 11:59   ` [PATCH v4 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
@ 2017-10-25 11:59   ` Pavan Nikhilesh
  2017-10-25 11:59   ` [PATCH v4 7/7] doc: update software event device Pavan Nikhilesh
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 11:59 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Remove the eventdev schedule API and enforce the sw driver to use the
service core feature for event scheduling.
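For applications migrating off the removed API, the change amounts to the following before/after sketch; the loop structure and names are illustrative assumptions, only the rte_service calls come from this series:

```c
#include <rte_eventdev.h>
#include <rte_service.h>

/* Before: a dedicated lcore drove the centralized scheduler directly. */
static void
old_sched_loop(uint8_t dev_id, volatile int *done)
{
	while (!*done)
		rte_event_schedule(dev_id); /* removed by this patch */
}

/* After: fetch the service id once, then either map it to a service
 * core or drive one iteration at a time from an application lcore. */
static void
new_sched_loop(uint8_t dev_id, volatile int *done)
{
	uint32_t service_id;

	if (rte_event_dev_service_id_get(dev_id, &service_id) != 0)
		return; /* distributed scheduler: nothing to drive */

	rte_service_runstate_set(service_id, 1);
	rte_service_set_runstate_mapped_check(service_id, 0);
	while (!*done)
		rte_service_run_iter_on_app_lcore(service_id);
}
```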

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 drivers/event/dpaa2/dpaa2_eventdev.c       |  1 -
 drivers/event/octeontx/ssovf_evdev.c       |  1 -
 drivers/event/skeleton/skeleton_eventdev.c |  2 --
 drivers/event/sw/sw_evdev.c                | 13 +++++--------
 lib/librte_eventdev/rte_eventdev.h         | 31 ++++--------------------------
 5 files changed, 9 insertions(+), 39 deletions(-)

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 3dbc337..c0ad748 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -621,7 +621,6 @@ dpaa2_eventdev_create(const char *name)
 	}
 
 	eventdev->dev_ops       = &dpaa2_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = dpaa2_eventdev_enqueue;
 	eventdev->enqueue_burst = dpaa2_eventdev_enqueue_burst;
 	eventdev->enqueue_new_burst = dpaa2_eventdev_enqueue_burst;
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index d829b49..1127db0 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -155,7 +155,6 @@ ssovf_fastpath_fns_set(struct rte_eventdev *dev)
 {
 	struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
 
-	dev->schedule      = NULL;
 	dev->enqueue       = ssows_enq;
 	dev->enqueue_burst = ssows_enq_burst;
 	dev->enqueue_new_burst = ssows_enq_new_burst;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index bcd2055..4d1a1da 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -375,7 +375,6 @@ skeleton_eventdev_init(struct rte_eventdev *eventdev)
 	PMD_DRV_FUNC_TRACE();
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
@@ -466,7 +465,6 @@ skeleton_eventdev_create(const char *name, int socket_id)
 	}
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 92fd07b..178f169 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -615,10 +615,14 @@ sw_start(struct rte_eventdev *dev)
 	unsigned int i, j;
 	struct sw_evdev *sw = sw_pmd_priv(dev);
 
+	rte_service_component_runstate_set(sw->service_id, 1);
+
 	/* check a service core is mapped to this service */
-	if (!rte_service_runstate_get(sw->service_id))
+	if (!rte_service_runstate_get(sw->service_id)) {
 		SW_LOG_ERR("Warning: No Service core enabled on service %s\n",
 				sw->service_name);
+		return -ENOENT;
+	}
 
 	/* check all ports are set up */
 	for (i = 0; i < sw->port_count; i++)
@@ -833,7 +837,6 @@ sw_probe(struct rte_vdev_device *vdev)
 	dev->enqueue_forward_burst = sw_event_enqueue_burst;
 	dev->dequeue = sw_event_dequeue;
 	dev->dequeue_burst = sw_event_dequeue_burst;
-	dev->schedule = sw_event_schedule;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -861,12 +864,6 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
-	ret = rte_service_component_runstate_set(sw->service_id, 1);
-	if (ret) {
-		SW_LOG_ERR("Unable to enable service component");
-		return -ENOEXEC;
-	}
-
 	dev->data->service_inited = 1;
 	dev->data->service_id = sw->service_id;
 
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index a7973a9..f1949ff 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -218,10 +218,10 @@
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
  * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic is located in rte_event_schedule().
+ * scheduler logic needs a dedicated service core for scheduling.
  * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
  * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls rte_event_schedule().
+ * thread that repeatedly calls the software-specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
@@ -263,9 +263,9 @@ struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
  * In distributed scheduling mode, event scheduling happens in HW or
  * rte_event_dequeue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
- * dedicated scheduling thread that repeatedly calls rte_event_schedule().
+ * dedicated service core that acts as a scheduling thread.
  *
- * @see rte_event_schedule(), rte_event_dequeue_burst()
+ * @see rte_event_dequeue_burst()
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
@@ -1052,9 +1052,6 @@ struct rte_eventdev_driver;
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
-typedef void (*event_schedule_t)(struct rte_eventdev *dev);
-/**< @internal Schedule one or more events in the event dev. */
-
 typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
 /**< @internal Enqueue event on port of a device */
 
@@ -1118,8 +1115,6 @@ struct rte_eventdev_data {
 
 /** @internal The data structure associated with each event device. */
 struct rte_eventdev {
-	event_schedule_t schedule;
-	/**< Pointer to PMD schedule function. */
 	event_enqueue_t enqueue;
 	/**< Pointer to PMD enqueue function. */
 	event_enqueue_burst_t enqueue_burst;
@@ -1148,24 +1143,6 @@ struct rte_eventdev {
 extern struct rte_eventdev *rte_eventdevs;
 /** @internal The pool of rte_eventdev structures. */
 
-
-/**
- * Schedule one or more events in the event dev.
- *
- * An event dev implementation may define this is a NOOP, for instance if
- * the event dev performs its scheduling in hardware.
- *
- * @param dev_id
- *   The identifier of the device.
- */
-static inline void
-rte_event_schedule(uint8_t dev_id)
-{
-	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-	if (*dev->schedule)
-		(*dev->schedule)(dev);
-}
-
 static __rte_always_inline uint16_t
 __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
 			const struct rte_event ev[], uint16_t nb_events,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v4 7/7] doc: update software event device
  2017-10-25 11:59 ` [PATCH v4 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (4 preceding siblings ...)
  2017-10-25 11:59   ` [PATCH v4 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
@ 2017-10-25 11:59   ` Pavan Nikhilesh
  5 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 11:59 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Bhagavatula

From: Pavan Bhagavatula <pbhagavatula@caviumnetworks.com>

Update the software event device documentation to include the use of
service cores for event distribution.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/eventdevs/sw.rst       | 13 ++++++-------
 doc/guides/tools/testeventdev.rst | 10 ++--------
 2 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/doc/guides/eventdevs/sw.rst b/doc/guides/eventdevs/sw.rst
index a3e6624..ec49b3b 100644
--- a/doc/guides/eventdevs/sw.rst
+++ b/doc/guides/eventdevs/sw.rst
@@ -78,9 +78,9 @@ Scheduling Quanta
 ~~~~~~~~~~~~~~~~~
 
 The scheduling quanta sets the number of events that the device attempts to
-schedule before returning to the application from the ``rte_event_schedule()``
-function. Note that is a *hint* only, and that fewer or more events may be
-scheduled in a given iteration.
+schedule in a single schedule call performed by the service core. Note that
+this is a *hint* only, and fewer or more events may be scheduled in a given
+iteration.
 
 The scheduling quanta can be set using a string argument to the vdev
 create call:
@@ -140,10 +140,9 @@ eventdev.
 Distributed Scheduler
 ~~~~~~~~~~~~~~~~~~~~~
 
-The software eventdev is a centralized scheduler, requiring the
-``rte_event_schedule()`` function to be called by a CPU core to perform the
-required event distribution. This is not really a limitation but rather a
-design decision.
+The software eventdev is a centralized scheduler, requiring a service core to
+perform event distribution. This is not really a limitation but rather a
+design decision.
 
 The ``RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED`` flag is not set in the
 ``event_dev_cap`` field of the ``rte_event_dev_info`` struct for the software
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index 34b1c31..5aa2237 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -106,10 +106,6 @@ The following are the application command-line options:
 
         Set the number of mbufs to be allocated from the mempool.
 
-* ``--slcore <n>``
-
-        Set the scheduler lcore id.(Valid when eventdev is not RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capable)
-
 * ``--plcores <CORELIST>``
 
         Set the list of cores to be used as producers.
@@ -362,7 +358,6 @@ Supported application command line options are following::
         --test
         --socket_id
         --pool_sz
-        --slcore (Valid when eventdev is not RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capable)
         --plcores
         --wlcores
         --stlist
@@ -379,8 +374,8 @@ Example command to run perf queue test:
 
 .. code-block:: console
 
-   sudo build/app/dpdk-test-eventdev --vdev=event_sw0 -- \
-        --test=perf_queue --slcore=1 --plcores=2 --wlcore=3 --stlist=p --nb_pkts=0
+   sudo build/app/dpdk-test-eventdev -c 0xf -s 0x1 --vdev=event_sw0 -- \
+        --test=perf_queue --plcores=2 --wlcore=3 --stlist=p --nb_pkts=0
 
 
 PERF_ATQ Test
@@ -441,7 +436,6 @@ Supported application command line options are following::
         --test
         --socket_id
         --pool_sz
-        --slcore (Valid when eventdev is not RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capable)
         --plcores
         --wlcores
         --stlist
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 4/7] test/eventdev: update test to use service iter
  2017-10-25 11:59   ` [PATCH v4 4/7] test/eventdev: update test to use service iter Pavan Nikhilesh
@ 2017-10-25 14:24     ` Van Haaren, Harry
  0 siblings, 0 replies; 47+ messages in thread
From: Van Haaren, Harry @ 2017-10-25 14:24 UTC (permalink / raw)
  To: Pavan Nikhilesh, jerin.jacob, hemant.agrawal; +Cc: dev

> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Wednesday, October 25, 2017 12:59 PM
> To: jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; Van Haaren,
> Harry <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v4 4/7] test/eventdev: update test to use service
> iter
> 
> Use service run iter for event scheduling instead of calling the event
> schedule api directly.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> ---
> 
>  v4 changes:
>   - rebase patchset on top of http://dpdk.org/dev/patchwork/patch/30732/
>   for controlled event scheduling in case event_sw


Much cleaner than v3, thanks for this re-work, and for the discussions via IRC
to design these service APIs to enable this cleanup!

Acked-by: Harry van Haaren <harry.van.haaren@intel.com>

^ permalink raw reply	[flat|nested] 47+ messages in thread

* Re: [PATCH v4 5/7] examples/eventdev: update sample app to use service
  2017-10-25 11:59   ` [PATCH v4 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
@ 2017-10-25 14:24     ` Van Haaren, Harry
  0 siblings, 0 replies; 47+ messages in thread
From: Van Haaren, Harry @ 2017-10-25 14:24 UTC (permalink / raw)
  To: Pavan Nikhilesh, jerin.jacob, hemant.agrawal; +Cc: dev

> From: Pavan Nikhilesh [mailto:pbhagavatula@caviumnetworks.com]
> Sent: Wednesday, October 25, 2017 12:59 PM
> To: jerin.jacob@caviumnetworks.com; hemant.agrawal@nxp.com; Van Haaren,
> Harry <harry.van.haaren@intel.com>
> Cc: dev@dpdk.org; Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v4 5/7] examples/eventdev: update sample app to
> use service
> 
> Update the sample app eventdev_pipeline_sw_pmd to use service run iter for
> event scheduling in case of sw eventdev.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>


One note below,

Acked-by: Harry van Haaren <harry.van.haaren@intel.com>

 
> +	ret = rte_event_dev_service_id_get(dev_id,
> +				&fdata->evdev_service_id);
> +	if (ret != -ESRCH && ret != 0 ) {

Checkpatch warns on the space between  0  and the  )

^ permalink raw reply	[flat|nested] 47+ messages in thread

* [PATCH v5 1/7] eventdev: add API to get service id
  2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
                   ` (9 preceding siblings ...)
  2017-10-25 11:59 ` [PATCH v4 1/7] eventdev: add API to get service id Pavan Nikhilesh
@ 2017-10-25 14:50 ` Pavan Nikhilesh
  2017-10-25 14:50   ` [PATCH v5 2/7] event/sw: extend service capability Pavan Nikhilesh
                     ` (6 more replies)
  10 siblings, 7 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 14:50 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Nikhilesh

In the case of the sw event device, scheduling can be done on a service
core using the service registered at probe time.
This patch adds a helper function to get the service id, which the
application can use to assign an lcore for the service to run on.
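For context, an application consuming this API could wire the returned service id to a service lcore roughly as follows. This is a sketch only: the device id and lcore id are hypothetical placeholders, and error handling is minimal.

```c
#include <errno.h>
#include <rte_eventdev.h>
#include <rte_service.h>

/* Hypothetical ids for illustration only. */
#define EVDEV_ID      0
#define SERVICE_LCORE 1

static int
setup_sched_service(void)
{
	uint32_t service_id;
	int ret;

	ret = rte_event_dev_service_id_get(EVDEV_ID, &service_id);
	if (ret == -ESRCH)
		return 0; /* no service needed, e.g. distributed/HW scheduler */
	if (ret < 0)
		return ret;

	/* Dedicate an lcore to run the scheduling service. */
	if (rte_service_lcore_add(SERVICE_LCORE) < 0)
		return -1;
	if (rte_service_map_lcore_set(service_id, SERVICE_LCORE, 1) < 0)
		return -1;
	if (rte_service_runstate_set(service_id, 1) < 0)
		return -1;
	return rte_service_lcore_start(SERVICE_LCORE);
}
```

Because the -ESRCH case is handled, the same code path works unchanged for both SW and HW PMDs, which is the abstraction this series is after.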

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---

v3 changes:
 - removed stale slcore option from documentation
 - fix version map
 - remove schedule api from dpaa2 event dev driver

v2 changes:
 - fix checkpatch issues
 - update eventdev version map
 - fix --slcore option not removed in app/test-event-dev

 lib/librte_eventdev/rte_eventdev.c           | 17 +++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h           | 22 ++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_version.map |  1 +
 3 files changed, 40 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 250bfc8..ce6a5dc 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -963,6 +963,23 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 }

 int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (service_id == NULL)
+		return -EINVAL;
+
+	if (dev->data->service_inited)
+		*service_id = dev->data->service_id;
+
+	return dev->data->service_inited ? 0 : -ESRCH;
+}
+
+int
 rte_event_dev_dump(uint8_t dev_id, FILE *f)
 {
 	struct rte_eventdev *dev;
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index b9d1b98..a7973a9 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -1103,6 +1103,10 @@ struct rte_eventdev_data {
 	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
 	struct rte_event_dev_config dev_conf;
 	/**< Configuration applied to device. */
+	uint8_t service_inited;
+	/* Service initialization state */
+	uint32_t service_id;
+	/* Service ID */

 	RTE_STD_C11
 	uint8_t dev_started : 1;
@@ -1606,6 +1610,24 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 			 uint8_t queues[], uint8_t priorities[]);

 /**
+ * Retrieve the service ID of the event dev. If the event dev doesn't use
+ * an rte_service function, this function returns -ESRCH.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param [out] service_id
+ *   A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure; if the event dev doesn't use an rte_service
+ *   function, -ESRCH is returned.
+ */
+int
+rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id);
+
+/**
  * Dump internal information about *dev_id* to the FILE* provided in *f*.
  *
  * @param dev_id
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 800ca6e..108ae61 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -51,6 +51,7 @@ DPDK_17.11 {
 	global:

 	rte_event_dev_attr_get;
+	rte_event_dev_service_id_get;
 	rte_event_port_attr_get;
 	rte_event_queue_attr_get;

--
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v5 2/7] event/sw: extend service capability
  2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
@ 2017-10-25 14:50   ` Pavan Nikhilesh
  2017-10-25 14:50   ` [PATCH v5 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
                     ` (5 subsequent siblings)
  6 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 14:50 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Nikhilesh

Extend the service capability of the sw event device by exposing the
service id to the application.
The application can use the service id to configure service cores to run
event scheduling.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---
 drivers/event/sw/sw_evdev.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 522cd71..92fd07b 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -861,6 +861,15 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
+	ret = rte_service_component_runstate_set(sw->service_id, 1);
+	if (ret) {
+		SW_LOG_ERR("Unable to enable service component");
+		return -ENOEXEC;
+	}
+
+	dev->data->service_inited = 1;
+	dev->data->service_id = sw->service_id;
+
 	return 0;
 }
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v5 3/7] app/test-eventdev: update app to use service cores
  2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
  2017-10-25 14:50   ` [PATCH v5 2/7] event/sw: extend service capability Pavan Nikhilesh
@ 2017-10-25 14:50   ` Pavan Nikhilesh
  2017-10-25 14:50   ` [PATCH v5 4/7] test/eventdev: update test to use service iter Pavan Nikhilesh
                     ` (4 subsequent siblings)
  6 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 14:50 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Nikhilesh

Use service cores to offload event scheduling in the case of centralized
scheduling, instead of calling the schedule API directly.
This removes the dependency on a dedicated scheduler core specified via
the command line option --slcore.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 app/test-eventdev/evt_common.h        | 40 +++++++++++++++++++++++++++++
 app/test-eventdev/evt_options.c       | 10 --------
 app/test-eventdev/evt_options.h       |  8 ------
 app/test-eventdev/test_order_atq.c    |  6 +++++
 app/test-eventdev/test_order_common.c |  3 ---
 app/test-eventdev/test_order_queue.c  |  6 +++++
 app/test-eventdev/test_perf_atq.c     |  6 +++++
 app/test-eventdev/test_perf_common.c  | 47 ++---------------------------------
 app/test-eventdev/test_perf_common.h  |  1 +
 app/test-eventdev/test_perf_queue.c   |  6 +++++
 10 files changed, 67 insertions(+), 66 deletions(-)

diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h
index ee896a2..0fadab4 100644
--- a/app/test-eventdev/evt_common.h
+++ b/app/test-eventdev/evt_common.h
@@ -36,6 +36,7 @@
 #include <rte_common.h>
 #include <rte_debug.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>
 
 #define CLNRM  "\x1b[0m"
 #define CLRED  "\x1b[31m"
@@ -92,4 +93,43 @@ evt_has_all_types_queue(uint8_t dev_id)
 			true : false;
 }
 
+static inline int
+evt_service_setup(uint8_t dev_id)
+{
+	uint32_t service_id;
+	int32_t core_cnt;
+	unsigned int lcore = 0;
+	uint32_t core_array[RTE_MAX_LCORE];
+	uint8_t cnt;
+	uint8_t min_cnt = UINT8_MAX;
+
+	if (evt_has_distributed_sched(dev_id))
+		return 0;
+
+	if (!rte_service_lcore_count())
+		return -ENOENT;
+
+	if (!rte_event_dev_service_id_get(dev_id, &service_id)) {
+		core_cnt = rte_service_lcore_list(core_array,
+				RTE_MAX_LCORE);
+		if (core_cnt < 0)
+			return -ENOENT;
+		/* Get the core which has least number of services running. */
+		while (core_cnt--) {
+			/* Reset default mapping */
+			rte_service_map_lcore_set(service_id,
+					core_array[core_cnt], 0);
+			cnt = rte_service_lcore_count_services(
+					core_array[core_cnt]);
+			if (cnt < min_cnt) {
+				lcore = core_array[core_cnt];
+				min_cnt = cnt;
+			}
+		}
+		if (rte_service_map_lcore_set(service_id, lcore, 1))
+			return -ENOENT;
+	}
+	return 0;
+}
+
 #endif /*  _EVT_COMMON_*/
diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c
index 65e22f8..e2187df 100644
--- a/app/test-eventdev/evt_options.c
+++ b/app/test-eventdev/evt_options.c
@@ -114,13 +114,6 @@ evt_parse_test_name(struct evt_options *opt, const char *arg)
 }
 
 static int
-evt_parse_slcore(struct evt_options *opt, const char *arg)
-{
-	opt->slcore = atoi(arg);
-	return 0;
-}
-
-static int
 evt_parse_socket_id(struct evt_options *opt, const char *arg)
 {
 	opt->socket_id = atoi(arg);
@@ -188,7 +181,6 @@ usage(char *program)
 		"\t--test             : name of the test application to run\n"
 		"\t--socket_id        : socket_id of application resources\n"
 		"\t--pool_sz          : pool size of the mempool\n"
-		"\t--slcore           : lcore id of the scheduler\n"
 		"\t--plcores          : list of lcore ids for producers\n"
 		"\t--wlcores          : list of lcore ids for workers\n"
 		"\t--stlist           : list of scheduled types of the stages\n"
@@ -254,7 +246,6 @@ static struct option lgopts[] = {
 	{ EVT_POOL_SZ,          1, 0, 0 },
 	{ EVT_NB_PKTS,          1, 0, 0 },
 	{ EVT_WKR_DEQ_DEP,      1, 0, 0 },
-	{ EVT_SCHED_LCORE,      1, 0, 0 },
 	{ EVT_SCHED_TYPE_LIST,  1, 0, 0 },
 	{ EVT_FWD_LATENCY,      0, 0, 0 },
 	{ EVT_QUEUE_PRIORITY,   0, 0, 0 },
@@ -278,7 +269,6 @@ evt_opts_parse_long(int opt_idx, struct evt_options *opt)
 		{ EVT_POOL_SZ, evt_parse_pool_sz},
 		{ EVT_NB_PKTS, evt_parse_nb_pkts},
 		{ EVT_WKR_DEQ_DEP, evt_parse_wkr_deq_dep},
-		{ EVT_SCHED_LCORE, evt_parse_slcore},
 		{ EVT_SCHED_TYPE_LIST, evt_parse_sched_type_list},
 		{ EVT_FWD_LATENCY, evt_parse_fwd_latency},
 		{ EVT_QUEUE_PRIORITY, evt_parse_queue_priority},
diff --git a/app/test-eventdev/evt_options.h b/app/test-eventdev/evt_options.h
index d8a9fdc..a9a9125 100644
--- a/app/test-eventdev/evt_options.h
+++ b/app/test-eventdev/evt_options.h
@@ -47,7 +47,6 @@
 #define EVT_VERBOSE              ("verbose")
 #define EVT_DEVICE               ("dev")
 #define EVT_TEST                 ("test")
-#define EVT_SCHED_LCORE          ("slcore")
 #define EVT_PROD_LCORES          ("plcores")
 #define EVT_WORK_LCORES          ("wlcores")
 #define EVT_NB_FLOWS             ("nb_flows")
@@ -67,7 +66,6 @@ struct evt_options {
 	bool plcores[RTE_MAX_LCORE];
 	bool wlcores[RTE_MAX_LCORE];
 	uint8_t sched_type_list[EVT_MAX_STAGES];
-	int slcore;
 	uint32_t nb_flows;
 	int socket_id;
 	int pool_sz;
@@ -219,12 +217,6 @@ evt_dump_nb_flows(struct evt_options *opt)
 }
 
 static inline void
-evt_dump_scheduler_lcore(struct evt_options *opt)
-{
-	evt_dump("scheduler lcore", "%d", opt->slcore);
-}
-
-static inline void
 evt_dump_worker_dequeue_depth(struct evt_options *opt)
 {
 	evt_dump("worker deq depth", "%d", opt->wkr_deq_dep);
diff --git a/app/test-eventdev/test_order_atq.c b/app/test-eventdev/test_order_atq.c
index 7e6c67d..4ee0dea 100644
--- a/app/test-eventdev/test_order_atq.c
+++ b/app/test-eventdev/test_order_atq.c
@@ -179,6 +179,12 @@ order_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_order_common.c b/app/test-eventdev/test_order_common.c
index 80e14c0..7cfe7fa 100644
--- a/app/test-eventdev/test_order_common.c
+++ b/app/test-eventdev/test_order_common.c
@@ -292,9 +292,6 @@ order_launch_lcores(struct evt_test *test, struct evt_options *opt,
 	int64_t old_remaining  = -1;
 
 	while (t->err == false) {
-
-		rte_event_schedule(opt->dev_id);
-
 		uint64_t new_cycles = rte_get_timer_cycles();
 		int64_t remaining = rte_atomic64_read(&t->outstand_pkts);
 
diff --git a/app/test-eventdev/test_order_queue.c b/app/test-eventdev/test_order_queue.c
index 1fa4082..eef69a4 100644
--- a/app/test-eventdev/test_order_queue.c
+++ b/app/test-eventdev/test_order_queue.c
@@ -192,6 +192,12 @@ order_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_atq.c b/app/test-eventdev/test_perf_atq.c
index 9c3efa3..0e9f2db 100644
--- a/app/test-eventdev/test_perf_atq.c
+++ b/app/test-eventdev/test_perf_atq.c
@@ -221,6 +221,12 @@ perf_atq_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index 7b09299..e77b472 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -88,18 +88,6 @@ perf_producer(void *arg)
 	return 0;
 }
 
-static inline int
-scheduler(void *arg)
-{
-	struct test_perf *t = arg;
-	const uint8_t dev_id = t->opt->dev_id;
-
-	while (t->done == false)
-		rte_event_schedule(dev_id);
-
-	return 0;
-}
-
 static inline uint64_t
 processed_pkts(struct test_perf *t)
 {
@@ -163,15 +151,6 @@ perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		port_idx++;
 	}
 
-	/* launch scheduler */
-	if (!evt_has_distributed_sched(opt->dev_id)) {
-		ret = rte_eal_remote_launch(scheduler, t, opt->slcore);
-		if (ret) {
-			evt_err("failed to launch sched %d", opt->slcore);
-			return ret;
-		}
-	}
-
 	const uint64_t total_pkts = opt->nb_pkts *
 			evt_nr_active_lcores(opt->plcores);
 
@@ -307,10 +286,9 @@ int
 perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 {
 	unsigned int lcores;
-	bool need_slcore = !evt_has_distributed_sched(opt->dev_id);
 
-	/* N producer + N worker + 1 scheduler(based on dev capa) + 1 master */
-	lcores = need_slcore ? 4 : 3;
+	/* N producer + N worker + 1 master */
+	lcores = 3;
 
 	if (rte_lcore_count() < lcores) {
 		evt_err("test need minimum %d lcores", lcores);
@@ -322,10 +300,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		evt_err("worker lcores overlaps with master lcore");
 		return -1;
 	}
-	if (need_slcore && evt_lcores_has_overlap(opt->wlcores, opt->slcore)) {
-		evt_err("worker lcores overlaps with scheduler lcore");
-		return -1;
-	}
 	if (evt_lcores_has_overlap_multi(opt->wlcores, opt->plcores)) {
 		evt_err("worker lcores overlaps producer lcores");
 		return -1;
@@ -344,10 +318,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		evt_err("producer lcores overlaps with master lcore");
 		return -1;
 	}
-	if (need_slcore && evt_lcores_has_overlap(opt->plcores, opt->slcore)) {
-		evt_err("producer lcores overlaps with scheduler lcore");
-		return -1;
-	}
 	if (evt_has_disabled_lcore(opt->plcores)) {
 		evt_err("one or more producer lcores are not enabled");
 		return -1;
@@ -357,17 +327,6 @@ perf_opt_check(struct evt_options *opt, uint64_t nb_queues)
 		return -1;
 	}
 
-	/* Validate scheduler lcore */
-	if (!evt_has_distributed_sched(opt->dev_id) &&
-			opt->slcore == (int)rte_get_master_lcore()) {
-		evt_err("scheduler lcore and master lcore should be different");
-		return -1;
-	}
-	if (need_slcore && !rte_lcore_is_enabled(opt->slcore)) {
-		evt_err("scheduler lcore is not enabled");
-		return -1;
-	}
-
 	if (evt_has_invalid_stage(opt))
 		return -1;
 
@@ -405,8 +364,6 @@ perf_opt_dump(struct evt_options *opt, uint8_t nb_queues)
 	evt_dump_producer_lcores(opt);
 	evt_dump("nb_worker_lcores", "%d", evt_nr_active_lcores(opt->wlcores));
 	evt_dump_worker_lcores(opt);
-	if (!evt_has_distributed_sched(opt->dev_id))
-		evt_dump_scheduler_lcore(opt);
 	evt_dump_nb_stages(opt);
 	evt_dump("nb_evdev_ports", "%d", perf_nb_event_ports(opt));
 	evt_dump("nb_evdev_queues", "%d", nb_queues);
diff --git a/app/test-eventdev/test_perf_common.h b/app/test-eventdev/test_perf_common.h
index 4956586..c6fc70c 100644
--- a/app/test-eventdev/test_perf_common.h
+++ b/app/test-eventdev/test_perf_common.h
@@ -159,6 +159,7 @@ int perf_test_setup(struct evt_test *test, struct evt_options *opt);
 int perf_mempool_setup(struct evt_test *test, struct evt_options *opt);
 int perf_event_dev_port_setup(struct evt_test *test, struct evt_options *opt,
 				uint8_t stride, uint8_t nb_queues);
+int perf_event_dev_service_setup(uint8_t dev_id);
 int perf_launch_lcores(struct evt_test *test, struct evt_options *opt,
 		int (*worker)(void *));
 void perf_opt_dump(struct evt_options *opt, uint8_t nb_queues);
diff --git a/app/test-eventdev/test_perf_queue.c b/app/test-eventdev/test_perf_queue.c
index a7a2b1f..d843eea 100644
--- a/app/test-eventdev/test_perf_queue.c
+++ b/app/test-eventdev/test_perf_queue.c
@@ -232,6 +232,12 @@ perf_queue_eventdev_setup(struct evt_test *test, struct evt_options *opt)
 	if (ret)
 		return ret;
 
+	ret = evt_service_setup(opt->dev_id);
+	if (ret) {
+		evt_err("No service lcore found to run event dev.");
+		return ret;
+	}
+
 	ret = rte_event_dev_start(opt->dev_id);
 	if (ret) {
 		evt_err("failed to start eventdev %d", opt->dev_id);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v5 4/7] test/eventdev: update test to use service iter
  2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
  2017-10-25 14:50   ` [PATCH v5 2/7] event/sw: extend service capability Pavan Nikhilesh
  2017-10-25 14:50   ` [PATCH v5 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
@ 2017-10-25 14:50   ` Pavan Nikhilesh
  2017-10-25 14:50   ` [PATCH v5 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
                     ` (3 subsequent siblings)
  6 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 14:50 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Nikhilesh

Use the service run iterator for event scheduling instead of calling the
event schedule API directly.
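The mechanical change this makes throughout the test is sketched below; the helper names are hypothetical, and the service id is assumed to come from rte_event_dev_service_id_get() at setup time.

```c
#include <rte_eventdev.h>
#include <rte_service.h>

/* Before: the test drove the centralized scheduler directly. */
static void
drain_old(uint8_t evdev, uint8_t port)
{
	struct rte_event ev;

	rte_event_schedule(evdev); /* API removed by this series */
	while (rte_event_dequeue_burst(evdev, port, &ev, 1, 0))
		;
}

/* After: run one iteration of the scheduling service on the app lcore. */
static void
drain_new(uint8_t evdev, uint8_t port, uint32_t service_id)
{
	struct rte_event ev;

	rte_service_run_iter_on_app_lcore(service_id);
	while (rte_event_dequeue_burst(evdev, port, &ev, 1, 0))
		;
}
```

Running the service iteration from the application lcore keeps the unit tests single-threaded and deterministic, without requiring a mapped service core.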

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---

 v4 changes:
  - rebase patchset on top of http://dpdk.org/dev/patchwork/patch/30732/
  for controlled event scheduling in case event_sw

 test/test/test_eventdev_sw.c | 68 ++++++++++++++++++++++++++------------------
 1 file changed, 40 insertions(+), 28 deletions(-)

diff --git a/test/test/test_eventdev_sw.c b/test/test/test_eventdev_sw.c
index dea302f..5c7751b 100644
--- a/test/test/test_eventdev_sw.c
+++ b/test/test/test_eventdev_sw.c
@@ -49,6 +49,8 @@
 #include <rte_cycles.h>
 #include <rte_eventdev.h>
 #include <rte_pause.h>
+#include <rte_service.h>
+#include <rte_service_component.h>

 #include "test.h"

@@ -63,6 +65,7 @@ struct test {
 	uint8_t port[MAX_PORTS];
 	uint8_t qid[MAX_QIDS];
 	int nb_qids;
+	uint32_t service_id;
 };

 static struct rte_event release_ev;
@@ -415,7 +418,7 @@ run_prio_packet_test(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -507,7 +510,7 @@ test_single_directed_packet(struct test *t)
 	}

 	/* Run schedule() as dir packets may need to be re-ordered */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -574,7 +577,7 @@ test_directed_forward_credits(struct test *t)
 			printf("%d: error failed to enqueue\n", __LINE__);
 			return -1;
 		}
-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);

 		uint32_t deq_pkts;
 		deq_pkts = rte_event_dequeue_burst(evdev, 0, &ev, 1, 0);
@@ -736,7 +739,7 @@ burst_packets(struct test *t)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* Check stats for all NUM_PKTS arrived to sched core */
 	struct test_event_dev_stats stats;
@@ -825,7 +828,7 @@ abuse_inflights(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	struct test_event_dev_stats stats;

@@ -963,7 +966,7 @@ xstats_tests(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* Device names / values */
 	int num_stats = rte_event_dev_xstats_names_get(evdev,
@@ -1290,7 +1293,7 @@ port_reconfig_credits(struct test *t)
 			}
 		}

-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);

 		struct rte_event ev[NPKTS];
 		int deq = rte_event_dequeue_burst(evdev, t->port[0], ev,
@@ -1516,7 +1519,7 @@ xstats_id_reset_tests(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	static const char * const dev_names[] = {
 		"dev_rx", "dev_tx", "dev_drop", "dev_sched_calls",
@@ -1907,7 +1910,7 @@ qid_priorities(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* dequeue packets, verify priority was upheld */
 	struct rte_event ev[32];
@@ -1988,7 +1991,7 @@ load_balancing(struct test *t)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	struct test_event_dev_stats stats;
 	err = test_event_dev_stats_get(evdev, &stats);
@@ -2088,7 +2091,7 @@ load_balancing_history(struct test *t)
 	}

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* Dequeue the flow 0 packet from port 1, so that we can then drop */
 	struct rte_event ev;
@@ -2105,7 +2108,7 @@ load_balancing_history(struct test *t)
 	rte_event_enqueue_burst(evdev, t->port[1], &release_ev, 1);

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/*
 	 * Set up the next set of flows, first a new flow to fill up
@@ -2138,7 +2141,7 @@ load_balancing_history(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2182,7 +2185,7 @@ load_balancing_history(struct test *t)
 		while (rte_event_dequeue_burst(evdev, i, &ev, 1, 0))
 			rte_event_enqueue_burst(evdev, i, &release_ev, 1);
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	cleanup(t);
 	return 0;
@@ -2248,7 +2251,7 @@ invalid_qid(struct test *t)
 	}

 	/* call the scheduler */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2333,7 +2336,7 @@ single_packet(struct test *t)
 		return -1;
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2376,7 +2379,7 @@ single_packet(struct test *t)
 		printf("%d: Failed to enqueue\n", __LINE__);
 		return -1;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[wrk_enq] != 0) {
@@ -2464,7 +2467,7 @@ inflight_counts(struct test *t)
 	}

 	/* schedule */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (err) {
@@ -2520,7 +2523,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p1] != 0) {
@@ -2555,7 +2558,7 @@ inflight_counts(struct test *t)
 	 * As the scheduler core decrements inflights, it needs to run to
 	 * process packets to act on the drop messages
 	 */
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	err = test_event_dev_stats_get(evdev, &stats);
 	if (stats.port_inflight[p2] != 0) {
@@ -2649,7 +2652,7 @@ parallel_basic(struct test *t, int check_order)
 		}
 	}

-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* use extra slot to make logic in loops easier */
 	struct rte_event deq_ev[w3_port + 1];
@@ -2676,7 +2679,7 @@ parallel_basic(struct test *t, int check_order)
 			return -1;
 		}
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* dequeue from the tx ports, we should get 3 packets */
 	deq_pkts = rte_event_dequeue_burst(evdev, t->port[tx_port], deq_ev,
@@ -2754,7 +2757,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error doing first enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	if (rte_event_dev_xstats_by_name_get(evdev, "port_0_cq_ring_used", NULL)
 			!= 1)
@@ -2779,7 +2782,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 			printf("%d: Error with enqueue\n", __LINE__);
 			goto err;
 		}
-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);
 	} while (rte_event_dev_xstats_by_name_get(evdev,
 				rx_port_free_stat, NULL) != 0);

@@ -2789,7 +2792,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	/* check that the other port still has an empty CQ */
 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
@@ -2812,7 +2815,7 @@ holb(struct test *t) /* test to check we avoid basic head-of-line blocking */
 		printf("%d: Error with enqueue\n", __LINE__);
 		goto err;
 	}
-	rte_event_schedule(evdev);
+	rte_service_run_iter_on_app_lcore(t->service_id);

 	if (rte_event_dev_xstats_by_name_get(evdev, other_port_used_stat, NULL)
 			!= 1) {
@@ -3002,7 +3005,7 @@ worker_loopback(struct test *t)
 	while (rte_eal_get_lcore_state(p_lcore) != FINISHED ||
 			rte_eal_get_lcore_state(w_lcore) != FINISHED) {

-		rte_event_schedule(evdev);
+		rte_service_run_iter_on_app_lcore(t->service_id);

 		uint64_t new_cycles = rte_get_timer_cycles();

@@ -3029,7 +3032,8 @@ worker_loopback(struct test *t)
 			cycles = new_cycles;
 		}
 	}
-	rte_event_schedule(evdev); /* ensure all completions are flushed */
+	rte_service_run_iter_on_app_lcore(t->service_id);
+	/* ensure all completions are flushed */

 	rte_eal_mp_wait_lcore();

@@ -3066,6 +3070,14 @@ test_sw_eventdev(void)
 		}
 	}

+	if (rte_event_dev_service_id_get(evdev, &t->service_id) < 0) {
+		printf("Failed to get service ID for software event dev\n");
+		return -1;
+	}
+
+	rte_service_runstate_set(t->service_id, 1);
+	rte_service_set_runstate_mapped_check(t->service_id, 0);
+
 	/* Only create mbuf pool once, reuse for each test run */
 	if (!eventdev_func_mempool) {
 		eventdev_func_mempool = rte_pktmbuf_pool_create(
--
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v5 5/7] examples/eventdev: update sample app to use service
  2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (2 preceding siblings ...)
  2017-10-25 14:50   ` [PATCH v5 4/7] test/eventdev: update test to use service iter Pavan Nikhilesh
@ 2017-10-25 14:50   ` Pavan Nikhilesh
  2017-10-25 14:50   ` [PATCH v5 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
                     ` (2 subsequent siblings)
  6 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 14:50 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Nikhilesh

Update the sample app eventdev_pipeline_sw_pmd to use the service run
iteration API for event scheduling when the SW eventdev is in use.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Harry van Haaren <harry.van.haaren@intel.com>
---

 v5 changes:
  - fix minor checkpatch issue

 v4 changes:
  - rebase patchset on top of http://dpdk.org/dev/patchwork/patch/30732/
  for controlled event scheduling in case event_sw

 examples/eventdev_pipeline_sw_pmd/main.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/examples/eventdev_pipeline_sw_pmd/main.c b/examples/eventdev_pipeline_sw_pmd/main.c
index 2e6787b..948be67 100644
--- a/examples/eventdev_pipeline_sw_pmd/main.c
+++ b/examples/eventdev_pipeline_sw_pmd/main.c
@@ -46,6 +46,7 @@
 #include <rte_cycles.h>
 #include <rte_ethdev.h>
 #include <rte_eventdev.h>
+#include <rte_service.h>

 #define MAX_NUM_STAGES 8
 #define BATCH_SIZE 16
@@ -76,6 +77,7 @@ struct fastpath_data {
 	uint32_t rx_lock;
 	uint32_t tx_lock;
 	uint32_t sched_lock;
+	uint32_t evdev_service_id;
 	bool rx_single;
 	bool tx_single;
 	bool sched_single;
@@ -233,7 +235,7 @@ producer(void)
 }

 static inline void
-schedule_devices(uint8_t dev_id, unsigned int lcore_id)
+schedule_devices(unsigned int lcore_id)
 {
 	if (fdata->rx_core[lcore_id] && (fdata->rx_single ||
 	    rte_atomic32_cmpset(&(fdata->rx_lock), 0, 1))) {
@@ -243,7 +245,7 @@ schedule_devices(uint8_t dev_id, unsigned int lcore_id)

 	if (fdata->sched_core[lcore_id] && (fdata->sched_single ||
 	    rte_atomic32_cmpset(&(fdata->sched_lock), 0, 1))) {
-		rte_event_schedule(dev_id);
+		rte_service_run_iter_on_app_lcore(fdata->evdev_service_id);
 		if (cdata.dump_dev_signal) {
 			rte_event_dev_dump(0, stdout);
 			cdata.dump_dev_signal = 0;
@@ -294,7 +296,7 @@ worker(void *arg)
 	while (!fdata->done) {
 		uint16_t i;

-		schedule_devices(dev_id, lcore_id);
+		schedule_devices(lcore_id);

 		if (!fdata->worker_core[lcore_id]) {
 			rte_pause();
@@ -839,6 +841,14 @@ setup_eventdev(struct prod_data *prod_data,
 	*cons_data = (struct cons_data){.dev_id = dev_id,
 					.port_id = i };

+	ret = rte_event_dev_service_id_get(dev_id,
+				&fdata->evdev_service_id);
+	if (ret != -ESRCH && ret != 0) {
+		printf("Error getting the service ID for sw eventdev\n");
+		return -1;
+	}
+	rte_service_runstate_set(fdata->evdev_service_id, 1);
+	rte_service_set_runstate_mapped_check(fdata->evdev_service_id, 0);
 	if (rte_event_dev_start(dev_id) < 0) {
 		printf("Error starting eventdev\n");
 		return -1;
--
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v5 6/7] eventdev: remove eventdev schedule API
  2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (3 preceding siblings ...)
  2017-10-25 14:50   ` [PATCH v5 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
@ 2017-10-25 14:50   ` Pavan Nikhilesh
  2017-10-25 14:50   ` [PATCH v5 7/7] doc: update software event device Pavan Nikhilesh
  2017-10-26 22:47   ` [PATCH v5 1/7] eventdev: add API to get service id Thomas Monjalon
  6 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 14:50 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Nikhilesh

Remove the eventdev schedule API and require the SW driver to use the
service core feature for event scheduling.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 drivers/event/dpaa2/dpaa2_eventdev.c       |  1 -
 drivers/event/octeontx/ssovf_evdev.c       |  1 -
 drivers/event/skeleton/skeleton_eventdev.c |  2 --
 drivers/event/sw/sw_evdev.c                | 13 +++++--------
 lib/librte_eventdev/rte_eventdev.h         | 31 ++++--------------------------
 5 files changed, 9 insertions(+), 39 deletions(-)

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index d8f5f7d..45e2ebc 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -785,7 +785,6 @@ dpaa2_eventdev_create(const char *name)
 	}
 
 	eventdev->dev_ops       = &dpaa2_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = dpaa2_eventdev_enqueue;
 	eventdev->enqueue_burst = dpaa2_eventdev_enqueue_burst;
 	eventdev->enqueue_new_burst = dpaa2_eventdev_enqueue_burst;
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index 7bdc85d..cfbd958 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -157,7 +157,6 @@ ssovf_fastpath_fns_set(struct rte_eventdev *dev)
 {
 	struct ssovf_evdev *edev = ssovf_pmd_priv(dev);
 
-	dev->schedule      = NULL;
 	dev->enqueue       = ssows_enq;
 	dev->enqueue_burst = ssows_enq_burst;
 	dev->enqueue_new_burst = ssows_enq_new_burst;
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
index bcd2055..4d1a1da 100644
--- a/drivers/event/skeleton/skeleton_eventdev.c
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -375,7 +375,6 @@ skeleton_eventdev_init(struct rte_eventdev *eventdev)
 	PMD_DRV_FUNC_TRACE();
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
@@ -466,7 +465,6 @@ skeleton_eventdev_create(const char *name, int socket_id)
 	}
 
 	eventdev->dev_ops       = &skeleton_eventdev_ops;
-	eventdev->schedule      = NULL;
 	eventdev->enqueue       = skeleton_eventdev_enqueue;
 	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
 	eventdev->dequeue       = skeleton_eventdev_dequeue;
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 92fd07b..178f169 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -615,10 +615,14 @@ sw_start(struct rte_eventdev *dev)
 	unsigned int i, j;
 	struct sw_evdev *sw = sw_pmd_priv(dev);
 
+	rte_service_component_runstate_set(sw->service_id, 1);
+
 	/* check a service core is mapped to this service */
-	if (!rte_service_runstate_get(sw->service_id))
+	if (!rte_service_runstate_get(sw->service_id)) {
 		SW_LOG_ERR("Warning: No Service core enabled on service %s\n",
 				sw->service_name);
+		return -ENOENT;
+	}
 
 	/* check all ports are set up */
 	for (i = 0; i < sw->port_count; i++)
@@ -833,7 +837,6 @@ sw_probe(struct rte_vdev_device *vdev)
 	dev->enqueue_forward_burst = sw_event_enqueue_burst;
 	dev->dequeue = sw_event_dequeue;
 	dev->dequeue_burst = sw_event_dequeue_burst;
-	dev->schedule = sw_event_schedule;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -861,12 +864,6 @@ sw_probe(struct rte_vdev_device *vdev)
 		return -ENOEXEC;
 	}
 
-	ret = rte_service_component_runstate_set(sw->service_id, 1);
-	if (ret) {
-		SW_LOG_ERR("Unable to enable service component");
-		return -ENOEXEC;
-	}
-
 	dev->data->service_inited = 1;
 	dev->data->service_id = sw->service_id;
 
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index a7973a9..f1949ff 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -218,10 +218,10 @@
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
  * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic is located in rte_event_schedule().
+ * scheduler logic needs a dedicated service core for scheduling.
  * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
  * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls rte_event_schedule().
+ * thread that repeatedly calls the software-specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}
@@ -263,9 +263,9 @@ struct rte_mbuf; /* we just use mbuf pointers; no need to include rte_mbuf.h */
  * In distributed scheduling mode, event scheduling happens in HW or
  * rte_event_dequeue_burst() or the combination of these two.
  * If the flag is not set then eventdev is centralized and thus needs a
- * dedicated scheduling thread that repeatedly calls rte_event_schedule().
+ * dedicated service core that acts as a scheduling thread.
  *
- * @see rte_event_schedule(), rte_event_dequeue_burst()
+ * @see rte_event_dequeue_burst()
  */
 #define RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES     (1ULL << 3)
 /**< Event device is capable of enqueuing events of any type to any queue.
@@ -1052,9 +1052,6 @@ struct rte_eventdev_driver;
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
-typedef void (*event_schedule_t)(struct rte_eventdev *dev);
-/**< @internal Schedule one or more events in the event dev. */
-
 typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
 /**< @internal Enqueue event on port of a device */
 
@@ -1118,8 +1115,6 @@ struct rte_eventdev_data {
 
 /** @internal The data structure associated with each event device. */
 struct rte_eventdev {
-	event_schedule_t schedule;
-	/**< Pointer to PMD schedule function. */
 	event_enqueue_t enqueue;
 	/**< Pointer to PMD enqueue function. */
 	event_enqueue_burst_t enqueue_burst;
@@ -1148,24 +1143,6 @@ struct rte_eventdev {
 extern struct rte_eventdev *rte_eventdevs;
 /** @internal The pool of rte_eventdev structures. */
 
-
-/**
- * Schedule one or more events in the event dev.
- *
- * An event dev implementation may define this is a NOOP, for instance if
- * the event dev performs its scheduling in hardware.
- *
- * @param dev_id
- *   The identifier of the device.
- */
-static inline void
-rte_event_schedule(uint8_t dev_id)
-{
-	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-	if (*dev->schedule)
-		(*dev->schedule)(dev);
-}
-
 static __rte_always_inline uint16_t
 __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
 			const struct rte_event ev[], uint16_t nb_events,
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* [PATCH v5 7/7] doc: update software event device
  2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (4 preceding siblings ...)
  2017-10-25 14:50   ` [PATCH v5 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
@ 2017-10-25 14:50   ` Pavan Nikhilesh
  2017-10-26 22:47   ` [PATCH v5 1/7] eventdev: add API to get service id Thomas Monjalon
  6 siblings, 0 replies; 47+ messages in thread
From: Pavan Nikhilesh @ 2017-10-25 14:50 UTC (permalink / raw)
  To: jerin.jacob, hemant.agrawal, harry.van.haaren; +Cc: dev, Pavan Nikhilesh

Update software event device documentation to include use of service
cores for event distribution.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/eventdevs/sw.rst       | 13 ++++++-------
 doc/guides/tools/testeventdev.rst | 10 ++--------
 2 files changed, 8 insertions(+), 15 deletions(-)

diff --git a/doc/guides/eventdevs/sw.rst b/doc/guides/eventdevs/sw.rst
index a3e6624..ec49b3b 100644
--- a/doc/guides/eventdevs/sw.rst
+++ b/doc/guides/eventdevs/sw.rst
@@ -78,9 +78,9 @@ Scheduling Quanta
 ~~~~~~~~~~~~~~~~~
 
 The scheduling quanta sets the number of events that the device attempts to
-schedule before returning to the application from the ``rte_event_schedule()``
-function. Note that is a *hint* only, and that fewer or more events may be
-scheduled in a given iteration.
+schedule in a single schedule call performed by the service core. Note that
+this is a *hint* only, and fewer or more events may be scheduled in a given
+iteration.
 
 The scheduling quanta can be set using a string argument to the vdev
 create call:
@@ -140,10 +140,9 @@ eventdev.
 Distributed Scheduler
 ~~~~~~~~~~~~~~~~~~~~~
 
-The software eventdev is a centralized scheduler, requiring the
-``rte_event_schedule()`` function to be called by a CPU core to perform the
-required event distribution. This is not really a limitation but rather a
-design decision.
+The software eventdev is a centralized scheduler, requiring a service core to
+perform the required event distribution. This is not really a limitation but
+rather a design decision.
 
 The ``RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED`` flag is not set in the
 ``event_dev_cap`` field of the ``rte_event_dev_info`` struct for the software
diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst
index 34b1c31..5aa2237 100644
--- a/doc/guides/tools/testeventdev.rst
+++ b/doc/guides/tools/testeventdev.rst
@@ -106,10 +106,6 @@ The following are the application command-line options:
 
         Set the number of mbufs to be allocated from the mempool.
 
-* ``--slcore <n>``
-
-        Set the scheduler lcore id.(Valid when eventdev is not RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capable)
-
 * ``--plcores <CORELIST>``
 
         Set the list of cores to be used as producers.
@@ -362,7 +358,6 @@ Supported application command line options are following::
         --test
         --socket_id
         --pool_sz
-        --slcore (Valid when eventdev is not RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capable)
         --plcores
         --wlcores
         --stlist
@@ -379,8 +374,8 @@ Example command to run perf queue test:
 
 .. code-block:: console
 
-   sudo build/app/dpdk-test-eventdev --vdev=event_sw0 -- \
-        --test=perf_queue --slcore=1 --plcores=2 --wlcore=3 --stlist=p --nb_pkts=0
+   sudo build/app/dpdk-test-eventdev -c 0xf -s 0x1 --vdev=event_sw0 -- \
+        --test=perf_queue --plcores=2 --wlcore=3 --stlist=p --nb_pkts=0
 
 
 PERF_ATQ Test
@@ -441,7 +436,6 @@ Supported application command line options are following::
         --test
         --socket_id
         --pool_sz
-        --slcore (Valid when eventdev is not RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capable)
         --plcores
         --wlcores
         --stlist
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 47+ messages in thread

* Re: [PATCH v5 1/7] eventdev: add API to get service id
  2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
                     ` (5 preceding siblings ...)
  2017-10-25 14:50   ` [PATCH v5 7/7] doc: update software event device Pavan Nikhilesh
@ 2017-10-26 22:47   ` Thomas Monjalon
  6 siblings, 0 replies; 47+ messages in thread
From: Thomas Monjalon @ 2017-10-26 22:47 UTC (permalink / raw)
  To: Pavan Nikhilesh; +Cc: dev, jerin.jacob, hemant.agrawal, harry.van.haaren

25/10/2017 16:50, Pavan Nikhilesh:
> In case of sw event device the scheduling can be done on a service core
> using the service registered at the time of probe.
> This patch adds a helper function to get the service id that can be used
> by the application to assign a lcore for the service to run on.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@caviumnetworks.com>
> Acked-by: Harry van Haaren <harry.van.haaren@intel.com>

Series applied, thanks

^ permalink raw reply	[flat|nested] 47+ messages in thread

end of thread, other threads:[~2017-10-26 22:47 UTC | newest]

Thread overview: 47+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-10-11  9:09 [PATCH 0/7] eventdev: remove event schedule API for SW driver Pavan Nikhilesh
2017-10-11  9:09 ` [PATCH 1/7] eventdev: add API to get service id Pavan Nikhilesh
2017-10-11  9:09 ` [PATCH 2/7] event/sw: extend service capability Pavan Nikhilesh
2017-10-11  9:09 ` [PATCH 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
2017-10-11  9:09 ` [PATCH 4/7] test/eventdev: update test to use service core Pavan Nikhilesh
2017-10-11  9:09 ` [PATCH 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
2017-10-11  9:09 ` [PATCH 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
2017-10-11  9:09 ` [PATCH 7/7] doc: update software event device Pavan Nikhilesh
2017-10-12 12:29   ` Mcnamara, John
2017-10-13 16:36 ` [PATCH v2 1/7] eventdev: add API to get service id Pavan Nikhilesh
2017-10-13 16:36   ` [PATCH v2 2/7] event/sw: extend service capability Pavan Nikhilesh
2017-10-20 10:30     ` Van Haaren, Harry
2017-10-13 16:36   ` [PATCH v2 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
2017-10-21 17:01     ` Jerin Jacob
2017-10-13 16:36   ` [PATCH v2 4/7] test/eventdev: update test to use service core Pavan Nikhilesh
2017-10-13 16:36   ` [PATCH v2 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
2017-10-23 17:17     ` Van Haaren, Harry
2017-10-23 17:51       ` Pavan Nikhilesh Bhagavatula
2017-10-13 16:36   ` [PATCH v2 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
2017-10-21 17:07     ` Jerin Jacob
2017-10-13 16:36   ` [PATCH v2 7/7] doc: update software event device Pavan Nikhilesh
2017-10-20 10:21   ` [PATCH v2 1/7] eventdev: add API to get service id Van Haaren, Harry
2017-10-20 11:11     ` Pavan Nikhilesh Bhagavatula
2017-10-22  9:16 ` [PATCH v3 " Pavan Nikhilesh
2017-10-22  9:16   ` [PATCH v3 2/7] event/sw: extend service capability Pavan Nikhilesh
2017-10-22  9:16   ` [PATCH v3 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
2017-10-22  9:16   ` [PATCH v3 4/7] test/eventdev: update test to use service core Pavan Nikhilesh
2017-10-22  9:16   ` [PATCH v3 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
2017-10-22  9:16   ` [PATCH v3 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
2017-10-22  9:16   ` [PATCH v3 7/7] doc: update software event device Pavan Nikhilesh
2017-10-25 11:59 ` [PATCH v4 1/7] eventdev: add API to get service id Pavan Nikhilesh
2017-10-25 11:59   ` [PATCH v4 2/7] event/sw: extend service capability Pavan Nikhilesh
2017-10-25 11:59   ` [PATCH v4 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
2017-10-25 11:59   ` [PATCH v4 4/7] test/eventdev: update test to use service iter Pavan Nikhilesh
2017-10-25 14:24     ` Van Haaren, Harry
2017-10-25 11:59   ` [PATCH v4 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
2017-10-25 14:24     ` Van Haaren, Harry
2017-10-25 11:59   ` [PATCH v4 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
2017-10-25 11:59   ` [PATCH v4 7/7] doc: update software event device Pavan Nikhilesh
2017-10-25 14:50 ` [PATCH v5 1/7] eventdev: add API to get service id Pavan Nikhilesh
2017-10-25 14:50   ` [PATCH v5 2/7] event/sw: extend service capability Pavan Nikhilesh
2017-10-25 14:50   ` [PATCH v5 3/7] app/test-eventdev: update app to use service cores Pavan Nikhilesh
2017-10-25 14:50   ` [PATCH v5 4/7] test/eventdev: update test to use service iter Pavan Nikhilesh
2017-10-25 14:50   ` [PATCH v5 5/7] examples/eventdev: update sample app to use service Pavan Nikhilesh
2017-10-25 14:50   ` [PATCH v5 6/7] eventdev: remove eventdev schedule API Pavan Nikhilesh
2017-10-25 14:50   ` [PATCH v5 7/7] doc: update software event device Pavan Nikhilesh
2017-10-26 22:47   ` [PATCH v5 1/7] eventdev: add API to get service id Thomas Monjalon
