* [PATCH 0/4] libeventdev API and northbound implementation
@ 2016-11-18  5:44 Jerin Jacob
  2016-11-18  5:44 ` [PATCH 1/4] eventdev: introduce event driven programming model Jerin Jacob
                   ` (4 more replies)
  0 siblings, 5 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-18  5:44 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads,
	Jerin Jacob

As previously discussed in RFC v1 [1], RFC v2 [2], with changes
described in [3] (also pasted below), here is the first non-draft series
for this new API.

[1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
[2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
[3] http://dpdk.org/ml/archives/dev/2016-October/048196.html

Changes since RFC v2:

- Updated the documentation to define the need for this library [Jerin]
- Added RTE_EVENT_QUEUE_CFG_*_ONLY configuration parameters in
  struct rte_event_queue_conf to enable an optimized SW implementation [Bruce]
- Introduced RTE_EVENT_OP* ops [Bruce]
- Added nb_event_queue_flows, nb_event_port_dequeue_depth and
  nb_event_port_enqueue_depth to rte_event_dev_configure(), in line with the
  ethdev and crypto libraries [Jerin]
- Removed rte_event_release() and replaced it with the RTE_EVENT_OP_RELEASE op
  to reduce the number of fast path APIs; it was also redundant [Jerin]
- For better application portability, removed pin_event from rte_event_enqueue
  as it is just a hint and Intel/NXP cannot support it [Jerin]
- Added rte_event_port_links_get() [Jerin]
- Added rte_event_dev_dump() [Harry]

Notes:

- This patch set is checkpatch clean, with the exception that patch
02/04 has one WARNING:MACRO_WITH_FLOW_CONTROL
- Looking forward to getting additional maintainers for libeventdev


Possible next steps:
1) Review this patch set
2) Integrate Intel's SW driver[http://dpdk.org/dev/patchwork/patch/17049/]
3) Review proposed examples/eventdev_pipeline application[http://dpdk.org/dev/patchwork/patch/17053/]
4) Review proposed functional tests[http://dpdk.org/dev/patchwork/patch/17051/]
5) Cavium's HW based eventdev driver

I am planning to work on (3), (4) and (5).

TODO:
1) Example applications for pipelining, packet ingress order maintenance with
ORDERED type and ATOMIC synchronization services.
2) Create user guide


Jerin Jacob (4):
  eventdev: introduce event driven programming model
  eventdev: implement the northbound APIs
  event/skeleton: add skeleton eventdev driver
  app/test: unit test case for eventdev APIs

 MAINTAINERS                                        |    5 +
 app/test/Makefile                                  |    2 +
 app/test/test_eventdev.c                           |  776 +++++++++++
 config/common_base                                 |   14 +
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 drivers/Makefile                                   |    1 +
 drivers/event/Makefile                             |   36 +
 drivers/event/skeleton/Makefile                    |   55 +
 .../skeleton/rte_pmd_skeleton_event_version.map    |    4 +
 drivers/event/skeleton/skeleton_eventdev.c         |  535 ++++++++
 drivers/event/skeleton/skeleton_eventdev.h         |   72 +
 lib/Makefile                                       |    1 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eventdev/Makefile                       |   57 +
 lib/librte_eventdev/rte_eventdev.c                 | 1211 ++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h                 | 1439 ++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_pmd.h             |  504 +++++++
 lib/librte_eventdev/rte_eventdev_version.map       |   39 +
 mk/rte.app.mk                                      |    5 +
 20 files changed, 4759 insertions(+)
 create mode 100644 app/test/test_eventdev.c
 create mode 100644 drivers/event/Makefile
 create mode 100644 drivers/event/skeleton/Makefile
 create mode 100644 drivers/event/skeleton/rte_pmd_skeleton_event_version.map
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.c
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.h
 create mode 100644 lib/librte_eventdev/Makefile
 create mode 100644 lib/librte_eventdev/rte_eventdev.c
 create mode 100644 lib/librte_eventdev/rte_eventdev.h
 create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
 create mode 100644 lib/librte_eventdev/rte_eventdev_version.map

-- 
2.5.5


* [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-18  5:44 [PATCH 0/4] libeventdev API and northbound implementation Jerin Jacob
@ 2016-11-18  5:44 ` Jerin Jacob
  2016-11-23 18:39   ` Thomas Monjalon
                     ` (2 more replies)
  2016-11-18  5:45 ` [PATCH 2/4] eventdev: implement the northbound APIs Jerin Jacob
                   ` (3 subsequent siblings)
  4 siblings, 3 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-18  5:44 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads,
	Jerin Jacob

In a polling model, lcores poll ethdev ports and associated
rx queues directly to look for packets. In an event driven model,
by contrast, lcores call a scheduler that selects packets for
them based on programmer-specified criteria. The eventdev library
adds support for the event driven programming model, which offers
applications automatic multicore scaling, dynamic load balancing,
pipelining, packet ingress order maintenance and
synchronization services to simplify application packet processing.

By introducing an event driven programming model, DPDK can support
both polling and event driven programming models for packet processing,
and applications are free to choose whichever model
(or combination of the two) best suits their needs.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 MAINTAINERS                        |    3 +
 doc/api/doxy-api-index.md          |    1 +
 doc/api/doxy-api.conf              |    1 +
 lib/librte_eventdev/rte_eventdev.h | 1439 ++++++++++++++++++++++++++++++++++++
 4 files changed, 1444 insertions(+)
 create mode 100644 lib/librte_eventdev/rte_eventdev.h

diff --git a/MAINTAINERS b/MAINTAINERS
index d6bb8f8..e430ca7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -249,6 +249,9 @@ F: lib/librte_cryptodev/
 F: app/test/test_cryptodev*
 F: examples/l2fwd-crypto/
 
+Eventdev API - EXPERIMENTAL
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+F: lib/librte_eventdev/
 
 Networking Drivers
 ------------------
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 6675f96..28c1329 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -40,6 +40,7 @@ There are many libraries, so their headers may be grouped by topics:
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
   [cryptodev]          (@ref rte_cryptodev.h),
+  [eventdev]           (@ref rte_eventdev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index 9dc7ae5..9841477 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -41,6 +41,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
+                          lib/librte_eventdev \
                           lib/librte_hash \
                           lib/librte_ip_frag \
                           lib/librte_jobstats \
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
new file mode 100644
index 0000000..778d6dc
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -0,0 +1,1439 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Cavium.
+ *   Copyright 2016 Intel Corporation.
+ *   Copyright 2016 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_EVENTDEV_H_
+#define _RTE_EVENTDEV_H_
+
+/**
+ * @file
+ *
+ * RTE Event Device API
+ *
+ * In a polling model, lcores poll ethdev ports and associated rx queues
+ * directly to look for packets. In an event driven model, by contrast, lcores
+ * call a scheduler that selects packets for them based on programmer
+ * specified criteria. The eventdev library adds support for the event driven
+ * programming model, which offers applications automatic multicore scaling,
+ * dynamic load balancing, pipelining, packet ingress order maintenance and
+ * synchronization services to simplify application packet processing.
+ *
+ * The Event Device API is composed of two parts:
+ *
+ * - The application-oriented Event API that includes functions to set up
+ *   an event device (configure it, set up its queues and ports, and start it),
+ *   to establish links between queues and ports, to receive events, and so on.
+ *
+ * - The driver-oriented Event API that exports a function allowing
+ *   an event Poll Mode Driver (PMD) to register itself as
+ *   an event device driver.
+ *
+ * Event device components:
+ *
+ *                     +-----------------+
+ *                     | +-------------+ |
+ *        +-------+    | |    flow 0   | |
+ *        |Packet |    | +-------------+ |
+ *        |event  |    | +-------------+ |
+ *        |       |    | |    flow 1   | |port_link(port0, queue0)
+ *        +-------+    | +-------------+ |     |     +--------+
+ *        +-------+    | +-------------+ o-----v-----o        |dequeue +------+
+ *        |Crypto |    | |    flow n   | |           | event  +------->|Core 0|
+ *        |work   |    | +-------------+ o----+      | port 0 |        |      |
+ *        |done ev|    |  event queue 0  |    |      +--------+        +------+
+ *        +-------+    +-----------------+    |
+ *        +-------+                           |
+ *        |Timer  |    +-----------------+    |      +--------+
+ *        |expiry |    | +-------------+ |    +------o        |dequeue +------+
+ *        |event  |    | |    flow 0   | o-----------o event  +------->|Core 1|
+ *        +-------+    | +-------------+ |      +----o port 1 |        |      |
+ *       Event enqueue | +-------------+ |      |    +--------+        +------+
+ *     o-------------> | |    flow 1   | |      |
+ *        enqueue(     | +-------------+ |      |
+ *        queue_id,    |                 |      |    +--------+        +------+
+ *        flow_id,     | +-------------+ |      |    |        |dequeue |Core 2|
+ *        sched_type,  | |    flow n   | o-----------o event  +------->|      |
+ *        event_type,  | +-------------+ |      |    | port 2 |        +------+
+ *        subev_type,  |  event queue 1  |      |    +--------+
+ *        event)       +-----------------+      |    +--------+
+ *                                              |    |        |dequeue +------+
+ *        +-------+    +-----------------+      |    | event  +------->|Core n|
+ *        |Core   |    | +-------------+ o-----------o port n |        |      |
+ *        |(SW)   |    | |    flow 0   | |      |    +--------+        +--+---+
+ *        |event  |    | +-------------+ |      |                         |
+ *        +-------+    | +-------------+ |      |                         |
+ *            ^        | |    flow 1   | |      |                         |
+ *            |        | +-------------+ o------+                         |
+ *            |        | +-------------+ |                                |
+ *            |        | |    flow n   | |                                |
+ *            |        | +-------------+ |                                |
+ *            |        |  event queue n  |                                |
+ *            |        +-----------------+                                |
+ *            |                                                           |
+ *            +-----------------------------------------------------------+
+ *
+ *
+ *
+ * Event device: A hardware or software-based event scheduler.
+ *
+ * Event: A unit of scheduling that encapsulates a packet or another datatype,
+ * such as an SW-generated event from a core, a crypto work completion
+ * notification or a timer expiry notification, as well as metadata.
+ * The metadata includes the flow ID, scheduling type, event priority,
+ * event_type, sub_event_type etc.
+ *
+ * Event queue: A queue containing events that are scheduled by the event dev.
+ * An event queue contains events of different flows associated with scheduling
+ * types, such as atomic, ordered, or parallel.
+ *
+ * Event port: An application's interface into the event dev for enqueue and
+ * dequeue operations. Each event port can be linked with one or more
+ * event queues for dequeue operations.
+ *
+ * By default, all the functions of the Event Device API exported by a PMD
+ * are lock-free functions which are assumed not to be invoked in parallel on
+ * different logical cores to work on the same target object. For instance,
+ * the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on the same event port. Of course, this function
+ * can be invoked in parallel by different logical cores on different ports.
+ * It is the responsibility of the upper level application to enforce this rule.
+ *
+ * In all functions of the Event API, the Event device is
+ * designated by an integer >= 0 named the device identifier *dev_id*
+ *
+ * At the Event driver level, Event devices are represented by a generic
+ * data structure of type *rte_event_dev*.
+ *
+ * Event devices are dynamically registered during the PCI/SoC device probing
+ * phase performed at EAL initialization time.
+ * When an Event device is being probed, a *rte_event_dev* structure and
+ * a new device identifier are allocated for that device. Then, the
+ * event_dev_init() function supplied by the Event driver matching the probed
+ * device is invoked to properly initialize the device.
+ *
+ * The role of the device init function consists of resetting the hardware or
+ * software event driver implementations.
+ *
+ * If the device init operation is successful, the correspondence between
+ * the device identifier assigned to the new device and its associated
+ * *rte_event_dev* structure is effectively registered.
+ * Otherwise, both the *rte_event_dev* structure and the device identifier are
+ * freed.
+ *
+ * The functions exported by the application Event API to setup a device
+ * designated by its device identifier must be invoked in the following order:
+ *     - rte_event_dev_configure()
+ *     - rte_event_queue_setup()
+ *     - rte_event_port_setup()
+ *     - rte_event_port_link()
+ *     - rte_event_dev_start()
+ *
+ * Then, the application can invoke, in any order, the functions
+ * exported by the Event API to schedule events, dequeue events, enqueue events,
+ * [un]link event queues to/from event ports, and so on.
+ *
+ * An application may use rte_event_[queue/port]_default_conf_get() to get the
+ * default configuration for setting up an event queue or event port, overriding
+ * a few default values as needed, as sketched below.
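+ *
+ * A minimal setup sketch for one event queue and one event port (illustrative
+ * only; error handling is omitted and the device is assumed to have been
+ * configured already with rte_event_dev_configure()):
+ * \code{.c}
+ *	struct rte_event_queue_conf queue_conf;
+ *	struct rte_event_port_conf port_conf;
+ *	struct rte_event_queue_link lk = {
+ *		.queue_id = 0,
+ *		.priority = RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL,
+ *	};
+ *
+ *	rte_event_queue_default_conf_get(dev_id, 0, &queue_conf);
+ *	rte_event_queue_setup(dev_id, 0, &queue_conf);
+ *
+ *	rte_event_port_default_conf_get(dev_id, 0, &port_conf);
+ *	rte_event_port_setup(dev_id, 0, &port_conf);
+ *
+ *	rte_event_port_link(dev_id, 0, &lk, 1);
+ *	rte_event_dev_start(dev_id);
+ * \endcode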
+ *
+ * If the application wants to change the configuration (i.e. call
+ * rte_event_dev_configure(), rte_event_queue_setup(), or
+ * rte_event_port_setup()), it must call rte_event_dev_stop() first to stop the
+ * device and then do the reconfiguration before calling rte_event_dev_start()
+ * again. The schedule, enqueue and dequeue functions should not be invoked
+ * when the device is stopped.
+ *
+ * Finally, an application can close an Event device by invoking the
+ * rte_event_dev_close() function.
+ *
+ * Each function of the application Event API invokes a specific function
+ * of the PMD that controls the target device designated by its device
+ * identifier.
+ *
+ * For this purpose, all device-specific functions of an Event driver are
+ * supplied through a set of pointers contained in a generic structure of type
+ * *event_dev_ops*.
+ * The address of the *event_dev_ops* structure is stored in the *rte_event_dev*
+ * structure by the device init function of the Event driver, which is
+ * invoked during the PCI/SoC device probing phase, as explained earlier.
+ *
+ * In other words, each function of the Event API simply retrieves the
+ * *rte_event_dev* structure associated with the device identifier and
+ * performs an indirect invocation of the corresponding driver function
+ * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
+ *
+ * For performance reasons, the address of the fast-path functions of the
+ * Event driver is not contained in the *event_dev_ops* structure.
+ * Instead, they are directly stored at the beginning of the *rte_event_dev*
+ * structure to avoid an extra indirect memory access during their invocation.
+ *
+ * RTE event device drivers do not use interrupts for enqueue or dequeue
+ * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
+ * functions to applications.
+ *
+ * An event driven application typically has the following fast-path workflow:
+ * \code{.c}
+ *	while (1) {
+ *
+ *		rte_event_schedule(dev_id);
+ *
+ *		rte_event_dequeue(...);
+ *
+ *		(event processing)
+ *
+ *		rte_event_enqueue(...);
+ *	}
+ * \endcode
+ *
+ * The *schedule* operation is intended to do event scheduling, and the
+ * *dequeue* operation returns the scheduled events. An implementation
+ * is free to define the semantics between *schedule* and *dequeue*. For
+ * example, a system based on a hardware scheduler can define its
+ * rte_event_schedule() to be a NOOP, whereas a software scheduler can use
+ * the *schedule* operation to schedule events.
+ *
+ * Events are injected into the event device through the *enqueue* operation by
+ * event producers in the system. Typical event producers are the ethdev
+ * subsystem generating packet events, cores (SW) generating events based
+ * on different stages of application processing, cryptodev generating
+ * crypto work completion notifications, etc.
+ *
+ * The *dequeue* operation gets one or more events from the event ports.
+ * The application processes the events and, at an intermediate stage of event
+ * processing, sends them to the downstream event queue through
+ * rte_event_enqueue(). At the final stage, the application may hand them to a
+ * different subsystem, e.g. ethdev, to send the packet/event on the wire using
+ * the rte_eth_tx_burst() API.
+ *
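+ * A sketch of one worker stage following that workflow (illustrative only;
+ * *last_stage*, *tx_port* and *tx_queue* are hypothetical application
+ * variables):
+ * \code{.c}
+ *	struct rte_event ev;
+ *
+ *	while (1) {
+ *		rte_event_schedule(dev_id);
+ *		if (!rte_event_dequeue(dev_id, port_id, &ev, 0))
+ *			continue;
+ *
+ *		(event processing)
+ *
+ *		if (!last_stage) {
+ *			ev.operation = RTE_EVENT_OP_FORWARD;
+ *			ev.queue_id++; /* next stage's event queue */
+ *			rte_event_enqueue(dev_id, port_id, &ev);
+ *		} else {
+ *			rte_eth_tx_burst(tx_port, tx_queue, &ev.mbuf, 1);
+ *		}
+ *	}
+ * \endcode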
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdbool.h>
+
+#include <rte_pci.h>
+#include <rte_dev.h>
+#include <rte_memory.h>
+#include <rte_errno.h>
+
+#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
+/**< Skeleton event device PMD name */
+
+/**
+ * Get the total number of event devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   The total number of usable event devices.
+ */
+uint8_t
+rte_event_dev_count(void);
+
+/**
+ * Get the device identifier for the named event device.
+ *
+ * @param name
+ *   Event device name to select the event device identifier.
+ *
+ * @return
+ *   Returns event device identifier on success.
+ *   - <0: Failure to find named event device.
+ */
+int
+rte_event_dev_get_dev_id(const char *name);
+
+/**
+ * Return the NUMA socket to which a device is connected.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   The NUMA socket id to which the device is connected or
+ *   a default of zero if the socket could not be determined.
+ *   - -EINVAL if the dev_id value is out of range.
+ */
+int
+rte_event_dev_socket_id(uint8_t dev_id);
+
+/* Event device capability bitmap flags */
+#define RTE_EVENT_DEV_CAP_QUEUE_QOS        (1ULL << 0)
+/**< Event scheduling prioritization is based on the priority associated with
+ *  each event queue.
+ *
+ *  \see rte_event_queue_setup(), RTE_EVENT_QUEUE_PRIORITY_NORMAL
+ */
+#define RTE_EVENT_DEV_CAP_EVENT_QOS        (1ULL << 1)
+/**< Event scheduling prioritization is based on the priority associated with
+ *  each event. Priority of each event is supplied in *rte_event* structure
+ *  on each enqueue operation.
+ *
+ *  \see rte_event_enqueue()
+ */
+
+/**
+ * Event device information
+ */
+struct rte_event_dev_info {
+	const char *driver_name;	/**< Event driver name */
+	struct rte_pci_device *pci_dev;	/**< PCI information */
+	uint32_t min_dequeue_wait_ns;
+	/**< Minimum supported global dequeue wait delay(ns) by this device */
+	uint32_t max_dequeue_wait_ns;
+	/**< Maximum supported global dequeue wait delay(ns) by this device */
+	uint32_t dequeue_wait_ns;
+	/**< Configured global dequeue wait delay(ns) for this device */
+	uint8_t max_event_queues;
+	/**< Maximum event_queues supported by this device */
+	uint32_t max_event_queue_flows;
+	/**< Maximum supported flows in an event queue by this device*/
+	uint8_t max_event_queue_priority_levels;
+	/**< Maximum number of event queue priority levels by this device.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	 */
+	uint8_t max_event_priority_levels;
+	/**< Maximum number of event priority levels by this device.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
+	 */
+	uint8_t max_event_ports;
+	/**< Maximum number of event ports supported by this device */
+	uint8_t max_event_port_dequeue_depth;
+	/**< Maximum dequeue queue depth for any event port.
+	 * Implementations can schedule N events at a time to an event port.
+	 * A device that does not support bulk dequeue will set this as 1.
+	 */
+	uint32_t max_event_port_enqueue_depth;
+	/**< Maximum enqueue queue depth for any event port. Implementations
+	 * can batch N events at a time to enqueue through event port.
+	 */
+	int32_t max_num_events;
+	/**< A *closed system* event dev has a limit on the number of events it
+	 * can manage at a time. An *open system* event dev does not have a
+	 * limit and will specify this as -1.
+	 */
+	uint32_t event_dev_cap;
+	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+};
+
+/**
+ * Retrieve the contextual information of an event device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param[out] dev_info
+ *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
+ *   contextual information of the device.
+ *
+ * @return
+ *   - 0: Success, driver updates the contextual information of the event device
+ *   - <0: Error code returned by the driver info get function.
+ *
+ */
+int
+rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
+
+/* Event device configuration bitmap flags */
+#define RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT (1ULL << 0)
+/**< Override the global *dequeue_wait_ns* and use per dequeue wait in ns.
+ *  \see rte_event_dequeue_wait_time(), rte_event_dequeue()
+ */
+
+/** Event device configuration structure */
+struct rte_event_dev_config {
+	uint32_t dequeue_wait_ns;
+	/**< rte_event_dequeue() waits for *dequeue_wait_ns* ns on this device.
+	 * This value should be in the range of *min_dequeue_wait_ns* and
+	 * *max_dequeue_wait_ns*, which were previously provided in
+	 * rte_event_dev_info_get()
+	 * \see RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
+	 */
+	int32_t nb_events_limit;
+	/**< Applies to a *closed system* event dev only. This field indicates
+	 * a limit, for ethdev-like devices, on the number of events injected
+	 * into the system, so that core-to-core events are not overwhelmed.
+	 * This value cannot exceed the *max_num_events* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint8_t nb_event_queues;
+	/**< Number of event queues to configure on this device.
+	 * This value cannot exceed the *max_event_queues* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint8_t nb_event_ports;
+	/**< Number of event ports to configure on this device.
+	 * This value cannot exceed the *max_event_ports* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint32_t nb_event_queue_flows;
+	/**< Number of flows for any event queue on this device.
+	 * This value cannot exceed the *max_event_queue_flows* which was
+	 * previously provided in rte_event_dev_info_get()
+	 */
+	uint8_t nb_event_port_dequeue_depth;
+	/**< Dequeue depth for any event port on this device.
+	 * This value cannot exceed the *max_event_port_dequeue_depth*
+	 * which was previously provided in rte_event_dev_info_get()
+	 * \see rte_event_port_setup()
+	 */
+	uint32_t nb_event_port_enqueue_depth;
+	/**< Enqueue depth for any event port on this device.
+	 * This value cannot exceed the *max_event_port_enqueue_depth*
+	 * which was previously provided in rte_event_dev_info_get()
+	 * \see rte_event_port_setup()
+	 */
+	uint32_t event_dev_cfg;
+	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+};
+
+/**
+ * Configure an event device.
+ *
+ * This function must be invoked before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * The caller may use rte_event_dev_info_get() to get the capabilities and
+ * resources available for this event device.
+ *
+ * @param dev_id
+ *   The identifier of the device to configure.
+ * @param dev_conf
+ *   The event device configuration structure.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+int
+rte_event_dev_configure(uint8_t dev_id, struct rte_event_dev_config *dev_conf);
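+
+/*
+ * An illustrative configuration sketch (not part of the API): query the device
+ * capabilities with rte_event_dev_info_get() and configure the device within
+ * the reported limits. The single queue/port counts are assumptions.
+ *
+ * \code{.c}
+ *	struct rte_event_dev_info info;
+ *	struct rte_event_dev_config config = {0};
+ *
+ *	rte_event_dev_info_get(dev_id, &info);
+ *	config.dequeue_wait_ns = info.min_dequeue_wait_ns;
+ *	config.nb_events_limit = info.max_num_events;
+ *	config.nb_event_queues = 1;
+ *	config.nb_event_ports = 1;
+ *	config.nb_event_queue_flows = info.max_event_queue_flows;
+ *	config.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
+ *	config.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
+ *	rte_event_dev_configure(dev_id, &config);
+ * \endcode
+ */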
+
+
+/* Event queue specific APIs */
+
+#define RTE_EVENT_QUEUE_PRIORITY_HIGHEST   0
+/**< Highest event queue priority */
+#define RTE_EVENT_QUEUE_PRIORITY_NORMAL    128
+/**< Normal event queue priority */
+#define RTE_EVENT_QUEUE_PRIORITY_LOWEST    255
+/**< Lowest event queue priority */
+
+/* Event queue configuration bitmap flags */
+#define RTE_EVENT_QUEUE_CFG_DEFAULT            (0)
+/**< Default value of *event_queue_cfg* when rte_event_queue_setup() is invoked
+ * with queue_conf == NULL
+ *
+ * \see rte_event_queue_setup()
+ */
+#define RTE_EVENT_QUEUE_CFG_TYPE_MASK          (3ULL << 0)
+/**< Mask for event queue schedule type configuration request */
+#define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (0ULL << 0)
+/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
+ *
+ * \see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
+ * \see rte_event_enqueue()
+ */
+#define RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY        (1ULL << 0)
+/**< Allow only ATOMIC schedule type enqueue
+ *
+ * The rte_event_enqueue() result is undefined if the queue is configured as
+ * ATOMIC only and sched_type != RTE_SCHED_TYPE_ATOMIC
+ *
+ * \see RTE_SCHED_TYPE_ATOMIC, rte_event_enqueue()
+ */
+#define RTE_EVENT_QUEUE_CFG_ORDERED_ONLY       (2ULL << 0)
+/**< Allow only ORDERED schedule type enqueue
+ *
+ * The rte_event_enqueue() result is undefined if the queue is configured as
+ * ORDERED only and sched_type != RTE_SCHED_TYPE_ORDERED
+ *
+ * \see RTE_SCHED_TYPE_ORDERED, rte_event_enqueue()
+ */
+#define RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY      (3ULL << 0)
+/**< Allow only PARALLEL schedule type enqueue
+ *
+ * The rte_event_enqueue() result is undefined if the queue is configured as
+ * PARALLEL only and sched_type != RTE_SCHED_TYPE_PARALLEL
+ *
+ * \see RTE_SCHED_TYPE_PARALLEL, rte_event_enqueue()
+ */
+#define RTE_EVENT_QUEUE_CFG_SINGLE_CONSUMER    (1ULL << 2)
+/**< This event queue links only to a single event port.
+ *
+ *  \see rte_event_port_setup(), rte_event_port_link()
+ */
+
+/** Event queue configuration structure */
+struct rte_event_queue_conf {
+	uint32_t nb_atomic_flows;
+	/**< The maximum number of active flows this queue can track at any
+	 * given time. The value must be in the range of
+	 * [1, nb_event_queue_flows], which was previously supplied to
+	 * rte_event_dev_configure().
+	 */
+	uint32_t nb_atomic_order_sequences;
+	/**< The maximum number of outstanding events waiting to be
+	 * reordered by this queue. In other words, the number of entries in
+	 * this queue's reorder buffer. When the number of events in the
+	 * reorder buffer reaches *nb_atomic_order_sequences*, the
+	 * scheduler cannot schedule the events from this queue and an invalid
+	 * event will be returned from dequeue until one or more entries are
+	 * freed up/released.
+	 * The value must be in the range of [1, nb_event_queue_flows],
+	 * which was previously supplied to rte_event_dev_configure().
+	 */
+	uint32_t event_queue_cfg; /**< Queue config flags(EVENT_QUEUE_CFG_) */
+	uint8_t priority;
+	/**< Priority for this event queue relative to other event queues.
+	 * The requested priority should be in the range of
+	 * [RTE_EVENT_QUEUE_PRIORITY_HIGHEST, RTE_EVENT_QUEUE_PRIORITY_LOWEST].
+	 * The implementation shall normalize the requested priority to the
+	 * event device supported priority value.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	 */
+};
+
+/**
+ * Retrieve the default configuration information of an event queue designated
+ * by its *queue_id* from the event driver for an event device.
+ *
+ * This function is intended to be used in conjunction with
+ * rte_event_queue_setup(), where the caller needs to set up the queue by
+ * overriding a few default values.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the event queue to get the configuration information.
+ *   The value must be in the range [0, nb_event_queues - 1]
+ *   previously supplied to rte_event_dev_configure().
+ * @param[out] queue_conf
+ *   The pointer to the default event queue configuration data.
+ * @return
+ *   - 0: Success, driver updates the default event queue configuration data.
+ *   - <0: Error code returned by the driver info get function.
+ *
+ * \see rte_event_queue_setup()
+ *
+ */
+int
+rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
+				 struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Allocate and set up an event queue for an event device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the event queue to setup. The value must be in the range
+ *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
+ * @param queue_conf
+ *   The pointer to the configuration data to be used for the event queue.
+ *   NULL value is allowed, in which case the default configuration is used.
+ *
+ * \see rte_event_queue_default_conf_get()
+ *
+ * @return
+ *   - 0: Success, event queue correctly set up.
+ *   - <0: event queue configuration failed
+ */
+int
+rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
+		      struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Get the number of event queues on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @return
+ *   - The number of configured event queues
+ */
+uint8_t
+rte_event_queue_count(uint8_t dev_id);
+
+/**
+ * Get the priority of the event queue on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param queue_id
+ *   Event queue identifier.
+ * @return
+ *   - If the device has the RTE_EVENT_DEV_CAP_QUEUE_QOS capability, the
+ *    configured priority of the event queue in the
+ *    [RTE_EVENT_QUEUE_PRIORITY_HIGHEST, RTE_EVENT_QUEUE_PRIORITY_LOWEST] range;
+ *    otherwise the value RTE_EVENT_QUEUE_PRIORITY_NORMAL
+ */
+uint8_t
+rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id);
+
+/* Event port specific APIs */
+
+/** Event port configuration structure */
+struct rte_event_port_conf {
+	int32_t new_event_threshold;
+	/**< A backpressure threshold for new event enqueues on this port.
+	 * Use for a *closed system* event dev where event capacity is limited,
+	 * and cannot exceed the capacity of the event dev.
+	 * Configuring ports with different thresholds can make higher priority
+	 * traffic less likely to be backpressured.
+	 * For example, a port used to inject NIC Rx packets into the event dev
+	 * can have a lower threshold so as not to overwhelm the device,
+	 * while ports used for worker pools can have a higher threshold.
+	 * This value cannot exceed the *nb_events_limit*
+	 * which was previously supplied to rte_event_dev_configure()
+	 */
+	uint8_t dequeue_depth;
+	/**< Configure the number of bulk dequeues for this event port.
+	 * This value cannot exceed the *nb_event_port_dequeue_depth*
+	 * which was previously supplied to rte_event_dev_configure()
+	 */
+	uint8_t enqueue_depth;
+	/**< Configure the number of bulk enqueues for this event port.
+	 * This value cannot exceed the *nb_event_port_enqueue_depth*
+	 * which was previously supplied to rte_event_dev_configure()
+	 */
+};
+
+/**
+ * Retrieve the default configuration information of an event port designated
+ * by its *port_id* from the event driver for an event device.
+ *
+ * This function is intended to be used in conjunction with
+ * rte_event_port_setup(), where the caller needs to set up the port by
+ * overriding a few default values.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The index of the event port to get the configuration information.
+ *   The value must be in the range [0, nb_event_ports - 1]
+ *   previously supplied to rte_event_dev_configure().
+ * @param[out] port_conf
+ *   The pointer to the default event port configuration data
+ * @return
+ *   - 0: Success, driver updates the default event port configuration data.
+ *   - <0: Error code returned by the driver info get function.
+ *
+ * \see rte_event_port_setup()
+ *
+ */
+int
+rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
+				struct rte_event_port_conf *port_conf);
+
+/**
+ * Allocate and set up an event port for an event device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The index of the event port to setup. The value must be in the range
+ *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ * @param port_conf
+ *   The pointer to the configuration data to be used for the event port.
+ *   NULL value is allowed, in which case the default configuration is used.
+ *
+ * \see rte_event_port_default_conf_get()
+ *
+ * @return
+ *   - 0: Success, event port correctly set up.
+ *   - <0: Port configuration failed
+ *   - (-EDQUOT) Quota exceeded (the application tried to link a queue
+ *   configured with RTE_EVENT_QUEUE_CFG_SINGLE_CONSUMER to more than one
+ *   event port)
+ */
+int
+rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
+		     struct rte_event_port_conf *port_conf);
+
+/**
+ * Get the dequeue depth configured for the event port designated by its
+ * *port_id* on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param port_id
+ *   Event port identifier.
+ * @return
+ *   - The configured dequeue depth
+ *
+ * \see rte_event_dequeue_burst()
+ */
+uint8_t
+rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id);
+
+/**
+ * Get the enqueue depth configured for the event port designated by its
+ * *port_id* on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param port_id
+ *   Event port identifier.
+ * @return
+ *   - The configured enqueue depth
+ *
+ * \see rte_event_enqueue_burst()
+ */
+uint8_t
+rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id);
+
+/**
+ * Get the number of ports on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @return
+ *   - The number of configured ports
+ */
+uint8_t
+rte_event_port_count(uint8_t dev_id);
+
+/**
+ * Start an event device.
+ *
+ * The device start step is the last one and consists of setting the event
+ * queues to start accepting events and scheduling them to event ports.
+ *
+ * On success, all basic functions exported by the API (event enqueue,
+ * event dequeue and so on) can be invoked.
+ *
+ * @param dev_id
+ *   Event device identifier
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+int
+rte_event_dev_start(uint8_t dev_id);
+
+/**
+ * Stop an event device. The device can be restarted with a call to
+ * rte_event_dev_start()
+ *
+ * @param dev_id
+ *   Event device identifier.
+ */
+void
+rte_event_dev_stop(uint8_t dev_id);
+
+/**
+ * Close an event device. The device cannot be restarted!
+ *
+ * @param dev_id
+ *   Event device identifier
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ *  - (-EAGAIN) if device is busy
+ */
+int
+rte_event_dev_close(uint8_t dev_id);
+
+/* Scheduler type definitions */
+#define RTE_SCHED_TYPE_ORDERED          0
+/**< Ordered scheduling
+ *
+ * Events from an ordered flow of an event queue can be scheduled to multiple
+ * ports for concurrent processing while maintaining the original event order.
+ * This scheme enables the user to achieve high single flow throughput by
+ * avoiding SW synchronization for ordering between ports which are bound to
+ * cores.
+ *
+ * The source flow ordering from an event queue is maintained when events are
+ * enqueued to their destination queue within the same ordered flow context.
+ * An event port holds the context until the application calls
+ * rte_event_dequeue() from the same port, which implicitly releases the
+ * context. The user may allow the scheduler to release the context earlier
+ * than that by invoking rte_event_enqueue() with the RTE_EVENT_OP_RELEASE
+ * operation.
+ *
+ * Events from the source queue appear in their original order when dequeued
+ * from a destination queue.
+ * Event ordering is based on the received event(s), but also other
+ * (newly allocated or stored) events are ordered when enqueued within the same
+ * ordered context. Events not enqueued (e.g. released or stored) within the
+ * context are  considered missing from reordering and are skipped at this time
+ * (but can be ordered again within another context).
+ *
+ * \see rte_event_queue_setup(), rte_event_dequeue(), RTE_EVENT_OP_RELEASE
+ */
+
+#define RTE_SCHED_TYPE_ATOMIC           1
+/**< Atomic scheduling
+ *
+ * Events from an atomic flow of an event queue can be scheduled only to a
+ * single port at a time. The port is guaranteed to have exclusive (atomic)
+ * access to the associated flow context, which enables the user to avoid SW
+ * synchronization. Atomic flows also help to maintain event ordering
+ * since only one port at a time can process events from a flow of an
+ * event queue.
+ *
+ * The atomic queue synchronization context is dedicated to the port until the
+ * application calls rte_event_dequeue() from the same port, which implicitly
+ * releases the context. The user may allow the scheduler to release the
+ * context earlier than that by invoking rte_event_enqueue() with the
+ * RTE_EVENT_OP_RELEASE operation.
+ *
+ * \see rte_event_queue_setup(), rte_event_dequeue(), RTE_EVENT_OP_RELEASE
+ */
+
+#define RTE_SCHED_TYPE_PARALLEL         2
+/**< Parallel scheduling
+ *
+ * The scheduler performs priority scheduling, load balancing, etc. functions
+ * but does not provide additional event synchronization or ordering.
+ * It is free to schedule events from a single parallel flow of an event queue
+ * to multiple event ports for concurrent processing.
+ * The application is responsible for flow context synchronization and
+ * event ordering (SW synchronization).
+ *
+ * \see rte_event_queue_setup(), rte_event_dequeue()
+ */
+
+/* Event types to classify the event source */
+#define RTE_EVENT_TYPE_ETHDEV           0x0
+/**< The event generated from ethdev subsystem */
+#define RTE_EVENT_TYPE_CRYPTODEV        0x1
+/**< The event generated from cryptodev subsystem */
+#define RTE_EVENT_TYPE_TIMERDEV         0x2
+/**< The event generated from timerdev subsystem */
+#define RTE_EVENT_TYPE_CORE             0x3
+/**< The event generated from core.
+ * Application may use *sub_event_type* to further classify the event
+ */
+#define RTE_EVENT_TYPE_MAX              0x10
+/**< Maximum number of event types */
+
+/* Event priority */
+#define RTE_EVENT_PRIORITY_HIGHEST      0
+/**< Highest event priority */
+#define RTE_EVENT_PRIORITY_NORMAL       128
+/**< Normal event priority */
+#define RTE_EVENT_PRIORITY_LOWEST       255
+/**< Lowest event priority */
+
+/* Event enqueue operations */
+#define RTE_EVENT_OP_NEW                0
+/**< New event without previous context */
+#define RTE_EVENT_OP_FORWARD            1
+/**< Re-enqueue previously dequeued event */
+#define RTE_EVENT_OP_RELEASE            2
+/**
+ * Release the flow context associated with the schedule type.
+ *
+ * If the current flow's scheduling type is *RTE_SCHED_TYPE_ATOMIC*
+ * then this operation hints the scheduler that the user has completed critical
+ * section processing in the current atomic context.
+ * The scheduler is now allowed to schedule events from the same flow from
+ * an event queue to another port. However, the context may still be held
+ * until the next rte_event_dequeue() or rte_event_dequeue_burst() call; this
+ * call allows but does not force the scheduler to release the context early.
+ *
+ * Early atomic context release may increase parallelism and thus system
+ * performance, but the user needs to carefully design the split into critical
+ * vs non-critical sections.
+ *
+ * If the current flow's scheduling type is *RTE_SCHED_TYPE_ORDERED*
+ * then this operation hints the scheduler that the user has done all that is
+ * needed to maintain event order in the current ordered context.
+ * The scheduler is allowed to release the ordered context of this port and
+ * avoid reordering any following enqueues.
+ *
+ * Early ordered context release may increase parallelism and thus system
+ * performance.
+ *
+ * If the current flow's scheduling type is *RTE_SCHED_TYPE_PARALLEL*
+ * or no scheduling context is held then this operation may be a NOOP,
+ * depending on the implementation.
+ *
+ */
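+
+/*
+ * An illustrative sketch (not part of the API): release the atomic/ordered
+ * context held by a port as soon as the critical section is done, instead of
+ * waiting for the next dequeue; *ev* is the previously dequeued event.
+ *
+ * \code{.c}
+ *	(critical section: e.g. per-flow state update)
+ *
+ *	ev.operation = RTE_EVENT_OP_RELEASE;
+ *	rte_event_enqueue(dev_id, port_id, &ev);
+ *
+ *	(non-critical processing continues in parallel with other cores)
+ * \endcode
+ */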
+
+/**
+ * The generic *rte_event* structure to hold the event attributes
+ * for dequeue and enqueue operation
+ */
+struct rte_event {
+	/** WORD0 */
+	RTE_STD_C11
+	union {
+		uint64_t event;
+		/** Event attributes for dequeue or enqueue operation */
+		struct {
+			uint32_t flow_id:24;
+			/**< Targeted flow identifier for the enqueue and
+			 * dequeue operation.
+			 * The value must be in the range of
+			 * [0, nb_event_queue_flows - 1] which was
+			 * previously supplied to rte_event_dev_configure().
+			 */
+			uint32_t operation:6;
+			/**< The type of event being enqueued - new/forward/etc
+			 *  This field is not preserved across an instance and
+			 *  is undefined on dequeue.
+			 */
+			uint8_t queue_id:8;
+			/**< Targeted event queue identifier for the enqueue or
+			 * dequeue operation.
+			 * The value must be in the range of
+			 * [0, nb_event_queues - 1] which was previously
+			 * supplied to rte_event_dev_configure().
+			 */
+			uint8_t  sched_type;
+			/**< Scheduler synchronization type (RTE_SCHED_TYPE_)
+			 * associated with flow id on a given event queue
+			 * for the enqueue and dequeue operation.
+			 */
+			uint8_t  event_type;
+			/**< Event type to classify the event source. */
+			uint8_t  sub_event_type;
+			/**< Sub-event types based on the event source.
+			 * \see RTE_EVENT_TYPE_CORE
+			 */
+			uint8_t  priority;
+			/**< Event priority relative to other events in the
+			 * event queue. The requested priority should be in the
+			 * range of  [RTE_EVENT_PRIORITY_HIGHEST,
+			 * RTE_EVENT_PRIORITY_LOWEST].
+			 * The implementation shall normalize the requested
+			 * priority to a supported priority value.
+			 * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS
+			 * capability.
+			 */
+		};
+	};
+	/** WORD1 */
+	RTE_STD_C11
+	union {
+		uintptr_t event_ptr;
+		/**< Opaque event pointer */
+		struct rte_mbuf *mbuf;
+		/**< mbuf pointer if dequeued event is associated with mbuf */
+	};
+};
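+
+/*
+ * An illustrative sketch (not part of the API) of building a new SW-generated
+ * event carrying an mbuf and enqueuing it; *m*, *dev_id* and *port_id* are
+ * hypothetical application variables.
+ *
+ * \code{.c}
+ *	struct rte_event ev = {
+ *		.flow_id = 0,
+ *		.operation = RTE_EVENT_OP_NEW,
+ *		.queue_id = 0,
+ *		.sched_type = RTE_SCHED_TYPE_ATOMIC,
+ *		.event_type = RTE_EVENT_TYPE_CORE,
+ *		.sub_event_type = 0,
+ *		.priority = RTE_EVENT_PRIORITY_NORMAL,
+ *		.mbuf = m,
+ *	};
+ *
+ *	rte_event_enqueue(dev_id, port_id, &ev);
+ * \endcode
+ */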
+
+typedef int (*event_schedule_t)(void);
+/**< @internal Schedule one or more events in the event dev. */
+
+typedef int (*event_enqueue_t)(void *port, struct rte_event *ev);
+/**< @internal Enqueue event on port of a device */
+
+typedef uint16_t (*event_enqueue_burst_t)(void *port, struct rte_event ev[],
+		uint16_t nb_events);
+/**< @internal Enqueue burst of events on port of a device */
+
+typedef bool (*event_dequeue_t)(void *port, struct rte_event *ev,
+		uint64_t wait);
+/**< @internal Dequeue event from port of a device */
+
+typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
+		uint16_t nb_events, uint64_t wait);
+/**< @internal Dequeue burst of events from port of a device */
+
+struct rte_eventdev_driver;
+struct rte_eventdev_ops;
+
+#define RTE_EVENTDEV_NAME_MAX_LEN	(64)
+/**< @internal Max length of name of event PMD */
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_eventdev_data {
+	int socket_id;
+	/**< Socket ID where memory is allocated */
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t nb_queues;
+	/**< Number of event queues. */
+	uint8_t nb_ports;
+	/**< Number of event ports. */
+	void **ports;
+	/**< Array of pointers to ports. */
+	uint8_t *ports_dequeue_depth;
+	/**< Array of port dequeue depth. */
+	uint8_t *ports_enqueue_depth;
+	/**< Array of port enqueue depth. */
+	void **queues;
+	/**< Array of pointers to queues. */
+	uint8_t *queues_prio;
+	/**< Array of queue priority. */
+	uint16_t *links_map;
+	/**< Memory to store queues to port connections. */
+	void *dev_private;
+	/**< PMD-specific private data */
+	uint32_t event_dev_cap;
+	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	struct rte_event_dev_config dev_conf;
+	/**< Configuration applied to device. */
+
+	RTE_STD_C11
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	char name[RTE_EVENTDEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+} __rte_cache_aligned;
+
+
+/** @internal The data structure associated with each event device. */
+struct rte_eventdev {
+	event_schedule_t schedule;
+	/**< Pointer to PMD schedule function. */
+	event_enqueue_t enqueue;
+	/**< Pointer to PMD enqueue function. */
+	event_enqueue_burst_t enqueue_burst;
+	/**< Pointer to PMD enqueue burst function. */
+	event_dequeue_t dequeue;
+	/**< Pointer to PMD dequeue function. */
+	event_dequeue_burst_t dequeue_burst;
+	/**< Pointer to PMD dequeue burst function. */
+
+	struct rte_eventdev_data *data;
+	/**< Pointer to device data */
+	const struct rte_eventdev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;
+	/**< PCI info. supplied by probing */
+	const struct rte_eventdev_driver *driver;
+	/**< Driver for this device */
+
+	RTE_STD_C11
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
+} __rte_cache_aligned;
+
+extern struct rte_eventdev *rte_eventdevs;
+/** @internal The pool of rte_eventdev structures. */
+
+
+/**
+ * Schedule one or more events in the event dev.
+ *
+ * An event dev implementation may define this as a NOOP, for instance if
+ * the event dev performs its scheduling in hardware.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ */
+static inline void
+rte_event_schedule(uint8_t dev_id)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+	if (*dev->schedule)
+		(*dev->schedule)();
+}
+
+/**
+ * Enqueue the event object supplied in the *rte_event* structure on an
+ * event device designated by its *dev_id* through the event port specified by
+ * *port_id*. The event object specifies the event queue on which this
+ * event will be enqueued.
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param ev
+ *   Pointer to struct rte_event
+ *
+ * @return
+ *  - 0 on success
+ *  - <0 on failure. Failure can occur if the event port's output queue is
+ *     backpressured, for instance.
+ */
+static inline int
+rte_event_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	return (*dev->enqueue)(
+			dev->data->ports[port_id], ev);
+}
+
+/**
+ * Enqueue a burst of event objects supplied in *rte_event* structures on an
+ * event device designated by its *dev_id* through the event port specified by
+ * *port_id*. Each event object specifies the event queue on which it
+ * will be enqueued.
+ *
+ * The rte_event_enqueue_burst() function is invoked to enqueue
+ * multiple event objects.
+ * It is the burst variant of rte_event_enqueue() function.
+ *
+ * The *nb_events* parameter is the number of event objects to enqueue which are
+ * supplied in the *ev* array of *rte_event* structure.
+ *
+ * The rte_event_enqueue_burst() function returns the number of
+ * event objects it actually enqueued. A return value equal to *nb_events*
+ * means that all event objects have been enqueued.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   which contain the event object enqueue operations to be processed.
+ * @param nb_events
+ *   The number of event objects to enqueue, typically number of
+ *   rte_event_port_enqueue_depth() available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The
+ *   return value can be less than the value of the *nb_events* parameter when
+ *   the event device's queue is full or if invalid parameters are specified in
+ *   an *rte_event*. If the return value is less than *nb_events*, the remaining
+ *   events at the end of ev[] are not consumed, and the caller has to take
+ *   care of them.
+ *
+ * \see rte_event_enqueue(), rte_event_port_enqueue_depth()
+ */
+static inline uint16_t
+rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
+			uint16_t nb_events)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	return (*dev->enqueue_burst)(
+			dev->data->ports[port_id], ev, nb_events);
+}
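+
+/*
+ * An illustrative sketch (not part of the API): retry until all *n* events
+ * from a hypothetical application array *events* have been enqueued, taking
+ * care of the unconsumed tail on partial enqueues.
+ *
+ * \code{.c}
+ *	uint16_t sent = 0;
+ *
+ *	while (sent < n)
+ *		sent += rte_event_enqueue_burst(dev_id, port_id,
+ *						&events[sent], n - sent);
+ * \endcode
+ */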
+
+/**
+ * Converts nanoseconds to the *wait* value for rte_event_dequeue()
+ *
+ * If the device is configured with the RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag
+ * then an application can use this function to convert a wait value in
+ * nanoseconds to the implementation-specific wait value supplied in
+ * rte_event_dequeue()
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param ns
+ *   Wait time in nanosecond
+ * @param[out] wait_ticks
+ *   Value for the *wait* parameter in rte_event_dequeue() function
+ *
+ * @return
+ *  - 0 on success.
+ *  - <0 on failure.
+ *
+ * \see rte_event_dequeue(), RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
+ * \see rte_event_dev_configure()
+ *
+ */
+extern int
+rte_event_dequeue_wait_time(uint8_t dev_id, uint64_t ns, uint64_t *wait_ticks);
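+
+/*
+ * An illustrative sketch (not part of the API): convert a 10 us timeout to the
+ * implementation-specific wait value and use it on dequeue. Assumes the device
+ * was configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT.
+ *
+ * \code{.c}
+ *	uint64_t wait_ticks;
+ *	struct rte_event ev;
+ *
+ *	rte_event_dequeue_wait_time(dev_id, 10 * 1000, &wait_ticks);
+ *	if (rte_event_dequeue(dev_id, port_id, &ev, wait_ticks))
+ *		(process the event)
+ * \endcode
+ */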
+
+/**
+ * Dequeue an event from the event port specified by *port_id* on the
+ * event device designated by its *dev_id*.
+ *
+ * rte_event_dequeue() does not dictate the specifics of the scheduling
+ * algorithm, as each eventdev driver may have different criteria to schedule
+ * an event. However, in general, from an application perspective the scheduler
+ * may use the following scheme to dispatch an event to the port.
+ *
+ * 1) Selection of the event queue based on
+ *   a) The list of event queues linked to the event port.
+ *   b) If the device has the RTE_EVENT_DEV_CAP_QUEUE_QOS capability, the
+ *   event queue selection from the list is based on the event queue priority,
+ *   relative to other event queues, supplied as *priority* in
+ *   rte_event_queue_setup()
+ *   c) If the device has the RTE_EVENT_DEV_CAP_EVENT_QOS capability, the
+ *   event queue selection from the list is based on the event priority
+ *   supplied as *priority* in rte_event_enqueue_burst()
+ * 2) Selection of the event based on
+ *   a) The number of flows available in the selected event queue.
+ *   b) The schedule type method associated with the event
+ *
+ * On a successful dequeue, the event port holds flow id and schedule type
+ * context associated with the dispatched event. The context is automatically
+ * released in the next rte_event_dequeue() invocation, or invoking
+ * rte_event_enqueue() with RTE_EVENT_OP_RELEASE operation can be used
+ * to release the context early.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param[out] ev
+ *   Pointer to struct rte_event. On successful event dispatch, implementation
+ *   updates the event attributes.
+ *
+ * @param wait
+ *   0 - no-wait, returns immediately if there is no event.
+ *   >0 - wait for the event. If the device is configured with
+ *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT then this function will wait until
+ *   an event is available or *wait* time has elapsed.
+ *   If the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
+ *   then this function will wait until an event is available or
+ *   *dequeue_wait_ns* ns, which was previously supplied to
+ *   rte_event_dev_configure(), has elapsed.
+ *
+ * @return
+ * When true, a valid event has been dispatched by the scheduler.
+ *
+ */
+static inline bool
+rte_event_dequeue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev,
+		  uint64_t wait)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	return (*dev->dequeue)(
+			dev->data->ports[port_id], ev, wait);
+}
+
+/**
+ * Dequeue a burst of event objects from the event port designated by its
+ * *event_port_id*, on an event device designated by its *dev_id*.
+ *
+ * The rte_event_dequeue_burst() function is invoked to dequeue
+ * multiple event objects. It is the burst variant of rte_event_dequeue()
+ * function.
+ *
+ * The *nb_events* parameter is the maximum number of event objects to dequeue
+ * which are returned in the *ev* array of *rte_event* structure.
+ *
+ * The rte_event_dequeue_burst() function returns the number of
+ * event objects it actually dequeued. A return value equal to
+ * *nb_events* means that all event objects have been dequeued.
+ *
+ * The number of events dequeued is the number of scheduler contexts held by
+ * this port. These contexts are automatically released in the next
+ * rte_event_dequeue() invocation, or invoking rte_event_enqueue() with
+ * RTE_EVENT_OP_RELEASE operation can be used to release the contexts early.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param[out] ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   for output to be populated with the dequeued event objects.
+ * @param nb_events
+ *   The maximum number of event objects to dequeue, typically number of
+ *   rte_event_port_dequeue_depth() available for this port.
+ *
+ * @param wait
+ *   0 - no-wait, returns immediately if there is no event.
+ *   >0 - wait for the event. If the device is configured with
+ *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT then this function will wait until
+ *   an event is available or *wait* time has elapsed.
+ *   If the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
+ *   then this function will wait until an event is available or
+ *   *dequeue_wait_ns* ns, which was previously supplied to
+ *   rte_event_dev_configure(), has elapsed.
+ *
+ * @return
+ * The number of event objects actually dequeued from the port. The return
+ * value can be less than the value of the *nb_events* parameter when the
+ * event port's queue is not full.
+ *
+ * \see rte_event_dequeue(), rte_event_port_dequeue_depth()
+ */
+static inline uint16_t
+rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
+			uint16_t nb_events, uint64_t wait)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	return (*dev->dequeue_burst)(
+			dev->data->ports[port_id], ev, nb_events, wait);
+}
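
The equivalent burst-mode loop, again only a sketch (BURST_SIZE, keep_running
and process_event() are hypothetical placeholders):

#define BURST_SIZE 16

static void
worker_burst(uint8_t dev_id, uint8_t port_id)
{
	struct rte_event ev[BURST_SIZE];
	uint16_t i, nb_rx;

	while (keep_running) {
		/* Dequeue up to BURST_SIZE events, no-wait mode */
		nb_rx = rte_event_dequeue_burst(dev_id, port_id, ev,
						BURST_SIZE, 0);
		for (i = 0; i < nb_rx; i++)
			process_event(&ev[i]);
	}
}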
+
+#define RTE_EVENT_QUEUE_SERVICE_PRIORITY_HIGHEST  0
+/**< Highest event queue servicing priority */
+#define RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL   128
+/**< Normal event queue servicing priority */
+#define RTE_EVENT_QUEUE_SERVICE_PRIORITY_LOWEST   255
+/**< Lowest event queue servicing priority */
+
+/** Structure to hold the queue to port link establishment attributes */
+struct rte_event_queue_link {
+	uint8_t queue_id;
+	/**< Event queue identifier to select the source queue to link */
+	uint8_t priority;
+	/**< The priority of the event queue for this event port.
+	 * The priority defines the event port's servicing priority for the
+	 * event queue, which may be ignored by an implementation.
+	 * The requested priority should be in the range of
+	 * [RTE_EVENT_QUEUE_SERVICE_PRIORITY_HIGHEST,
+	 * RTE_EVENT_QUEUE_SERVICE_PRIORITY_LOWEST].
+	 * The implementation shall normalize the requested priority to an
+	 * implementation supported priority value.
+	 */
+};
+
+/**
+ * Link multiple source event queues supplied in *rte_event_queue_link*
+ * structure as *queue_id* to the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * The link establishment shall enable the event port *port_id* to receive
+ * events from the specified event queue(s).
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an
+ * event port is implementation defined.
+ *
+ * Event queue to event port links can be changed at runtime without
+ * re-configuring the device, to support scaling and to reduce the latency
+ * of critical work by establishing links with more event ports at runtime.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to link.
+ *
+ * @param link
+ *   Points to an array of *nb_links* objects of type *rte_event_queue_link*
+ *   structure which contain the event queue to event port link establishment
+ *   attributes.
+ *   NULL value is allowed, in which case this function links all the
+ *   configured event queues *nb_event_queues* which were previously supplied
+ *   to rte_event_dev_configure() to the event port *port_id* with normal
+ *   servicing priority (RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL).
+ *
+ * @param nb_links
+ *   The number of links to establish
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has a
+ * limitation on specific queue to port link establishment or if invalid
+ * parameters are specified in a *rte_event_queue_link*.
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, the implementation shall update
+ * rte_errno accordingly. Possible rte_errno values are:
+ * (-EDQUOT) Quota exceeded (application tried to link a queue configured with
+ *  RTE_EVENT_QUEUE_CFG_SINGLE_CONSUMER to more than one event port)
+ * (-EINVAL) Invalid parameter
+ *
+ */
+int
+rte_event_port_link(uint8_t dev_id, uint8_t port_id,
+		    struct rte_event_queue_link link[], uint16_t nb_links);
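
An application-side sketch of establishing links (illustrative only; the queue
and port numbers are arbitrary):

	struct rte_event_queue_link links[] = {
		{ .queue_id = 0,
		  .priority = RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL },
		{ .queue_id = 1,
		  .priority = RTE_EVENT_QUEUE_SERVICE_PRIORITY_HIGHEST },
	};

	/* Link queues 0 and 1 to port 0 with explicit priorities */
	if (rte_event_port_link(dev_id, 0, links, 2) != 2)
		rte_exit(EXIT_FAILURE, "queue to port links failed\n");

	/* NULL links every configured queue to port 1 at normal priority */
	rte_event_port_link(dev_id, 1, NULL, 0);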
+
+/**
+ * Unlink multiple source event queues supplied in *queues* from the destination
+ * event port designated by its *port_id* on the event device designated
+ * by its *dev_id*.
+ *
+ * The unlink call shall disable the event port *port_id* from receiving events
+ * from the specified event queue(s).
+ *
+ * Event queue to event port links can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ *   Points to an array of *nb_unlinks* event queues to be unlinked
+ *   from the event port.
+ *   NULL value is allowed, in which case this function unlinks all the
+ *   event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ *   The number of unlinks to establish
+ *
+ * @return
+ * The number of unlinks actually performed. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has a
+ * limitation on specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (-EINVAL) Invalid parameter
+ *
+ */
+int
+rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
+		      uint8_t queues[], uint16_t nb_unlinks);
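
For example (sketch only), unlinking one queue versus everything:

	uint8_t q = 1;

	/* Unlink queue 1 from port 0 */
	rte_event_port_unlink(dev_id, 0, &q, 1);

	/* NULL unlinks all queues currently linked to port 1 */
	rte_event_port_unlink(dev_id, 1, NULL, 0);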
+
+/**
+ * Retrieve the list of source event queues and their associated attributes
+ * linked to the destination event port designated by its *port_id*
+ * on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier.
+ *
+ * @param[out] link
+ *   Points to an array of *rte_event_queue_link* structure for output.
+ *   The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* objects of type
+ *   *rte_event_queue_link* structure to store the event queue to event port
+ *   link establishment attributes.
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ *  *port_id*.
+ * - <0 on failure.
+ *
+ */
+int
+rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
+			struct rte_event_queue_link link[]);
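
A short sketch of dumping the current links of a port (illustrative only):

	struct rte_event_queue_link links[RTE_EVENT_MAX_QUEUES_PER_DEV];
	int i, nb_links;

	nb_links = rte_event_port_links_get(dev_id, port_id, links);
	for (i = 0; i < nb_links; i++)
		printf("port %u <- queue %u (priority %u)\n", port_id,
			links[i].queue_id, links[i].priority);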
+
+/**
+ * Dump internal information about *dev_id* to the FILE* provided in *f*.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param f
+ *   A pointer to a file for output
+ *
+ * @return
+ *   - 0: on success
+ *   - <0: on failure.
+ */
+int
+rte_event_dev_dump(uint8_t dev_id, FILE *f);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_EVENTDEV_H_ */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-18  5:44 [PATCH 0/4] libeventdev API and northbound implementation Jerin Jacob
  2016-11-18  5:44 ` [PATCH 1/4] eventdev: introduce event driven programming model Jerin Jacob
@ 2016-11-18  5:45 ` Jerin Jacob
  2016-11-21 17:45   ` Eads, Gage
  2016-11-23 19:18   ` Thomas Monjalon
  2016-11-18  5:45 ` [PATCH 3/4] event/skeleton: add skeleton eventdev driver Jerin Jacob
                   ` (2 subsequent siblings)
  4 siblings, 2 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-18  5:45 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads,
	Jerin Jacob

This patch defines the southbound driver interface
and implements the common code required for the northbound
eventdev API interface.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 config/common_base                           |    6 +
 lib/Makefile                                 |    1 +
 lib/librte_eal/common/include/rte_log.h      |    1 +
 lib/librte_eventdev/Makefile                 |   57 ++
 lib/librte_eventdev/rte_eventdev.c           | 1211 ++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_pmd.h       |  504 +++++++++++
 lib/librte_eventdev/rte_eventdev_version.map |   39 +
 mk/rte.app.mk                                |    1 +
 8 files changed, 1820 insertions(+)
 create mode 100644 lib/librte_eventdev/Makefile
 create mode 100644 lib/librte_eventdev/rte_eventdev.c
 create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
 create mode 100644 lib/librte_eventdev/rte_eventdev_version.map

diff --git a/config/common_base b/config/common_base
index 4bff83a..7a8814e 100644
--- a/config/common_base
+++ b/config/common_base
@@ -411,6 +411,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 
 #
+# Compile generic event device library
+#
+CONFIG_RTE_LIBRTE_EVENTDEV=y
+CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
+CONFIG_RTE_EVENT_MAX_DEVS=16
+CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/lib/Makefile b/lib/Makefile
index 990f23a..1a067bf 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
+DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index 29f7d19..9a07d92 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
 #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
+#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to eventdev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
new file mode 100644
index 0000000..dac0663
--- /dev/null
+++ b/lib/librte_eventdev/Makefile
@@ -0,0 +1,57 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium networks. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_eventdev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_eventdev.c
+
+# export include files
+SYMLINK-y-include += rte_eventdev.h
+SYMLINK-y-include += rte_eventdev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_eventdev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
new file mode 100644
index 0000000..17ce5c3
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -0,0 +1,1211 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Cavium networks. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_errno.h>
+
+#include "rte_eventdev.h"
+#include "rte_eventdev_pmd.h"
+
+struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
+
+struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
+
+static struct rte_eventdev_global eventdev_globals = {
+	.nb_devs		= 0
+};
+
+struct rte_eventdev_global *rte_eventdev_globals = &eventdev_globals;
+
+/* Event dev north bound API implementation */
+
+uint8_t
+rte_event_dev_count(void)
+{
+	return rte_eventdev_globals->nb_devs;
+}
+
+int
+rte_event_dev_get_dev_id(const char *name)
+{
+	int i;
+
+	if (!name)
+		return -EINVAL;
+
+	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
+		if ((strcmp(rte_event_devices[i].data->name, name)
+				== 0) &&
+				(rte_event_devices[i].attached ==
+						RTE_EVENTDEV_ATTACHED))
+			return i;
+	return -ENODEV;
+}
+
+int
+rte_event_dev_socket_id(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	return dev->data->socket_id;
+}
+
+int
+rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (dev_info == NULL)
+		return -EINVAL;
+
+	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.driver.name;
+	return 0;
+}
+
+static inline int
+rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
+{
+	uint8_t old_nb_queues = dev->data->nb_queues;
+	void **queues;
+	uint8_t *queues_prio;
+	unsigned int i;
+
+	EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
+			 dev->data->dev_id);
+
+	/* First time configuration */
+	if (dev->data->queues == NULL && nb_queues != 0) {
+		dev->data->queues = rte_zmalloc_socket("eventdev->data->queues",
+				sizeof(dev->data->queues[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->queues == NULL) {
+			dev->data->nb_queues = 0;
+			EDEV_LOG_ERR("failed to get memory for queue meta data,"
+					"nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+		/* Allocate memory to store queue priority */
+		dev->data->queues_prio = rte_zmalloc_socket(
+				"eventdev->data->queues_prio",
+				sizeof(dev->data->queues_prio[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->queues_prio == NULL) {
+			dev->data->nb_queues = 0;
+			EDEV_LOG_ERR("failed to get memory for queue priority,"
+					"nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+
+	} else if (dev->data->queues != NULL && nb_queues != 0) {/* re-config */
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
+
+		queues = dev->data->queues;
+		for (i = nb_queues; i < old_nb_queues; i++)
+			(*dev->dev_ops->queue_release)(queues[i]);
+
+		queues = rte_realloc(queues, sizeof(queues[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE);
+		if (queues == NULL) {
+			EDEV_LOG_ERR("failed to realloc queue meta data,"
+						" nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+		dev->data->queues = queues;
+
+		/* Re allocate memory to store queue priority */
+		queues_prio = dev->data->queues_prio;
+		queues_prio = rte_realloc(queues_prio,
+				sizeof(queues_prio[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE);
+		if (queues_prio == NULL) {
+			EDEV_LOG_ERR("failed to realloc queue priority,"
+						" nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+		dev->data->queues_prio = queues_prio;
+
+		if (nb_queues > old_nb_queues) {
+			uint8_t new_qs = nb_queues - old_nb_queues;
+
+			memset(queues + old_nb_queues, 0,
+				sizeof(queues[0]) * new_qs);
+			memset(queues_prio + old_nb_queues, 0,
+				sizeof(queues_prio[0]) * new_qs);
+		}
+	} else if (dev->data->queues != NULL && nb_queues == 0) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
+
+		queues = dev->data->queues;
+		for (i = nb_queues; i < old_nb_queues; i++)
+			(*dev->dev_ops->queue_release)(queues[i]);
+	}
+
+	dev->data->nb_queues = nb_queues;
+	return 0;
+}
+
+static inline int
+rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
+{
+	uint8_t old_nb_ports = dev->data->nb_ports;
+	void **ports;
+	uint16_t *links_map;
+	uint8_t *ports_dequeue_depth;
+	uint8_t *ports_enqueue_depth;
+	unsigned int i;
+
+	EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
+			 dev->data->dev_id);
+
+	/* First time configuration */
+	if (dev->data->ports == NULL && nb_ports != 0) {
+		dev->data->ports = rte_zmalloc_socket("eventdev->data->ports",
+				sizeof(dev->data->ports[0]) * nb_ports,
+				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->ports == NULL) {
+			dev->data->nb_ports = 0;
+			EDEV_LOG_ERR("failed to get memory for port meta data,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Allocate memory to store ports dequeue depth */
+		dev->data->ports_dequeue_depth =
+			rte_zmalloc_socket("eventdev->ports_dequeue_depth",
+			sizeof(dev->data->ports_dequeue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->ports_dequeue_depth == NULL) {
+			dev->data->nb_ports = 0;
+			EDEV_LOG_ERR("failed to get memory for port deq meta,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Allocate memory to store ports enqueue depth */
+		dev->data->ports_enqueue_depth =
+			rte_zmalloc_socket("eventdev->ports_enqueue_depth",
+			sizeof(dev->data->ports_enqueue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->ports_enqueue_depth == NULL) {
+			dev->data->nb_ports = 0;
+			EDEV_LOG_ERR("failed to get memory for port enq meta,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Allocate memory to store queue to port link connection */
+		dev->data->links_map =
+			rte_zmalloc_socket("eventdev->links_map",
+			sizeof(dev->data->links_map[0]) * nb_ports *
+			RTE_EVENT_MAX_QUEUES_PER_DEV,
+			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->links_map == NULL) {
+			dev->data->nb_ports = 0;
+			EDEV_LOG_ERR("failed to get memory for port_map area,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
+
+		ports = dev->data->ports;
+		ports_dequeue_depth = dev->data->ports_dequeue_depth;
+		ports_enqueue_depth = dev->data->ports_enqueue_depth;
+		links_map = dev->data->links_map;
+
+		for (i = nb_ports; i < old_nb_ports; i++)
+			(*dev->dev_ops->port_release)(ports[i]);
+
+		/* Realloc memory for ports */
+		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
+				RTE_CACHE_LINE_SIZE);
+		if (ports == NULL) {
+			EDEV_LOG_ERR("failed to realloc port meta data,"
+						" nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Realloc memory for ports_dequeue_depth */
+		ports_dequeue_depth = rte_realloc(ports_dequeue_depth,
+			sizeof(ports_dequeue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE);
+		if (ports_dequeue_depth == NULL) {
+			EDEV_LOG_ERR("failed to realloc port dequeue meta data,"
+						" nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Realloc memory for ports_enqueue_depth */
+		ports_enqueue_depth = rte_realloc(ports_enqueue_depth,
+			sizeof(ports_enqueue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE);
+		if (ports_enqueue_depth == NULL) {
+			EDEV_LOG_ERR("failed to realloc port enqueue meta data,"
+						" nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Realloc memory to store queue to port link connection */
+		links_map = rte_realloc(links_map,
+			sizeof(dev->data->links_map[0]) * nb_ports *
+			RTE_EVENT_MAX_QUEUES_PER_DEV,
+			RTE_CACHE_LINE_SIZE);
+		if (links_map == NULL) {
+			dev->data->nb_ports = 0;
+			EDEV_LOG_ERR("failed to realloc mem for port_map area,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		if (nb_ports > old_nb_ports) {
+			uint8_t new_ps = nb_ports - old_nb_ports;
+
+			memset(ports + old_nb_ports, 0,
+				sizeof(ports[0]) * new_ps);
+			memset(ports_dequeue_depth + old_nb_ports, 0,
+				sizeof(ports_dequeue_depth[0]) * new_ps);
+			memset(ports_enqueue_depth + old_nb_ports, 0,
+				sizeof(ports_enqueue_depth[0]) * new_ps);
+			memset(links_map +
+				(old_nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV),
+				0, sizeof(links_map[0]) * new_ps *
+				RTE_EVENT_MAX_QUEUES_PER_DEV);
+		}
+
+		dev->data->ports = ports;
+		dev->data->ports_dequeue_depth = ports_dequeue_depth;
+		dev->data->ports_enqueue_depth = ports_enqueue_depth;
+		dev->data->links_map = links_map;
+	} else if (dev->data->ports != NULL && nb_ports == 0) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
+
+		ports = dev->data->ports;
+		for (i = nb_ports; i < old_nb_ports; i++)
+			(*dev->dev_ops->port_release)(ports[i]);
+	}
+
+	dev->data->nb_ports = nb_ports;
+	return 0;
+}
+
+int
+rte_event_dev_configure(uint8_t dev_id, struct rte_event_dev_config *dev_conf)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_dev_info info;
+	int diag;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+
+	if (dev->data->dev_started) {
+		EDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	if (dev_conf == NULL)
+		return -EINVAL;
+
+	(*dev->dev_ops->dev_infos_get)(dev, &info);
+
+	/* Check dequeue_wait_ns value is in limit */
+	if (!(dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT)) {
+		if (dev_conf->dequeue_wait_ns < info.min_dequeue_wait_ns ||
+			dev_conf->dequeue_wait_ns > info.max_dequeue_wait_ns) {
+			EDEV_LOG_ERR("dev%d invalid dequeue_wait_ns=%d"
+			" min_dequeue_wait_ns=%d max_dequeue_wait_ns=%d",
+			dev_id, dev_conf->dequeue_wait_ns,
+			info.min_dequeue_wait_ns,
+			info.max_dequeue_wait_ns);
+			return -EINVAL;
+		}
+	}
+
+	/* Check nb_events_limit is in limit */
+	if (dev_conf->nb_events_limit > info.max_num_events) {
+		EDEV_LOG_ERR("dev%d nb_events_limit=%d > max_num_events=%d",
+		dev_id, dev_conf->nb_events_limit, info.max_num_events);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_queues is in limit */
+	if (!dev_conf->nb_event_queues) {
+		EDEV_LOG_ERR("dev%d nb_event_queues cannot be zero", dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queues > info.max_event_queues) {
+		EDEV_LOG_ERR("dev%d nb_event_queues=%d > max_event_queues=%d",
+		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_ports is in limit */
+	if (!dev_conf->nb_event_ports) {
+		EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_ports > info.max_event_ports) {
+		EDEV_LOG_ERR("dev%d nb_event_ports=%d > max_event_ports= %d",
+		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_queue_flows is in limit */
+	if (!dev_conf->nb_event_queue_flows) {
+		EDEV_LOG_ERR("dev%d nb_flows cannot be zero", dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queue_flows > info.max_event_queue_flows) {
+		EDEV_LOG_ERR("dev%d nb_flows=%x > max_flows=%x",
+		dev_id, dev_conf->nb_event_queue_flows,
+		info.max_event_queue_flows);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_port_dequeue_depth is in limit */
+	if (!dev_conf->nb_event_port_dequeue_depth) {
+		EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be zero", dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_port_dequeue_depth >
+			 info.max_event_port_dequeue_depth) {
+		EDEV_LOG_ERR("dev%d nb_dequeue_depth=%d > max_dequeue_depth=%d",
+		dev_id, dev_conf->nb_event_port_dequeue_depth,
+		info.max_event_port_dequeue_depth);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_port_enqueue_depth is in limit */
+	if (!dev_conf->nb_event_port_enqueue_depth) {
+		EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be zero", dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_port_enqueue_depth >
+			 info.max_event_port_enqueue_depth) {
+		EDEV_LOG_ERR("dev%d nb_enqueue_depth=%d > max_enqueue_depth=%d",
+		dev_id, dev_conf->nb_event_port_enqueue_depth,
+		info.max_event_port_enqueue_depth);
+		return -EINVAL;
+	}
+
+	/* Copy the dev_conf parameter into the dev structure */
+	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+
+	/* Setup new number of queues and reconfigure device. */
+	diag = rte_event_dev_queue_config(dev, dev_conf->nb_event_queues);
+	if (diag != 0) {
+		EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup new number of ports and reconfigure device. */
+	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
+	if (diag != 0) {
+		rte_event_dev_queue_config(dev, 0);
+		EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Configure the device */
+	diag = (*dev->dev_ops->dev_configure)(dev);
+	if (diag != 0) {
+		EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+		rte_event_dev_queue_config(dev, 0);
+		rte_event_dev_port_config(dev, 0);
+	}
+
+	dev->data->event_dev_cap = info.event_dev_cap;
+	return diag;
+}
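
From the application side, the checks above translate into something like the
following sketch (illustrative only; the numeric values are arbitrary and must
stay within the limits reported by rte_event_dev_info_get()):

	struct rte_event_dev_info info;
	struct rte_event_dev_config conf;

	rte_event_dev_info_get(dev_id, &info);

	memset(&conf, 0, sizeof(conf));
	conf.dequeue_wait_ns = info.min_dequeue_wait_ns;
	conf.nb_events_limit = info.max_num_events;
	conf.nb_event_queues = 4;		/* <= info.max_event_queues */
	conf.nb_event_ports = 4;		/* <= info.max_event_ports */
	conf.nb_event_queue_flows = 1024; /* <= info.max_event_queue_flows */
	conf.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
	conf.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;

	if (rte_event_dev_configure(dev_id, &conf) < 0)
		rte_exit(EXIT_FAILURE, "eventdev configure failed\n");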
+
+static inline int
+is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
+{
+	if (queue_id < dev->data->nb_queues && queue_id <
+				RTE_EVENT_MAX_QUEUES_PER_DEV)
+		return 1;
+	else
+		return 0;
+}
+
+int
+rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
+				 struct rte_event_queue_conf *queue_conf)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (queue_conf == NULL)
+		return -EINVAL;
+
+	if (!is_valid_queue(dev, queue_id)) {
+		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf, -ENOTSUP);
+	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
+	(*dev->dev_ops->queue_def_conf)(dev, queue_id, queue_conf);
+	return 0;
+}
+
+static inline int
+is_valid_atomic_queue_conf(struct rte_event_queue_conf *queue_conf)
+{
+	if (queue_conf && (
+		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
+		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+		))
+		return 1;
+	else
+		return 0;
+}
+
+int
+rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
+		      struct rte_event_queue_conf *queue_conf)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_queue_conf def_conf;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (!is_valid_queue(dev, queue_id)) {
+		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	/* Check nb_atomic_flows limit */
+	if (is_valid_atomic_queue_conf(queue_conf)) {
+		if (queue_conf->nb_atomic_flows == 0 ||
+		    queue_conf->nb_atomic_flows >
+			dev->data->dev_conf.nb_event_queue_flows) {
+			EDEV_LOG_ERR(
+		"dev%d queue%d Invalid nb_atomic_flows=%d max_flows=%d",
+			dev_id, queue_id, queue_conf->nb_atomic_flows,
+			dev->data->dev_conf.nb_event_queue_flows);
+			return -EINVAL;
+		}
+	}
+
+	/* Check nb_atomic_order_sequences limit */
+	if (is_valid_atomic_queue_conf(queue_conf)) {
+		if (queue_conf->nb_atomic_order_sequences == 0 ||
+		    queue_conf->nb_atomic_order_sequences >
+			dev->data->dev_conf.nb_event_queue_flows) {
+			EDEV_LOG_ERR(
+		"dev%d queue%d Invalid nb_atomic_order_seq=%d max_flows=%d",
+			dev_id, queue_id, queue_conf->nb_atomic_order_sequences,
+			dev->data->dev_conf.nb_event_queue_flows);
+			return -EINVAL;
+		}
+	}
+
+	if (dev->data->dev_started) {
+		EDEV_LOG_ERR(
+		    "device %d must be stopped to allow queue setup", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -ENOTSUP);
+
+	if (queue_conf == NULL) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf,
+					-ENOTSUP);
+		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
+		def_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
+		queue_conf = &def_conf;
+	}
+
+	dev->data->queues_prio[queue_id] = queue_conf->priority;
+	return (*dev->dev_ops->queue_setup)(dev, queue_id, queue_conf);
+}
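
A matching application-side sketch (illustrative only; nb_atomic_flows must
stay within the configured nb_event_queue_flows, and dev_id/queue_id are
assumed to be in scope):

	struct rte_event_queue_conf qconf;

	/* Start from the PMD defaults, then request an atomic-only queue */
	rte_event_queue_default_conf_get(dev_id, queue_id, &qconf);
	qconf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY;
	qconf.nb_atomic_flows = 1024; /* <= dev_conf.nb_event_queue_flows */

	if (rte_event_queue_setup(dev_id, queue_id, &qconf) < 0)
		rte_exit(EXIT_FAILURE, "queue setup failed\n");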
+
+uint8_t
+rte_event_queue_count(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->nb_queues;
+}
+
+uint8_t
+rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
+		return dev->data->queues_prio[queue_id];
+	else
+		return RTE_EVENT_QUEUE_PRIORITY_NORMAL;
+}
+
+static inline int
+is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
+{
+	if (port_id < dev->data->nb_ports)
+		return 1;
+	else
+		return 0;
+}
+
+int
+rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
+				 struct rte_event_port_conf *port_conf)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (port_conf == NULL)
+		return -EINVAL;
+
+	if (!is_valid_port(dev, port_id)) {
+		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf, -ENOTSUP);
+	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
+	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
+	return 0;
+}
+
+int
+rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
+		      struct rte_event_port_conf *port_conf)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_port_conf def_conf;
+	int diag;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (!is_valid_port(dev, port_id)) {
+		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	/* Check new_event_threshold limit */
+	if ((port_conf && !port_conf->new_event_threshold) ||
+			(port_conf && port_conf->new_event_threshold >
+				 dev->data->dev_conf.nb_events_limit)) {
+		EDEV_LOG_ERR(
+		   "dev%d port%d Invalid event_threshold=%d nb_events_limit=%d",
+			dev_id, port_id, port_conf->new_event_threshold,
+			dev->data->dev_conf.nb_events_limit);
+		return -EINVAL;
+	}
+
+	/* Check dequeue_depth limit */
+	if ((port_conf && !port_conf->dequeue_depth) ||
+			(port_conf && port_conf->dequeue_depth >
+		dev->data->dev_conf.nb_event_port_dequeue_depth)) {
+		EDEV_LOG_ERR(
+		   "dev%d port%d Invalid dequeue depth=%d max_dequeue_depth=%d",
+			dev_id, port_id, port_conf->dequeue_depth,
+			dev->data->dev_conf.nb_event_port_dequeue_depth);
+		return -EINVAL;
+	}
+
+	/* Check enqueue_depth limit */
+	if ((port_conf && !port_conf->enqueue_depth) ||
+			(port_conf && port_conf->enqueue_depth >
+		dev->data->dev_conf.nb_event_port_enqueue_depth)) {
+		EDEV_LOG_ERR(
+		   "dev%d port%d Invalid enqueue depth=%d max_enqueue_depth=%d",
+			dev_id, port_id, port_conf->enqueue_depth,
+			dev->data->dev_conf.nb_event_port_enqueue_depth);
+		return -EINVAL;
+	}
+
+	if (dev->data->dev_started) {
+		EDEV_LOG_ERR(
+		    "device %d must be stopped to allow port setup", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -ENOTSUP);
+
+	if (port_conf == NULL) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf,
+					-ENOTSUP);
+		(*dev->dev_ops->port_def_conf)(dev, port_id, &def_conf);
+		port_conf = &def_conf;
+	}
+
+	dev->data->ports_dequeue_depth[port_id] =
+			port_conf->dequeue_depth;
+	dev->data->ports_enqueue_depth[port_id] =
+			port_conf->enqueue_depth;
+
+	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
+
+	/* Unlink all the queues from this port(default state after setup) */
+	if (!diag)
+		diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
+
+	if (diag < 0)
+		return diag;
+
+	return 0;
+}
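
And the corresponding port setup sketch (illustrative only; the depths must not
exceed the device-level values passed to rte_event_dev_configure()):

	struct rte_event_port_conf pconf;

	rte_event_port_default_conf_get(dev_id, port_id, &pconf);
	pconf.new_event_threshold = 1024; /* <= dev_conf.nb_events_limit */
	pconf.dequeue_depth = 16; /* <= dev_conf.nb_event_port_dequeue_depth */
	pconf.enqueue_depth = 16; /* <= dev_conf.nb_event_port_enqueue_depth */

	if (rte_event_port_setup(dev_id, port_id, &pconf) < 0)
		rte_exit(EXIT_FAILURE, "port setup failed\n");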
+
+uint8_t
+rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->ports_dequeue_depth[port_id];
+}
+
+uint8_t
+rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->ports_enqueue_depth[port_id];
+}
+
+uint8_t
+rte_event_port_count(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->nb_ports;
+}
+
+int
+rte_event_port_link(uint8_t dev_id, uint8_t port_id,
+		    struct rte_event_queue_link link[], uint16_t nb_links)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_queue_link all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	uint16_t *links_map;
+	int i, diag;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -ENOTSUP);
+
+	if (!is_valid_port(dev, port_id)) {
+		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	if (link == NULL) {
+		for (i = 0; i < dev->data->nb_queues; i++) {
+			all_queues[i].queue_id = i;
+			all_queues[i].priority =
+				RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
+		}
+		link = all_queues;
+		nb_links = dev->data->nb_queues;
+	}
+
+	for (i = 0; i < nb_links; i++)
+		if (link[i].queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
+			return -EINVAL;
+
+	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id], link,
+						 nb_links);
+	if (diag < 0)
+		return diag;
+
+	links_map = dev->data->links_map;
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < diag; i++)
+		links_map[link[i].queue_id] = (uint8_t)link[i].priority;
+
+	return diag;
+}
+
+#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
+
+int
+rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
+		      uint8_t queues[], uint16_t nb_unlinks)
+{
+	struct rte_eventdev *dev;
+	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	int i, diag;
+	uint16_t *links_map;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -ENOTSUP);
+
+	if (!is_valid_port(dev, port_id)) {
+		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	if (queues == NULL) {
+		for (i = 0; i < dev->data->nb_queues; i++)
+			all_queues[i] = i;
+		queues = all_queues;
+		nb_unlinks = dev->data->nb_queues;
+	}
+
+	for (i = 0; i < nb_unlinks; i++)
+		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
+			return -EINVAL;
+
+	diag = (*dev->dev_ops->port_unlink)(dev->data->ports[port_id], queues,
+					nb_unlinks);
+
+	if (diag < 0)
+		return diag;
+
+	links_map = dev->data->links_map;
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < diag; i++)
+		links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+
+	return diag;
+}
+
+int
+rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
+			struct rte_event_queue_link link[])
+{
+	struct rte_eventdev *dev;
+	uint16_t *links_map;
+	int i, count = 0;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_port(dev, port_id)) {
+		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	links_map = dev->data->links_map;
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
+		if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+			link[count].queue_id = i;
+			link[count].priority = (uint8_t)links_map[i];
+			++count;
+		}
+	}
+	return count;
+}
+
+int
+rte_event_dequeue_wait_time(uint8_t dev_id, uint64_t ns, uint64_t *wait_ticks)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->wait_time, -ENOTSUP);
+
+	if (wait_ticks == NULL)
+		return -EINVAL;
+
+	(*dev->dev_ops->wait_time)(dev, ns, wait_ticks);
+	return 0;
+}
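
A sketch of how the conversion feeds the dequeue *wait* argument (illustrative
only; the per-call value only takes effect when the device is configured with
RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT, and process_event() is hypothetical):

	uint64_t wait_ticks = 0;
	struct rte_event ev;

	/* Convert a 10 us timeout into device-specific wait ticks */
	rte_event_dequeue_wait_time(dev_id, 10 * 1000, &wait_ticks);

	if (rte_event_dequeue(dev_id, port_id, &ev, wait_ticks))
		process_event(&ev);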
+
+int
+rte_event_dev_dump(uint8_t dev_id, FILE *f)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
+
+	(*dev->dev_ops->dump)(dev, f);
+	return 0;
+
+}
+
+int
+rte_event_dev_start(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+	int diag;
+
+	EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_event_dev_stop(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
+
+	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_event_dev_close(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		EDEV_LOG_ERR("Device %u must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	return (*dev->dev_ops->dev_close)(dev);
+}
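
The teardown order implied by the check above, as a sketch:

	/* A started device must be stopped before it can be closed */
	rte_event_dev_stop(dev_id);
	if (rte_event_dev_close(dev_id) < 0)
		printf("close failed, device still busy\n");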
+
+static inline int
+rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* Generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_eventdev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_eventdev_data));
+
+	return 0;
+}
+
+static uint8_t
+rte_eventdev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
+		if (rte_eventdevs[dev_id].attached ==
+				RTE_EVENTDEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_EVENT_MAX_DEVS;
+}
+
+struct rte_eventdev *
+rte_eventdev_pmd_allocate(const char *name, int socket_id)
+{
+	struct rte_eventdev *eventdev;
+	uint8_t dev_id;
+
+	if (rte_eventdev_pmd_get_named_dev(name) != NULL) {
+		EDEV_LOG_ERR("Event device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_eventdev_find_free_device_index();
+	if (dev_id == RTE_EVENT_MAX_DEVS) {
+		EDEV_LOG_ERR("Reached maximum number of event devices");
+		return NULL;
+	}
+
+	eventdev = &rte_eventdevs[dev_id];
+
+	if (eventdev->data == NULL) {
+		struct rte_eventdev_data *eventdev_data = NULL;
+
+		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
+				socket_id);
+
+		if (retval < 0 || eventdev_data == NULL)
+			return NULL;
+
+		eventdev->data = eventdev_data;
+
+		snprintf(eventdev->data->name, RTE_EVENTDEV_NAME_MAX_LEN,
+				"%s", name);
+
+		eventdev->data->dev_id = dev_id;
+		eventdev->data->socket_id = socket_id;
+		eventdev->data->dev_started = 0;
+
+		eventdev->attached = RTE_EVENTDEV_ATTACHED;
+
+		eventdev_globals.nb_devs++;
+	}
+
+	return eventdev;
+}
+
+int
+rte_eventdev_pmd_release(struct rte_eventdev *eventdev)
+{
+	int ret;
+
+	if (eventdev == NULL)
+		return -EINVAL;
+
+	ret = rte_event_dev_close(eventdev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	eventdev->attached = RTE_EVENTDEV_DETACHED;
+	eventdev_globals.nb_devs--;
+	eventdev->data = NULL;
+
+	return 0;
+}
+
+struct rte_eventdev *
+rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_eventdev *eventdev;
+
+	/* Allocate device structure */
+	eventdev = rte_eventdev_pmd_allocate(name, socket_id);
+	if (eventdev == NULL)
+		return NULL;
+
+	/* Allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		eventdev->data->dev_private =
+				rte_zmalloc_socket("eventdev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (eventdev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	return eventdev;
+}
+
+int
+rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
+			struct rte_pci_device *pci_dev)
+{
+	struct rte_eventdev_driver *eventdrv;
+	struct rte_eventdev *eventdev;
+
+	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
+
+	int retval;
+
+	eventdrv = (struct rte_eventdev_driver *)pci_drv;
+	if (eventdrv == NULL)
+		return -ENODEV;
+
+	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
+			sizeof(eventdev_name));
+
+	eventdev = rte_eventdev_pmd_allocate(eventdev_name,
+			 pci_dev->device.numa_node);
+	if (eventdev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		eventdev->data->dev_private =
+				rte_zmalloc_socket(
+						"eventdev private structure",
+						eventdrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						rte_socket_id());
+
+		if (eventdev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	eventdev->pci_dev = pci_dev;
+	eventdev->driver = eventdrv;
+
+	/* Invoke PMD device initialization function */
+	retval = (*eventdrv->eventdev_init)(eventdev);
+	if (retval == 0)
+		return 0;
+
+	EDEV_LOG_ERR("driver %s: event_dev_init(vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->driver.name,
+			(unsigned int) pci_dev->id.vendor_id,
+			(unsigned int) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eventdev->data->dev_private);
+
+	eventdev->attached = RTE_EVENTDEV_DETACHED;
+	eventdev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+int
+rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev)
+{
+	const struct rte_eventdev_driver *eventdrv;
+	struct rte_eventdev *eventdev;
+	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
+			sizeof(eventdev_name));
+
+	eventdev = rte_eventdev_pmd_get_named_dev(eventdev_name);
+	if (eventdev == NULL)
+		return -ENODEV;
+
+	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
+	if (eventdrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device uninit function */
+	if (*eventdrv->eventdev_uninit) {
+		ret = (*eventdrv->eventdev_uninit)(eventdev);
+		if (ret)
+			return ret;
+	}
+
+	/* Free event device */
+	rte_eventdev_pmd_release(eventdev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eventdev->data->dev_private);
+
+	eventdev->pci_dev = NULL;
+	eventdev->driver = NULL;
+
+	return 0;
+}
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
new file mode 100644
index 0000000..e9d9b83
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -0,0 +1,504 @@
+/*
+ *
+ *   Copyright(c) 2016 Cavium networks. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_EVENTDEV_PMD_H_
+#define _RTE_EVENTDEV_PMD_H_
+
+/** @file
+ * RTE Event PMD APIs
+ *
+ * @note
+ * These APIs are for use by event PMDs only and user applications should not
+ * call them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_common.h>
+
+#include "rte_eventdev.h"
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(...)
+#endif
+
+/* Logging Macros */
+#define EDEV_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+#define EDEV_LOG_DEBUG(fmt, args...) \
+	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
+			__func__, __LINE__, ## args)
+#else
+#define EDEV_LOG_DEBUG(fmt, args...) (void)0
+#endif
+
+/* Macros to check for valid device */
+#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
+	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
+		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
+	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
+		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
+		return; \
+	} \
+} while (0)
+
+#define RTE_EVENTDEV_DETACHED  (0)
+#define RTE_EVENTDEV_ATTACHED  (1)
+
+/**
+ * Initialisation function of an event driver invoked for each matching
+ * event PCI device detected during the PCI probing phase.
+ *
+ * @param dev
+ *   The dev pointer is the address of the *rte_eventdev* structure associated
+ *   with the matching device and which has been [automatically] allocated in
+ *   the *rte_event_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param dev
+ *   The dev pointer is the address of the *rte_eventdev* structure associated
+ *   with the matching device and which	has been [automatically] allocated in
+ *   the *rte_event_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *event_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *eventdev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_eventdev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned int dev_private_size;	/**< Size of device private data. */
+
+	eventdev_init_t eventdev_init;	/**< Device init function. */
+	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
+};
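
A hedged sketch of how a PCI event PMD might instantiate this structure (the
skeleton_* names are hypothetical; the real skeleton driver is added in patch
3/4):

static struct rte_eventdev_driver skeleton_eventdev_drv = {
	.pci_drv = {
		/* id table and PCI probe/remove hooks filled in per the
		 * usual rte_pci_driver conventions */
	},
	.dev_private_size = sizeof(struct skeleton_eventdev),
	.eventdev_init = skeleton_eventdev_init,
	.eventdev_uninit = skeleton_eventdev_uninit,
};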
+
+/** Global structure used for maintaining state of allocated event devices */
+struct rte_eventdev_global {
+	uint8_t nb_devs;	/**< Number of devices found */
+	uint8_t max_devs;	/**< Max number of devices */
+};
+
+extern struct rte_eventdev_global *rte_eventdev_globals;
+/** Pointer to global event devices data structure. */
+extern struct rte_eventdev *rte_eventdevs;
+/** The pool of rte_eventdev structures. */
+
+/**
+ * Get the rte_eventdev structure device pointer for the named device.
+ *
+ * @param name
+ *   device name to select the device structure.
+ *
+ * @return
+ *   - The rte_eventdev structure pointer for the given device name.
+ */
+static inline struct rte_eventdev *
+rte_eventdev_pmd_get_named_dev(const char *name)
+{
+	struct rte_eventdev *dev;
+	unsigned int i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0, dev = &rte_eventdevs[i];
+			i < rte_eventdev_globals->max_devs; i++) {
+		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
+				(strcmp(dev->data->name, name) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate whether the event device index refers to a valid, attached
+ * event device.
+ *
+ * @param dev_id
+ *   Event device index.
+ *
+ * @return
+ *   - If the device index is valid (1) or not (0).
+ */
+static inline unsigned
+rte_eventdev_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	if (dev_id >= rte_eventdev_globals->nb_devs)
+		return 0;
+
+	dev = &rte_eventdevs[dev_id];
+	if (dev->attached != RTE_EVENTDEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *event_dev_ops* supplied in the
+ * *rte_eventdev* structure associated with a device.
+ */
+
+/**
+ * Get device information of a device.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param dev_info
+ *   Event device information structure
+ */
+typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
+		struct rte_event_dev_info *dev_info);
+
+/**
+ * Configure a device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   Returns 0 on success
+ */
+typedef int (*eventdev_configure_t)(struct rte_eventdev *dev);
+
+/**
+ * Start a configured device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   Returns 0 on success
+ */
+typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
+
+/**
+ * Stop a configured device.
+ *
+ * @param dev
+ *   Event device pointer
+ */
+typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
+
+/**
+ * Close a configured device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ * - 0 on success
+ * - (-EAGAIN) if can't close as device is busy
+ */
+typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
+
+/**
+ * Retrieve the default event queue configuration.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param[out] queue_conf
+ *   Event queue configuration structure
+ *
+ */
+typedef void (*eventdev_queue_default_conf_get_t)(struct rte_eventdev *dev,
+		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Setup an event queue.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param queue_conf
+ *   Event queue configuration structure
+ *
+ * @return
+ *   Returns 0 on success.
+ */
+typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
+		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Release memory resources allocated by given event queue.
+ *
+ * @param queue
+ *   Event queue pointer
+ *
+ */
+typedef void (*eventdev_queue_release_t)(void *queue);
+
+/**
+ * Retrieve the default event port configuration.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param port_id
+ *   Event port index
+ * @param[out] port_conf
+ *   Event port configuration structure
+ *
+ */
+typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev *dev,
+		uint8_t port_id, struct rte_event_port_conf *port_conf);
+
+/**
+ * Setup an event port.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param port_id
+ *   Event port index
+ * @param port_conf
+ *   Event port configuration structure
+ *
+ * @return
+ *   Returns 0 on success.
+ */
+typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
+		uint8_t port_id, struct rte_event_port_conf *port_conf);
+
+/**
+ * Release memory resources allocated by given event port.
+ *
+ * @param port
+ *   Event port pointer
+ *
+ */
+typedef void (*eventdev_port_release_t)(void *port);
+
+/**
+ * Link multiple source event queues to destination event port.
+ *
+ * @param port
+ *   Event port pointer
+ * @param link
+ *   An array of *nb_links* *rte_event_queue_link* structures
+ * @param nb_links
+ *   The number of links to establish
+ *
+ * @return
+ *   Returns the number of links actually established.
+ *
+ */
+typedef int (*eventdev_port_link_t)(void *port,
+		struct rte_event_queue_link link[], uint16_t nb_links);
+
+/**
+ * Unlink multiple source event queues from destination event port.
+ *
+ * @param port
+ *   Event port pointer
+ * @param queues
+ *   An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ *   The number of unlinks to establish
+ *
+ * @return
+ *   Returns the number of unlinks actually performed.
+ *
+ */
+typedef int (*eventdev_port_unlink_t)(void *port,
+		uint8_t queues[], uint16_t nb_unlinks);
+
+/**
+ * Converts nanoseconds to *wait* value for rte_event_dequeue()
+ *
+ * @param dev
+ *   Event device pointer
+ * @param ns
+ *   Wait time in nanosecond
+ * @param[out] wait_ticks
+ *   Value for the *wait* parameter in rte_event_dequeue() function
+ *
+ */
+typedef void (*eventdev_dequeue_wait_time_t)(struct rte_eventdev *dev,
+		uint64_t ns, uint64_t *wait_ticks);
+
+/**
+ * Dump internal information
+ *
+ * @param dev
+ *   Event device pointer
+ * @param f
+ *   A pointer to a file for output
+ *
+ */
+typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
+
+/** Event device operations function pointer table */
+struct rte_eventdev_ops {
+	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
+	eventdev_configure_t dev_configure;	/**< Configure device. */
+	eventdev_start_t dev_start;		/**< Start device. */
+	eventdev_stop_t dev_stop;		/**< Stop device. */
+	eventdev_close_t dev_close;		/**< Close device. */
+
+	eventdev_queue_default_conf_get_t queue_def_conf;
+	/**< Get default queue configuration. */
+	eventdev_queue_setup_t queue_setup;
+	/**< Set up an event queue. */
+	eventdev_queue_release_t queue_release;
+	/**< Release an event queue. */
+
+	eventdev_port_default_conf_get_t port_def_conf;
+	/**< Get default port configuration. */
+	eventdev_port_setup_t port_setup;
+	/**< Set up an event port. */
+	eventdev_port_release_t port_release;
+	/**< Release an event port. */
+
+	eventdev_port_link_t port_link;
+	/**< Link event queues to an event port. */
+	eventdev_port_unlink_t port_unlink;
+	/**< Unlink event queues from an event port. */
+	eventdev_dequeue_wait_time_t wait_time;
+	/**< Converts nanoseconds to *wait* value for rte_event_dequeue() */
+	eventdev_dump_t dump;
+	/**< Dump internal information */
+};
+
+/**
+ * Allocates a new eventdev slot for an event device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param name
+ *   Unique identifier name for each device
+ * @param socket_id
+ *   Socket to allocate resources on.
+ * @return
+ *   - Slot in the rte_eventdevs array for a new device.
+ */
+struct rte_eventdev *
+rte_eventdev_pmd_allocate(const char *name, int socket_id);
+
+/**
+ * Release the specified eventdev device.
+ *
+ * @param eventdev
+ *   The *eventdev* pointer is the address of the *rte_eventdev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+int
+rte_eventdev_pmd_release(struct rte_eventdev *eventdev);
+
+/**
+ * Creates a new virtual event device and returns the pointer to that device.
+ *
+ * @param name
+ *   PMD type name
+ * @param dev_private_size
+ *   Size of event PMDs private data
+ * @param socket_id
+ *   Socket to allocate resources on.
+ *
+ * @return
+ *   - Eventdev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_eventdev *
+rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Wrapper for use by pci drivers as a .probe function to attach to an event
+ * interface.
+ */
+int rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
+			    struct rte_pci_device *pci_dev);
+
+/**
+ * Wrapper for use by pci drivers as a .remove function to detach an event
+ * interface.
+ */
+int rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_EVENTDEV_PMD_H_ */
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
new file mode 100644
index 0000000..ef40aae
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -0,0 +1,39 @@
+DPDK_17.02 {
+	global:
+
+	rte_eventdevs;
+
+	rte_event_dev_count;
+	rte_event_dev_get_dev_id;
+	rte_event_dev_socket_id;
+	rte_event_dev_info_get;
+	rte_event_dev_configure;
+	rte_event_dev_start;
+	rte_event_dev_stop;
+	rte_event_dev_close;
+	rte_event_dev_dump;
+
+	rte_event_port_default_conf_get;
+	rte_event_port_setup;
+	rte_event_port_dequeue_depth;
+	rte_event_port_enqueue_depth;
+	rte_event_port_count;
+	rte_event_port_link;
+	rte_event_port_unlink;
+	rte_event_port_links_get;
+
+	rte_event_queue_default_conf_get;
+	rte_event_queue_setup;
+	rte_event_queue_count;
+	rte_event_queue_priority;
+
+	rte_event_dequeue_wait_time;
+
+	rte_eventdev_pmd_allocate;
+	rte_eventdev_pmd_release;
+	rte_eventdev_pmd_vdev_init;
+	rte_eventdev_pmd_pci_probe;
+	rte_eventdev_pmd_pci_remove;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f75f0e2..716725a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.5.5


* [PATCH 3/4] event/skeleton: add skeleton eventdev driver
  2016-11-18  5:44 [PATCH 0/4] libeventdev API and northbound implementation Jerin Jacob
  2016-11-18  5:44 ` [PATCH 1/4] eventdev: introduce event driven programming model Jerin Jacob
  2016-11-18  5:45 ` [PATCH 2/4] eventdev: implement the northbound APIs Jerin Jacob
@ 2016-11-18  5:45 ` Jerin Jacob
  2016-11-18  5:45 ` [PATCH 4/4] app/test: unit test case for eventdev APIs Jerin Jacob
  2016-11-18 15:25 ` [PATCH 0/4] libeventdev API and northbound implementation Bruce Richardson
  4 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-18  5:45 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads,
	Jerin Jacob

The skeleton driver facilitates bootstrapping of new eventdev
drivers and provides a platform to verify the northbound
eventdev common code.

The driver supports both VDEV and PCI based eventdev
devices.
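
A minimal sketch (not part of this patch) of how an application could
instantiate the VDEV flavour of the skeleton device at runtime and look
up its device id. It assumes EVENTDEV_NAME_SKELETON_PMD is visible, as
in the unit tests of patch 4/4; the "_0" suffix comes from the driver's
unique-name generator:

#include <rte_common.h>
#include <rte_dev.h>
#include <rte_eventdev.h>

static int
create_skeleton_eventdev(void)
{
	int dev_id;

	/* Create the first skeleton vdev instance; the driver's name
	 * generator appends "_<n>" to the registered PMD name. */
	if (rte_eal_vdev_init(RTE_STR(EVENTDEV_NAME_SKELETON_PMD), NULL) != 0)
		return -1;

	dev_id = rte_event_dev_get_dev_id(
			RTE_STR(EVENTDEV_NAME_SKELETON_PMD) "_0");
	return dev_id; /* negative if the device was not found */
}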

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 MAINTAINERS                                        |   1 +
 config/common_base                                 |   8 +
 drivers/Makefile                                   |   1 +
 drivers/event/Makefile                             |  36 ++
 drivers/event/skeleton/Makefile                    |  55 +++
 .../skeleton/rte_pmd_skeleton_event_version.map    |   4 +
 drivers/event/skeleton/skeleton_eventdev.c         | 535 +++++++++++++++++++++
 drivers/event/skeleton/skeleton_eventdev.h         |  72 +++
 mk/rte.app.mk                                      |   4 +
 9 files changed, 716 insertions(+)
 create mode 100644 drivers/event/Makefile
 create mode 100644 drivers/event/skeleton/Makefile
 create mode 100644 drivers/event/skeleton/rte_pmd_skeleton_event_version.map
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.c
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.h

diff --git a/MAINTAINERS b/MAINTAINERS
index e430ca7..c594a23 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -252,6 +252,7 @@ F: examples/l2fwd-crypto/
 Eventdev API - EXPERIMENTAL
 M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
 F: lib/librte_eventdev/
+F: drivers/event/skeleton/
 
 Networking Drivers
 ------------------
diff --git a/config/common_base b/config/common_base
index 7a8814e..35aef0a 100644
--- a/config/common_base
+++ b/config/common_base
@@ -417,6 +417,14 @@ CONFIG_RTE_LIBRTE_EVENTDEV=y
 CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
 CONFIG_RTE_EVENT_MAX_DEVS=16
 CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
+
+#
+# Compile PMD for skeleton event device
+#
+CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV=y
+CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV_DEBUG=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/drivers/Makefile b/drivers/Makefile
index 81c03a8..40b8347 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -33,5 +33,6 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
+DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/event/Makefile b/drivers/event/Makefile
new file mode 100644
index 0000000..678279f
--- /dev/null
+++ b/drivers/event/Makefile
@@ -0,0 +1,36 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += skeleton
+
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/event/skeleton/Makefile b/drivers/event/skeleton/Makefile
new file mode 100644
index 0000000..e557f6d
--- /dev/null
+++ b/drivers/event/skeleton/Makefile
@@ -0,0 +1,55 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium Networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium Networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_skeleton_event.a
+
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_skeleton_event_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += skeleton_eventdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += lib/librte_eventdev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
new file mode 100644
index 0000000..31eca32
--- /dev/null
+++ b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
@@ -0,0 +1,4 @@
+DPDK_17.02 {
+
+	local: *;
+};
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
new file mode 100644
index 0000000..da9f444
--- /dev/null
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -0,0 +1,535 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_pci.h>
+#include <rte_lcore.h>
+#include <rte_vdev.h>
+
+#include "skeleton_eventdev.h"
+
+static int
+skeleton_eventdev_enqueue(void *port, struct rte_event *ev)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(port);
+
+	return -ENOTSUP;
+}
+
+static uint16_t
+skeleton_eventdev_enqueue_burst(void *port, struct rte_event ev[],
+			uint16_t nb_events)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(port);
+	RTE_SET_USED(nb_events);
+
+	return 0;
+}
+
+static bool
+skeleton_eventdev_dequeue(void *port, struct rte_event *ev, uint64_t wait)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(wait);
+
+	return 0;
+}
+
+static uint16_t
+skeleton_eventdev_dequeue_burst(void *port, struct rte_event ev[],
+		uint16_t nb_events, uint64_t wait)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(nb_events);
+	RTE_SET_USED(wait);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_info_get(struct rte_eventdev *dev,
+		struct rte_event_dev_info *dev_info)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+
+	dev_info->min_dequeue_wait_ns = 1;
+	dev_info->max_dequeue_wait_ns = 10000;
+	dev_info->dequeue_wait_ns = 25;
+	dev_info->max_event_queues = 64;
+	dev_info->max_event_queue_flows = (1ULL << 20);
+	dev_info->max_event_queue_priority_levels = 8;
+	dev_info->max_event_priority_levels = 8;
+	dev_info->max_event_ports = 32;
+	dev_info->max_event_port_dequeue_depth = 16;
+	dev_info->max_event_port_enqueue_depth = 16;
+	dev_info->max_num_events = (1ULL << 20);
+	dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+					RTE_EVENT_DEV_CAP_EVENT_QOS;
+}
+
+static int
+skeleton_eventdev_configure(struct rte_eventdev *dev)
+{
+	struct rte_eventdev_data *data = dev->data;
+	struct rte_event_dev_config *conf = &data->dev_conf;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(conf);
+	RTE_SET_USED(skel);
+
+	PMD_DRV_LOG(DEBUG, "Configured eventdev devid=%d", dev->data->dev_id);
+	return 0;
+}
+
+static int
+skeleton_eventdev_start(struct rte_eventdev *dev)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_stop(struct rte_eventdev *dev)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+}
+
+static int
+skeleton_eventdev_close(struct rte_eventdev *dev)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
+				 struct rte_event_queue_conf *queue_conf)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(queue_id);
+
+	queue_conf->nb_atomic_flows = (1ULL << 20);
+	queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
+	queue_conf->priority = RTE_EVENT_QUEUE_PRIORITY_NORMAL;
+}
+
+static void
+skeleton_eventdev_queue_release(void *queue)
+{
+	struct skeleton_queue *sq = queue;
+	PMD_DRV_FUNC_TRACE();
+
+	rte_free(sq);
+}
+
+static int
+skeleton_eventdev_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
+			      struct rte_event_queue_conf *queue_conf)
+{
+	struct skeleton_queue *sq;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(queue_conf);
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->queues[queue_id] != NULL) {
+		PMD_DRV_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				queue_id);
+		skeleton_eventdev_queue_release(dev->data->queues[queue_id]);
+		dev->data->queues[queue_id] = NULL;
+	}
+
+	/* Allocate event queue memory */
+	sq = rte_zmalloc_socket("eventdev queue",
+			sizeof(struct skeleton_queue), RTE_CACHE_LINE_SIZE,
+			dev->data->socket_id);
+	if (sq == NULL) {
+		PMD_DRV_ERR("Failed to allocate sq queue_id=%d", queue_id);
+		return -ENOMEM;
+	}
+
+	sq->queue_id = queue_id;
+
+	PMD_DRV_LOG(DEBUG, "[%d] sq=%p", queue_id, sq);
+
+	dev->data->queues[queue_id] = sq;
+	return 0;
+}
+
+static void
+skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
+				 struct rte_event_port_conf *port_conf)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(port_id);
+
+	port_conf->new_event_threshold = 32 * 1024;
+	port_conf->dequeue_depth = 16;
+	port_conf->enqueue_depth = 16;
+}
+
+static void
+skeleton_eventdev_port_release(void *port)
+{
+	struct skeleton_port *sp = port;
+	PMD_DRV_FUNC_TRACE();
+
+	rte_free(sp);
+}
+
+static int
+skeleton_eventdev_port_setup(struct rte_eventdev *dev, uint8_t port_id,
+			      struct rte_event_port_conf *port_conf)
+{
+	struct skeleton_port *sp;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(port_conf);
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->ports[port_id] != NULL) {
+		PMD_DRV_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				port_id);
+		skeleton_eventdev_port_release(dev->data->ports[port_id]);
+		dev->data->ports[port_id] = NULL;
+	}
+
+	/* Allocate event port memory */
+	sp = rte_zmalloc_socket("eventdev port",
+			sizeof(struct skeleton_port), RTE_CACHE_LINE_SIZE,
+			dev->data->socket_id);
+	if (sp == NULL) {
+		PMD_DRV_ERR("Failed to allocate sp port_id=%d", port_id);
+		return -ENOMEM;
+	}
+
+	sp->port_id = port_id;
+
+	PMD_DRV_LOG(DEBUG, "[%d] sp=%p", port_id, sp);
+
+	dev->data->ports[port_id] = sp;
+	return 0;
+}
+
+static int
+skeleton_eventdev_port_link(void *port, struct rte_event_queue_link link[],
+			    uint16_t nb_links)
+{
+	struct skeleton_port *sp = port;
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(link);
+
+	/* Linked all the queues */
+	return (int)nb_links;
+}
+
+static int
+skeleton_eventdev_port_unlink(void *port, uint8_t queues[],
+				 uint16_t nb_unlinks)
+{
+	struct skeleton_port *sp = port;
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(queues);
+
+	/* Unlinked all the queues */
+	return (int)nb_unlinks;
+
+}
+
+static void
+skeleton_eventdev_wait_time(struct rte_eventdev *dev, uint64_t ns,
+				 uint64_t *wait_ticks)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+	uint32_t scale = 1;
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	*wait_ticks = ns * scale;
+}
+
+static void
+skeleton_eventdev_dump(struct rte_eventdev *dev, FILE *f)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(f);
+}
+
+
+/* Initialize and register event driver with DPDK Application */
+static const struct rte_eventdev_ops skeleton_eventdev_ops = {
+	.dev_infos_get    = skeleton_eventdev_info_get,
+	.dev_configure    = skeleton_eventdev_configure,
+	.dev_start        = skeleton_eventdev_start,
+	.dev_stop         = skeleton_eventdev_stop,
+	.dev_close        = skeleton_eventdev_close,
+	.queue_def_conf   = skeleton_eventdev_queue_def_conf,
+	.queue_setup      = skeleton_eventdev_queue_setup,
+	.queue_release    = skeleton_eventdev_queue_release,
+	.port_def_conf    = skeleton_eventdev_port_def_conf,
+	.port_setup       = skeleton_eventdev_port_setup,
+	.port_release     = skeleton_eventdev_port_release,
+	.port_link        = skeleton_eventdev_port_link,
+	.port_unlink      = skeleton_eventdev_port_unlink,
+	.wait_time        = skeleton_eventdev_wait_time,
+	.dump             = skeleton_eventdev_dump
+};
+
+static int
+skeleton_eventdev_init(struct rte_eventdev *eventdev)
+{
+	struct rte_pci_device *pci_dev;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(eventdev);
+	int ret = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	eventdev->dev_ops       = &skeleton_eventdev_ops;
+	eventdev->schedule      = NULL;
+	eventdev->enqueue       = skeleton_eventdev_enqueue;
+	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
+	eventdev->dequeue       = skeleton_eventdev_dequeue;
+	eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst;
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	pci_dev = eventdev->pci_dev;
+
+	skel->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
+	if (!skel->reg_base) {
+		PMD_DRV_ERR("Failed to map BAR0");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	skel->device_id = pci_dev->id.device_id;
+	skel->vendor_id = pci_dev->id.vendor_id;
+	skel->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	skel->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	PMD_DRV_LOG(DEBUG, "pci device (%x:%x) %u:%u:%u:%u",
+			pci_dev->id.vendor_id, pci_dev->id.device_id,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+
+	PMD_DRV_LOG(INFO, "dev_id=%d socket_id=%d (%x:%x)",
+		eventdev->data->dev_id, eventdev->data->socket_id,
+		skel->vendor_id, skel->device_id);
+
+fail:
+	return ret;
+}
+
+/* PCI based event device */
+
+#define EVENTDEV_SKEL_VENDOR_ID         0x177d
+#define EVENTDEV_SKEL_PRODUCT_ID        0x0001
+
+static const struct rte_pci_id pci_id_skeleton_map[] = {
+	{
+		RTE_PCI_DEVICE(EVENTDEV_SKEL_VENDOR_ID,
+			       EVENTDEV_SKEL_PRODUCT_ID)
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_eventdev_driver pci_eventdev_skeleton_pmd = {
+	.pci_drv = {
+		.id_table = pci_id_skeleton_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+		.probe = rte_eventdev_pmd_pci_probe,
+		.remove = rte_eventdev_pmd_pci_remove,
+	},
+	.eventdev_init = skeleton_eventdev_init,
+	.dev_private_size = sizeof(struct skeleton_eventdev),
+};
+
+RTE_PMD_REGISTER_PCI(event_skeleton_pci, pci_eventdev_skeleton_pmd.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(event_skeleton_pci, pci_id_skeleton_map);
+
+/* VDEV based event device */
+
+/**
+ * Global static counter used to create a unique name for each skeleton
+ * event device.
+ */
+static unsigned int skeleton_unique_id;
+
+static inline int
+skeleton_create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", RTE_STR(EVENTDEV_NAME_SKELETON_PMD),
+			skeleton_unique_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+static int
+skeleton_eventdev_create(int socket_id)
+{
+	struct rte_eventdev *eventdev;
+	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
+
+	/* Create a unique device name */
+	if (skeleton_create_unique_device_name(eventdev_name,
+			RTE_EVENTDEV_NAME_MAX_LEN) != 0) {
+		PMD_DRV_ERR("Failed to create unique eventdev name");
+		return -EINVAL;
+	}
+
+	eventdev = rte_eventdev_pmd_vdev_init(eventdev_name,
+			sizeof(struct skeleton_eventdev), socket_id);
+	if (eventdev == NULL) {
+		PMD_DRV_ERR("Failed to create eventdev vdev");
+		goto fail;
+	}
+
+	eventdev->dev_ops       = &skeleton_eventdev_ops;
+	eventdev->schedule      = NULL;
+	eventdev->enqueue       = skeleton_eventdev_enqueue;
+	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
+	eventdev->dequeue       = skeleton_eventdev_dequeue;
+	eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst;
+
+	return 0;
+fail:
+	return -EFAULT;
+}
+
+static int
+skeleton_eventdev_probe(const char *name, __rte_unused const char *input_args)
+{
+	RTE_LOG(INFO, PMD, "Initializing %s on NUMA node %d\n", name,
+			rte_socket_id());
+	return skeleton_eventdev_create(rte_socket_id());
+}
+
+static int
+skeleton_eventdev_remove(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	PMD_DRV_LOG(INFO, "Closing %s on NUMA node %d", name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_vdev_driver vdev_eventdev_skeleton_pmd = {
+	.probe = skeleton_eventdev_probe,
+	.remove = skeleton_eventdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(EVENTDEV_NAME_SKELETON_PMD, vdev_eventdev_skeleton_pmd);
diff --git a/drivers/event/skeleton/skeleton_eventdev.h b/drivers/event/skeleton/skeleton_eventdev.h
new file mode 100644
index 0000000..872ba01
--- /dev/null
+++ b/drivers/event/skeleton/skeleton_eventdev.h
@@ -0,0 +1,72 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __SKELETON_EVENTDEV_H__
+#define __SKELETON_EVENTDEV_H__
+
+#include <rte_eventdev_pmd.h>
+
+#ifdef RTE_LIBRTE_PMD_SKELETON_EVENTDEV_DEBUG
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#define PMD_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#define PMD_DRV_ERR(fmt, args...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ## args)
+
+struct skeleton_eventdev {
+	uintptr_t reg_base;
+	uint16_t device_id;
+	uint16_t vendor_id;
+	uint16_t subsystem_device_id;
+	uint16_t subsystem_vendor_id;
+} __rte_cache_aligned;
+
+struct skeleton_queue {
+	uint8_t queue_id;
+} __rte_cache_aligned;
+
+struct skeleton_port {
+	uint8_t port_id;
+} __rte_cache_aligned;
+
+static inline struct skeleton_eventdev *
+skeleton_pmd_priv(struct rte_eventdev *eventdev)
+{
+	return eventdev->data->dev_private;
+}
+
+#endif /* __SKELETON_EVENTDEV_H__ */
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 716725a..8341c13 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -148,6 +148,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
+ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += -lrte_pmd_skeleton_event
+endif # CONFIG_RTE_LIBRTE_EVENTDEV
+
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
 
 _LDLIBS-y += --no-whole-archive
-- 
2.5.5


* [PATCH 4/4] app/test: unit test case for eventdev APIs
  2016-11-18  5:44 [PATCH 0/4] libeventdev API and northbound implementation Jerin Jacob
                   ` (2 preceding siblings ...)
  2016-11-18  5:45 ` [PATCH 3/4] event/skeleton: add skeleton eventdev driver Jerin Jacob
@ 2016-11-18  5:45 ` Jerin Jacob
  2016-11-18 15:25 ` [PATCH 0/4] libeventdev API and northbound implementation Bruce Richardson
  4 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-18  5:45 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads,
	Jerin Jacob

This commit adds basic unit tests for the eventdev API.

commands to run the test app:
./build/app/test -c 2
RTE>>eventdev_common_autotest
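
As a rough illustration (not part of this patch), a new case can be
added to the suite by writing a function that returns TEST_SUCCESS and
registering it in eventdev_common_testsuite; test_eventdev_example is a
hypothetical name used only for this sketch:

static int
test_eventdev_example(void)
{
	struct rte_event_dev_info info;
	int ret;

	/* Query the device created by testsuite_setup() */
	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
	return TEST_SUCCESS;
}

/* and in eventdev_common_testsuite.unit_test_cases:
 *	TEST_CASE_ST(eventdev_configure_setup, NULL, test_eventdev_example),
 */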

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 MAINTAINERS              |   1 +
 app/test/Makefile        |   2 +
 app/test/test_eventdev.c | 776 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 779 insertions(+)
 create mode 100644 app/test/test_eventdev.c

diff --git a/MAINTAINERS b/MAINTAINERS
index c594a23..887f133 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -252,6 +252,7 @@ F: examples/l2fwd-crypto/
 Eventdev API - EXPERIMENTAL
 M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
 F: lib/librte_eventdev/
+F: app/test/test_eventdev*
 F: drivers/event/skeleton/
 
 Networking Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index 5be023a..e28c079 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -197,6 +197,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += test_eventdev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 CFLAGS += -O3
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
new file mode 100644
index 0000000..e876804
--- /dev/null
+++ b/app/test/test_eventdev.c
@@ -0,0 +1,776 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Cavium networks. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Cavium networks nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_eventdev.h>
+#include <rte_cryptodev.h>
+
+#include "test.h"
+
+#define TEST_DEV_NAME EVENTDEV_NAME_SKELETON_PMD
+
+static inline uint8_t
+test_dev_id_get(void)
+{
+	return rte_event_dev_get_dev_id(RTE_STR(TEST_DEV_NAME)"_0");
+}
+
+static int
+testsuite_setup(void)
+{
+	return rte_eal_vdev_init(RTE_STR(TEST_DEV_NAME), NULL);
+}
+
+static void
+testsuite_teardown(void)
+{
+}
+
+static int
+test_eventdev_count(void)
+{
+	uint8_t count;
+	count = rte_event_dev_count();
+	TEST_ASSERT(count > 0, "Invalid eventdev count %" PRIu8, count);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_get_dev_id(void)
+{
+	int ret;
+	ret = rte_event_dev_get_dev_id(RTE_STR(TEST_DEV_NAME)"_0");
+	TEST_ASSERT(ret >= 0, "Failed to get dev_id %d", ret);
+	ret = rte_event_dev_get_dev_id("not_a_valid_ethdev_driver");
+	TEST_ASSERT_FAIL(ret, "Expected <0 for invalid dev name ret=%d", ret);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_socket_id(void)
+{
+	int ret, socket_id;
+	ret = rte_event_dev_get_dev_id(RTE_STR(TEST_DEV_NAME)"_0");
+	socket_id = rte_event_dev_socket_id(ret);
+	TEST_ASSERT(socket_id != -EINVAL, "Failed to get socket_id %d",
+				socket_id);
+	socket_id = rte_event_dev_socket_id(RTE_EVENT_MAX_DEVS);
+	TEST_ASSERT(socket_id == -EINVAL, "Expected -EINVAL %d", socket_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_info_get(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+	ret = rte_event_dev_info_get(test_dev_id_get(), NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	TEST_ASSERT(info.max_event_ports > 0,
+			"Not enough event ports %d", info.max_event_ports);
+	TEST_ASSERT(info.max_event_queues > 0,
+			"Not enough event queues %d", info.max_event_queues);
+	return TEST_SUCCESS;
+}
+
+static inline void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+			struct rte_event_dev_info *info)
+{
+	memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+	dev_conf->dequeue_wait_ns = info->min_dequeue_wait_ns;
+	dev_conf->nb_event_ports = info->max_event_ports;
+	dev_conf->nb_event_queues = info->max_event_queues;
+	dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+	dev_conf->nb_event_port_dequeue_depth =
+			info->max_event_port_dequeue_depth;
+	dev_conf->nb_event_port_enqueue_depth =
+			info->max_event_port_enqueue_depth;
+	dev_conf->nb_events_limit =
+			info->max_num_events;
+}
+
+static int
+test_ethdev_config_run(struct rte_event_dev_config *dev_conf,
+		struct rte_event_dev_info *info,
+		void (*fn)(struct rte_event_dev_config *dev_conf,
+			struct rte_event_dev_info *info))
+{
+	devconf_set_default_sane_values(dev_conf, info);
+	fn(dev_conf, info);
+	return rte_event_dev_configure(test_dev_id_get(), dev_conf);
+}
+
+static void
+min_dequeue_limit(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->dequeue_wait_ns = info->min_dequeue_wait_ns - 1;
+}
+
+static void
+max_dequeue_limit(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->dequeue_wait_ns = info->max_dequeue_wait_ns + 1;
+}
+
+static void
+max_events_limit(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_events_limit  = info->max_num_events + 1;
+}
+
+static void
+max_event_ports(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_ports = info->max_event_ports + 1;
+}
+
+static void
+max_event_queues(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_queues = info->max_event_queues + 1;
+}
+
+static void
+max_event_queue_flows(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_queue_flows = info->max_event_queue_flows + 1;
+}
+
+static void
+max_event_port_dequeue_depth(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_port_dequeue_depth =
+		info->max_event_port_dequeue_depth + 1;
+}
+
+static void
+max_event_port_enqueue_depth(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_port_enqueue_depth =
+		info->max_event_port_enqueue_depth + 1;
+}
+
+
+static int
+test_eventdev_configure(void)
+{
+	int ret;
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_dev_info info;
+	ret = rte_event_dev_configure(test_dev_id_get(), NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	/* Check limits */
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, min_dequeue_limit),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_dequeue_limit),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_events_limit),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_event_ports),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_event_queues),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_event_queue_flows),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info,
+			max_event_port_dequeue_depth),
+			 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info,
+		max_event_port_enqueue_depth),
+		 "Config negative test failed");
+
+	/* Positive case */
+	devconf_set_default_sane_values(&dev_conf, &info);
+	ret = rte_event_dev_configure(test_dev_id_get(), &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	/* re-configure */
+	devconf_set_default_sane_values(&dev_conf, &info);
+	dev_conf.nb_event_ports = info.max_event_ports/2;
+	dev_conf.nb_event_queues = info.max_event_queues/2;
+	ret = rte_event_dev_configure(test_dev_id_get(), &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to re configure eventdev");
+
+	/* re-configure back to max_event_queues and max_event_ports */
+	devconf_set_default_sane_values(&dev_conf, &info);
+	ret = rte_event_dev_configure(test_dev_id_get(), &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to re-configure eventdev");
+
+	return TEST_SUCCESS;
+
+}
+
+static int
+eventdev_configure_setup(void)
+{
+	int ret;
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_dev_info info;
+
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	devconf_set_default_sane_values(&dev_conf, &info);
+	ret = rte_event_dev_configure(test_dev_id_get(), &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_default_conf_get(void)
+{
+	int i, ret;
+	struct rte_event_queue_conf qconf;
+
+	ret = rte_event_queue_default_conf_get(test_dev_id_get(), 0, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	for (i = 0; i < rte_event_queue_count(test_dev_id_get()); i++) {
+		ret = rte_event_queue_default_conf_get(test_dev_id_get(), i,
+						 &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d info", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_setup(void)
+{
+	int i, ret;
+	struct rte_event_dev_info info;
+	struct rte_event_queue_conf qconf;
+
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	/* Negative cases */
+	ret = rte_event_queue_default_conf_get(test_dev_id_get(), 0, &qconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 info");
+	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_ALL_TYPES &
+		 RTE_EVENT_QUEUE_CFG_TYPE_MASK);
+	qconf.nb_atomic_flows = info.max_event_queue_flows + 1;
+	ret = rte_event_queue_setup(test_dev_id_get(), 0, &qconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	qconf.nb_atomic_flows = info.max_event_queue_flows;
+	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY &
+		 RTE_EVENT_QUEUE_CFG_TYPE_MASK);
+	qconf.nb_atomic_order_sequences = info.max_event_queue_flows + 1;
+	ret = rte_event_queue_setup(test_dev_id_get(), 0, &qconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_queue_setup(test_dev_id_get(), info.max_event_queues,
+					&qconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	/* Positive case */
+	ret = rte_event_queue_default_conf_get(test_dev_id_get(), 0, &qconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 info");
+	ret = rte_event_queue_setup(test_dev_id_get(), 0, &qconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup queue0");
+
+
+	for (i = 0; i < rte_event_queue_count(test_dev_id_get()); i++) {
+		ret = rte_event_queue_setup(test_dev_id_get(), i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_count(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	TEST_ASSERT_EQUAL(rte_event_queue_count(test_dev_id_get()),
+		 info.max_event_queues, "Wrong queue count");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_priority(void)
+{
+	int i, ret;
+	struct rte_event_dev_info info;
+	struct rte_event_queue_conf qconf;
+	uint8_t priority;
+
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	for (i = 0; i < rte_event_queue_count(test_dev_id_get()); i++) {
+		ret = rte_event_queue_default_conf_get(test_dev_id_get(), i,
+					&qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		qconf.priority = i %  RTE_EVENT_QUEUE_PRIORITY_LOWEST;
+		ret = rte_event_queue_setup(test_dev_id_get(), i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < rte_event_queue_count(test_dev_id_get()); i++) {
+		priority =  rte_event_queue_priority(test_dev_id_get(), i);
+		if (info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
+			TEST_ASSERT_EQUAL(priority,
+			 i %  RTE_EVENT_QUEUE_PRIORITY_LOWEST,
+			 "Wrong priority value for queue%d", i);
+		else
+			TEST_ASSERT_EQUAL(priority,
+			 RTE_EVENT_QUEUE_PRIORITY_NORMAL,
+			 "Wrong priority value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_port_default_conf_get(void)
+{
+	int i, ret;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_port_default_conf_get(test_dev_id_get(), 0, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_port_default_conf_get(test_dev_id_get(),
+			rte_event_port_count(test_dev_id_get()) + 1, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	for (i = 0; i < rte_event_port_count(test_dev_id_get()); i++) {
+		ret = rte_event_port_default_conf_get(test_dev_id_get(), i,
+							&pconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get port%d info", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_port_setup(void)
+{
+	int i, ret;
+	struct rte_event_dev_info info;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	/* Negative cases */
+	ret = rte_event_port_default_conf_get(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	pconf.new_event_threshold = info.max_num_events + 1;
+	ret = rte_event_port_setup(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	pconf.new_event_threshold = info.max_num_events;
+	pconf.dequeue_depth = info.max_event_port_dequeue_depth + 1;
+	ret = rte_event_port_setup(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	pconf.dequeue_depth = info.max_event_port_dequeue_depth;
+	pconf.enqueue_depth = info.max_event_port_enqueue_depth + 1;
+	ret = rte_event_port_setup(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_port_setup(test_dev_id_get(), info.max_event_ports,
+					&pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	/* Positive case */
+	ret = rte_event_port_default_conf_get(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+
+
+	for (i = 0; i < rte_event_port_count(test_dev_id_get()); i++) {
+		ret = rte_event_port_setup(test_dev_id_get(), i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup port%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_dequeue_depth(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	ret = rte_event_port_default_conf_get(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+
+	TEST_ASSERT_EQUAL(rte_event_port_dequeue_depth(test_dev_id_get(), 0),
+		 pconf.dequeue_depth, "Wrong port dequeue depth");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_enqueue_depth(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	ret = rte_event_port_default_conf_get(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(test_dev_id_get(), 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+
+	TEST_ASSERT_EQUAL(rte_event_port_enqueue_depth(test_dev_id_get(), 0),
+		 pconf.enqueue_depth, "Wrong port enqueue depth");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_port_count(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+
+	ret = rte_event_dev_info_get(test_dev_id_get(), &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	TEST_ASSERT_EQUAL(rte_event_port_count(test_dev_id_get()),
+		 info.max_event_ports, "Wrong port count");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_wait_time(void)
+{
+	int ret;
+	uint64_t wait_ticks;
+
+	ret = rte_event_dequeue_wait_time(test_dev_id_get(), 100, &wait_ticks);
+	TEST_ASSERT_SUCCESS(ret, "Fail to get wait_time");
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_eventdev_start_stop(void)
+{
+	int i, ret;
+
+	ret = eventdev_configure_setup();
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	for (i = 0; i < rte_event_queue_count(test_dev_id_get()); i++) {
+		ret = rte_event_queue_setup(test_dev_id_get(), i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < rte_event_port_count(test_dev_id_get()); i++) {
+		ret = rte_event_port_setup(test_dev_id_get(), i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup port%d", i);
+	}
+
+	ret = rte_event_dev_start(test_dev_id_get());
+	TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", test_dev_id_get());
+
+	rte_event_dev_stop(test_dev_id_get());
+	return TEST_SUCCESS;
+}
+
+
+static int
+eventdev_setup_device(void)
+{
+	int i, ret;
+
+	ret = eventdev_configure_setup();
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	for (i = 0; i < rte_event_queue_count(test_dev_id_get()); i++) {
+		ret = rte_event_queue_setup(test_dev_id_get(), i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < rte_event_port_count(test_dev_id_get()); i++) {
+		ret = rte_event_port_setup(test_dev_id_get(), i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup port%d", i);
+	}
+
+	ret = rte_event_dev_start(test_dev_id_get());
+	TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", test_dev_id_get());
+
+	return TEST_SUCCESS;
+}
+
+static void
+eventdev_stop_device(void)
+{
+	rte_event_dev_stop(test_dev_id_get());
+}
+
+static int
+test_eventdev_link(void)
+{
+	int ret, nb_queues, i;
+	struct rte_event_queue_link links[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+	ret = rte_event_port_link(test_dev_id_get(), 0, NULL, 0);
+	TEST_ASSERT(ret >= 0, "Failed to link with NULL device%d",
+				 test_dev_id_get());
+
+	nb_queues = rte_event_queue_count(test_dev_id_get());
+	for (i = 0; i < nb_queues; i++) {
+		links[i].queue_id = i;
+		links[i].priority = RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
+	}
+
+	ret = rte_event_port_link(test_dev_id_get(), 0, links, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to link(device%d) ret=%d",
+				 test_dev_id_get(), ret);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_unlink(void)
+{
+	int ret, nb_queues, i;
+	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+	ret = rte_event_port_unlink(test_dev_id_get(), 0, NULL, 0);
+	TEST_ASSERT(ret >= 0, "Failed to unlink with NULL device%d",
+				 test_dev_id_get());
+
+	nb_queues = rte_event_queue_count(test_dev_id_get());
+	for (i = 0; i < nb_queues; i++)
+		queues[i] = i;
+
+
+	ret = rte_event_port_unlink(test_dev_id_get(), 0, queues, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 test_dev_id_get(), ret);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_link_get(void)
+{
+	int ret, nb_queues, i;
+	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	struct rte_event_queue_link links[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+	/* link all queues */
+	ret = rte_event_port_link(test_dev_id_get(), 0, NULL, 0);
+	TEST_ASSERT(ret >= 0, "Failed to link with NULL device%d",
+				 test_dev_id_get());
+
+	nb_queues = rte_event_queue_count(test_dev_id_get());
+	for (i = 0; i < nb_queues; i++)
+		queues[i] = i;
+
+	ret = rte_event_port_unlink(test_dev_id_get(), 0, queues, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 test_dev_id_get(), ret);
+
+	ret = rte_event_port_links_get(test_dev_id_get(), 0, links);
+	TEST_ASSERT(ret == 0, "(%d)Wrong link get=%d", test_dev_id_get(), ret);
+
+	/* link all queues and get the links */
+	nb_queues = rte_event_queue_count(test_dev_id_get());
+	for (i = 0; i < nb_queues; i++) {
+		links[i].queue_id = i;
+		links[i].priority = RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
+	}
+	ret = rte_event_port_link(test_dev_id_get(), 0, links, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to link(device%d) ret=%d",
+				 test_dev_id_get(), ret);
+	ret = rte_event_port_links_get(test_dev_id_get(), 0, links);
+	TEST_ASSERT(ret == nb_queues, "(%d)Wrong link get ret=%d expected=%d",
+				 test_dev_id_get(), ret, nb_queues);
+	/* unlink all*/
+	ret = rte_event_port_unlink(test_dev_id_get(), 0, NULL, 0);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 test_dev_id_get(), ret);
+	/* link just one queue */
+	links[0].queue_id = 0;
+	links[0].priority = RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
+
+	ret = rte_event_port_link(test_dev_id_get(), 0, links, 1);
+	TEST_ASSERT(ret == 1, "Failed to link(device%d) ret=%d",
+				 test_dev_id_get(), ret);
+	ret = rte_event_port_links_get(test_dev_id_get(), 0, links);
+	TEST_ASSERT(ret == 1, "(%d)Wrong link get ret=%d expected=%d",
+					test_dev_id_get(), ret, 1);
+	/* unlink all*/
+	ret = rte_event_port_unlink(test_dev_id_get(), 0, NULL, 0);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 test_dev_id_get(), ret);
+	/* 4links and 2 unlinks */
+	nb_queues = rte_event_queue_count(test_dev_id_get());
+	if (nb_queues >= 4) {
+		for (i = 0; i < 4; i++) {
+			links[i].queue_id = i;
+			links[i].priority = 0x40;
+		}
+		ret = rte_event_port_link(test_dev_id_get(), 0, links, 4);
+		TEST_ASSERT(ret == 4, "Failed to link(device%d) ret=%d",
+					 test_dev_id_get(), ret);
+
+		for (i = 0; i < 2; i++)
+			queues[i] = i;
+
+		ret = rte_event_port_unlink(test_dev_id_get(), 0, queues, 2);
+		TEST_ASSERT(ret == 2, "Failed to unlink(device%d) ret=%d",
+					 test_dev_id_get(), ret);
+		ret = rte_event_port_links_get(test_dev_id_get(), 0, links);
+		TEST_ASSERT(ret == 2, "(%d)Wrong link get ret=%d expected=%d",
+						test_dev_id_get(), ret, 2);
+		TEST_ASSERT(links[0].queue_id == 2, "ret=%d expected=%d",
+					ret, 2);
+		TEST_ASSERT(links[0].priority == 0x40, "ret=%d expected=%d",
+					ret, 0x40);
+		TEST_ASSERT(links[1].queue_id == 3, "ret=%d expected=%d",
+					ret, 3);
+		TEST_ASSERT(links[1].priority == 0x40, "ret=%d expected=%d",
+					ret, 0x40);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_close(void)
+{
+	rte_event_dev_stop(test_dev_id_get());
+	return rte_event_dev_close(test_dev_id_get());
+}
+
+static struct unit_test_suite eventdev_common_testsuite  = {
+	.suite_name = "eventdev common code unit test suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_count),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_get_dev_id),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_socket_id),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_info_get),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_configure),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_default_conf_get),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_setup),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_count),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_priority),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_port_default_conf_get),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_port_setup),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_dequeue_depth),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_enqueue_depth),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_port_count),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_wait_time),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_start_stop),
+		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
+			test_eventdev_link),
+		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
+			test_eventdev_unlink),
+		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
+			test_eventdev_link_get),
+		TEST_CASE_ST(eventdev_setup_device, NULL,
+			test_eventdev_close),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_eventdev_common(void)
+{
+	return unit_test_suite_runner(&eventdev_common_testsuite);
+}
+
+REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* Re: [PATCH 0/4] libeventdev API and northbound implementation
  2016-11-18  5:44 [PATCH 0/4] libeventdev API and northbound implementation Jerin Jacob
                   ` (3 preceding siblings ...)
  2016-11-18  5:45 ` [PATCH 4/4] app/test: unit test case for eventdev APIs Jerin Jacob
@ 2016-11-18 15:25 ` Bruce Richardson
  2016-11-18 16:04   ` Bruce Richardson
  4 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-11-18 15:25 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, harry.van.haaren, hemant.agrawal, gage.eads

On Fri, Nov 18, 2016 at 11:14:58AM +0530, Jerin Jacob wrote:
> As previously discussed in RFC v1 [1], RFC v2 [2], with changes
> described in [3] (also pasted below), here is the first non-draft series
> for this new API.
> 
> [1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
> [2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
> [3] http://dpdk.org/ml/archives/dev/2016-October/048196.html
> 
> Changes since RFC v2:
> 
> - Updated the documentation to define the need for this library[Jerin]
> - Added RTE_EVENT_QUEUE_CFG_*_ONLY configuration parameters in
>   struct rte_event_queue_conf to enable optimized sw implementation [Bruce]
> - Introduced RTE_EVENT_OP* ops [Bruce]
> - Added nb_event_queue_flows,nb_event_port_dequeue_depth, nb_event_port_enqueue_depth
>   in rte_event_dev_configure() like ethdev and crypto library[Jerin]
> - Removed rte_event_release() and replaced with RTE_EVENT_OP_RELEASE ops to
>   reduce fast path APIs and it is redundant too[Jerin]
> - In the view of better application portability, Removed pin_event
>   from rte_event_enqueue as it is just hint and Intel/NXP can not support it[Jerin]
> - Added rte_event_port_links_get()[Jerin]
> - Added rte_event_dev_dump[Harry]
> 
> Notes:
> 
> - This patch set is check-patch clean with an exception that
> 02/04 has one WARNING:MACRO_WITH_FLOW_CONTROL
> - Looking forward to getting additional maintainers for libeventdev
> 
> 
> Possible next steps:
> 1) Review this patch set
> 2) Integrate Intel's SW driver[http://dpdk.org/dev/patchwork/patch/17049/]
> 3) Review proposed examples/eventdev_pipeline application[http://dpdk.org/dev/patchwork/patch/17053/]
> 4) Review proposed functional tests[http://dpdk.org/dev/patchwork/patch/17051/]
> 5) Cavium's HW based eventdev driver
> 
> I am planning to work on (3),(4) and (5)
> 
Thanks Jerin,

we'll review and get back to you with any comments or feedback on (1), and
obviously start working on item (2) also! :-)

I'm also wondering whether we should have a staging tree for this work to
make interaction between us easier. Although this may not be
finalised enough for the 17.02 release, do you think having a
dpdk-eventdev-next tree would be a help? My thinking is that once we get
the eventdev library itself in reasonable shape following our review, we
could commit that and make any changes thereafter as new patches, rather
than constantly respinning the same set. It also gives us a clean git
tree to base the respective driver implementations on from our two sides.

Thomas, any thoughts here on your end - or from anyone else?

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 0/4] libeventdev API and northbound implementation
  2016-11-18 15:25 ` [PATCH 0/4] libeventdev API and northbound implementation Bruce Richardson
@ 2016-11-18 16:04   ` Bruce Richardson
  2016-11-18 19:27     ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-11-18 16:04 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, harry.van.haaren, hemant.agrawal, gage.eads, thomas.monjalon

+Thomas

On Fri, Nov 18, 2016 at 03:25:18PM +0000, Bruce Richardson wrote:
> On Fri, Nov 18, 2016 at 11:14:58AM +0530, Jerin Jacob wrote:
> > As previously discussed in RFC v1 [1], RFC v2 [2], with changes
> > described in [3] (also pasted below), here is the first non-draft series
> > for this new API.
> > 
> > [1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
> > [2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
> > [3] http://dpdk.org/ml/archives/dev/2016-October/048196.html
> > 
> > Changes since RFC v2:
> > 
> > - Updated the documentation to define the need for this library[Jerin]
> > - Added RTE_EVENT_QUEUE_CFG_*_ONLY configuration parameters in
> >   struct rte_event_queue_conf to enable optimized sw implementation [Bruce]
> > - Introduced RTE_EVENT_OP* ops [Bruce]
> > - Added nb_event_queue_flows,nb_event_port_dequeue_depth, nb_event_port_enqueue_depth
> >   in rte_event_dev_configure() like ethdev and crypto library[Jerin]
> > - Removed rte_event_release() and replaced with RTE_EVENT_OP_RELEASE ops to
> >   reduce fast path APIs and it is redundant too[Jerin]
> > - In the view of better application portability, Removed pin_event
> >   from rte_event_enqueue as it is just hint and Intel/NXP can not support it[Jerin]
> > - Added rte_event_port_links_get()[Jerin]
> > - Added rte_event_dev_dump[Harry]
> > 
> > Notes:
> > 
> > - This patch set is check-patch clean with an exception that
> > 02/04 has one WARNING:MACRO_WITH_FLOW_CONTROL
> > - Looking forward to getting additional maintainers for libeventdev
> > 
> > 
> > Possible next steps:
> > 1) Review this patch set
> > 2) Integrate Intel's SW driver[http://dpdk.org/dev/patchwork/patch/17049/]
> > 3) Review proposed examples/eventdev_pipeline application[http://dpdk.org/dev/patchwork/patch/17053/]
> > 4) Review proposed functional tests[http://dpdk.org/dev/patchwork/patch/17051/]
> > 5) Cavium's HW based eventdev driver
> > 
> > I am planning to work on (3),(4) and (5)
> > 
> Thanks Jerin,
> 
> we'll review and get back to you with any comments or feedback (1), and
> obviously start working on item (2) also! :-)
> 
> I'm also wonder whether we should have a staging tree for this work to
> make interaction between us easier. Although this may not be
> finalised enough for 17.02 release, do you think having an
> dpdk-eventdev-next tree would be a help? My thinking is that once we get
> the eventdev library itself in reasonable shape following our review, we
> could commit that and make any changes thereafter as new patches, rather
> than constantly respinning the same set. It also gives us a clean git
> tree to base the respective driver implementations on from our two sides.
> 
> Thomas, any thoughts here on your end - or from anyone else?
> 
> Regards,
> /Bruce
> 

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 0/4] libeventdev API and northbound implementation
  2016-11-18 16:04   ` Bruce Richardson
@ 2016-11-18 19:27     ` Jerin Jacob
  2016-11-21  9:40       ` Thomas Monjalon
  2016-11-22  2:00       ` Yuanhan Liu
  0 siblings, 2 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-18 19:27 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, harry.van.haaren, hemant.agrawal, gage.eads, thomas.monjalon

On Fri, Nov 18, 2016 at 04:04:29PM +0000, Bruce Richardson wrote:
> +Thomas
> 
> On Fri, Nov 18, 2016 at 03:25:18PM +0000, Bruce Richardson wrote:
> > On Fri, Nov 18, 2016 at 11:14:58AM +0530, Jerin Jacob wrote:
> > > As previously discussed in RFC v1 [1], RFC v2 [2], with changes
> > > described in [3] (also pasted below), here is the first non-draft series
> > > for this new API.
> > > 
> > > [1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
> > > [2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
> > > [3] http://dpdk.org/ml/archives/dev/2016-October/048196.html
> > > 
> > > Changes since RFC v2:
> > > 
> > > - Updated the documentation to define the need for this library[Jerin]
> > > - Added RTE_EVENT_QUEUE_CFG_*_ONLY configuration parameters in
> > >   struct rte_event_queue_conf to enable optimized sw implementation [Bruce]
> > > - Introduced RTE_EVENT_OP* ops [Bruce]
> > > - Added nb_event_queue_flows,nb_event_port_dequeue_depth, nb_event_port_enqueue_depth
> > >   in rte_event_dev_configure() like ethdev and crypto library[Jerin]
> > > - Removed rte_event_release() and replaced with RTE_EVENT_OP_RELEASE ops to
> > >   reduce fast path APIs and it is redundant too[Jerin]
> > > - In the view of better application portability, Removed pin_event
> > >   from rte_event_enqueue as it is just hint and Intel/NXP can not support it[Jerin]
> > > - Added rte_event_port_links_get()[Jerin]
> > > - Added rte_event_dev_dump[Harry]
> > > 
> > > Notes:
> > > 
> > > - This patch set is check-patch clean with an exception that
> > > 02/04 has one WARNING:MACRO_WITH_FLOW_CONTROL
> > > - Looking forward to getting additional maintainers for libeventdev
> > > 
> > > 
> > > Possible next steps:
> > > 1) Review this patch set
> > > 2) Integrate Intel's SW driver[http://dpdk.org/dev/patchwork/patch/17049/]
> > > 3) Review proposed examples/eventdev_pipeline application[http://dpdk.org/dev/patchwork/patch/17053/]
> > > 4) Review proposed functional tests[http://dpdk.org/dev/patchwork/patch/17051/]
> > > 5) Cavium's HW based eventdev driver
> > > 
> > > I am planning to work on (3),(4) and (5)
> > > 
> > Thanks Jerin,
> > 
> > we'll review and get back to you with any comments or feedback (1), and
> > obviously start working on item (2) also! :-)
> > 
> > I'm also wonder whether we should have a staging tree for this work to
> > make interaction between us easier. Although this may not be
> > finalised enough for 17.02 release, do you think having an
> > dpdk-eventdev-next tree would be a help? My thinking is that once we get
> > the eventdev library itself in reasonable shape following our review, we
> > could commit that and make any changes thereafter as new patches, rather
> > than constantly respinning the same set. It also gives us a clean git
> > tree to base the respective driver implementations on from our two sides.
> > 
> > Thomas, any thoughts here on your end - or from anyone else?

I was thinking more or less along the same lines. To avoid re-spinning the
same set, it would be better to mark the libeventdev library as EXPERIMENTAL
and commit it somewhere, either on dpdk-eventdev-next or in the main tree.

I think the EXPERIMENTAL status can be changed only when:
- At least two event drivers are available
- The functional test applications work fine with at least two drivers
- A portable example application showcases the features of the library
- eventdev is integrated with another DPDK subsystem such as ethdev

Jerin

> > 
> > Regards,
> > /Bruce
> > 

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 0/4] libeventdev API and northbound implementation
  2016-11-18 19:27     ` Jerin Jacob
@ 2016-11-21  9:40       ` Thomas Monjalon
  2016-11-21  9:57         ` Bruce Richardson
  2016-11-22  2:00       ` Yuanhan Liu
  1 sibling, 1 reply; 109+ messages in thread
From: Thomas Monjalon @ 2016-11-21  9:40 UTC (permalink / raw)
  To: Jerin Jacob, Bruce Richardson
  Cc: dev, harry.van.haaren, hemant.agrawal, gage.eads

2016-11-19 00:57, Jerin Jacob:
> On Fri, Nov 18, 2016 at 04:04:29PM +0000, Bruce Richardson wrote:
> > On Fri, Nov 18, 2016 at 03:25:18PM +0000, Bruce Richardson wrote:
> > > On Fri, Nov 18, 2016 at 11:14:58AM +0530, Jerin Jacob wrote:
> > > > Possible next steps:
> > > > 1) Review this patch set
> > > > 2) Integrate Intel's SW driver[http://dpdk.org/dev/patchwork/patch/17049/]
> > > > 3) Review proposed examples/eventdev_pipeline application[http://dpdk.org/dev/patchwork/patch/17053/]
> > > > 4) Review proposed functional tests[http://dpdk.org/dev/patchwork/patch/17051/]
> > > > 5) Cavium's HW based eventdev driver
> > > > 
> > > > I am planning to work on (3),(4) and (5)
> > > > 
> > > Thanks Jerin,
> > > 
> > > we'll review and get back to you with any comments or feedback (1), and
> > > obviously start working on item (2) also! :-)
> > > 
> > > I'm also wonder whether we should have a staging tree for this work to
> > > make interaction between us easier. Although this may not be
> > > finalised enough for 17.02 release, do you think having an
> > > dpdk-eventdev-next tree would be a help? My thinking is that once we get
> > > the eventdev library itself in reasonable shape following our review, we
> > > could commit that and make any changes thereafter as new patches, rather
> > > than constantly respinning the same set. It also gives us a clean git
> > > tree to base the respective driver implementations on from our two sides.
> > > 
> > > Thomas, any thoughts here on your end - or from anyone else?
> 
> I was thinking more or less along the same lines. To avoid re-spinning the
> same set, it is better to have libeventdev library mark as EXPERIMENTAL
> and commit it somewhere on dpdk-eventdev-next or main tree
> 
> I think, EXPERIMENTAL status can be changed only when
> - At least two event drivers available
> - Functional test applications fine with at least two drivers
> - Portable example application to showcase the features of the library
> - eventdev integration with another dpdk subsystem such as ethdev

Are you asking for a temporary tree?
If yes, please tell me its name and its committers, and it will be done.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 0/4] libeventdev API and northbound implementation
  2016-11-21  9:40       ` Thomas Monjalon
@ 2016-11-21  9:57         ` Bruce Richardson
  2016-11-22  0:11           ` Thomas Monjalon
  0 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-11-21  9:57 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Jerin Jacob, dev, harry.van.haaren, hemant.agrawal, gage.eads

On Mon, Nov 21, 2016 at 10:40:50AM +0100, Thomas Monjalon wrote:
> 2016-11-19 00:57, Jerin Jacob:
> > On Fri, Nov 18, 2016 at 04:04:29PM +0000, Bruce Richardson wrote:
> > > On Fri, Nov 18, 2016 at 03:25:18PM +0000, Bruce Richardson wrote:
> > > > On Fri, Nov 18, 2016 at 11:14:58AM +0530, Jerin Jacob wrote:
> > > > > Possible next steps:
> > > > > 1) Review this patch set
> > > > > 2) Integrate Intel's SW driver[http://dpdk.org/dev/patchwork/patch/17049/]
> > > > > 3) Review proposed examples/eventdev_pipeline application[http://dpdk.org/dev/patchwork/patch/17053/]
> > > > > 4) Review proposed functional tests[http://dpdk.org/dev/patchwork/patch/17051/]
> > > > > 5) Cavium's HW based eventdev driver
> > > > > 
> > > > > I am planning to work on (3),(4) and (5)
> > > > > 
> > > > Thanks Jerin,
> > > > 
> > > > we'll review and get back to you with any comments or feedback (1), and
> > > > obviously start working on item (2) also! :-)
> > > > 
> > > > I'm also wonder whether we should have a staging tree for this work to
> > > > make interaction between us easier. Although this may not be
> > > > finalised enough for 17.02 release, do you think having an
> > > > dpdk-eventdev-next tree would be a help? My thinking is that once we get
> > > > the eventdev library itself in reasonable shape following our review, we
> > > > could commit that and make any changes thereafter as new patches, rather
> > > > than constantly respinning the same set. It also gives us a clean git
> > > > tree to base the respective driver implementations on from our two sides.
> > > > 
> > > > Thomas, any thoughts here on your end - or from anyone else?
> > 
> > I was thinking more or less along the same lines. To avoid re-spinning the
> > same set, it is better to have libeventdev library mark as EXPERIMENTAL
> > and commit it somewhere on dpdk-eventdev-next or main tree
> > 
> > I think, EXPERIMENTAL status can be changed only when
> > - At least two event drivers available
> > - Functional test applications fine with at least two drivers
> > - Portable example application to showcase the features of the library
> > - eventdev integration with another dpdk subsystem such as ethdev
> 
> Are you asking for a temporary tree?
> If yes, please tell its name and its committers, it will be done.

Yes, we are asking for a new tree, but I would not assume it is
temporary - it might be, but it also might not be, given how other
threads are discussing having an increasing number of subtrees sending
pull requests. :-)

Name: dpdk-eventdev-next
Committers: Bruce Richardson & Jerin Jacob

Thanks,
/Bruce.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-18  5:45 ` [PATCH 2/4] eventdev: implement the northbound APIs Jerin Jacob
@ 2016-11-21 17:45   ` Eads, Gage
  2016-11-21 19:13     ` Jerin Jacob
  2016-11-23 19:18   ` Thomas Monjalon
  1 sibling, 1 reply; 109+ messages in thread
From: Eads, Gage @ 2016-11-21 17:45 UTC (permalink / raw)
  To: Jerin Jacob, dev; +Cc: Richardson, Bruce, Van Haaren, Harry, hemant.agrawal

Hi Jerin,

I did a quick review and overall this implementation looks good. I noticed just one issue in rte_event_queue_setup(): the check of nb_atomic_order_sequences is being applied to atomic-type queues, but that field applies to ordered-type queues.
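
To make that concrete, here is a minimal sketch of how the second check could be gated on ordered-capable queues instead, modelled on the patch's existing is_valid_atomic_queue_conf() helper. The helper name below and the RTE_EVENT_QUEUE_CFG_ORDERED_ONLY flag are assumptions on my side (the flag mirroring RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY), not something taken verbatim from the patch:

/* Hypothetical helper, mirroring is_valid_atomic_queue_conf(); assumes an
 * RTE_EVENT_QUEUE_CFG_ORDERED_ONLY flag exists alongside ATOMIC_ONLY and
 * ALL_TYPES.
 */
static inline int
is_valid_ordered_queue_conf(const struct rte_event_queue_conf *queue_conf)
{
	if (queue_conf && (
		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
			== RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
		))
		return 1;
	else
		return 0;
}

/* Then, in rte_event_queue_setup(), the nb_atomic_order_sequences range
 * check would be guarded by the ordered-queue helper rather than the
 * atomic one:
 */
	if (is_valid_ordered_queue_conf(queue_conf)) {
		if (queue_conf->nb_atomic_order_sequences == 0 ||
		    queue_conf->nb_atomic_order_sequences >
			dev->data->dev_conf.nb_event_queue_flows)
			return -EINVAL;
	}

Just a sketch to show what I mean; you will know best how that field is intended to be validated.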

One open issue I noticed is the "typical workflow" description starting in rte_eventdev.h:204 conflicts with the centralized software PMD that Harry posted last week. Specifically, that PMD expects a single core to call the schedule function. We could extend the documentation to account for this alternative style of scheduler invocation, or discuss ways to make the software PMD work with the documented workflow. I prefer the former, but either way I think we ought to expose the scheduler's expected usage to the user -- perhaps through an RTE_EVENT_DEV_CAP flag?
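
To illustrate the kind of thing I have in mind, here is a rough sketch of how an application could branch on such a capability bit at init time. Only rte_event_dev_info_get() and the event_dev_cap field come from the patch as posted; the RTE_EVENT_DEV_CAP_CENTRALIZED_SCHED name and the launch_*() helpers below are placeholders of mine:

/* Sketch only, assuming rte_eventdev.h is included; the capability flag
 * and the launch_*() helpers are hypothetical placeholders.
 */
static int
setup_scheduling(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	int ret;

	ret = rte_event_dev_info_get(dev_id, &info);
	if (ret < 0)
		return ret;

	if (info.event_dev_cap & RTE_EVENT_DEV_CAP_CENTRALIZED_SCHED) {
		/* Centralized SW scheduler: dedicate one core to calling
		 * the schedule function in a loop; workers run on the rest.
		 */
		launch_scheduler_core(dev_id);
		launch_worker_cores(dev_id);
	} else {
		/* Distributed/HW scheduling: every worker core just
		 * enqueues and dequeues, no dedicated scheduler core.
		 */
		launch_worker_cores(dev_id);
	}
	return 0;
}

Something along those lines would let a portable application pick the right threading model without hard-coding knowledge of the underlying PMD.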

Thanks,
Gage

>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  Sent: Thursday, November 17, 2016 11:45 PM
>  To: dev@dpdk.org
>  Cc: Richardson, Bruce <bruce.richardson@intel.com>; Van Haaren, Harry
>  <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com; Eads, Gage
>  <gage.eads@intel.com>; Jerin Jacob <jerin.jacob@caviumnetworks.com>
>  Subject: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>  
>  This patch set defines the southbound driver interface
>  and implements the common code required for northbound
>  eventdev API interface.
>  
>  Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>  ---
>   config/common_base                           |    6 +
>   lib/Makefile                                 |    1 +
>   lib/librte_eal/common/include/rte_log.h      |    1 +
>   lib/librte_eventdev/Makefile                 |   57 ++
>   lib/librte_eventdev/rte_eventdev.c           | 1211
>  ++++++++++++++++++++++++++
>   lib/librte_eventdev/rte_eventdev_pmd.h       |  504 +++++++++++
>   lib/librte_eventdev/rte_eventdev_version.map |   39 +
>   mk/rte.app.mk                                |    1 +
>   8 files changed, 1820 insertions(+)
>   create mode 100644 lib/librte_eventdev/Makefile
>   create mode 100644 lib/librte_eventdev/rte_eventdev.c
>   create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
>   create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
>  
>  diff --git a/config/common_base b/config/common_base
>  index 4bff83a..7a8814e 100644
>  --- a/config/common_base
>  +++ b/config/common_base
>  @@ -411,6 +411,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
>   CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
>  
>   #
>  +# Compile generic event device library
>  +#
>  +CONFIG_RTE_LIBRTE_EVENTDEV=y
>  +CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
>  +CONFIG_RTE_EVENT_MAX_DEVS=16
>  +CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
>   # Compile librte_ring
>   #
>   CONFIG_RTE_LIBRTE_RING=y
>  diff --git a/lib/Makefile b/lib/Makefile
>  index 990f23a..1a067bf 100644
>  --- a/lib/Makefile
>  +++ b/lib/Makefile
>  @@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
>   DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
>   DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
>   DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
>  +DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
>   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
>   DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
>   DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
>  diff --git a/lib/librte_eal/common/include/rte_log.h
>  b/lib/librte_eal/common/include/rte_log.h
>  index 29f7d19..9a07d92 100644
>  --- a/lib/librte_eal/common/include/rte_log.h
>  +++ b/lib/librte_eal/common/include/rte_log.h
>  @@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
>   #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
>   #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
>   #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to
>  cryptodev. */
>  +#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to eventdev.
>  */
>  
>   /* these log types can be used in an application */
>   #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
>  diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
>  new file mode 100644
>  index 0000000..dac0663
>  --- /dev/null
>  +++ b/lib/librte_eventdev/Makefile
>  @@ -0,0 +1,57 @@
>  +#   BSD LICENSE
>  +#
>  +#   Copyright(c) 2016 Cavium networks. All rights reserved.
>  +#
>  +#   Redistribution and use in source and binary forms, with or without
>  +#   modification, are permitted provided that the following conditions
>  +#   are met:
>  +#
>  +#     * Redistributions of source code must retain the above copyright
>  +#       notice, this list of conditions and the following disclaimer.
>  +#     * Redistributions in binary form must reproduce the above copyright
>  +#       notice, this list of conditions and the following disclaimer in
>  +#       the documentation and/or other materials provided with the
>  +#       distribution.
>  +#     * Neither the name of Cavium networks nor the names of its
>  +#       contributors may be used to endorse or promote products derived
>  +#       from this software without specific prior written permission.
>  +#
>  +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
>  CONTRIBUTORS
>  +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
>  NOT
>  +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
>  FITNESS FOR
>  +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
>  COPYRIGHT
>  +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
>  INCIDENTAL,
>  +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
>  NOT
>  +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
>  OF USE,
>  +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
>  ON ANY
>  +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
>  TORT
>  +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
>  THE USE
>  +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
>  DAMAGE.
>  +
>  +include $(RTE_SDK)/mk/rte.vars.mk
>  +
>  +# library name
>  +LIB = librte_eventdev.a
>  +
>  +# library version
>  +LIBABIVER := 1
>  +
>  +# build flags
>  +CFLAGS += -O3
>  +CFLAGS += $(WERROR_FLAGS)
>  +
>  +# library source files
>  +SRCS-y += rte_eventdev.c
>  +
>  +# export include files
>  +SYMLINK-y-include += rte_eventdev.h
>  +SYMLINK-y-include += rte_eventdev_pmd.h
>  +
>  +# versioning export map
>  +EXPORT_MAP := rte_eventdev_version.map
>  +
>  +# library dependencies
>  +DEPDIRS-y += lib/librte_eal
>  +DEPDIRS-y += lib/librte_mbuf
>  +
>  +include $(RTE_SDK)/mk/rte.lib.mk
>  diff --git a/lib/librte_eventdev/rte_eventdev.c
>  b/lib/librte_eventdev/rte_eventdev.c
>  new file mode 100644
>  index 0000000..17ce5c3
>  --- /dev/null
>  +++ b/lib/librte_eventdev/rte_eventdev.c
>  @@ -0,0 +1,1211 @@
>  +/*
>  + *   BSD LICENSE
>  + *
>  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
>  + *
>  + *   Redistribution and use in source and binary forms, with or without
>  + *   modification, are permitted provided that the following conditions
>  + *   are met:
>  + *
>  + *     * Redistributions of source code must retain the above copyright
>  + *       notice, this list of conditions and the following disclaimer.
>  + *     * Redistributions in binary form must reproduce the above copyright
>  + *       notice, this list of conditions and the following disclaimer in
>  + *       the documentation and/or other materials provided with the
>  + *       distribution.
>  + *     * Neither the name of Cavium networks nor the names of its
>  + *       contributors may be used to endorse or promote products derived
>  + *       from this software without specific prior written permission.
>  + *
>  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
>  CONTRIBUTORS
>  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT
>  NOT
>  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
>  FITNESS FOR
>  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
>  COPYRIGHT
>  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
>  INCIDENTAL,
>  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
>  NOT
>  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
>  OF USE,
>  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
>  AND ON ANY
>  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
>  TORT
>  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
>  THE USE
>  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
>  DAMAGE.
>  + */
>  +
>  +#include <ctype.h>
>  +#include <stdio.h>
>  +#include <stdlib.h>
>  +#include <string.h>
>  +#include <stdarg.h>
>  +#include <errno.h>
>  +#include <stdint.h>
>  +#include <inttypes.h>
>  +#include <sys/types.h>
>  +#include <sys/queue.h>
>  +
>  +#include <rte_byteorder.h>
>  +#include <rte_log.h>
>  +#include <rte_debug.h>
>  +#include <rte_dev.h>
>  +#include <rte_pci.h>
>  +#include <rte_memory.h>
>  +#include <rte_memcpy.h>
>  +#include <rte_memzone.h>
>  +#include <rte_eal.h>
>  +#include <rte_per_lcore.h>
>  +#include <rte_lcore.h>
>  +#include <rte_atomic.h>
>  +#include <rte_branch_prediction.h>
>  +#include <rte_common.h>
>  +#include <rte_malloc.h>
>  +#include <rte_errno.h>
>  +
>  +#include "rte_eventdev.h"
>  +#include "rte_eventdev_pmd.h"
>  +
>  +struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
>  +
>  +struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
>  +
>  +static struct rte_eventdev_global eventdev_globals = {
>  +	.nb_devs		= 0
>  +};
>  +
>  +struct rte_eventdev_global *rte_eventdev_globals = &eventdev_globals;
>  +
>  +/* Event dev north bound API implementation */
>  +
>  +uint8_t
>  +rte_event_dev_count(void)
>  +{
>  +	return rte_eventdev_globals->nb_devs;
>  +}
>  +
>  +int
>  +rte_event_dev_get_dev_id(const char *name)
>  +{
>  +	int i;
>  +
>  +	if (!name)
>  +		return -EINVAL;
>  +
>  +	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
>  +		if ((strcmp(rte_event_devices[i].data->name, name)
>  +				== 0) &&
>  +				(rte_event_devices[i].attached ==
>  +						RTE_EVENTDEV_ATTACHED))
>  +			return i;
>  +	return -ENODEV;
>  +}
>  +
>  +int
>  +rte_event_dev_socket_id(uint8_t dev_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +
>  +	return dev->data->socket_id;
>  +}
>  +
>  +int
>  +rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +
>  +	if (dev_info == NULL)
>  +		return -EINVAL;
>  +
>  +	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
>  +
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -
>  ENOTSUP);
>  +	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
>  +
>  +	dev_info->pci_dev = dev->pci_dev;
>  +	if (dev->driver)
>  +		dev_info->driver_name = dev->driver->pci_drv.driver.name;
>  +	return 0;
>  +}
>  +
>  +static inline int
>  +rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
>  +{
>  +	uint8_t old_nb_queues = dev->data->nb_queues;
>  +	void **queues;
>  +	uint8_t *queues_prio;
>  +	unsigned int i;
>  +
>  +	EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
>  +			 dev->data->dev_id);
>  +
>  +	/* First time configuration */
>  +	if (dev->data->queues == NULL && nb_queues != 0) {
>  +		dev->data->queues = rte_zmalloc_socket("eventdev->data-
>  >queues",
>  +				sizeof(dev->data->queues[0]) * nb_queues,
>  +				RTE_CACHE_LINE_SIZE, dev->data-
>  >socket_id);
>  +		if (dev->data->queues == NULL) {
>  +			dev->data->nb_queues = 0;
>  +			EDEV_LOG_ERR("failed to get memory for queue meta
>  data,"
>  +					"nb_queues %u", nb_queues);
>  +			return -(ENOMEM);
>  +		}
>  +		/* Allocate memory to store queue priority */
>  +		dev->data->queues_prio = rte_zmalloc_socket(
>  +				"eventdev->data->queues_prio",
>  +				sizeof(dev->data->queues_prio[0]) *
>  nb_queues,
>  +				RTE_CACHE_LINE_SIZE, dev->data-
>  >socket_id);
>  +		if (dev->data->queues_prio == NULL) {
>  +			dev->data->nb_queues = 0;
>  +			EDEV_LOG_ERR("failed to get memory for queue
>  priority,"
>  +					"nb_queues %u", nb_queues);
>  +			return -(ENOMEM);
>  +		}
>  +
>  +	} else if (dev->data->queues != NULL && nb_queues != 0) {/* re-config
>  */
>  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >queue_release, -ENOTSUP);
>  +
>  +		queues = dev->data->queues;
>  +		for (i = nb_queues; i < old_nb_queues; i++)
>  +			(*dev->dev_ops->queue_release)(queues[i]);
>  +
>  +		queues = rte_realloc(queues, sizeof(queues[0]) * nb_queues,
>  +				RTE_CACHE_LINE_SIZE);
>  +		if (queues == NULL) {
>  +			EDEV_LOG_ERR("failed to realloc queue meta data,"
>  +						" nb_queues %u",
>  nb_queues);
>  +			return -(ENOMEM);
>  +		}
>  +		dev->data->queues = queues;
>  +
>  +		/* Re allocate memory to store queue priority */
>  +		queues_prio = dev->data->queues_prio;
>  +		queues_prio = rte_realloc(queues_prio,
>  +				sizeof(queues_prio[0]) * nb_queues,
>  +				RTE_CACHE_LINE_SIZE);
>  +		if (queues_prio == NULL) {
>  +			EDEV_LOG_ERR("failed to realloc queue priority,"
>  +						" nb_queues %u",
>  nb_queues);
>  +			return -(ENOMEM);
>  +		}
>  +		dev->data->queues_prio = queues_prio;
>  +
>  +		if (nb_queues > old_nb_queues) {
>  +			uint8_t new_qs = nb_queues - old_nb_queues;
>  +
>  +			memset(queues + old_nb_queues, 0,
>  +				sizeof(queues[0]) * new_qs);
>  +			memset(queues_prio + old_nb_queues, 0,
>  +				sizeof(queues_prio[0]) * new_qs);
>  +		}
>  +	} else if (dev->data->queues != NULL && nb_queues == 0) {
>  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >queue_release, -ENOTSUP);
>  +
>  +		queues = dev->data->queues;
>  +		for (i = nb_queues; i < old_nb_queues; i++)
>  +			(*dev->dev_ops->queue_release)(queues[i]);
>  +	}
>  +
>  +	dev->data->nb_queues = nb_queues;
>  +	return 0;
>  +}
>  +
>  +static inline int
>  +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
>  +{
>  +	uint8_t old_nb_ports = dev->data->nb_ports;
>  +	void **ports;
>  +	uint16_t *links_map;
>  +	uint8_t *ports_dequeue_depth;
>  +	uint8_t *ports_enqueue_depth;
>  +	unsigned int i;
>  +
>  +	EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
>  +			 dev->data->dev_id);
>  +
>  +	/* First time configuration */
>  +	if (dev->data->ports == NULL && nb_ports != 0) {
>  +		dev->data->ports = rte_zmalloc_socket("eventdev->data-
>  >ports",
>  +				sizeof(dev->data->ports[0]) * nb_ports,
>  +				RTE_CACHE_LINE_SIZE, dev->data-
>  >socket_id);
>  +		if (dev->data->ports == NULL) {
>  +			dev->data->nb_ports = 0;
>  +			EDEV_LOG_ERR("failed to get memory for port meta
>  data,"
>  +					"nb_ports %u", nb_ports);
>  +			return -(ENOMEM);
>  +		}
>  +
>  +		/* Allocate memory to store ports dequeue depth */
>  +		dev->data->ports_dequeue_depth =
>  +			rte_zmalloc_socket("eventdev-
>  >ports_dequeue_depth",
>  +			sizeof(dev->data->ports_dequeue_depth[0]) *
>  nb_ports,
>  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
>  +		if (dev->data->ports_dequeue_depth == NULL) {
>  +			dev->data->nb_ports = 0;
>  +			EDEV_LOG_ERR("failed to get memory for port deq
>  meta,"
>  +					"nb_ports %u", nb_ports);
>  +			return -(ENOMEM);
>  +		}
>  +
>  +		/* Allocate memory to store ports enqueue depth */
>  +		dev->data->ports_enqueue_depth =
>  +			rte_zmalloc_socket("eventdev-
>  >ports_enqueue_depth",
>  +			sizeof(dev->data->ports_enqueue_depth[0]) *
>  nb_ports,
>  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
>  +		if (dev->data->ports_enqueue_depth == NULL) {
>  +			dev->data->nb_ports = 0;
>  +			EDEV_LOG_ERR("failed to get memory for port enq
>  meta,"
>  +					"nb_ports %u", nb_ports);
>  +			return -(ENOMEM);
>  +		}
>  +
>  +		/* Allocate memory to store queue to port link connection */
>  +		dev->data->links_map =
>  +			rte_zmalloc_socket("eventdev->links_map",
>  +			sizeof(dev->data->links_map[0]) * nb_ports *
>  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
>  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
>  +		if (dev->data->links_map == NULL) {
>  +			dev->data->nb_ports = 0;
>  +			EDEV_LOG_ERR("failed to get memory for port_map
>  area,"
>  +					"nb_ports %u", nb_ports);
>  +			return -(ENOMEM);
>  +		}
>  +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
>  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
>  -ENOTSUP);
>  +
>  +		ports = dev->data->ports;
>  +		ports_dequeue_depth = dev->data->ports_dequeue_depth;
>  +		ports_enqueue_depth = dev->data->ports_enqueue_depth;
>  +		links_map = dev->data->links_map;
>  +
>  +		for (i = nb_ports; i < old_nb_ports; i++)
>  +			(*dev->dev_ops->port_release)(ports[i]);
>  +
>  +		/* Realloc memory for ports */
>  +		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
>  +				RTE_CACHE_LINE_SIZE);
>  +		if (ports == NULL) {
>  +			EDEV_LOG_ERR("failed to realloc port meta data,"
>  +						" nb_ports %u", nb_ports);
>  +			return -(ENOMEM);
>  +		}
>  +
>  +		/* Realloc memory for ports_dequeue_depth */
>  +		ports_dequeue_depth = rte_realloc(ports_dequeue_depth,
>  +			sizeof(ports_dequeue_depth[0]) * nb_ports,
>  +			RTE_CACHE_LINE_SIZE);
>  +		if (ports_dequeue_depth == NULL) {
>  +			EDEV_LOG_ERR("failed to realloc port deqeue meta
>  data,"
>  +						" nb_ports %u", nb_ports);
>  +			return -(ENOMEM);
>  +		}
>  +
>  +		/* Realloc memory for ports_enqueue_depth */
>  +		ports_enqueue_depth = rte_realloc(ports_enqueue_depth,
>  +			sizeof(ports_enqueue_depth[0]) * nb_ports,
>  +			RTE_CACHE_LINE_SIZE);
>  +		if (ports_enqueue_depth == NULL) {
>  +			EDEV_LOG_ERR("failed to realloc port enqueue meta
>  data,"
>  +						" nb_ports %u", nb_ports);
>  +			return -(ENOMEM);
>  +		}
>  +
>  +		/* Realloc memory to store queue to port link connection */
>  +		links_map = rte_realloc(links_map,
>  +			sizeof(dev->data->links_map[0]) * nb_ports *
>  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
>  +			RTE_CACHE_LINE_SIZE);
>  +		if (dev->data->links_map == NULL) {
>  +			dev->data->nb_ports = 0;
>  +			EDEV_LOG_ERR("failed to realloc mem for port_map
>  area,"
>  +					"nb_ports %u", nb_ports);
>  +			return -(ENOMEM);
>  +		}
>  +
>  +		if (nb_ports > old_nb_ports) {
>  +			uint8_t new_ps = nb_ports - old_nb_ports;
>  +
>  +			memset(ports + old_nb_ports, 0,
>  +				sizeof(ports[0]) * new_ps);
>  +			memset(ports_dequeue_depth + old_nb_ports, 0,
>  +				sizeof(ports_dequeue_depth[0]) * new_ps);
>  +			memset(ports_enqueue_depth + old_nb_ports, 0,
>  +				sizeof(ports_enqueue_depth[0]) * new_ps);
>  +			memset(links_map +
>  +				(old_nb_ports *
>  RTE_EVENT_MAX_QUEUES_PER_DEV),
>  +				0, sizeof(ports_enqueue_depth[0]) * new_ps);
>  +		}
>  +
>  +		dev->data->ports = ports;
>  +		dev->data->ports_dequeue_depth = ports_dequeue_depth;
>  +		dev->data->ports_enqueue_depth = ports_enqueue_depth;
>  +		dev->data->links_map = links_map;
>  +	} else if (dev->data->ports != NULL && nb_ports == 0) {
>  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
>  -ENOTSUP);
>  +
>  +		ports = dev->data->ports;
>  +		for (i = nb_ports; i < old_nb_ports; i++)
>  +			(*dev->dev_ops->port_release)(ports[i]);
>  +	}
>  +
>  +	dev->data->nb_ports = nb_ports;
>  +	return 0;
>  +}
>  +
>  +int
>  +rte_event_dev_configure(uint8_t dev_id, struct rte_event_dev_config
>  *dev_conf)
>  +{
>  +	struct rte_eventdev *dev;
>  +	struct rte_event_dev_info info;
>  +	int diag;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -
>  ENOTSUP);
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -
>  ENOTSUP);
>  +
>  +	if (dev->data->dev_started) {
>  +		EDEV_LOG_ERR(
>  +		    "device %d must be stopped to allow configuration",
>  dev_id);
>  +		return -EBUSY;
>  +	}
>  +
>  +	if (dev_conf == NULL)
>  +		return -EINVAL;
>  +
>  +	(*dev->dev_ops->dev_infos_get)(dev, &info);
>  +
>  +	/* Check dequeue_wait_ns value is in limit */
>  +	if (!dev_conf->event_dev_cfg &
>  RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT) {
>  +		if (dev_conf->dequeue_wait_ns < info.min_dequeue_wait_ns
>  ||
>  +			dev_conf->dequeue_wait_ns >
>  info.max_dequeue_wait_ns) {
>  +			EDEV_LOG_ERR("dev%d invalid dequeue_wait_ns=%d"
>  +			" min_dequeue_wait_ns=%d
>  max_dequeue_wait_ns=%d",
>  +			dev_id, dev_conf->dequeue_wait_ns,
>  +			info.min_dequeue_wait_ns,
>  +			info.max_dequeue_wait_ns);
>  +			return -EINVAL;
>  +		}
>  +	}
>  +
>  +	/* Check nb_events_limit is in limit */
>  +	if (dev_conf->nb_events_limit > info.max_num_events) {
>  +		EDEV_LOG_ERR("dev%d nb_events_limit=%d >
>  max_num_events=%d",
>  +		dev_id, dev_conf->nb_events_limit, info.max_num_events);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Check nb_event_queues is in limit */
>  +	if (!dev_conf->nb_event_queues) {
>  +		EDEV_LOG_ERR("dev%d nb_event_queues cannot be zero",
>  dev_id);
>  +		return -EINVAL;
>  +	}
>  +	if (dev_conf->nb_event_queues > info.max_event_queues) {
>  +		EDEV_LOG_ERR("dev%d nb_event_queues=%d >
>  max_event_queues=%d",
>  +		dev_id, dev_conf->nb_event_queues,
>  info.max_event_queues);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Check nb_event_ports is in limit */
>  +	if (!dev_conf->nb_event_ports) {
>  +		EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero",
>  dev_id);
>  +		return -EINVAL;
>  +	}
>  +	if (dev_conf->nb_event_ports > info.max_event_ports) {
>  +		EDEV_LOG_ERR("dev%d nb_event_ports=%d >
>  max_event_ports= %d",
>  +		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Check nb_event_queue_flows is in limit */
>  +	if (!dev_conf->nb_event_queue_flows) {
>  +		EDEV_LOG_ERR("dev%d nb_flows cannot be zero", dev_id);
>  +		return -EINVAL;
>  +	}
>  +	if (dev_conf->nb_event_queue_flows > info.max_event_queue_flows)
>  {
>  +		EDEV_LOG_ERR("dev%d nb_flows=%x > max_flows=%x",
>  +		dev_id, dev_conf->nb_event_queue_flows,
>  +		info.max_event_queue_flows);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Check nb_event_port_dequeue_depth is in limit */
>  +	if (!dev_conf->nb_event_port_dequeue_depth) {
>  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be zero",
>  dev_id);
>  +		return -EINVAL;
>  +	}
>  +	if (dev_conf->nb_event_port_dequeue_depth >
>  +			 info.max_event_port_dequeue_depth) {
>  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth=%d >
>  max_dequeue_depth=%d",
>  +		dev_id, dev_conf->nb_event_port_dequeue_depth,
>  +		info.max_event_port_dequeue_depth);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Check nb_event_port_enqueue_depth is in limit */
>  +	if (!dev_conf->nb_event_port_enqueue_depth) {
>  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be zero",
>  dev_id);
>  +		return -EINVAL;
>  +	}
>  +	if (dev_conf->nb_event_port_enqueue_depth >
>  +			 info.max_event_port_enqueue_depth) {
>  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth=%d >
>  max_enqueue_depth=%d",
>  +		dev_id, dev_conf->nb_event_port_enqueue_depth,
>  +		info.max_event_port_enqueue_depth);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Copy the dev_conf parameter into the dev structure */
>  +	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data-
>  >dev_conf));
>  +
>  +	/* Setup new number of queues and reconfigure device. */
>  +	diag = rte_event_dev_queue_config(dev, dev_conf-
>  >nb_event_queues);
>  +	if (diag != 0) {
>  +		EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
>  +				dev_id, diag);
>  +		return diag;
>  +	}
>  +
>  +	/* Setup new number of ports and reconfigure device. */
>  +	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
>  +	if (diag != 0) {
>  +		rte_event_dev_queue_config(dev, 0);
>  +		EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
>  +				dev_id, diag);
>  +		return diag;
>  +	}
>  +
>  +	/* Configure the device */
>  +	diag = (*dev->dev_ops->dev_configure)(dev);
>  +	if (diag != 0) {
>  +		EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
>  +		rte_event_dev_queue_config(dev, 0);
>  +		rte_event_dev_port_config(dev, 0);
>  +	}
>  +
>  +	dev->data->event_dev_cap = info.event_dev_cap;
>  +	return diag;
>  +}
>  +
>  +static inline int
>  +is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
>  +{
>  +	if (queue_id < dev->data->nb_queues && queue_id <
>  +				RTE_EVENT_MAX_QUEUES_PER_DEV)
>  +		return 1;
>  +	else
>  +		return 0;
>  +}
>  +
>  +int
>  +rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
>  +				 struct rte_event_queue_conf *queue_conf)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +
>  +	if (queue_conf == NULL)
>  +		return -EINVAL;
>  +
>  +	if (!is_valid_queue(dev, queue_id)) {
>  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
>  +		return -EINVAL;
>  +	}
>  +
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf, -
>  ENOTSUP);
>  +	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
>  +	(*dev->dev_ops->queue_def_conf)(dev, queue_id, queue_conf);
>  +	return 0;
>  +}
>  +
>  +static inline int
>  +is_valid_atomic_queue_conf(struct rte_event_queue_conf *queue_conf)
>  +{
>  +	if (queue_conf && (
>  +		((queue_conf->event_queue_cfg &
>  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
>  +			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
>  +		((queue_conf->event_queue_cfg &
>  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
>  +			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
>  +		))
>  +		return 1;
>  +	else
>  +		return 0;
>  +}
>  +
>  +int
>  +rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>  +		      struct rte_event_queue_conf *queue_conf)
>  +{
>  +	struct rte_eventdev *dev;
>  +	struct rte_event_queue_conf def_conf;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +
>  +	if (!is_valid_queue(dev, queue_id)) {
>  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Check nb_atomic_flows limit */
>  +	if (is_valid_atomic_queue_conf(queue_conf)) {
>  +		if (queue_conf->nb_atomic_flows == 0 ||
>  +		    queue_conf->nb_atomic_flows >
>  +			dev->data->dev_conf.nb_event_queue_flows) {
>  +			EDEV_LOG_ERR(
>  +		"dev%d queue%d Invalid nb_atomic_flows=%d
>  max_flows=%d",
>  +			dev_id, queue_id, queue_conf->nb_atomic_flows,
>  +			dev->data->dev_conf.nb_event_queue_flows);
>  +			return -EINVAL;
>  +		}
>  +	}
>  +
>  +	/* Check nb_atomic_order_sequences limit */
>  +	if (is_valid_atomic_queue_conf(queue_conf)) {
>  +		if (queue_conf->nb_atomic_order_sequences == 0 ||
>  +		    queue_conf->nb_atomic_order_sequences >
>  +			dev->data->dev_conf.nb_event_queue_flows) {
>  +			EDEV_LOG_ERR(
>  +		"dev%d queue%d Invalid nb_atomic_order_seq=%d
>  max_flows=%d",
>  +			dev_id, queue_id, queue_conf-
>  >nb_atomic_order_sequences,
>  +			dev->data->dev_conf.nb_event_queue_flows);
>  +			return -EINVAL;
>  +		}
>  +	}
>  +
>  +	if (dev->data->dev_started) {
>  +		EDEV_LOG_ERR(
>  +		    "device %d must be stopped to allow queue setup", dev_id);
>  +		return -EBUSY;
>  +	}
>  +
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -
>  ENOTSUP);
>  +
>  +	if (queue_conf == NULL) {
>  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >queue_def_conf,
>  +					-ENOTSUP);
>  +		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
>  +		def_conf.event_queue_cfg =
>  RTE_EVENT_QUEUE_CFG_DEFAULT;
>  +		queue_conf = &def_conf;
>  +	}
>  +
>  +	dev->data->queues_prio[queue_id] = queue_conf->priority;
>  +	return (*dev->dev_ops->queue_setup)(dev, queue_id, queue_conf);
>  +}
>  +
>  +uint8_t
>  +rte_event_queue_count(uint8_t dev_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	dev = &rte_eventdevs[dev_id];
>  +	return dev->data->nb_queues;
>  +}
>  +
>  +uint8_t
>  +rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	dev = &rte_eventdevs[dev_id];
>  +	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
>  +		return dev->data->queues_prio[queue_id];
>  +	else
>  +		return RTE_EVENT_QUEUE_PRIORITY_NORMAL;
>  +}
>  +
>  +static inline int
>  +is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
>  +{
>  +	if (port_id < dev->data->nb_ports)
>  +		return 1;
>  +	else
>  +		return 0;
>  +}
>  +
>  +int
>  +rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
>  +				 struct rte_event_port_conf *port_conf)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +
>  +	if (port_conf == NULL)
>  +		return -EINVAL;
>  +
>  +	if (!is_valid_port(dev, port_id)) {
>  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  +		return -EINVAL;
>  +	}
>  +
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf, -
>  ENOTSUP);
>  +	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
>  +	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
>  +	return 0;
>  +}
>  +
>  +int
>  +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>  +		      struct rte_event_port_conf *port_conf)
>  +{
>  +	struct rte_eventdev *dev;
>  +	struct rte_event_port_conf def_conf;
>  +	int diag;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +
>  +	if (!is_valid_port(dev, port_id)) {
>  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Check new_event_threshold limit */
>  +	if ((port_conf && !port_conf->new_event_threshold) ||
>  +			(port_conf && port_conf->new_event_threshold >
>  +				 dev->data->dev_conf.nb_events_limit)) {
>  +		EDEV_LOG_ERR(
>  +		   "dev%d port%d Invalid event_threshold=%d
>  nb_events_limit=%d",
>  +			dev_id, port_id, port_conf->new_event_threshold,
>  +			dev->data->dev_conf.nb_events_limit);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Check dequeue_depth limit */
>  +	if ((port_conf && !port_conf->dequeue_depth) ||
>  +			(port_conf && port_conf->dequeue_depth >
>  +		dev->data->dev_conf.nb_event_port_dequeue_depth)) {
>  +		EDEV_LOG_ERR(
>  +		   "dev%d port%d Invalid dequeue depth=%d
>  max_dequeue_depth=%d",
>  +			dev_id, port_id, port_conf->dequeue_depth,
>  +			dev->data-
>  >dev_conf.nb_event_port_dequeue_depth);
>  +		return -EINVAL;
>  +	}
>  +
>  +	/* Check enqueue_depth limit */
>  +	if ((port_conf && !port_conf->enqueue_depth) ||
>  +			(port_conf && port_conf->enqueue_depth >
>  +		dev->data->dev_conf.nb_event_port_enqueue_depth)) {
>  +		EDEV_LOG_ERR(
>  +		   "dev%d port%d Invalid enqueue depth=%d
>  max_enqueue_depth=%d",
>  +			dev_id, port_id, port_conf->enqueue_depth,
>  +			dev->data-
>  >dev_conf.nb_event_port_enqueue_depth);
>  +		return -EINVAL;
>  +	}
>  +
>  +	if (dev->data->dev_started) {
>  +		EDEV_LOG_ERR(
>  +		    "device %d must be stopped to allow port setup", dev_id);
>  +		return -EBUSY;
>  +	}
>  +
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -
>  ENOTSUP);
>  +
>  +	if (port_conf == NULL) {
>  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >port_def_conf,
>  +					-ENOTSUP);
>  +		(*dev->dev_ops->port_def_conf)(dev, port_id, &def_conf);
>  +		port_conf = &def_conf;
>  +	}
>  +
>  +	dev->data->ports_dequeue_depth[port_id] =
>  +			port_conf->dequeue_depth;
>  +	dev->data->ports_enqueue_depth[port_id] =
>  +			port_conf->enqueue_depth;
>  +
>  +	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
>  +
>  +	/* Unlink all the queues from this port(default state after setup) */
>  +	if (!diag)
>  +		diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
>  +
>  +	if (diag < 0)
>  +		return diag;
>  +
>  +	return 0;
>  +}
>  +
>  +uint8_t
>  +rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	dev = &rte_eventdevs[dev_id];
>  +	return dev->data->ports_dequeue_depth[port_id];
>  +}
>  +
>  +uint8_t
>  +rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	dev = &rte_eventdevs[dev_id];
>  +	return dev->data->ports_enqueue_depth[port_id];
>  +}
>  +
>  +uint8_t
>  +rte_event_port_count(uint8_t dev_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	dev = &rte_eventdevs[dev_id];
>  +	return dev->data->nb_ports;
>  +}
>  +
>  +int
>  +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>  +		    struct rte_event_queue_link link[], uint16_t nb_links)
>  +{
>  +	struct rte_eventdev *dev;
>  +	struct rte_event_queue_link
>  all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
>  +	uint16_t *links_map;
>  +	int i, diag;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -ENOTSUP);
>  +
>  +	if (!is_valid_port(dev, port_id)) {
>  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  +		return -EINVAL;
>  +	}
>  +
>  +	if (link == NULL) {
>  +		for (i = 0; i < dev->data->nb_queues; i++) {
>  +			all_queues[i].queue_id = i;
>  +			all_queues[i].priority =
>  +
>  	RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
>  +		}
>  +		link = all_queues;
>  +		nb_links = dev->data->nb_queues;
>  +	}
>  +
>  +	for (i = 0; i < nb_links; i++)
>  +		if (link[i].queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
>  +			return -EINVAL;
>  +
>  +	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id], link,
>  +						 nb_links);
>  +	if (diag < 0)
>  +		return diag;
>  +
>  +	links_map = dev->data->links_map;
>  +	/* Point links_map to this port specific area */
>  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>  +	for (i = 0; i < diag; i++)
>  +		links_map[link[i].queue_id] = (uint8_t)link[i].priority;
>  +
>  +	return diag;
>  +}
>  +
>  +#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
>  +
>  +int
>  +rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
>  +		      uint8_t queues[], uint16_t nb_unlinks)
>  +{
>  +	struct rte_eventdev *dev;
>  +	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
>  +	int i, diag;
>  +	uint16_t *links_map;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -
>  ENOTSUP);
>  +
>  +	if (!is_valid_port(dev, port_id)) {
>  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  +		return -EINVAL;
>  +	}
>  +
>  +	if (queues == NULL) {
>  +		for (i = 0; i < dev->data->nb_queues; i++)
>  +			all_queues[i] = i;
>  +		queues = all_queues;
>  +		nb_unlinks = dev->data->nb_queues;
>  +	}
>  +
>  +	for (i = 0; i < nb_unlinks; i++)
>  +		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
>  +			return -EINVAL;
>  +
>  +	diag = (*dev->dev_ops->port_unlink)(dev->data->ports[port_id],
>  queues,
>  +					nb_unlinks);
>  +
>  +	if (diag < 0)
>  +		return diag;
>  +
>  +	links_map = dev->data->links_map;
>  +	/* Point links_map to this port specific area */
>  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>  +	for (i = 0; i < diag; i++)
>  +		links_map[queues[i]] =
>  EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
>  +
>  +	return diag;
>  +}
>  +
>  +int
>  +rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
>  +			struct rte_event_queue_link link[])
>  +{
>  +	struct rte_eventdev *dev;
>  +	uint16_t *links_map;
>  +	int i, count = 0;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +	if (!is_valid_port(dev, port_id)) {
>  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  +		return -EINVAL;
>  +	}
>  +
>  +	links_map = dev->data->links_map;
>  +	/* Point links_map to this port specific area */
>  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>  +	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
>  +		if (links_map[i] !=
>  EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
>  +			link[count].queue_id = i;
>  +			link[count].priority = (uint8_t)links_map[i];
>  +			++count;
>  +		}
>  +	}
>  +	return count;
>  +}
>  +
>  +int
>  +rte_event_dequeue_wait_time(uint8_t dev_id, uint64_t ns, uint64_t
>  *wait_ticks)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->wait_time, -
>  ENOTSUP);
>  +
>  +	if (wait_ticks == NULL)
>  +		return -EINVAL;
>  +
>  +	(*dev->dev_ops->wait_time)(dev, ns, wait_ticks);
>  +	return 0;
>  +}
>  +
>  +int
>  +rte_event_dev_dump(uint8_t dev_id, FILE *f)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
>  +
>  +	(*dev->dev_ops->dump)(dev, f);
>  +	return 0;
>  +
>  +}
>  +
>  +int
>  +rte_event_dev_start(uint8_t dev_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +	int diag;
>  +
>  +	EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
>  +
>  +	if (dev->data->dev_started != 0) {
>  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already
>  started",
>  +			dev_id);
>  +		return 0;
>  +	}
>  +
>  +	diag = (*dev->dev_ops->dev_start)(dev);
>  +	if (diag == 0)
>  +		dev->data->dev_started = 1;
>  +	else
>  +		return diag;
>  +
>  +	return 0;
>  +}
>  +
>  +void
>  +rte_event_dev_stop(uint8_t dev_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
>  +	dev = &rte_eventdevs[dev_id];
>  +	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
>  +
>  +	if (dev->data->dev_started == 0) {
>  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already
>  stopped",
>  +			dev_id);
>  +		return;
>  +	}
>  +
>  +	dev->data->dev_started = 0;
>  +	(*dev->dev_ops->dev_stop)(dev);
>  +}
>  +
>  +int
>  +rte_event_dev_close(uint8_t dev_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  +	dev = &rte_eventdevs[dev_id];
>  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -
>  ENOTSUP);
>  +
>  +	/* Device must be stopped before it can be closed */
>  +	if (dev->data->dev_started == 1) {
>  +		EDEV_LOG_ERR("Device %u must be stopped before closing",
>  +				dev_id);
>  +		return -EBUSY;
>  +	}
>  +
>  +	return (*dev->dev_ops->dev_close)(dev);
>  +}
>  +
>  +static inline int
>  +rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
>  +		int socket_id)
>  +{
>  +	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
>  +	const struct rte_memzone *mz;
>  +	int n;
>  +
>  +	/* Generate memzone name */
>  +	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
>  +	if (n >= (int)sizeof(mz_name))
>  +		return -EINVAL;
>  +
>  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>  +		mz = rte_memzone_reserve(mz_name,
>  +				sizeof(struct rte_eventdev_data),
>  +				socket_id, 0);
>  +	} else
>  +		mz = rte_memzone_lookup(mz_name);
>  +
>  +	if (mz == NULL)
>  +		return -ENOMEM;
>  +
>  +	*data = mz->addr;
>  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>  +		memset(*data, 0, sizeof(struct rte_eventdev_data));
>  +
>  +	return 0;
>  +}
>  +
>  +static uint8_t
>  +rte_eventdev_find_free_device_index(void)
>  +{
>  +	uint8_t dev_id;
>  +
>  +	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
>  +		if (rte_eventdevs[dev_id].attached ==
>  +				RTE_EVENTDEV_DETACHED)
>  +			return dev_id;
>  +	}
>  +	return RTE_EVENT_MAX_DEVS;
>  +}
>  +
>  +struct rte_eventdev *
>  +rte_eventdev_pmd_allocate(const char *name, int socket_id)
>  +{
>  +	struct rte_eventdev *eventdev;
>  +	uint8_t dev_id;
>  +
>  +	if (rte_eventdev_pmd_get_named_dev(name) != NULL) {
>  +		EDEV_LOG_ERR("Event device with name %s already "
>  +				"allocated!", name);
>  +		return NULL;
>  +	}
>  +
>  +	dev_id = rte_eventdev_find_free_device_index();
>  +	if (dev_id == RTE_EVENT_MAX_DEVS) {
>  +		EDEV_LOG_ERR("Reached maximum number of event devices");
>  +		return NULL;
>  +	}
>  +
>  +	eventdev = &rte_eventdevs[dev_id];
>  +
>  +	if (eventdev->data == NULL) {
>  +		struct rte_eventdev_data *eventdev_data = NULL;
>  +
>  +		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
>  +				socket_id);
>  +
>  +		if (retval < 0 || eventdev_data == NULL)
>  +			return NULL;
>  +
>  +		eventdev->data = eventdev_data;
>  +
>  +		snprintf(eventdev->data->name, RTE_EVENTDEV_NAME_MAX_LEN,
>  +				"%s", name);
>  +
>  +		eventdev->data->dev_id = dev_id;
>  +		eventdev->data->socket_id = socket_id;
>  +		eventdev->data->dev_started = 0;
>  +
>  +		eventdev->attached = RTE_EVENTDEV_ATTACHED;
>  +
>  +		eventdev_globals.nb_devs++;
>  +	}
>  +
>  +	return eventdev;
>  +}
>  +
>  +int
>  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev)
>  +{
>  +	int ret;
>  +
>  +	if (eventdev == NULL)
>  +		return -EINVAL;
>  +
>  +	ret = rte_event_dev_close(eventdev->data->dev_id);
>  +	if (ret < 0)
>  +		return ret;
>  +
>  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
>  +	eventdev_globals.nb_devs--;
>  +	eventdev->data = NULL;
>  +
>  +	return 0;
>  +}
>  +
>  +struct rte_eventdev *
>  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
>  +		int socket_id)
>  +{
>  +	struct rte_eventdev *eventdev;
>  +
>  +	/* Allocate device structure */
>  +	eventdev = rte_eventdev_pmd_allocate(name, socket_id);
>  +	if (eventdev == NULL)
>  +		return NULL;
>  +
>  +	/* Allocate private device structure */
>  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>  +		eventdev->data->dev_private =
>  +				rte_zmalloc_socket("eventdev device private",
>  +						dev_private_size,
>  +						RTE_CACHE_LINE_SIZE,
>  +						socket_id);
>  +
>  +		if (eventdev->data->dev_private == NULL)
>  +			rte_panic("Cannot allocate memzone for private device"
>  +					" data");
>  +	}
>  +
>  +	return eventdev;
>  +}
>  +
>  +int
>  +rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>  +			struct rte_pci_device *pci_dev)
>  +{
>  +	struct rte_eventdev_driver *eventdrv;
>  +	struct rte_eventdev *eventdev;
>  +
>  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
>  +
>  +	int retval;
>  +
>  +	eventdrv = (struct rte_eventdev_driver *)pci_drv;
>  +	if (eventdrv == NULL)
>  +		return -ENODEV;
>  +
>  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
>  +			sizeof(eventdev_name));
>  +
>  +	eventdev = rte_eventdev_pmd_allocate(eventdev_name,
>  +			 pci_dev->device.numa_node);
>  +	if (eventdev == NULL)
>  +		return -ENOMEM;
>  +
>  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>  +		eventdev->data->dev_private =
>  +				rte_zmalloc_socket(
>  +						"eventdev private structure",
>  +						eventdrv->dev_private_size,
>  +						RTE_CACHE_LINE_SIZE,
>  +						rte_socket_id());
>  +
>  +		if (eventdev->data->dev_private == NULL)
>  +			rte_panic("Cannot allocate memzone for private "
>  +					"device data");
>  +	}
>  +
>  +	eventdev->pci_dev = pci_dev;
>  +	eventdev->driver = eventdrv;
>  +
>  +	/* Invoke PMD device initialization function */
>  +	retval = (*eventdrv->eventdev_init)(eventdev);
>  +	if (retval == 0)
>  +		return 0;
>  +
>  +	EDEV_LOG_ERR("driver %s: event_dev_init(vendor_id=0x%x device_id=0x%x)"
>  +			" failed", pci_drv->driver.name,
>  +			(unsigned int) pci_dev->id.vendor_id,
>  +			(unsigned int) pci_dev->id.device_id);
>  +
>  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>  +		rte_free(eventdev->data->dev_private);
>  +
>  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
>  +	eventdev_globals.nb_devs--;
>  +
>  +	return -ENXIO;
>  +}
>  +
>  +int
>  +rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev)
>  +{
>  +	const struct rte_eventdev_driver *eventdrv;
>  +	struct rte_eventdev *eventdev;
>  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
>  +	int ret;
>  +
>  +	if (pci_dev == NULL)
>  +		return -EINVAL;
>  +
>  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
>  +			sizeof(eventdev_name));
>  +
>  +	eventdev = rte_eventdev_pmd_get_named_dev(eventdev_name);
>  +	if (eventdev == NULL)
>  +		return -ENODEV;
>  +
>  +	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
>  +	if (eventdrv == NULL)
>  +		return -ENODEV;
>  +
>  +	/* Invoke PMD device uninit function */
>  +	if (*eventdrv->eventdev_uninit) {
>  +		ret = (*eventdrv->eventdev_uninit)(eventdev);
>  +		if (ret)
>  +			return ret;
>  +	}
>  +
>  +	/* Free event device */
>  +	rte_eventdev_pmd_release(eventdev);
>  +
>  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>  +		rte_free(eventdev->data->dev_private);
>  +
>  +	eventdev->pci_dev = NULL;
>  +	eventdev->driver = NULL;
>  +
>  +	return 0;
>  +}
>  diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
>  new file mode 100644
>  index 0000000..e9d9b83
>  --- /dev/null
>  +++ b/lib/librte_eventdev/rte_eventdev_pmd.h
>  @@ -0,0 +1,504 @@
>  +/*
>  + *
>  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
>  + *
>  + *   Redistribution and use in source and binary forms, with or without
>  + *   modification, are permitted provided that the following conditions
>  + *   are met:
>  + *
>  + *     * Redistributions of source code must retain the above copyright
>  + *       notice, this list of conditions and the following disclaimer.
>  + *     * Redistributions in binary form must reproduce the above copyright
>  + *       notice, this list of conditions and the following disclaimer in
>  + *       the documentation and/or other materials provided with the
>  + *       distribution.
>  + *     * Neither the name of Cavium networks nor the names of its
>  + *       contributors may be used to endorse or promote products derived
>  + *       from this software without specific prior written permission.
>  + *
>  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
>  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
>  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
>  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
>  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
>  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
>  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
>  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
>  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
>  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
>  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>  + */
>  +
>  +#ifndef _RTE_EVENTDEV_PMD_H_
>  +#define _RTE_EVENTDEV_PMD_H_
>  +
>  +/** @file
>  + * RTE Event PMD APIs
>  + *
>  + * @note
>  + * These APIs are for use by event PMDs only; user applications should not
>  + * call them directly.
>  + */
>  +
>  +#ifdef __cplusplus
>  +extern "C" {
>  +#endif
>  +
>  +#include <string.h>
>  +
>  +#include <rte_dev.h>
>  +#include <rte_pci.h>
>  +#include <rte_malloc.h>
>  +#include <rte_log.h>
>  +#include <rte_common.h>
>  +
>  +#include "rte_eventdev.h"
>  +
>  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>  +#define RTE_PMD_DEBUG_TRACE(...) \
>  +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
>  +#else
>  +#define RTE_PMD_DEBUG_TRACE(...)
>  +#endif
>  +
>  +/* Logging Macros */
>  +#define EDEV_LOG_ERR(fmt, args...) \
>  +	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
>  +			__func__, __LINE__, ## args)
>  +
>  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>  +#define EDEV_LOG_DEBUG(fmt, args...) \
>  +	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
>  +			__func__, __LINE__, ## args)
>  +#else
>  +#define EDEV_LOG_DEBUG(fmt, args...) (void)0
>  +#endif
>  +
>  +/* Macros to check for valid device */
>  +#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
>  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
>  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
>  +		return retval; \
>  +	} \
>  +} while (0)
>  +
>  +#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
>  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
>  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
>  +		return; \
>  +	} \
>  +} while (0)
>  +
>  +#define RTE_EVENTDEV_DETACHED  (0)
>  +#define RTE_EVENTDEV_ATTACHED  (1)
>  +
>  +/**
>  + * Initialisation function of an event driver invoked for each matching
>  + * event PCI device detected during the PCI probing phase.
>  + *
>  + * @param dev
>  + *   The dev pointer is the address of the *rte_eventdev* structure associated
>  + *   with the matching device and which has been [automatically] allocated in
>  + *   the *rte_event_devices* array.
>  + *
>  + * @return
>  + *   - 0: Success, the device is properly initialised by the driver.
>  + *        In particular, the driver MUST have set up the *dev_ops* pointer
>  + *        of the *dev* structure.
>  + *   - <0: Error code of the device initialisation failure.
>  + */
>  +typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
>  +
>  +/**
>  + * Finalisation function of a driver invoked for each matching
>  + * PCI device detected during the PCI closing phase.
>  + *
>  + * @param dev
>  + *   The dev pointer is the address of the *rte_eventdev* structure associated
>  + *   with the matching device and which has been [automatically] allocated in
>  + *   the *rte_event_devices* array.
>  + *
>  + * @return
>  + *   - 0: Success, the device is properly finalised by the driver.
>  + *        In particular, the driver MUST free the *dev_ops* pointer
>  + *        of the *dev* structure.
>  + *   - <0: Error code of the device initialisation failure.
>  + */
>  +typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
>  +
>  +/**
>  + * The structure associated with a PMD driver.
>  + *
>  + * Each driver acts as a PCI driver and is represented by a generic
>  + * *event_driver* structure that holds:
>  + *
>  + * - An *rte_pci_driver* structure (which must be the first field).
>  + *
>  + * - The *eventdev_init* function invoked for each matching PCI device.
>  + *
>  + * - The size of the private data to allocate for each matching device.
>  + */
>  +struct rte_eventdev_driver {
>  +	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
>  +	unsigned int dev_private_size;	/**< Size of device private data. */
>  +
>  +	eventdev_init_t eventdev_init;	/**< Device init function. */
>  +	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
>  +};
>  +
>  +/** Global structure used for maintaining state of allocated event devices */
>  +struct rte_eventdev_global {
>  +	uint8_t nb_devs;	/**< Number of devices found */
>  +	uint8_t max_devs;	/**< Max number of devices */
>  +};
>  +
>  +extern struct rte_eventdev_global *rte_eventdev_globals;
>  +/** Pointer to global event devices data structure. */
>  +extern struct rte_eventdev *rte_eventdevs;
>  +/** The pool of rte_eventdev structures. */
>  +
>  +/**
>  + * Get the rte_eventdev structure device pointer for the named device.
>  + *
>  + * @param name
>  + *   device name to select the device structure.
>  + *
>  + * @return
>  + *   - The rte_eventdev structure pointer for the given device ID.
>  + */
>  +static inline struct rte_eventdev *
>  +rte_eventdev_pmd_get_named_dev(const char *name)
>  +{
>  +	struct rte_eventdev *dev;
>  +	unsigned int i;
>  +
>  +	if (name == NULL)
>  +		return NULL;
>  +
>  +	for (i = 0, dev = &rte_eventdevs[i];
>  +			i < rte_eventdev_globals->max_devs; i++, dev++) {
>  +		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
>  +				(strcmp(dev->data->name, name) == 0))
>  +			return dev;
>  +	}
>  +
>  +	return NULL;
>  +}
>  +
>  +/**
>  + * Validate whether the event device index refers to a valid, attached event device.
>  + *
>  + * @param dev_id
>  + *   Event device index.
>  + *
>  + * @return
>  + *   - If the device index is valid (1) or not (0).
>  + */
>  +static inline unsigned
>  +rte_eventdev_pmd_is_valid_dev(uint8_t dev_id)
>  +{
>  +	struct rte_eventdev *dev;
>  +
>  +	if (dev_id >= rte_eventdev_globals->nb_devs)
>  +		return 0;
>  +
>  +	dev = &rte_eventdevs[dev_id];
>  +	if (dev->attached != RTE_EVENTDEV_ATTACHED)
>  +		return 0;
>  +	else
>  +		return 1;
>  +}
>  +
>  +/**
>  + * Definitions of all functions exported by a driver through the
>  + * generic structure of type *event_dev_ops* supplied in the
>  + * *rte_eventdev* structure associated with a device.
>  + */
>  +
>  +/**
>  + * Get device information of a device.
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + * @param dev_info
>  + *   Event device information structure
>  + *
>  + * @return
>  + *   Returns 0 on success
>  + */
>  +typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
>  +		struct rte_event_dev_info *dev_info);
>  +
>  +/**
>  + * Configure a device.
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + *
>  + * @return
>  + *   Returns 0 on success
>  + */
>  +typedef int (*eventdev_configure_t)(struct rte_eventdev *dev);
>  +
>  +/**
>  + * Start a configured device.
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + *
>  + * @return
>  + *   Returns 0 on success
>  + */
>  +typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
>  +
>  +/**
>  + * Stop a configured device.
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + */
>  +typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
>  +
>  +/**
>  + * Close a configured device.
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + *
>  + * @return
>  + * - 0 on success
>  + * - (-EAGAIN) if the device cannot be closed because it is busy
>  + */
>  +typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
>  +
>  +/**
>  + * Retrieve the default event queue configuration.
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + * @param queue_id
>  + *   Event queue index
>  + * @param[out] queue_conf
>  + *   Event queue configuration structure
>  + *
>  + */
>  +typedef void (*eventdev_queue_default_conf_get_t)(struct rte_eventdev *dev,
>  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
>  +
>  +/**
>  + * Setup an event queue.
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + * @param queue_id
>  + *   Event queue index
>  + * @param queue_conf
>  + *   Event queue configuration structure
>  + *
>  + * @return
>  + *   Returns 0 on success.
>  + */
>  +typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
>  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
>  +
>  +/**
>  + * Release memory resources allocated by given event queue.
>  + *
>  + * @param queue
>  + *   Event queue pointer
>  + *
>  + */
>  +typedef void (*eventdev_queue_release_t)(void *queue);
>  +
>  +/**
>  + * Retrieve the default event port configuration.
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + * @param port_id
>  + *   Event port index
>  + * @param[out] port_conf
>  + *   Event port configuration structure
>  + *
>  + */
>  +typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev *dev,
>  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
>  +
>  +/**
>  + * Setup an event port.
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + * @param port_id
>  + *   Event port index
>  + * @param port_conf
>  + *   Event port configuration structure
>  + *
>  + * @return
>  + *   Returns 0 on success.
>  + */
>  +typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
>  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
>  +
>  +/**
>  + * Release memory resources allocated by given event port.
>  + *
>  + * @param port
>  + *   Event port pointer
>  + *
>  + */
>  +typedef void (*eventdev_port_release_t)(void *port);
>  +
>  +/**
>  + * Link multiple source event queues to destination event port.
>  + *
>  + * @param port
>  + *   Event port pointer
>  + * @param link
>  + *   An array of *nb_links* *rte_event_queue_link* structures
>  + * @param nb_links
>  + *   The number of links to establish
>  + *
>  + * @return
>  + *   Returns 0 on success.
>  + *
>  + */
>  +typedef int (*eventdev_port_link_t)(void *port,
>  +		struct rte_event_queue_link link[], uint16_t nb_links);
>  +
>  +/**
>  + * Unlink multiple source event queues from destination event port.
>  + *
>  + * @param port
>  + *   Event port pointer
>  + * @param queues
>  + *   An array of *nb_unlinks* event queues to be unlinked from the event port.
>  + * @param nb_unlinks
>  + *   The number of unlinks to establish
>  + *
>  + * @return
>  + *   Returns 0 on success.
>  + *
>  + */
>  +typedef int (*eventdev_port_unlink_t)(void *port,
>  +		uint8_t queues[], uint16_t nb_unlinks);
>  +
>  +/**
>  + * Converts nanoseconds to *wait* value for rte_event_dequeue()
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + * @param ns
>  + *   Wait time in nanosecond
>  + * @param[out] wait_ticks
>  + *   Value for the *wait* parameter in rte_event_dequeue() function
>  + *
>  + */
>  +typedef void (*eventdev_dequeue_wait_time_t)(struct rte_eventdev *dev,
>  +		uint64_t ns, uint64_t *wait_ticks);
>  +
>  +/**
>  + * Dump internal information
>  + *
>  + * @param dev
>  + *   Event device pointer
>  + * @param f
>  + *   A pointer to a file for output
>  + *
>  + */
>  +typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
>  +
>  +/** Event device operations function pointer table */
>  +struct rte_eventdev_ops {
>  +	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
>  +	eventdev_configure_t dev_configure;	/**< Configure device. */
>  +	eventdev_start_t dev_start;		/**< Start device. */
>  +	eventdev_stop_t dev_stop;		/**< Stop device. */
>  +	eventdev_close_t dev_close;		/**< Close device. */
>  +
>  +	eventdev_queue_default_conf_get_t queue_def_conf;
>  +	/**< Get default queue configuration. */
>  +	eventdev_queue_setup_t queue_setup;
>  +	/**< Set up an event queue. */
>  +	eventdev_queue_release_t queue_release;
>  +	/**< Release an event queue. */
>  +
>  +	eventdev_port_default_conf_get_t port_def_conf;
>  +	/**< Get default port configuration. */
>  +	eventdev_port_setup_t port_setup;
>  +	/**< Set up an event port. */
>  +	eventdev_port_release_t port_release;
>  +	/**< Release an event port. */
>  +
>  +	eventdev_port_link_t port_link;
>  +	/**< Link event queues to an event port. */
>  +	eventdev_port_unlink_t port_unlink;
>  +	/**< Unlink event queues from an event port. */
>  +	eventdev_dequeue_wait_time_t wait_time;
>  +	/**< Converts nanoseconds to *wait* value for rte_event_dequeue() */
>  +	eventdev_dump_t dump;
>  +	/**< Dump internal information. */
>  +};
>  +
>  +/**
>  + * Allocates a new eventdev slot for an event device and returns the pointer
>  + * to that slot for the driver to use.
>  + *
>  + * @param name
>  + *   Unique identifier name for each device
>  + * @param socket_id
>  + *   Socket to allocate resources on.
>  + * @return
>  + *   - Slot in the rte_event_devices array for a new device;
>  + */
>  +struct rte_eventdev *
>  +rte_eventdev_pmd_allocate(const char *name, int socket_id);
>  +
>  +/**
>  + * Release the specified eventdev device.
>  + *
>  + * @param eventdev
>  + * The *eventdev* pointer is the address of the *rte_eventdev* structure.
>  + * @return
>  + *   - 0 on success, negative on error
>  + */
>  +int
>  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev);
>  +
>  +/**
>  + * Creates a new virtual event device and returns the pointer to that device.
>  + *
>  + * @param name
>  + *   PMD type name
>  + * @param dev_private_size
>  + *   Size of event PMDs private data
>  + * @param socket_id
>  + *   Socket to allocate resources on.
>  + *
>  + * @return
>  + *   - Eventdev pointer if device is successfully created.
>  + *   - NULL if device cannot be created.
>  + */
>  +struct rte_eventdev *
>  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
>  +		int socket_id);
>  +
>  +
>  +/**
>  + * Wrapper for use by PCI drivers as a .probe function to attach to an event
>  + * interface.
>  + */
>  +int rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>  +			    struct rte_pci_device *pci_dev);
>  +
>  +/**
>  + * Wrapper for use by PCI drivers as a .remove function to detach an event
>  + * interface.
>  + */
>  +int rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev);
>  +
>  +#ifdef __cplusplus
>  +}
>  +#endif
>  +
>  +#endif /* _RTE_EVENTDEV_PMD_H_ */
>  diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
>  new file mode 100644
>  index 0000000..ef40aae
>  --- /dev/null
>  +++ b/lib/librte_eventdev/rte_eventdev_version.map
>  @@ -0,0 +1,39 @@
>  +DPDK_17.02 {
>  +	global:
>  +
>  +	rte_eventdevs;
>  +
>  +	rte_event_dev_count;
>  +	rte_event_dev_get_dev_id;
>  +	rte_event_dev_socket_id;
>  +	rte_event_dev_info_get;
>  +	rte_event_dev_configure;
>  +	rte_event_dev_start;
>  +	rte_event_dev_stop;
>  +	rte_event_dev_close;
>  +	rte_event_dev_dump;
>  +
>  +	rte_event_port_default_conf_get;
>  +	rte_event_port_setup;
>  +	rte_event_port_dequeue_depth;
>  +	rte_event_port_enqueue_depth;
>  +	rte_event_port_count;
>  +	rte_event_port_link;
>  +	rte_event_port_unlink;
>  +	rte_event_port_links_get;
>  +
>  +	rte_event_queue_default_conf_get;
>  +	rte_event_queue_setup;
>  +	rte_event_queue_count;
>  +	rte_event_queue_priority;
>  +
>  +	rte_event_dequeue_wait_time;
>  +
>  +	rte_eventdev_pmd_allocate;
>  +	rte_eventdev_pmd_release;
>  +	rte_eventdev_pmd_vdev_init;
>  +	rte_eventdev_pmd_pci_probe;
>  +	rte_eventdev_pmd_pci_remove;
>  +
>  +	local: *;
>  +};
>  diff --git a/mk/rte.app.mk b/mk/rte.app.mk
>  index f75f0e2..716725a 100644
>  --- a/mk/rte.app.mk
>  +++ b/mk/rte.app.mk
>  @@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
>   _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
>   _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
>   _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
>  +_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
>   _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
>   _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
>   _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
>  --
>  2.5.5

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-21 17:45   ` Eads, Gage
@ 2016-11-21 19:13     ` Jerin Jacob
  2016-11-21 19:31       ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-21 19:13 UTC (permalink / raw)
  To: Eads, Gage; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal

On Mon, Nov 21, 2016 at 05:45:51PM +0000, Eads, Gage wrote:
> Hi Jerin,
> 
> I did a quick review and overall this implementation looks good. I noticed just one issue in rte_event_queue_setup(): the check of nb_atomic_order_sequences is being applied to atomic-type queues, but that field applies to ordered-type queues.

Thanks Gage. I will fix that in v2.
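
For reference, a rough sketch of how the corrected gate could look in v2,
assuming an RTE_EVENT_QUEUE_CFG_ORDERED_ONLY flag symmetric to the ATOMIC_ONLY
one (the helper name and flag below are illustrative only, not the final v2
code); it simply mirrors the existing is_valid_atomic_queue_conf() helper:

static inline int
is_valid_ordered_queue_conf(struct rte_event_queue_conf *queue_conf)
{
	/* Ordered-type queues are either ORDERED_ONLY or ALL_TYPES queues */
	if (queue_conf && (
		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
			== RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
		))
		return 1;
	else
		return 0;
}

The nb_atomic_order_sequences range check in rte_event_queue_setup() would then
be wrapped in is_valid_ordered_queue_conf(queue_conf) instead of
is_valid_atomic_queue_conf(queue_conf).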

> 
> One open issue I noticed is the "typical workflow" description starting in rte_eventdev.h:204 conflicts with the centralized software PMD that Harry posted last week. Specifically, that PMD expects a single core to call the schedule function. We could extend the documentation to account for this alternative style of scheduler invocation, or discuss ways to make the software PMD work with the documented workflow. I prefer the former, but either way I think we ought to expose the scheduler's expected usage to the user -- perhaps through an RTE_EVENT_DEV_CAP flag?

I prefer the former too; you can propose the documentation change required for the software PMD.

On the same note, if the software PMD based workflow needs a separate core (or
cores) for the schedule function, can we hide that from the API specification
and pass an argument to the SW PMD to define the scheduling core(s)?

Something like --vdev=eventsw0,schedule_cmask=0x2
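
To make that concrete, a rough sketch of how the SW PMD could accept such an
argument through rte_kvargs follows. The "schedule_cmask" key, the "eventsw"
PMD name and the helper names are purely illustrative assumptions, nothing
here is part of this patch set:

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

#include <rte_common.h>
#include <rte_kvargs.h>

#define EVENTSW_ARG_SCHED_CMASK "schedule_cmask"

static const char * const eventsw_valid_args[] = {
	EVENTSW_ARG_SCHED_CMASK,
	NULL
};

/* rte_kvargs handler: parse the "schedule_cmask" value as a core mask */
static int
eventsw_parse_sched_cmask(const char *key __rte_unused, const char *value,
			void *opaque)
{
	uint64_t *cmask = opaque;

	*cmask = strtoull(value, NULL, 0);
	return 0;
}

/* Called from the vdev probe path with the devargs string
 * (e.g. "schedule_cmask=0x2"); fills *sched_cmask on success.
 */
static int
eventsw_parse_args(const char *params, uint64_t *sched_cmask)
{
	struct rte_kvargs *kvlist;
	int ret;

	if (params == NULL || params[0] == '\0')
		return 0;

	kvlist = rte_kvargs_parse(params, eventsw_valid_args);
	if (kvlist == NULL)
		return -EINVAL;

	ret = rte_kvargs_process(kvlist, EVENTSW_ARG_SCHED_CMASK,
			eventsw_parse_sched_cmask, sched_cmask);
	rte_kvargs_free(kvlist);
	return ret;
}

The scheduler core(s) would then be taken from the parsed mask at device start
time, keeping the public eventdev API unchanged.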

> 
> Thanks,
> Gage
> 
> >  -----Original Message-----
> >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  Sent: Thursday, November 17, 2016 11:45 PM
> >  To: dev@dpdk.org
> >  Cc: Richardson, Bruce <bruce.richardson@intel.com>; Van Haaren, Harry
> >  <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com; Eads, Gage
> >  <gage.eads@intel.com>; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >  Subject: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
> >  
> >  This patch set defines the southbound driver interface
> >  and implements the common code required for northbound
> >  eventdev API interface.
> >  
> >  Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >  ---
> >   config/common_base                           |    6 +
> >   lib/Makefile                                 |    1 +
> >   lib/librte_eal/common/include/rte_log.h      |    1 +
> >   lib/librte_eventdev/Makefile                 |   57 ++
> >   lib/librte_eventdev/rte_eventdev.c           | 1211 ++++++++++++++++++++++++++
> >   lib/librte_eventdev/rte_eventdev_pmd.h       |  504 +++++++++++
> >   lib/librte_eventdev/rte_eventdev_version.map |   39 +
> >   mk/rte.app.mk                                |    1 +
> >   8 files changed, 1820 insertions(+)
> >   create mode 100644 lib/librte_eventdev/Makefile
> >   create mode 100644 lib/librte_eventdev/rte_eventdev.c
> >   create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
> >   create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
> >  
> >  diff --git a/config/common_base b/config/common_base
> >  index 4bff83a..7a8814e 100644
> >  --- a/config/common_base
> >  +++ b/config/common_base
> >  @@ -411,6 +411,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
> >   CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
> >  
> >   #
> >  +# Compile generic event device library
> >  +#
> >  +CONFIG_RTE_LIBRTE_EVENTDEV=y
> >  +CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
> >  +CONFIG_RTE_EVENT_MAX_DEVS=16
> >  +CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
> >   # Compile librte_ring
> >   #
> >   CONFIG_RTE_LIBRTE_RING=y
> >  diff --git a/lib/Makefile b/lib/Makefile
> >  index 990f23a..1a067bf 100644
> >  --- a/lib/Makefile
> >  +++ b/lib/Makefile
> >  @@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
> >   DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
> >   DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
> >   DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
> >  +DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
> >   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
> >   DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
> >   DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
> >  diff --git a/lib/librte_eal/common/include/rte_log.h
> >  b/lib/librte_eal/common/include/rte_log.h
> >  index 29f7d19..9a07d92 100644
> >  --- a/lib/librte_eal/common/include/rte_log.h
> >  +++ b/lib/librte_eal/common/include/rte_log.h
> >  @@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
> >   #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
> >   #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
> >   #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
> >  +#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to eventdev. */
> >  
> >   /* these log types can be used in an application */
> >   #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
> >  diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
> >  new file mode 100644
> >  index 0000000..dac0663
> >  --- /dev/null
> >  +++ b/lib/librte_eventdev/Makefile
> >  @@ -0,0 +1,57 @@
> >  +#   BSD LICENSE
> >  +#
> >  +#   Copyright(c) 2016 Cavium networks. All rights reserved.
> >  +#
> >  +#   Redistribution and use in source and binary forms, with or without
> >  +#   modification, are permitted provided that the following conditions
> >  +#   are met:
> >  +#
> >  +#     * Redistributions of source code must retain the above copyright
> >  +#       notice, this list of conditions and the following disclaimer.
> >  +#     * Redistributions in binary form must reproduce the above copyright
> >  +#       notice, this list of conditions and the following disclaimer in
> >  +#       the documentation and/or other materials provided with the
> >  +#       distribution.
> >  +#     * Neither the name of Cavium networks nor the names of its
> >  +#       contributors may be used to endorse or promote products derived
> >  +#       from this software without specific prior written permission.
> >  +#
> >  +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> >  +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> >  +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> >  +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> >  +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> >  +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> >  +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> >  +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> >  +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> >  +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> >  +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> >  +
> >  +include $(RTE_SDK)/mk/rte.vars.mk
> >  +
> >  +# library name
> >  +LIB = librte_eventdev.a
> >  +
> >  +# library version
> >  +LIBABIVER := 1
> >  +
> >  +# build flags
> >  +CFLAGS += -O3
> >  +CFLAGS += $(WERROR_FLAGS)
> >  +
> >  +# library source files
> >  +SRCS-y += rte_eventdev.c
> >  +
> >  +# export include files
> >  +SYMLINK-y-include += rte_eventdev.h
> >  +SYMLINK-y-include += rte_eventdev_pmd.h
> >  +
> >  +# versioning export map
> >  +EXPORT_MAP := rte_eventdev_version.map
> >  +
> >  +# library dependencies
> >  +DEPDIRS-y += lib/librte_eal
> >  +DEPDIRS-y += lib/librte_mbuf
> >  +
> >  +include $(RTE_SDK)/mk/rte.lib.mk
> >  diff --git a/lib/librte_eventdev/rte_eventdev.c
> >  b/lib/librte_eventdev/rte_eventdev.c
> >  new file mode 100644
> >  index 0000000..17ce5c3
> >  --- /dev/null
> >  +++ b/lib/librte_eventdev/rte_eventdev.c
> >  @@ -0,0 +1,1211 @@
> >  +/*
> >  + *   BSD LICENSE
> >  + *
> >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
> >  + *
> >  + *   Redistribution and use in source and binary forms, with or without
> >  + *   modification, are permitted provided that the following conditions
> >  + *   are met:
> >  + *
> >  + *     * Redistributions of source code must retain the above copyright
> >  + *       notice, this list of conditions and the following disclaimer.
> >  + *     * Redistributions in binary form must reproduce the above copyright
> >  + *       notice, this list of conditions and the following disclaimer in
> >  + *       the documentation and/or other materials provided with the
> >  + *       distribution.
> >  + *     * Neither the name of Cavium networks nor the names of its
> >  + *       contributors may be used to endorse or promote products derived
> >  + *       from this software without specific prior written permission.
> >  + *
> >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> >  + */
> >  +
> >  +#include <ctype.h>
> >  +#include <stdio.h>
> >  +#include <stdlib.h>
> >  +#include <string.h>
> >  +#include <stdarg.h>
> >  +#include <errno.h>
> >  +#include <stdint.h>
> >  +#include <inttypes.h>
> >  +#include <sys/types.h>
> >  +#include <sys/queue.h>
> >  +
> >  +#include <rte_byteorder.h>
> >  +#include <rte_log.h>
> >  +#include <rte_debug.h>
> >  +#include <rte_dev.h>
> >  +#include <rte_pci.h>
> >  +#include <rte_memory.h>
> >  +#include <rte_memcpy.h>
> >  +#include <rte_memzone.h>
> >  +#include <rte_eal.h>
> >  +#include <rte_per_lcore.h>
> >  +#include <rte_lcore.h>
> >  +#include <rte_atomic.h>
> >  +#include <rte_branch_prediction.h>
> >  +#include <rte_common.h>
> >  +#include <rte_malloc.h>
> >  +#include <rte_errno.h>
> >  +
> >  +#include "rte_eventdev.h"
> >  +#include "rte_eventdev_pmd.h"
> >  +
> >  +struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
> >  +
> >  +struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
> >  +
> >  +static struct rte_eventdev_global eventdev_globals = {
> >  +	.nb_devs		= 0
> >  +};
> >  +
> >  +struct rte_eventdev_global *rte_eventdev_globals = &eventdev_globals;
> >  +
> >  +/* Event dev north bound API implementation */
> >  +
> >  +uint8_t
> >  +rte_event_dev_count(void)
> >  +{
> >  +	return rte_eventdev_globals->nb_devs;
> >  +}
> >  +
> >  +int
> >  +rte_event_dev_get_dev_id(const char *name)
> >  +{
> >  +	int i;
> >  +
> >  +	if (!name)
> >  +		return -EINVAL;
> >  +
> >  +	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
> >  +		if ((strcmp(rte_event_devices[i].data->name, name)
> >  +				== 0) &&
> >  +				(rte_event_devices[i].attached ==
> >  +						RTE_EVENTDEV_ATTACHED))
> >  +			return i;
> >  +	return -ENODEV;
> >  +}
> >  +
> >  +int
> >  +rte_event_dev_socket_id(uint8_t dev_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +
> >  +	return dev->data->socket_id;
> >  +}
> >  +
> >  +int
> >  +rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +
> >  +	if (dev_info == NULL)
> >  +		return -EINVAL;
> >  +
> >  +	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
> >  +
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> >  +	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
> >  +
> >  +	dev_info->pci_dev = dev->pci_dev;
> >  +	if (dev->driver)
> >  +		dev_info->driver_name = dev->driver->pci_drv.driver.name;
> >  +	return 0;
> >  +}
> >  +
> >  +static inline int
> >  +rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
> >  +{
> >  +	uint8_t old_nb_queues = dev->data->nb_queues;
> >  +	void **queues;
> >  +	uint8_t *queues_prio;
> >  +	unsigned int i;
> >  +
> >  +	EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
> >  +			 dev->data->dev_id);
> >  +
> >  +	/* First time configuration */
> >  +	if (dev->data->queues == NULL && nb_queues != 0) {
> >  +		dev->data->queues = rte_zmalloc_socket("eventdev->data->queues",
> >  +				sizeof(dev->data->queues[0]) * nb_queues,
> >  +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  +		if (dev->data->queues == NULL) {
> >  +			dev->data->nb_queues = 0;
> >  +			EDEV_LOG_ERR("failed to get memory for queue meta data,"
> >  +					"nb_queues %u", nb_queues);
> >  +			return -(ENOMEM);
> >  +		}
> >  +		/* Allocate memory to store queue priority */
> >  +		dev->data->queues_prio = rte_zmalloc_socket(
> >  +				"eventdev->data->queues_prio",
> >  +				sizeof(dev->data->queues_prio[0]) * nb_queues,
> >  +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  +		if (dev->data->queues_prio == NULL) {
> >  +			dev->data->nb_queues = 0;
> >  +			EDEV_LOG_ERR("failed to get memory for queue priority,"
> >  +					"nb_queues %u", nb_queues);
> >  +			return -(ENOMEM);
> >  +		}
> >  +
> >  +	} else if (dev->data->queues != NULL && nb_queues != 0) {/* re-config */
> >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
> >  +
> >  +		queues = dev->data->queues;
> >  +		for (i = nb_queues; i < old_nb_queues; i++)
> >  +			(*dev->dev_ops->queue_release)(queues[i]);
> >  +
> >  +		queues = rte_realloc(queues, sizeof(queues[0]) * nb_queues,
> >  +				RTE_CACHE_LINE_SIZE);
> >  +		if (queues == NULL) {
> >  +			EDEV_LOG_ERR("failed to realloc queue meta data,"
> >  +						" nb_queues %u", nb_queues);
> >  +			return -(ENOMEM);
> >  +		}
> >  +		dev->data->queues = queues;
> >  +
> >  +		/* Re allocate memory to store queue priority */
> >  +		queues_prio = dev->data->queues_prio;
> >  +		queues_prio = rte_realloc(queues_prio,
> >  +				sizeof(queues_prio[0]) * nb_queues,
> >  +				RTE_CACHE_LINE_SIZE);
> >  +		if (queues_prio == NULL) {
> >  +			EDEV_LOG_ERR("failed to realloc queue priority,"
> >  +						" nb_queues %u", nb_queues);
> >  +			return -(ENOMEM);
> >  +		}
> >  +		dev->data->queues_prio = queues_prio;
> >  +
> >  +		if (nb_queues > old_nb_queues) {
> >  +			uint8_t new_qs = nb_queues - old_nb_queues;
> >  +
> >  +			memset(queues + old_nb_queues, 0,
> >  +				sizeof(queues[0]) * new_qs);
> >  +			memset(queues_prio + old_nb_queues, 0,
> >  +				sizeof(queues_prio[0]) * new_qs);
> >  +		}
> >  +	} else if (dev->data->queues != NULL && nb_queues == 0) {
> >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
> >  +
> >  +		queues = dev->data->queues;
> >  +		for (i = nb_queues; i < old_nb_queues; i++)
> >  +			(*dev->dev_ops->queue_release)(queues[i]);
> >  +	}
> >  +
> >  +	dev->data->nb_queues = nb_queues;
> >  +	return 0;
> >  +}
> >  +
> >  +static inline int
> >  +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
> >  +{
> >  +	uint8_t old_nb_ports = dev->data->nb_ports;
> >  +	void **ports;
> >  +	uint16_t *links_map;
> >  +	uint8_t *ports_dequeue_depth;
> >  +	uint8_t *ports_enqueue_depth;
> >  +	unsigned int i;
> >  +
> >  +	EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
> >  +			 dev->data->dev_id);
> >  +
> >  +	/* First time configuration */
> >  +	if (dev->data->ports == NULL && nb_ports != 0) {
> >  +		dev->data->ports = rte_zmalloc_socket("eventdev->data->ports",
> >  +				sizeof(dev->data->ports[0]) * nb_ports,
> >  +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  +		if (dev->data->ports == NULL) {
> >  +			dev->data->nb_ports = 0;
> >  +			EDEV_LOG_ERR("failed to get memory for port meta data,"
> >  +					"nb_ports %u", nb_ports);
> >  +			return -(ENOMEM);
> >  +		}
> >  +
> >  +		/* Allocate memory to store ports dequeue depth */
> >  +		dev->data->ports_dequeue_depth =
> >  +			rte_zmalloc_socket("eventdev->ports_dequeue_depth",
> >  +			sizeof(dev->data->ports_dequeue_depth[0]) * nb_ports,
> >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  +		if (dev->data->ports_dequeue_depth == NULL) {
> >  +			dev->data->nb_ports = 0;
> >  +			EDEV_LOG_ERR("failed to get memory for port deq meta,"
> >  +					"nb_ports %u", nb_ports);
> >  +			return -(ENOMEM);
> >  +		}
> >  +
> >  +		/* Allocate memory to store ports enqueue depth */
> >  +		dev->data->ports_enqueue_depth =
> >  +			rte_zmalloc_socket("eventdev->ports_enqueue_depth",
> >  +			sizeof(dev->data->ports_enqueue_depth[0]) * nb_ports,
> >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  +		if (dev->data->ports_enqueue_depth == NULL) {
> >  +			dev->data->nb_ports = 0;
> >  +			EDEV_LOG_ERR("failed to get memory for port enq meta,"
> >  +					"nb_ports %u", nb_ports);
> >  +			return -(ENOMEM);
> >  +		}
> >  +
> >  +		/* Allocate memory to store queue to port link connection */
> >  +		dev->data->links_map =
> >  +			rte_zmalloc_socket("eventdev->links_map",
> >  +			sizeof(dev->data->links_map[0]) * nb_ports *
> >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  +		if (dev->data->links_map == NULL) {
> >  +			dev->data->nb_ports = 0;
> >  +			EDEV_LOG_ERR("failed to get memory for port_map area,"
> >  +					"nb_ports %u", nb_ports);
> >  +			return -(ENOMEM);
> >  +		}
> >  +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
> >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
> >  +
> >  +		ports = dev->data->ports;
> >  +		ports_dequeue_depth = dev->data->ports_dequeue_depth;
> >  +		ports_enqueue_depth = dev->data->ports_enqueue_depth;
> >  +		links_map = dev->data->links_map;
> >  +
> >  +		for (i = nb_ports; i < old_nb_ports; i++)
> >  +			(*dev->dev_ops->port_release)(ports[i]);
> >  +
> >  +		/* Realloc memory for ports */
> >  +		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
> >  +				RTE_CACHE_LINE_SIZE);
> >  +		if (ports == NULL) {
> >  +			EDEV_LOG_ERR("failed to realloc port meta data,"
> >  +						" nb_ports %u", nb_ports);
> >  +			return -(ENOMEM);
> >  +		}
> >  +
> >  +		/* Realloc memory for ports_dequeue_depth */
> >  +		ports_dequeue_depth = rte_realloc(ports_dequeue_depth,
> >  +			sizeof(ports_dequeue_depth[0]) * nb_ports,
> >  +			RTE_CACHE_LINE_SIZE);
> >  +		if (ports_dequeue_depth == NULL) {
> >  +			EDEV_LOG_ERR("failed to realloc port dequeue meta data,"
> >  +						" nb_ports %u", nb_ports);
> >  +			return -(ENOMEM);
> >  +		}
> >  +
> >  +		/* Realloc memory for ports_enqueue_depth */
> >  +		ports_enqueue_depth = rte_realloc(ports_enqueue_depth,
> >  +			sizeof(ports_enqueue_depth[0]) * nb_ports,
> >  +			RTE_CACHE_LINE_SIZE);
> >  +		if (ports_enqueue_depth == NULL) {
> >  +			EDEV_LOG_ERR("failed to realloc port enqueue meta
> >  data,"
> >  +						" nb_ports %u", nb_ports);
> >  +			return -(ENOMEM);
> >  +		}
> >  +
> >  +		/* Realloc memory to store queue to port link connection */
> >  +		links_map = rte_realloc(links_map,
> >  +			sizeof(dev->data->links_map[0]) * nb_ports *
> >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> >  +			RTE_CACHE_LINE_SIZE);
> >  +		if (dev->data->links_map == NULL) {
> >  +			dev->data->nb_ports = 0;
> >  +			EDEV_LOG_ERR("failed to realloc mem for port_map
> >  area,"
> >  +					"nb_ports %u", nb_ports);
> >  +			return -(ENOMEM);
> >  +		}
> >  +
> >  +		if (nb_ports > old_nb_ports) {
> >  +			uint8_t new_ps = nb_ports - old_nb_ports;
> >  +
> >  +			memset(ports + old_nb_ports, 0,
> >  +				sizeof(ports[0]) * new_ps);
> >  +			memset(ports_dequeue_depth + old_nb_ports, 0,
> >  +				sizeof(ports_dequeue_depth[0]) * new_ps);
> >  +			memset(ports_enqueue_depth + old_nb_ports, 0,
> >  +				sizeof(ports_enqueue_depth[0]) * new_ps);
> >  +			memset(links_map +
> >  +				(old_nb_ports *
> >  RTE_EVENT_MAX_QUEUES_PER_DEV),
> >  +				0, sizeof(ports_enqueue_depth[0]) * new_ps);
> >  +		}
> >  +
> >  +		dev->data->ports = ports;
> >  +		dev->data->ports_dequeue_depth = ports_dequeue_depth;
> >  +		dev->data->ports_enqueue_depth = ports_enqueue_depth;
> >  +		dev->data->links_map = links_map;
> >  +	} else if (dev->data->ports != NULL && nb_ports == 0) {
> >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
> >  -ENOTSUP);
> >  +
> >  +		ports = dev->data->ports;
> >  +		for (i = nb_ports; i < old_nb_ports; i++)
> >  +			(*dev->dev_ops->port_release)(ports[i]);
> >  +	}
> >  +
> >  +	dev->data->nb_ports = nb_ports;
> >  +	return 0;
> >  +}
> >  +
> >  +int
> >  +rte_event_dev_configure(uint8_t dev_id, struct rte_event_dev_config
> >  *dev_conf)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +	struct rte_event_dev_info info;
> >  +	int diag;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -
> >  ENOTSUP);
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -
> >  ENOTSUP);
> >  +
> >  +	if (dev->data->dev_started) {
> >  +		EDEV_LOG_ERR(
> >  +		    "device %d must be stopped to allow configuration",
> >  dev_id);
> >  +		return -EBUSY;
> >  +	}
> >  +
> >  +	if (dev_conf == NULL)
> >  +		return -EINVAL;
> >  +
> >  +	(*dev->dev_ops->dev_infos_get)(dev, &info);
> >  +
> >  +	/* Check dequeue_wait_ns value is in limit */
> >  +	if (!(dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT)) {
> >  +		if (dev_conf->dequeue_wait_ns < info.min_dequeue_wait_ns
> >  ||
> >  +			dev_conf->dequeue_wait_ns >
> >  info.max_dequeue_wait_ns) {
> >  +			EDEV_LOG_ERR("dev%d invalid dequeue_wait_ns=%d"
> >  +			" min_dequeue_wait_ns=%d
> >  max_dequeue_wait_ns=%d",
> >  +			dev_id, dev_conf->dequeue_wait_ns,
> >  +			info.min_dequeue_wait_ns,
> >  +			info.max_dequeue_wait_ns);
> >  +			return -EINVAL;
> >  +		}
> >  +	}
> >  +
> >  +	/* Check nb_events_limit is in limit */
> >  +	if (dev_conf->nb_events_limit > info.max_num_events) {
> >  +		EDEV_LOG_ERR("dev%d nb_events_limit=%d >
> >  max_num_events=%d",
> >  +		dev_id, dev_conf->nb_events_limit, info.max_num_events);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Check nb_event_queues is in limit */
> >  +	if (!dev_conf->nb_event_queues) {
> >  +		EDEV_LOG_ERR("dev%d nb_event_queues cannot be zero",
> >  dev_id);
> >  +		return -EINVAL;
> >  +	}
> >  +	if (dev_conf->nb_event_queues > info.max_event_queues) {
> >  +		EDEV_LOG_ERR("dev%d nb_event_queues=%d >
> >  max_event_queues=%d",
> >  +		dev_id, dev_conf->nb_event_queues,
> >  info.max_event_queues);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Check nb_event_ports is in limit */
> >  +	if (!dev_conf->nb_event_ports) {
> >  +		EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero",
> >  dev_id);
> >  +		return -EINVAL;
> >  +	}
> >  +	if (dev_conf->nb_event_ports > info.max_event_ports) {
> >  +		EDEV_LOG_ERR("dev%d nb_event_ports=%d >
> >  max_event_ports= %d",
> >  +		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Check nb_event_queue_flows is in limit */
> >  +	if (!dev_conf->nb_event_queue_flows) {
> >  +		EDEV_LOG_ERR("dev%d nb_flows cannot be zero", dev_id);
> >  +		return -EINVAL;
> >  +	}
> >  +	if (dev_conf->nb_event_queue_flows > info.max_event_queue_flows)
> >  {
> >  +		EDEV_LOG_ERR("dev%d nb_flows=%x > max_flows=%x",
> >  +		dev_id, dev_conf->nb_event_queue_flows,
> >  +		info.max_event_queue_flows);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Check nb_event_port_dequeue_depth is in limit */
> >  +	if (!dev_conf->nb_event_port_dequeue_depth) {
> >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be zero",
> >  dev_id);
> >  +		return -EINVAL;
> >  +	}
> >  +	if (dev_conf->nb_event_port_dequeue_depth >
> >  +			 info.max_event_port_dequeue_depth) {
> >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth=%d >
> >  max_dequeue_depth=%d",
> >  +		dev_id, dev_conf->nb_event_port_dequeue_depth,
> >  +		info.max_event_port_dequeue_depth);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Check nb_event_port_enqueue_depth is in limit */
> >  +	if (!dev_conf->nb_event_port_enqueue_depth) {
> >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be zero",
> >  dev_id);
> >  +		return -EINVAL;
> >  +	}
> >  +	if (dev_conf->nb_event_port_enqueue_depth >
> >  +			 info.max_event_port_enqueue_depth) {
> >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth=%d >
> >  max_enqueue_depth=%d",
> >  +		dev_id, dev_conf->nb_event_port_enqueue_depth,
> >  +		info.max_event_port_enqueue_depth);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Copy the dev_conf parameter into the dev structure */
> >  +	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data-
> >  >dev_conf));
> >  +
> >  +	/* Setup new number of queues and reconfigure device. */
> >  +	diag = rte_event_dev_queue_config(dev, dev_conf-
> >  >nb_event_queues);
> >  +	if (diag != 0) {
> >  +		EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
> >  +				dev_id, diag);
> >  +		return diag;
> >  +	}
> >  +
> >  +	/* Setup new number of ports and reconfigure device. */
> >  +	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
> >  +	if (diag != 0) {
> >  +		rte_event_dev_queue_config(dev, 0);
> >  +		EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
> >  +				dev_id, diag);
> >  +		return diag;
> >  +	}
> >  +
> >  +	/* Configure the device */
> >  +	diag = (*dev->dev_ops->dev_configure)(dev);
> >  +	if (diag != 0) {
> >  +		EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
> >  +		rte_event_dev_queue_config(dev, 0);
> >  +		rte_event_dev_port_config(dev, 0);
> >  +	}
> >  +
> >  +	dev->data->event_dev_cap = info.event_dev_cap;
> >  +	return diag;
> >  +}
> >  +
> >  +static inline int
> >  +is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
> >  +{
> >  +	if (queue_id < dev->data->nb_queues && queue_id <
> >  +				RTE_EVENT_MAX_QUEUES_PER_DEV)
> >  +		return 1;
> >  +	else
> >  +		return 0;
> >  +}
> >  +
> >  +int
> >  +rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
> >  +				 struct rte_event_queue_conf *queue_conf)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +
> >  +	if (queue_conf == NULL)
> >  +		return -EINVAL;
> >  +
> >  +	if (!is_valid_queue(dev, queue_id)) {
> >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf, -
> >  ENOTSUP);
> >  +	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
> >  +	(*dev->dev_ops->queue_def_conf)(dev, queue_id, queue_conf);
> >  +	return 0;
> >  +}
> >  +
> >  +static inline int
> >  +is_valid_atomic_queue_conf(struct rte_event_queue_conf *queue_conf)
> >  +{
> >  +	if (queue_conf && (
> >  +		((queue_conf->event_queue_cfg &
> >  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
> >  +			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
> >  +		((queue_conf->event_queue_cfg &
> >  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
> >  +			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
> >  +		))
> >  +		return 1;
> >  +	else
> >  +		return 0;
> >  +}
> >  +
> >  +int
> >  +rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
> >  +		      struct rte_event_queue_conf *queue_conf)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +	struct rte_event_queue_conf def_conf;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +
> >  +	if (!is_valid_queue(dev, queue_id)) {
> >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Check nb_atomic_flows limit */
> >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
> >  +		if (queue_conf->nb_atomic_flows == 0 ||
> >  +		    queue_conf->nb_atomic_flows >
> >  +			dev->data->dev_conf.nb_event_queue_flows) {
> >  +			EDEV_LOG_ERR(
> >  +		"dev%d queue%d Invalid nb_atomic_flows=%d
> >  max_flows=%d",
> >  +			dev_id, queue_id, queue_conf->nb_atomic_flows,
> >  +			dev->data->dev_conf.nb_event_queue_flows);
> >  +			return -EINVAL;
> >  +		}
> >  +	}
> >  +
> >  +	/* Check nb_atomic_order_sequences limit */
> >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
> >  +		if (queue_conf->nb_atomic_order_sequences == 0 ||
> >  +		    queue_conf->nb_atomic_order_sequences >
> >  +			dev->data->dev_conf.nb_event_queue_flows) {
> >  +			EDEV_LOG_ERR(
> >  +		"dev%d queue%d Invalid nb_atomic_order_seq=%d
> >  max_flows=%d",
> >  +			dev_id, queue_id, queue_conf-
> >  >nb_atomic_order_sequences,
> >  +			dev->data->dev_conf.nb_event_queue_flows);
> >  +			return -EINVAL;
> >  +		}
> >  +	}
> >  +
> >  +	if (dev->data->dev_started) {
> >  +		EDEV_LOG_ERR(
> >  +		    "device %d must be stopped to allow queue setup", dev_id);
> >  +		return -EBUSY;
> >  +	}
> >  +
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -
> >  ENOTSUP);
> >  +
> >  +	if (queue_conf == NULL) {
> >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >  >queue_def_conf,
> >  +					-ENOTSUP);
> >  +		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
> >  +		def_conf.event_queue_cfg =
> >  RTE_EVENT_QUEUE_CFG_DEFAULT;
> >  +		queue_conf = &def_conf;
> >  +	}
> >  +
> >  +	dev->data->queues_prio[queue_id] = queue_conf->priority;
> >  +	return (*dev->dev_ops->queue_setup)(dev, queue_id, queue_conf);
> >  +}
> >  +
> >  +uint8_t
> >  +rte_event_queue_count(uint8_t dev_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	return dev->data->nb_queues;
> >  +}
> >  +
> >  +uint8_t
> >  +rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
> >  +		return dev->data->queues_prio[queue_id];
> >  +	else
> >  +		return RTE_EVENT_QUEUE_PRIORITY_NORMAL;
> >  +}
> >  +
> >  +static inline int
> >  +is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
> >  +{
> >  +	if (port_id < dev->data->nb_ports)
> >  +		return 1;
> >  +	else
> >  +		return 0;
> >  +}
> >  +
> >  +int
> >  +rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
> >  +				 struct rte_event_port_conf *port_conf)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +
> >  +	if (port_conf == NULL)
> >  +		return -EINVAL;
> >  +
> >  +	if (!is_valid_port(dev, port_id)) {
> >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf, -
> >  ENOTSUP);
> >  +	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
> >  +	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
> >  +	return 0;
> >  +}
> >  +
> >  +int
> >  +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> >  +		      struct rte_event_port_conf *port_conf)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +	struct rte_event_port_conf def_conf;
> >  +	int diag;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +
> >  +	if (!is_valid_port(dev, port_id)) {
> >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Check new_event_threshold limit */
> >  +	if ((port_conf && !port_conf->new_event_threshold) ||
> >  +			(port_conf && port_conf->new_event_threshold >
> >  +				 dev->data->dev_conf.nb_events_limit)) {
> >  +		EDEV_LOG_ERR(
> >  +		   "dev%d port%d Invalid event_threshold=%d
> >  nb_events_limit=%d",
> >  +			dev_id, port_id, port_conf->new_event_threshold,
> >  +			dev->data->dev_conf.nb_events_limit);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Check dequeue_depth limit */
> >  +	if ((port_conf && !port_conf->dequeue_depth) ||
> >  +			(port_conf && port_conf->dequeue_depth >
> >  +		dev->data->dev_conf.nb_event_port_dequeue_depth)) {
> >  +		EDEV_LOG_ERR(
> >  +		   "dev%d port%d Invalid dequeue depth=%d max_dequeue_depth=%d",
> >  +			dev_id, port_id, port_conf->dequeue_depth,
> >  +			dev->data->dev_conf.nb_event_port_dequeue_depth);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	/* Check enqueue_depth limit */
> >  +	if ((port_conf && !port_conf->enqueue_depth) ||
> >  +			(port_conf && port_conf->enqueue_depth >
> >  +		dev->data->dev_conf.nb_event_port_enqueue_depth)) {
> >  +		EDEV_LOG_ERR(
> >  +		   "dev%d port%d Invalid enqueue depth=%d max_enqueue_depth=%d",
> >  +			dev_id, port_id, port_conf->enqueue_depth,
> >  +			dev->data->dev_conf.nb_event_port_enqueue_depth);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	if (dev->data->dev_started) {
> >  +		EDEV_LOG_ERR(
> >  +		    "device %d must be stopped to allow port setup", dev_id);
> >  +		return -EBUSY;
> >  +	}
> >  +
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -ENOTSUP);
> >  +
> >  +	if (port_conf == NULL) {
> >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf,
> >  +					-ENOTSUP);
> >  +		(*dev->dev_ops->port_def_conf)(dev, port_id, &def_conf);
> >  +		port_conf = &def_conf;
> >  +	}
> >  +
> >  +	dev->data->ports_dequeue_depth[port_id] =
> >  +			port_conf->dequeue_depth;
> >  +	dev->data->ports_enqueue_depth[port_id] =
> >  +			port_conf->enqueue_depth;
> >  +
> >  +	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
> >  +
> >  +	/* Unlink all the queues from this port(default state after setup) */
> >  +	if (!diag)
> >  +		diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
> >  +
> >  +	if (diag < 0)
> >  +		return diag;
> >  +
> >  +	return 0;
> >  +}
> >  +
> >  +uint8_t
> >  +rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	return dev->data->ports_dequeue_depth[port_id];
> >  +}
> >  +
> >  +uint8_t
> >  +rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	return dev->data->ports_enqueue_depth[port_id];
> >  +}
> >  +
> >  +uint8_t
> >  +rte_event_port_count(uint8_t dev_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	return dev->data->nb_ports;
> >  +}
> >  +
> >  +int
> >  +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> >  +		    struct rte_event_queue_link link[], uint16_t nb_links)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +	struct rte_event_queue_link all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
> >  +	uint16_t *links_map;
> >  +	int i, diag;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -ENOTSUP);
> >  +
> >  +	if (!is_valid_port(dev, port_id)) {
> >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	if (link == NULL) {
> >  +		for (i = 0; i < dev->data->nb_queues; i++) {
> >  +			all_queues[i].queue_id = i;
> >  +			all_queues[i].priority =
> >  +				RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
> >  +		}
> >  +		link = all_queues;
> >  +		nb_links = dev->data->nb_queues;
> >  +	}
> >  +
> >  +	for (i = 0; i < nb_links; i++)
> >  +		if (link[i].queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
> >  +			return -EINVAL;
> >  +
> >  +	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id], link,
> >  +						 nb_links);
> >  +	if (diag < 0)
> >  +		return diag;
> >  +
> >  +	links_map = dev->data->links_map;
> >  +	/* Point links_map to this port specific area */
> >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  +	for (i = 0; i < diag; i++)
> >  +		links_map[link[i].queue_id] = (uint8_t)link[i].priority;
> >  +
> >  +	return diag;
> >  +}
> >  +
> >  +#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
> >  +
> >  +int
> >  +rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
> >  +		      uint8_t queues[], uint16_t nb_unlinks)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
> >  +	int i, diag;
> >  +	uint16_t *links_map;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -ENOTSUP);
> >  +
> >  +	if (!is_valid_port(dev, port_id)) {
> >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	if (queues == NULL) {
> >  +		for (i = 0; i < dev->data->nb_queues; i++)
> >  +			all_queues[i] = i;
> >  +		queues = all_queues;
> >  +		nb_unlinks = dev->data->nb_queues;
> >  +	}
> >  +
> >  +	for (i = 0; i < nb_unlinks; i++)
> >  +		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
> >  +			return -EINVAL;
> >  +
> >  +	diag = (*dev->dev_ops->port_unlink)(dev->data->ports[port_id], queues,
> >  +					nb_unlinks);
> >  +
> >  +	if (diag < 0)
> >  +		return diag;
> >  +
> >  +	links_map = dev->data->links_map;
> >  +	/* Point links_map to this port specific area */
> >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  +	for (i = 0; i < diag; i++)
> >  +		links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
> >  +
> >  +	return diag;
> >  +}
> >  +
> >  +int
> >  +rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
> >  +			struct rte_event_queue_link link[])
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +	uint16_t *links_map;
> >  +	int i, count = 0;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	if (!is_valid_port(dev, port_id)) {
> >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  +		return -EINVAL;
> >  +	}
> >  +
> >  +	links_map = dev->data->links_map;
> >  +	/* Point links_map to this port specific area */
> >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  +	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
> >  +		if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
> >  +			link[count].queue_id = i;
> >  +			link[count].priority = (uint8_t)links_map[i];
> >  +			++count;
> >  +		}
> >  +	}
> >  +	return count;
> >  +}
> >  +
> >  +int
> >  +rte_event_dequeue_wait_time(uint8_t dev_id, uint64_t ns, uint64_t *wait_ticks)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->wait_time, -ENOTSUP);
> >  +
> >  +	if (wait_ticks == NULL)
> >  +		return -EINVAL;
> >  +
> >  +	(*dev->dev_ops->wait_time)(dev, ns, wait_ticks);
> >  +	return 0;
> >  +}
> >  +
> >  +int
> >  +rte_event_dev_dump(uint8_t dev_id, FILE *f)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
> >  +
> >  +	(*dev->dev_ops->dump)(dev, f);
> >  +	return 0;
> >  +
> >  +}
> >  +
> >  +int
> >  +rte_event_dev_start(uint8_t dev_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +	int diag;
> >  +
> >  +	EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
> >  +
> >  +	if (dev->data->dev_started != 0) {
> >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
> >  +			dev_id);
> >  +		return 0;
> >  +	}
> >  +
> >  +	diag = (*dev->dev_ops->dev_start)(dev);
> >  +	if (diag == 0)
> >  +		dev->data->dev_started = 1;
> >  +	else
> >  +		return diag;
> >  +
> >  +	return 0;
> >  +}
> >  +
> >  +void
> >  +rte_event_dev_stop(uint8_t dev_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
> >  +
> >  +	if (dev->data->dev_started == 0) {
> >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
> >  +			dev_id);
> >  +		return;
> >  +	}
> >  +
> >  +	dev->data->dev_started = 0;
> >  +	(*dev->dev_ops->dev_stop)(dev);
> >  +}
> >  +
> >  +int
> >  +rte_event_dev_close(uint8_t dev_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
> >  +
> >  +	/* Device must be stopped before it can be closed */
> >  +	if (dev->data->dev_started == 1) {
> >  +		EDEV_LOG_ERR("Device %u must be stopped before closing",
> >  +				dev_id);
> >  +		return -EBUSY;
> >  +	}
> >  +
> >  +	return (*dev->dev_ops->dev_close)(dev);
> >  +}
> >  +
> >  +static inline int
> >  +rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
> >  +		int socket_id)
> >  +{
> >  +	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
> >  +	const struct rte_memzone *mz;
> >  +	int n;
> >  +
> >  +	/* Generate memzone name */
> >  +	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
> >  +	if (n >= (int)sizeof(mz_name))
> >  +		return -EINVAL;
> >  +
> >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  +		mz = rte_memzone_reserve(mz_name,
> >  +				sizeof(struct rte_eventdev_data),
> >  +				socket_id, 0);
> >  +	} else
> >  +		mz = rte_memzone_lookup(mz_name);
> >  +
> >  +	if (mz == NULL)
> >  +		return -ENOMEM;
> >  +
> >  +	*data = mz->addr;
> >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> >  +		memset(*data, 0, sizeof(struct rte_eventdev_data));
> >  +
> >  +	return 0;
> >  +}
> >  +
> >  +static uint8_t
> >  +rte_eventdev_find_free_device_index(void)
> >  +{
> >  +	uint8_t dev_id;
> >  +
> >  +	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
> >  +		if (rte_eventdevs[dev_id].attached ==
> >  +				RTE_EVENTDEV_DETACHED)
> >  +			return dev_id;
> >  +	}
> >  +	return RTE_EVENT_MAX_DEVS;
> >  +}
> >  +
> >  +struct rte_eventdev *
> >  +rte_eventdev_pmd_allocate(const char *name, int socket_id)
> >  +{
> >  +	struct rte_eventdev *eventdev;
> >  +	uint8_t dev_id;
> >  +
> >  +	if (rte_eventdev_pmd_get_named_dev(name) != NULL) {
> >  +		EDEV_LOG_ERR("Event device with name %s already "
> >  +				"allocated!", name);
> >  +		return NULL;
> >  +	}
> >  +
> >  +	dev_id = rte_eventdev_find_free_device_index();
> >  +	if (dev_id == RTE_EVENT_MAX_DEVS) {
> >  +		EDEV_LOG_ERR("Reached maximum number of event devices");
> >  +		return NULL;
> >  +	}
> >  +
> >  +	eventdev = &rte_eventdevs[dev_id];
> >  +
> >  +	if (eventdev->data == NULL) {
> >  +		struct rte_eventdev_data *eventdev_data = NULL;
> >  +
> >  +		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
> >  +				socket_id);
> >  +
> >  +		if (retval < 0 || eventdev_data == NULL)
> >  +			return NULL;
> >  +
> >  +		eventdev->data = eventdev_data;
> >  +
> >  +		snprintf(eventdev->data->name, RTE_EVENTDEV_NAME_MAX_LEN,
> >  +				"%s", name);
> >  +
> >  +		eventdev->data->dev_id = dev_id;
> >  +		eventdev->data->socket_id = socket_id;
> >  +		eventdev->data->dev_started = 0;
> >  +
> >  +		eventdev->attached = RTE_EVENTDEV_ATTACHED;
> >  +
> >  +		eventdev_globals.nb_devs++;
> >  +	}
> >  +
> >  +	return eventdev;
> >  +}
> >  +
> >  +int
> >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev)
> >  +{
> >  +	int ret;
> >  +
> >  +	if (eventdev == NULL)
> >  +		return -EINVAL;
> >  +
> >  +	ret = rte_event_dev_close(eventdev->data->dev_id);
> >  +	if (ret < 0)
> >  +		return ret;
> >  +
> >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
> >  +	eventdev_globals.nb_devs--;
> >  +	eventdev->data = NULL;
> >  +
> >  +	return 0;
> >  +}
> >  +
> >  +struct rte_eventdev *
> >  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
> >  +		int socket_id)
> >  +{
> >  +	struct rte_eventdev *eventdev;
> >  +
> >  +	/* Allocate device structure */
> >  +	eventdev = rte_eventdev_pmd_allocate(name, socket_id);
> >  +	if (eventdev == NULL)
> >  +		return NULL;
> >  +
> >  +	/* Allocate private device structure */
> >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  +		eventdev->data->dev_private =
> >  +				rte_zmalloc_socket("eventdev device private",
> >  +						dev_private_size,
> >  +						RTE_CACHE_LINE_SIZE,
> >  +						socket_id);
> >  +
> >  +		if (eventdev->data->dev_private == NULL)
> >  +			rte_panic("Cannot allocate memzone for private device"
> >  +					" data");
> >  +	}
> >  +
> >  +	return eventdev;
> >  +}
> >  +
> >  +int
> >  +rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
> >  +			struct rte_pci_device *pci_dev)
> >  +{
> >  +	struct rte_eventdev_driver *eventdrv;
> >  +	struct rte_eventdev *eventdev;
> >  +
> >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
> >  +
> >  +	int retval;
> >  +
> >  +	eventdrv = (struct rte_eventdev_driver *)pci_drv;
> >  +	if (eventdrv == NULL)
> >  +		return -ENODEV;
> >  +
> >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
> >  +			sizeof(eventdev_name));
> >  +
> >  +	eventdev = rte_eventdev_pmd_allocate(eventdev_name,
> >  +			 pci_dev->device.numa_node);
> >  +	if (eventdev == NULL)
> >  +		return -ENOMEM;
> >  +
> >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  +		eventdev->data->dev_private =
> >  +				rte_zmalloc_socket(
> >  +						"eventdev private structure",
> >  +						eventdrv->dev_private_size,
> >  +						RTE_CACHE_LINE_SIZE,
> >  +						rte_socket_id());
> >  +
> >  +		if (eventdev->data->dev_private == NULL)
> >  +			rte_panic("Cannot allocate memzone for private "
> >  +					"device data");
> >  +	}
> >  +
> >  +	eventdev->pci_dev = pci_dev;
> >  +	eventdev->driver = eventdrv;
> >  +
> >  +	/* Invoke PMD device initialization function */
> >  +	retval = (*eventdrv->eventdev_init)(eventdev);
> >  +	if (retval == 0)
> >  +		return 0;
> >  +
> >  +	EDEV_LOG_ERR("driver %s: event_dev_init(vendor_id=0x%x device_id=0x%x)"
> >  +			" failed", pci_drv->driver.name,
> >  +			(unsigned int) pci_dev->id.vendor_id,
> >  +			(unsigned int) pci_dev->id.device_id);
> >  +
> >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> >  +		rte_free(eventdev->data->dev_private);
> >  +
> >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
> >  +	eventdev_globals.nb_devs--;
> >  +
> >  +	return -ENXIO;
> >  +}
> >  +
> >  +int
> >  +rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev)
> >  +{
> >  +	const struct rte_eventdev_driver *eventdrv;
> >  +	struct rte_eventdev *eventdev;
> >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
> >  +	int ret;
> >  +
> >  +	if (pci_dev == NULL)
> >  +		return -EINVAL;
> >  +
> >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
> >  +			sizeof(eventdev_name));
> >  +
> >  +	eventdev = rte_eventdev_pmd_get_named_dev(eventdev_name);
> >  +	if (eventdev == NULL)
> >  +		return -ENODEV;
> >  +
> >  +	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
> >  +	if (eventdrv == NULL)
> >  +		return -ENODEV;
> >  +
> >  +	/* Invoke PMD device uninit function */
> >  +	if (*eventdrv->eventdev_uninit) {
> >  +		ret = (*eventdrv->eventdev_uninit)(eventdev);
> >  +		if (ret)
> >  +			return ret;
> >  +	}
> >  +
> >  +	/* Free event device */
> >  +	rte_eventdev_pmd_release(eventdev);
> >  +
> >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> >  +		rte_free(eventdev->data->dev_private);
> >  +
> >  +	eventdev->pci_dev = NULL;
> >  +	eventdev->driver = NULL;
> >  +
> >  +	return 0;
> >  +}
> >  diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
> >  new file mode 100644
> >  index 0000000..e9d9b83
> >  --- /dev/null
> >  +++ b/lib/librte_eventdev/rte_eventdev_pmd.h
> >  @@ -0,0 +1,504 @@
> >  +/*
> >  + *
> >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
> >  + *
> >  + *   Redistribution and use in source and binary forms, with or without
> >  + *   modification, are permitted provided that the following conditions
> >  + *   are met:
> >  + *
> >  + *     * Redistributions of source code must retain the above copyright
> >  + *       notice, this list of conditions and the following disclaimer.
> >  + *     * Redistributions in binary form must reproduce the above copyright
> >  + *       notice, this list of conditions and the following disclaimer in
> >  + *       the documentation and/or other materials provided with the
> >  + *       distribution.
> >  + *     * Neither the name of Cavium networks nor the names of its
> >  + *       contributors may be used to endorse or promote products derived
> >  + *       from this software without specific prior written permission.
> >  + *
> >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> >  +
> >  +#ifndef _RTE_EVENTDEV_PMD_H_
> >  +#define _RTE_EVENTDEV_PMD_H_
> >  +
> >  +/** @file
> >  + * RTE Event PMD APIs
> >  + *
> >  + * @note
> >  + * These API are from event PMD only and user applications should not call
> >  + * them directly.
> >  + */
> >  +
> >  +#ifdef __cplusplus
> >  +extern "C" {
> >  +#endif
> >  +
> >  +#include <string.h>
> >  +
> >  +#include <rte_dev.h>
> >  +#include <rte_pci.h>
> >  +#include <rte_malloc.h>
> >  +#include <rte_log.h>
> >  +#include <rte_common.h>
> >  +
> >  +#include "rte_eventdev.h"
> >  +
> >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> >  +#define RTE_PMD_DEBUG_TRACE(...) \
> >  +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
> >  +#else
> >  +#define RTE_PMD_DEBUG_TRACE(...)
> >  +#endif
> >  +
> >  +/* Logging Macros */
> >  +#define EDEV_LOG_ERR(fmt, args...) \
> >  +	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
> >  +			__func__, __LINE__, ## args)
> >  +
> >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> >  +#define EDEV_LOG_DEBUG(fmt, args...) \
> >  +	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
> >  +			__func__, __LINE__, ## args)
> >  +#else
> >  +#define EDEV_LOG_DEBUG(fmt, args...) (void)0
> >  +#endif
> >  +
> >  +/* Macros to check for valid device */
> >  +#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
> >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
> >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
> >  +		return retval; \
> >  +	} \
> >  +} while (0)
> >  +
> >  +#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
> >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
> >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
> >  +		return; \
> >  +	} \
> >  +} while (0)
> >  +
> >  +#define RTE_EVENTDEV_DETACHED  (0)
> >  +#define RTE_EVENTDEV_ATTACHED  (1)
> >  +
> >  +/**
> >  + * Initialisation function of an event driver invoked for each matching
> >  + * event PCI device detected during the PCI probing phase.
> >  + *
> >  + * @param dev
> >  + *   The dev pointer is the address of the *rte_eventdev* structure associated
> >  + *   with the matching device and which has been [automatically] allocated in
> >  + *   the *rte_event_devices* array.
> >  + *
> >  + * @return
> >  + *   - 0: Success, the device is properly initialised by the driver.
> >  + *        In particular, the driver MUST have set up the *dev_ops* pointer
> >  + *        of the *dev* structure.
> >  + *   - <0: Error code of the device initialisation failure.
> >  + */
> >  +typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
> >  +
> >  +/**
> >  + * Finalisation function of a driver invoked for each matching
> >  + * PCI device detected during the PCI closing phase.
> >  + *
> >  + * @param dev
> >  + *   The dev pointer is the address of the *rte_eventdev* structure associated
> >  + *   with the matching device and which	has been [automatically] allocated in
> >  + *   the *rte_event_devices* array.
> >  + *
> >  + * @return
> >  + *   - 0: Success, the device is properly finalised by the driver.
> >  + *        In particular, the driver MUST free the *dev_ops* pointer
> >  + *        of the *dev* structure.
> >  + *   - <0: Error code of the device initialisation failure.
> >  + */
> >  +typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
> >  +
> >  +/**
> >  + * The structure associated with a PMD driver.
> >  + *
> >  + * Each driver acts as a PCI driver and is represented by a generic
> >  + * *event_driver* structure that holds:
> >  + *
> >  + * - An *rte_pci_driver* structure (which must be the first field).
> >  + *
> >  + * - The *eventdev_init* function invoked for each matching PCI device.
> >  + *
> >  + * - The size of the private data to allocate for each matching device.
> >  + */
> >  +struct rte_eventdev_driver {
> >  +	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
> >  +	unsigned int dev_private_size;	/**< Size of device private data. */
> >  +
> >  +	eventdev_init_t eventdev_init;	/**< Device init function. */
> >  +	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
> >  +};
> >  +
> >  +/** Global structure used for maintaining state of allocated event devices */
> >  +struct rte_eventdev_global {
> >  +	uint8_t nb_devs;	/**< Number of devices found */
> >  +	uint8_t max_devs;	/**< Max number of devices */
> >  +};
> >  +
> >  +extern struct rte_eventdev_global *rte_eventdev_globals;
> >  +/** Pointer to global event devices data structure. */
> >  +extern struct rte_eventdev *rte_eventdevs;
> >  +/** The pool of rte_eventdev structures. */
> >  +
> >  +/**
> >  + * Get the rte_eventdev structure device pointer for the named device.
> >  + *
> >  + * @param name
> >  + *   device name to select the device structure.
> >  + *
> >  + * @return
> >  + *   - The rte_eventdev structure pointer for the given device ID.
> >  + */
> >  +static inline struct rte_eventdev *
> >  +rte_eventdev_pmd_get_named_dev(const char *name)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +	unsigned int i;
> >  +
> >  +	if (name == NULL)
> >  +		return NULL;
> >  +
> >  +	for (i = 0, dev = &rte_eventdevs[i];
> >  +			i < rte_eventdev_globals->max_devs; i++) {
> >  +		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
> >  +				(strcmp(dev->data->name, name) == 0))
> >  +			return dev;
> >  +	}
> >  +
> >  +	return NULL;
> >  +}
> >  +
> >  +/**
> >  + * Validate if the event device index is valid attached event device.
> >  + *
> >  + * @param dev_id
> >  + *   Event device index.
> >  + *
> >  + * @return
> >  + *   - If the device index is valid (1) or not (0).
> >  + */
> >  +static inline unsigned
> >  +rte_eventdev_pmd_is_valid_dev(uint8_t dev_id)
> >  +{
> >  +	struct rte_eventdev *dev;
> >  +
> >  +	if (dev_id >= rte_eventdev_globals->nb_devs)
> >  +		return 0;
> >  +
> >  +	dev = &rte_eventdevs[dev_id];
> >  +	if (dev->attached != RTE_EVENTDEV_ATTACHED)
> >  +		return 0;
> >  +	else
> >  +		return 1;
> >  +}
> >  +
> >  +/**
> >  + * Definitions of all functions exported by a driver through the
> >  + * the generic structure of type *event_dev_ops* supplied in the
> >  + * *rte_eventdev* structure associated with a device.
> >  + */
> >  +
> >  +/**
> >  + * Get device information of a device.
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + * @param dev_info
> >  + *   Event device information structure
> >  + *
> >  + * @return
> >  + *   Returns 0 on success
> >  + */
> >  +typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
> >  +		struct rte_event_dev_info *dev_info);
> >  +
> >  +/**
> >  + * Configure a device.
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + *
> >  + * @return
> >  + *   Returns 0 on success
> >  + */
> >  +typedef int (*eventdev_configure_t)(struct rte_eventdev *dev);
> >  +
> >  +/**
> >  + * Start a configured device.
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + *
> >  + * @return
> >  + *   Returns 0 on success
> >  + */
> >  +typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
> >  +
> >  +/**
> >  + * Stop a configured device.
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + */
> >  +typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
> >  +
> >  +/**
> >  + * Close a configured device.
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + *
> >  + * @return
> >  + * - 0 on success
> >  + * - (-EAGAIN) if can't close as device is busy
> >  + */
> >  +typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
> >  +
> >  +/**
> >  + * Retrieve the default event queue configuration.
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + * @param queue_id
> >  + *   Event queue index
> >  + * @param[out] queue_conf
> >  + *   Event queue configuration structure
> >  + *
> >  + */
> >  +typedef void (*eventdev_queue_default_conf_get_t)(struct rte_eventdev *dev,
> >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
> >  +
> >  +/**
> >  + * Setup an event queue.
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + * @param queue_id
> >  + *   Event queue index
> >  + * @param queue_conf
> >  + *   Event queue configuration structure
> >  + *
> >  + * @return
> >  + *   Returns 0 on success.
> >  + */
> >  +typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
> >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
> >  +
> >  +/**
> >  + * Release memory resources allocated by given event queue.
> >  + *
> >  + * @param queue
> >  + *   Event queue pointer
> >  + *
> >  + */
> >  +typedef void (*eventdev_queue_release_t)(void *queue);
> >  +
> >  +/**
> >  + * Retrieve the default event port configuration.
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + * @param port_id
> >  + *   Event port index
> >  + * @param[out] port_conf
> >  + *   Event port configuration structure
> >  + *
> >  + */
> >  +typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev *dev,
> >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
> >  +
> >  +/**
> >  + * Setup an event port.
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + * @param port_id
> >  + *   Event port index
> >  + * @param port_conf
> >  + *   Event port configuration structure
> >  + *
> >  + * @return
> >  + *   Returns 0 on success.
> >  + */
> >  +typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
> >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
> >  +
> >  +/**
> >  + * Release memory resources allocated by given event port.
> >  + *
> >  + * @param port
> >  + *   Event port pointer
> >  + *
> >  + */
> >  +typedef void (*eventdev_port_release_t)(void *port);
> >  +
> >  +/**
> >  + * Link multiple source event queues to destination event port.
> >  + *
> >  + * @param port
> >  + *   Event port pointer
> >  + * @param link
> >  + *   An array of *nb_links* pointers to *rte_event_queue_link* structure
> >  + * @param nb_links
> >  + *   The number of links to establish
> >  + *
> >  + * @return
> >  + *   Returns 0 on success.
> >  + *
> >  + */
> >  +typedef int (*eventdev_port_link_t)(void *port,
> >  +		struct rte_event_queue_link link[], uint16_t nb_links);
> >  +
> >  +/**
> >  + * Unlink multiple source event queues from destination event port.
> >  + *
> >  + * @param port
> >  + *   Event port pointer
> >  + * @param queues
> >  + *   An array of *nb_unlinks* event queues to be unlinked from the event port.
> >  + * @param nb_unlinks
> >  + *   The number of unlinks to establish
> >  + *
> >  + * @return
> >  + *   Returns 0 on success.
> >  + *
> >  + */
> >  +typedef int (*eventdev_port_unlink_t)(void *port,
> >  +		uint8_t queues[], uint16_t nb_unlinks);
> >  +
> >  +/**
> >  + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + * @param ns
> >  + *   Wait time in nanosecond
> >  + * @param[out] wait_ticks
> >  + *   Value for the *wait* parameter in rte_event_dequeue() function
> >  + *
> >  + */
> >  +typedef void (*eventdev_dequeue_wait_time_t)(struct rte_eventdev *dev,
> >  +		uint64_t ns, uint64_t *wait_ticks);
> >  +
> >  +/**
> >  + * Dump internal information
> >  + *
> >  + * @param dev
> >  + *   Event device pointer
> >  + * @param f
> >  + *   A pointer to a file for output
> >  + *
> >  + */
> >  +typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
> >  +
> >  +/** Event device operations function pointer table */
> >  +struct rte_eventdev_ops {
> >  +	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
> >  +	eventdev_configure_t dev_configure;	/**< Configure device. */
> >  +	eventdev_start_t dev_start;		/**< Start device. */
> >  +	eventdev_stop_t dev_stop;		/**< Stop device. */
> >  +	eventdev_close_t dev_close;		/**< Close device. */
> >  +
> >  +	eventdev_queue_default_conf_get_t queue_def_conf;
> >  +	/**< Get default queue configuration. */
> >  +	eventdev_queue_setup_t queue_setup;
> >  +	/**< Set up an event queue. */
> >  +	eventdev_queue_release_t queue_release;
> >  +	/**< Release an event queue. */
> >  +
> >  +	eventdev_port_default_conf_get_t port_def_conf;
> >  +	/**< Get default port configuration. */
> >  +	eventdev_port_setup_t port_setup;
> >  +	/**< Set up an event port. */
> >  +	eventdev_port_release_t port_release;
> >  +	/**< Release an event port. */
> >  +
> >  +	eventdev_port_link_t port_link;
> >  +	/**< Link event queues to an event port. */
> >  +	eventdev_port_unlink_t port_unlink;
> >  +	/**< Unlink event queues from an event port. */
> >  +	eventdev_dequeue_wait_time_t wait_time;
> >  +	/**< Converts nanoseconds to *wait* value for rte_event_dequeue() */
> >  +	eventdev_dump_t dump;
> >  +	/* Dump internal information */
> >  +};
> >  +
> >  +/**
> >  + * Allocates a new eventdev slot for an event device and returns the pointer
> >  + * to that slot for the driver to use.
> >  + *
> >  + * @param name
> >  + *   Unique identifier name for each device
> >  + * @param socket_id
> >  + *   Socket to allocate resources on.
> >  + * @return
> >  + *   - Slot in the rte_dev_devices array for a new device;
> >  + */
> >  +struct rte_eventdev *
> >  +rte_eventdev_pmd_allocate(const char *name, int socket_id);
> >  +
> >  +/**
> >  + * Release the specified eventdev device.
> >  + *
> >  + * @param eventdev
> >  + * The *eventdev* pointer is the address of the *rte_eventdev* structure.
> >  + * @return
> >  + *   - 0 on success, negative on error
> >  + */
> >  +int
> >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev);
> >  +
> >  +/**
> >  + * Creates a new virtual event device and returns the pointer to that device.
> >  + *
> >  + * @param name
> >  + *   PMD type name
> >  + * @param dev_private_size
> >  + *   Size of event PMDs private data
> >  + * @param socket_id
> >  + *   Socket to allocate resources on.
> >  + *
> >  + * @return
> >  + *   - Eventdev pointer if device is successfully created.
> >  + *   - NULL if device cannot be created.
> >  + */
> >  +struct rte_eventdev *
> >  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
> >  +		int socket_id);
> >  +
> >  +
> >  +/**
> >  + * Wrapper for use by pci drivers as a .probe function to attach to a event
> >  + * interface.
> >  + */
> >  +int rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
> >  +			    struct rte_pci_device *pci_dev);
> >  +
> >  +/**
> >  + * Wrapper for use by pci drivers as a .remove function to detach a event
> >  + * interface.
> >  + */
> >  +int rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev);
> >  +
> >  +#ifdef __cplusplus
> >  +}
> >  +#endif
> >  +
> >  +#endif /* _RTE_EVENTDEV_PMD_H_ */
> >  diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
> >  new file mode 100644
> >  index 0000000..ef40aae
> >  --- /dev/null
> >  +++ b/lib/librte_eventdev/rte_eventdev_version.map
> >  @@ -0,0 +1,39 @@
> >  +DPDK_17.02 {
> >  +	global:
> >  +
> >  +	rte_eventdevs;
> >  +
> >  +	rte_event_dev_count;
> >  +	rte_event_dev_get_dev_id;
> >  +	rte_event_dev_socket_id;
> >  +	rte_event_dev_info_get;
> >  +	rte_event_dev_configure;
> >  +	rte_event_dev_start;
> >  +	rte_event_dev_stop;
> >  +	rte_event_dev_close;
> >  +	rte_event_dev_dump;
> >  +
> >  +	rte_event_port_default_conf_get;
> >  +	rte_event_port_setup;
> >  +	rte_event_port_dequeue_depth;
> >  +	rte_event_port_enqueue_depth;
> >  +	rte_event_port_count;
> >  +	rte_event_port_link;
> >  +	rte_event_port_unlink;
> >  +	rte_event_port_links_get;
> >  +
> >  +	rte_event_queue_default_conf_get;
> >  +	rte_event_queue_setup;
> >  +	rte_event_queue_count;
> >  +	rte_event_queue_priority;
> >  +
> >  +	rte_event_dequeue_wait_time;
> >  +
> >  +	rte_eventdev_pmd_allocate;
> >  +	rte_eventdev_pmd_release;
> >  +	rte_eventdev_pmd_vdev_init;
> >  +	rte_eventdev_pmd_pci_probe;
> >  +	rte_eventdev_pmd_pci_remove;
> >  +
> >  +	local: *;
> >  +};
> >  diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> >  index f75f0e2..716725a 100644
> >  --- a/mk/rte.app.mk
> >  +++ b/mk/rte.app.mk
> >  @@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
> >   _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
> >   _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
> >   _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
> >  +_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
> >   _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
> >   _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
> >   _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
> >  --
> >  2.5.5
> 

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-21 19:13     ` Jerin Jacob
@ 2016-11-21 19:31       ` Jerin Jacob
  2016-11-22 15:15         ` Eads, Gage
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-21 19:31 UTC (permalink / raw)
  To: Eads, Gage; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal

On Tue, Nov 22, 2016 at 12:43:58AM +0530, Jerin Jacob wrote:
> On Mon, Nov 21, 2016 at 05:45:51PM +0000, Eads, Gage wrote:
> > Hi Jerin,
> > 
> > I did a quick review and overall this implementation looks good. I noticed just one issue in rte_event_queue_setup(): the check of nb_atomic_order_sequences is being applied to atomic-type queues, but that field applies to ordered-type queues.
> 
> Thanks Gage. I will fix that in v2.
> 
> > 
> > One open issue I noticed is the "typical workflow" description starting in rte_eventdev.h:204 conflicts with the centralized software PMD that Harry posted last week. Specifically, that PMD expects a single core to call the schedule function. We could extend the documentation to account for this alternative style of scheduler invocation, or discuss ways to make the software PMD work with the documented workflow. I prefer the former, but either way I think we ought to expose the scheduler's expected usage to the user -- perhaps through an RTE_EVENT_DEV_CAP flag?
> 
> I prefer former too, you can propose the documentation change required for software PMD.
> 
> On the same note, if the software PMD based workflow needs a separate core(s)
> for the schedule function, can we hide that from the API specification and
> pass an argument to the SW PMD to define the scheduling core(s)?
> 
> Something like --vdev=eventsw0,schedule_cmask=0x2

Just a thought,

Perhaps we could introduce a generic "service" cores concept in DPDK to hide the
requirement where an implementation needs a dedicated core to do certain
work. I guess it would be useful for other NPU integrations in DPDK too.
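
For illustration only, here is a rough sketch of what the application side
might look like if the SW PMD took such an argument and one lcore were
dedicated to scheduling. The rte_event_schedule() call, the schedule_cmask
vdev argument and the sched_lcore_id variable below are placeholders, not
something defined by this patch set:

	/* hypothetical EAL args: --vdev=eventsw0,schedule_cmask=0x2 */

	static volatile bool sched_run = true;

	static int
	schedule_loop(void *arg)
	{
		uint8_t dev_id = *(uint8_t *)arg;

		/* One dedicated lcore drives the centralized scheduler;
		 * worker lcores only enqueue/dequeue on their own ports.
		 */
		while (sched_run)
			rte_event_schedule(dev_id); /* placeholder API name */
		return 0;
	}

	/* in main(), after rte_event_dev_start(dev_id): */
	rte_eal_remote_launch(schedule_loop, &dev_id, sched_lcore_id);

If the PMD owned the scheduling core itself, the remote launch above would
disappear and only the vdev argument would remain, which is exactly the part
worth hiding behind the API (or behind a "service" core abstraction).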

> 
> > 
> > Thanks,
> > Gage
> > 
> > >  -----Original Message-----
> > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > >  Sent: Thursday, November 17, 2016 11:45 PM
> > >  To: dev@dpdk.org
> > >  Cc: Richardson, Bruce <bruce.richardson@intel.com>; Van Haaren, Harry
> > >  <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com; Eads, Gage
> > >  <gage.eads@intel.com>; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > >  Subject: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
> > >  
> > >  This patch set defines the southbound driver interface
> > >  and implements the common code required for northbound
> > >  eventdev API interface.
> > >  
> > >  Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > >  ---
> > >   config/common_base                           |    6 +
> > >   lib/Makefile                                 |    1 +
> > >   lib/librte_eal/common/include/rte_log.h      |    1 +
> > >   lib/librte_eventdev/Makefile                 |   57 ++
> > >   lib/librte_eventdev/rte_eventdev.c           | 1211 ++++++++++++++++++++++++++
> > >   lib/librte_eventdev/rte_eventdev_pmd.h       |  504 +++++++++++
> > >   lib/librte_eventdev/rte_eventdev_version.map |   39 +
> > >   mk/rte.app.mk                                |    1 +
> > >   8 files changed, 1820 insertions(+)
> > >   create mode 100644 lib/librte_eventdev/Makefile
> > >   create mode 100644 lib/librte_eventdev/rte_eventdev.c
> > >   create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
> > >   create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
> > >  
> > >  diff --git a/config/common_base b/config/common_base
> > >  index 4bff83a..7a8814e 100644
> > >  --- a/config/common_base
> > >  +++ b/config/common_base
> > >  @@ -411,6 +411,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
> > >   CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
> > >  
> > >   #
> > >  +# Compile generic event device library
> > >  +#
> > >  +CONFIG_RTE_LIBRTE_EVENTDEV=y
> > >  +CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
> > >  +CONFIG_RTE_EVENT_MAX_DEVS=16
> > >  +CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
> > >   # Compile librte_ring
> > >   #
> > >   CONFIG_RTE_LIBRTE_RING=y
> > >  diff --git a/lib/Makefile b/lib/Makefile
> > >  index 990f23a..1a067bf 100644
> > >  --- a/lib/Makefile
> > >  +++ b/lib/Makefile
> > >  @@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
> > >   DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
> > >   DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
> > >   DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
> > >  +DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
> > >   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
> > >   DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
> > >   DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
> > >  diff --git a/lib/librte_eal/common/include/rte_log.h
> > >  b/lib/librte_eal/common/include/rte_log.h
> > >  index 29f7d19..9a07d92 100644
> > >  --- a/lib/librte_eal/common/include/rte_log.h
> > >  +++ b/lib/librte_eal/common/include/rte_log.h
> > >  @@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
> > >   #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
> > >   #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
> > >   #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
> > >  +#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to eventdev. */
> > >  
> > >   /* these log types can be used in an application */
> > >   #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
> > >  diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
> > >  new file mode 100644
> > >  index 0000000..dac0663
> > >  --- /dev/null
> > >  +++ b/lib/librte_eventdev/Makefile
> > >  @@ -0,0 +1,57 @@
> > >  +#   BSD LICENSE
> > >  +#
> > >  +#   Copyright(c) 2016 Cavium networks. All rights reserved.
> > >  +#
> > >  +#   Redistribution and use in source and binary forms, with or without
> > >  +#   modification, are permitted provided that the following conditions
> > >  +#   are met:
> > >  +#
> > >  +#     * Redistributions of source code must retain the above copyright
> > >  +#       notice, this list of conditions and the following disclaimer.
> > >  +#     * Redistributions in binary form must reproduce the above copyright
> > >  +#       notice, this list of conditions and the following disclaimer in
> > >  +#       the documentation and/or other materials provided with the
> > >  +#       distribution.
> > >  +#     * Neither the name of Cavium networks nor the names of its
> > >  +#       contributors may be used to endorse or promote products derived
> > >  +#       from this software without specific prior written permission.
> > >  +#
> > >  +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> > >  +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> > >  +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> > >  +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> > >  +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> > >  +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> > >  +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> > >  +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> > >  +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> > >  +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> > >  +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> > >  +
> > >  +include $(RTE_SDK)/mk/rte.vars.mk
> > >  +
> > >  +# library name
> > >  +LIB = librte_eventdev.a
> > >  +
> > >  +# library version
> > >  +LIBABIVER := 1
> > >  +
> > >  +# build flags
> > >  +CFLAGS += -O3
> > >  +CFLAGS += $(WERROR_FLAGS)
> > >  +
> > >  +# library source files
> > >  +SRCS-y += rte_eventdev.c
> > >  +
> > >  +# export include files
> > >  +SYMLINK-y-include += rte_eventdev.h
> > >  +SYMLINK-y-include += rte_eventdev_pmd.h
> > >  +
> > >  +# versioning export map
> > >  +EXPORT_MAP := rte_eventdev_version.map
> > >  +
> > >  +# library dependencies
> > >  +DEPDIRS-y += lib/librte_eal
> > >  +DEPDIRS-y += lib/librte_mbuf
> > >  +
> > >  +include $(RTE_SDK)/mk/rte.lib.mk
> > >  diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
> > >  new file mode 100644
> > >  index 0000000..17ce5c3
> > >  --- /dev/null
> > >  +++ b/lib/librte_eventdev/rte_eventdev.c
> > >  @@ -0,0 +1,1211 @@
> > >  +/*
> > >  + *   BSD LICENSE
> > >  + *
> > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
> > >  + *
> > >  + *   Redistribution and use in source and binary forms, with or without
> > >  + *   modification, are permitted provided that the following conditions
> > >  + *   are met:
> > >  + *
> > >  + *     * Redistributions of source code must retain the above copyright
> > >  + *       notice, this list of conditions and the following disclaimer.
> > >  + *     * Redistributions in binary form must reproduce the above copyright
> > >  + *       notice, this list of conditions and the following disclaimer in
> > >  + *       the documentation and/or other materials provided with the
> > >  + *       distribution.
> > >  + *     * Neither the name of Cavium networks nor the names of its
> > >  + *       contributors may be used to endorse or promote products derived
> > >  + *       from this software without specific prior written permission.
> > >  + *
> > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> > >  + */
> > >  +
> > >  +#include <ctype.h>
> > >  +#include <stdio.h>
> > >  +#include <stdlib.h>
> > >  +#include <string.h>
> > >  +#include <stdarg.h>
> > >  +#include <errno.h>
> > >  +#include <stdint.h>
> > >  +#include <inttypes.h>
> > >  +#include <sys/types.h>
> > >  +#include <sys/queue.h>
> > >  +
> > >  +#include <rte_byteorder.h>
> > >  +#include <rte_log.h>
> > >  +#include <rte_debug.h>
> > >  +#include <rte_dev.h>
> > >  +#include <rte_pci.h>
> > >  +#include <rte_memory.h>
> > >  +#include <rte_memcpy.h>
> > >  +#include <rte_memzone.h>
> > >  +#include <rte_eal.h>
> > >  +#include <rte_per_lcore.h>
> > >  +#include <rte_lcore.h>
> > >  +#include <rte_atomic.h>
> > >  +#include <rte_branch_prediction.h>
> > >  +#include <rte_common.h>
> > >  +#include <rte_malloc.h>
> > >  +#include <rte_errno.h>
> > >  +
> > >  +#include "rte_eventdev.h"
> > >  +#include "rte_eventdev_pmd.h"
> > >  +
> > >  +struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
> > >  +
> > >  +struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
> > >  +
> > >  +static struct rte_eventdev_global eventdev_globals = {
> > >  +	.nb_devs		= 0
> > >  +};
> > >  +
> > >  +struct rte_eventdev_global *rte_eventdev_globals = &eventdev_globals;
> > >  +
> > >  +/* Event dev north bound API implementation */
> > >  +
> > >  +uint8_t
> > >  +rte_event_dev_count(void)
> > >  +{
> > >  +	return rte_eventdev_globals->nb_devs;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_dev_get_dev_id(const char *name)
> > >  +{
> > >  +	int i;
> > >  +
> > >  +	if (!name)
> > >  +		return -EINVAL;
> > >  +
> > >  +	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
> > >  +		if ((strcmp(rte_event_devices[i].data->name, name)
> > >  +				== 0) &&
> > >  +				(rte_event_devices[i].attached ==
> > >  +						RTE_EVENTDEV_ATTACHED))
> > >  +			return i;
> > >  +	return -ENODEV;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_dev_socket_id(uint8_t dev_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +
> > >  +	return dev->data->socket_id;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +
> > >  +	if (dev_info == NULL)
> > >  +		return -EINVAL;
> > >  +
> > >  +	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
> > >  +
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> > >  +	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
> > >  +
> > >  +	dev_info->pci_dev = dev->pci_dev;
> > >  +	if (dev->driver)
> > >  +		dev_info->driver_name = dev->driver->pci_drv.driver.name;
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +static inline int
> > >  +rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
> > >  +{
> > >  +	uint8_t old_nb_queues = dev->data->nb_queues;
> > >  +	void **queues;
> > >  +	uint8_t *queues_prio;
> > >  +	unsigned int i;
> > >  +
> > >  +	EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
> > >  +			 dev->data->dev_id);
> > >  +
> > >  +	/* First time configuration */
> > >  +	if (dev->data->queues == NULL && nb_queues != 0) {
> > >  +		dev->data->queues = rte_zmalloc_socket("eventdev->data->queues",
> > >  +				sizeof(dev->data->queues[0]) * nb_queues,
> > >  +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > >  +		if (dev->data->queues == NULL) {
> > >  +			dev->data->nb_queues = 0;
> > >  +			EDEV_LOG_ERR("failed to get memory for queue meta data,"
> > >  +					"nb_queues %u", nb_queues);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +		/* Allocate memory to store queue priority */
> > >  +		dev->data->queues_prio = rte_zmalloc_socket(
> > >  +				"eventdev->data->queues_prio",
> > >  +				sizeof(dev->data->queues_prio[0]) * nb_queues,
> > >  +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > >  +		if (dev->data->queues_prio == NULL) {
> > >  +			dev->data->nb_queues = 0;
> > >  +			EDEV_LOG_ERR("failed to get memory for queue priority,"
> > >  +					"nb_queues %u", nb_queues);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +
> > >  +	} else if (dev->data->queues != NULL && nb_queues != 0) {/* re-config */
> > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
> > >  +
> > >  +		queues = dev->data->queues;
> > >  +		for (i = nb_queues; i < old_nb_queues; i++)
> > >  +			(*dev->dev_ops->queue_release)(queues[i]);
> > >  +
> > >  +		queues = rte_realloc(queues, sizeof(queues[0]) * nb_queues,
> > >  +				RTE_CACHE_LINE_SIZE);
> > >  +		if (queues == NULL) {
> > >  +			EDEV_LOG_ERR("failed to realloc queue meta data,"
> > >  +						" nb_queues %u", nb_queues);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +		dev->data->queues = queues;
> > >  +
> > >  +		/* Re allocate memory to store queue priority */
> > >  +		queues_prio = dev->data->queues_prio;
> > >  +		queues_prio = rte_realloc(queues_prio,
> > >  +				sizeof(queues_prio[0]) * nb_queues,
> > >  +				RTE_CACHE_LINE_SIZE);
> > >  +		if (queues_prio == NULL) {
> > >  +			EDEV_LOG_ERR("failed to realloc queue priority,"
> > >  +						" nb_queues %u", nb_queues);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +		dev->data->queues_prio = queues_prio;
> > >  +
> > >  +		if (nb_queues > old_nb_queues) {
> > >  +			uint8_t new_qs = nb_queues - old_nb_queues;
> > >  +
> > >  +			memset(queues + old_nb_queues, 0,
> > >  +				sizeof(queues[0]) * new_qs);
> > >  +			memset(queues_prio + old_nb_queues, 0,
> > >  +				sizeof(queues_prio[0]) * new_qs);
> > >  +		}
> > >  +	} else if (dev->data->queues != NULL && nb_queues == 0) {
> > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
> > >  +
> > >  +		queues = dev->data->queues;
> > >  +		for (i = nb_queues; i < old_nb_queues; i++)
> > >  +			(*dev->dev_ops->queue_release)(queues[i]);
> > >  +	}
> > >  +
> > >  +	dev->data->nb_queues = nb_queues;
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +static inline int
> > >  +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
> > >  +{
> > >  +	uint8_t old_nb_ports = dev->data->nb_ports;
> > >  +	void **ports;
> > >  +	uint16_t *links_map;
> > >  +	uint8_t *ports_dequeue_depth;
> > >  +	uint8_t *ports_enqueue_depth;
> > >  +	unsigned int i;
> > >  +
> > >  +	EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
> > >  +			 dev->data->dev_id);
> > >  +
> > >  +	/* First time configuration */
> > >  +	if (dev->data->ports == NULL && nb_ports != 0) {
> > >  +		dev->data->ports = rte_zmalloc_socket("eventdev->data->ports",
> > >  +				sizeof(dev->data->ports[0]) * nb_ports,
> > >  +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > >  +		if (dev->data->ports == NULL) {
> > >  +			dev->data->nb_ports = 0;
> > >  +			EDEV_LOG_ERR("failed to get memory for port meta data,"
> > >  +					"nb_ports %u", nb_ports);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +
> > >  +		/* Allocate memory to store ports dequeue depth */
> > >  +		dev->data->ports_dequeue_depth =
> > >  +			rte_zmalloc_socket("eventdev->ports_dequeue_depth",
> > >  +			sizeof(dev->data->ports_dequeue_depth[0]) * nb_ports,
> > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > >  +		if (dev->data->ports_dequeue_depth == NULL) {
> > >  +			dev->data->nb_ports = 0;
> > >  +			EDEV_LOG_ERR("failed to get memory for port deq meta,"
> > >  +					"nb_ports %u", nb_ports);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +
> > >  +		/* Allocate memory to store ports enqueue depth */
> > >  +		dev->data->ports_enqueue_depth =
> > >  +			rte_zmalloc_socket("eventdev->ports_enqueue_depth",
> > >  +			sizeof(dev->data->ports_enqueue_depth[0]) * nb_ports,
> > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > >  +		if (dev->data->ports_enqueue_depth == NULL) {
> > >  +			dev->data->nb_ports = 0;
> > >  +			EDEV_LOG_ERR("failed to get memory for port enq meta,"
> > >  +					"nb_ports %u", nb_ports);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +
> > >  +		/* Allocate memory to store queue to port link connection */
> > >  +		dev->data->links_map =
> > >  +			rte_zmalloc_socket("eventdev->links_map",
> > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
> > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > >  +		if (dev->data->links_map == NULL) {
> > >  +			dev->data->nb_ports = 0;
> > >  +			EDEV_LOG_ERR("failed to get memory for port_map area,"
> > >  +					"nb_ports %u", nb_ports);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
> > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
> > >  +
> > >  +		ports = dev->data->ports;
> > >  +		ports_dequeue_depth = dev->data->ports_dequeue_depth;
> > >  +		ports_enqueue_depth = dev->data->ports_enqueue_depth;
> > >  +		links_map = dev->data->links_map;
> > >  +
> > >  +		for (i = nb_ports; i < old_nb_ports; i++)
> > >  +			(*dev->dev_ops->port_release)(ports[i]);
> > >  +
> > >  +		/* Realloc memory for ports */
> > >  +		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
> > >  +				RTE_CACHE_LINE_SIZE);
> > >  +		if (ports == NULL) {
> > >  +			EDEV_LOG_ERR("failed to realloc port meta data,"
> > >  +						" nb_ports %u", nb_ports);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +
> > >  +		/* Realloc memory for ports_dequeue_depth */
> > >  +		ports_dequeue_depth = rte_realloc(ports_dequeue_depth,
> > >  +			sizeof(ports_dequeue_depth[0]) * nb_ports,
> > >  +			RTE_CACHE_LINE_SIZE);
> > >  +		if (ports_dequeue_depth == NULL) {
> > >  +			EDEV_LOG_ERR("failed to realloc port deqeue meta data,"
> > >  +						" nb_ports %u", nb_ports);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +
> > >  +		/* Realloc memory for ports_enqueue_depth */
> > >  +		ports_enqueue_depth = rte_realloc(ports_enqueue_depth,
> > >  +			sizeof(ports_enqueue_depth[0]) * nb_ports,
> > >  +			RTE_CACHE_LINE_SIZE);
> > >  +		if (ports_enqueue_depth == NULL) {
> > >  +			EDEV_LOG_ERR("failed to realloc port enqueue meta data,"
> > >  +						" nb_ports %u", nb_ports);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +
> > >  +		/* Realloc memory to store queue to port link connection */
> > >  +		links_map = rte_realloc(links_map,
> > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
> > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> > >  +			RTE_CACHE_LINE_SIZE);
> > >  +		if (dev->data->links_map == NULL) {
> > >  +			dev->data->nb_ports = 0;
> > >  +			EDEV_LOG_ERR("failed to realloc mem for port_map area,"
> > >  +					"nb_ports %u", nb_ports);
> > >  +			return -(ENOMEM);
> > >  +		}
> > >  +
> > >  +		if (nb_ports > old_nb_ports) {
> > >  +			uint8_t new_ps = nb_ports - old_nb_ports;
> > >  +
> > >  +			memset(ports + old_nb_ports, 0,
> > >  +				sizeof(ports[0]) * new_ps);
> > >  +			memset(ports_dequeue_depth + old_nb_ports, 0,
> > >  +				sizeof(ports_dequeue_depth[0]) * new_ps);
> > >  +			memset(ports_enqueue_depth + old_nb_ports, 0,
> > >  +				sizeof(ports_enqueue_depth[0]) * new_ps);
> > >  +			memset(links_map +
> > >  +				(old_nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV),
> > >  +				0, sizeof(ports_enqueue_depth[0]) * new_ps);
> > >  +		}
> > >  +
> > >  +		dev->data->ports = ports;
> > >  +		dev->data->ports_dequeue_depth = ports_dequeue_depth;
> > >  +		dev->data->ports_enqueue_depth = ports_enqueue_depth;
> > >  +		dev->data->links_map = links_map;
> > >  +	} else if (dev->data->ports != NULL && nb_ports == 0) {
> > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
> > >  +
> > >  +		ports = dev->data->ports;
> > >  +		for (i = nb_ports; i < old_nb_ports; i++)
> > >  +			(*dev->dev_ops->port_release)(ports[i]);
> > >  +	}
> > >  +
> > >  +	dev->data->nb_ports = nb_ports;
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_dev_configure(uint8_t dev_id, struct rte_event_dev_config *dev_conf)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +	struct rte_event_dev_info info;
> > >  +	int diag;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
> > >  +
> > >  +	if (dev->data->dev_started) {
> > >  +		EDEV_LOG_ERR(
> > >  +		    "device %d must be stopped to allow configuration", dev_id);
> > >  +		return -EBUSY;
> > >  +	}
> > >  +
> > >  +	if (dev_conf == NULL)
> > >  +		return -EINVAL;
> > >  +
> > >  +	(*dev->dev_ops->dev_infos_get)(dev, &info);
> > >  +
> > >  +	/* Check dequeue_wait_ns value is in limit */
> > >  +	if (!(dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT)) {
> > >  +		if (dev_conf->dequeue_wait_ns < info.min_dequeue_wait_ns ||
> > >  +			dev_conf->dequeue_wait_ns > info.max_dequeue_wait_ns) {
> > >  +			EDEV_LOG_ERR("dev%d invalid dequeue_wait_ns=%d"
> > >  +			" min_dequeue_wait_ns=%d max_dequeue_wait_ns=%d",
> > >  +			dev_id, dev_conf->dequeue_wait_ns,
> > >  +			info.min_dequeue_wait_ns,
> > >  +			info.max_dequeue_wait_ns);
> > >  +			return -EINVAL;
> > >  +		}
> > >  +	}
> > >  +
> > >  +	/* Check nb_events_limit is in limit */
> > >  +	if (dev_conf->nb_events_limit > info.max_num_events) {
> > >  +		EDEV_LOG_ERR("dev%d nb_events_limit=%d > max_num_events=%d",
> > >  +		dev_id, dev_conf->nb_events_limit, info.max_num_events);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Check nb_event_queues is in limit */
> > >  +	if (!dev_conf->nb_event_queues) {
> > >  +		EDEV_LOG_ERR("dev%d nb_event_queues cannot be zero", dev_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +	if (dev_conf->nb_event_queues > info.max_event_queues) {
> > >  +		EDEV_LOG_ERR("dev%d nb_event_queues=%d > max_event_queues=%d",
> > >  +		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Check nb_event_ports is in limit */
> > >  +	if (!dev_conf->nb_event_ports) {
> > >  +		EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +	if (dev_conf->nb_event_ports > info.max_event_ports) {
> > >  +		EDEV_LOG_ERR("dev%d nb_event_ports=%d > max_event_ports=%d",
> > >  +		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Check nb_event_queue_flows is in limit */
> > >  +	if (!dev_conf->nb_event_queue_flows) {
> > >  +		EDEV_LOG_ERR("dev%d nb_flows cannot be zero", dev_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +	if (dev_conf->nb_event_queue_flows > info.max_event_queue_flows) {
> > >  +		EDEV_LOG_ERR("dev%d nb_flows=%x > max_flows=%x",
> > >  +		dev_id, dev_conf->nb_event_queue_flows,
> > >  +		info.max_event_queue_flows);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Check nb_event_port_dequeue_depth is in limit */
> > >  +	if (!dev_conf->nb_event_port_dequeue_depth) {
> > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be zero", dev_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +	if (dev_conf->nb_event_port_dequeue_depth >
> > >  +			 info.max_event_port_dequeue_depth) {
> > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth=%d > max_dequeue_depth=%d",
> > >  +		dev_id, dev_conf->nb_event_port_dequeue_depth,
> > >  +		info.max_event_port_dequeue_depth);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Check nb_event_port_enqueue_depth is in limit */
> > >  +	if (!dev_conf->nb_event_port_enqueue_depth) {
> > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be zero", dev_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +	if (dev_conf->nb_event_port_enqueue_depth >
> > >  +			 info.max_event_port_enqueue_depth) {
> > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth=%d > max_enqueue_depth=%d",
> > >  +		dev_id, dev_conf->nb_event_port_enqueue_depth,
> > >  +		info.max_event_port_enqueue_depth);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Copy the dev_conf parameter into the dev structure */
> > >  +	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
> > >  +
> > >  +	/* Setup new number of queues and reconfigure device. */
> > >  +	diag = rte_event_dev_queue_config(dev, dev_conf->nb_event_queues);
> > >  +	if (diag != 0) {
> > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
> > >  +				dev_id, diag);
> > >  +		return diag;
> > >  +	}
> > >  +
> > >  +	/* Setup new number of ports and reconfigure device. */
> > >  +	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
> > >  +	if (diag != 0) {
> > >  +		rte_event_dev_queue_config(dev, 0);
> > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
> > >  +				dev_id, diag);
> > >  +		return diag;
> > >  +	}
> > >  +
> > >  +	/* Configure the device */
> > >  +	diag = (*dev->dev_ops->dev_configure)(dev);
> > >  +	if (diag != 0) {
> > >  +		EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
> > >  +		rte_event_dev_queue_config(dev, 0);
> > >  +		rte_event_dev_port_config(dev, 0);
> > >  +	}
> > >  +
> > >  +	dev->data->event_dev_cap = info.event_dev_cap;
> > >  +	return diag;
> > >  +}
> > >  +
> > >  +static inline int
> > >  +is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
> > >  +{
> > >  +	if (queue_id < dev->data->nb_queues && queue_id <
> > >  +				RTE_EVENT_MAX_QUEUES_PER_DEV)
> > >  +		return 1;
> > >  +	else
> > >  +		return 0;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
> > >  +				 struct rte_event_queue_conf *queue_conf)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +
> > >  +	if (queue_conf == NULL)
> > >  +		return -EINVAL;
> > >  +
> > >  +	if (!is_valid_queue(dev, queue_id)) {
> > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf, -ENOTSUP);
> > >  +	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
> > >  +	(*dev->dev_ops->queue_def_conf)(dev, queue_id, queue_conf);
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +static inline int
> > >  +is_valid_atomic_queue_conf(struct rte_event_queue_conf *queue_conf)
> > >  +{
> > >  +	if (queue_conf && (
> > >  +		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
> > >  +			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
> > >  +		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
> > >  +			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
> > >  +		))
> > >  +		return 1;
> > >  +	else
> > >  +		return 0;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
> > >  +		      struct rte_event_queue_conf *queue_conf)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +	struct rte_event_queue_conf def_conf;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +
> > >  +	if (!is_valid_queue(dev, queue_id)) {
> > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Check nb_atomic_flows limit */
> > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
> > >  +		if (queue_conf->nb_atomic_flows == 0 ||
> > >  +		    queue_conf->nb_atomic_flows >
> > >  +			dev->data->dev_conf.nb_event_queue_flows) {
> > >  +			EDEV_LOG_ERR(
> > >  +		"dev%d queue%d Invalid nb_atomic_flows=%d max_flows=%d",
> > >  +			dev_id, queue_id, queue_conf->nb_atomic_flows,
> > >  +			dev->data->dev_conf.nb_event_queue_flows);
> > >  +			return -EINVAL;
> > >  +		}
> > >  +	}
> > >  +
> > >  +	/* Check nb_atomic_order_sequences limit */
> > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
> > >  +		if (queue_conf->nb_atomic_order_sequences == 0 ||
> > >  +		    queue_conf->nb_atomic_order_sequences >
> > >  +			dev->data->dev_conf.nb_event_queue_flows) {
> > >  +			EDEV_LOG_ERR(
> > >  +		"dev%d queue%d Invalid nb_atomic_order_seq=%d max_flows=%d",
> > >  +			dev_id, queue_id, queue_conf->nb_atomic_order_sequences,
> > >  +			dev->data->dev_conf.nb_event_queue_flows);
> > >  +			return -EINVAL;
> > >  +		}
> > >  +	}
> > >  +
> > >  +	if (dev->data->dev_started) {
> > >  +		EDEV_LOG_ERR(
> > >  +		    "device %d must be stopped to allow queue setup", dev_id);
> > >  +		return -EBUSY;
> > >  +	}
> > >  +
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -ENOTSUP);
> > >  +
> > >  +	if (queue_conf == NULL) {
> > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf,
> > >  +					-ENOTSUP);
> > >  +		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
> > >  +		def_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
> > >  +		queue_conf = &def_conf;
> > >  +	}
> > >  +
> > >  +	dev->data->queues_prio[queue_id] = queue_conf->priority;
> > >  +	return (*dev->dev_ops->queue_setup)(dev, queue_id, queue_conf);
> > >  +}
> > >  +
> > >  +uint8_t
> > >  +rte_event_queue_count(uint8_t dev_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	return dev->data->nb_queues;
> > >  +}
> > >  +
> > >  +uint8_t
> > >  +rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
> > >  +		return dev->data->queues_prio[queue_id];
> > >  +	else
> > >  +		return RTE_EVENT_QUEUE_PRIORITY_NORMAL;
> > >  +}
> > >  +
> > >  +static inline int
> > >  +is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
> > >  +{
> > >  +	if (port_id < dev->data->nb_ports)
> > >  +		return 1;
> > >  +	else
> > >  +		return 0;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
> > >  +				 struct rte_event_port_conf *port_conf)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +
> > >  +	if (port_conf == NULL)
> > >  +		return -EINVAL;
> > >  +
> > >  +	if (!is_valid_port(dev, port_id)) {
> > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf, -ENOTSUP);
> > >  +	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
> > >  +	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> > >  +		      struct rte_event_port_conf *port_conf)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +	struct rte_event_port_conf def_conf;
> > >  +	int diag;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +
> > >  +	if (!is_valid_port(dev, port_id)) {
> > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Check new_event_threshold limit */
> > >  +	if ((port_conf && !port_conf->new_event_threshold) ||
> > >  +			(port_conf && port_conf->new_event_threshold >
> > >  +				 dev->data->dev_conf.nb_events_limit)) {
> > >  +		EDEV_LOG_ERR(
> > >  +		   "dev%d port%d Invalid event_threshold=%d nb_events_limit=%d",
> > >  +			dev_id, port_id, port_conf->new_event_threshold,
> > >  +			dev->data->dev_conf.nb_events_limit);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Check dequeue_depth limit */
> > >  +	if ((port_conf && !port_conf->dequeue_depth) ||
> > >  +			(port_conf && port_conf->dequeue_depth >
> > >  +		dev->data->dev_conf.nb_event_port_dequeue_depth)) {
> > >  +		EDEV_LOG_ERR(
> > >  +		   "dev%d port%d Invalid dequeue depth=%d max_dequeue_depth=%d",
> > >  +			dev_id, port_id, port_conf->dequeue_depth,
> > >  +			dev->data->dev_conf.nb_event_port_dequeue_depth);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	/* Check enqueue_depth limit */
> > >  +	if ((port_conf && !port_conf->enqueue_depth) ||
> > >  +			(port_conf && port_conf->enqueue_depth >
> > >  +		dev->data->dev_conf.nb_event_port_enqueue_depth)) {
> > >  +		EDEV_LOG_ERR(
> > >  +		   "dev%d port%d Invalid enqueue depth=%d max_enqueue_depth=%d",
> > >  +			dev_id, port_id, port_conf->enqueue_depth,
> > >  +			dev->data->dev_conf.nb_event_port_enqueue_depth);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	if (dev->data->dev_started) {
> > >  +		EDEV_LOG_ERR(
> > >  +		    "device %d must be stopped to allow port setup", dev_id);
> > >  +		return -EBUSY;
> > >  +	}
> > >  +
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -ENOTSUP);
> > >  +
> > >  +	if (port_conf == NULL) {
> > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf,
> > >  +					-ENOTSUP);
> > >  +		(*dev->dev_ops->port_def_conf)(dev, port_id, &def_conf);
> > >  +		port_conf = &def_conf;
> > >  +	}
> > >  +
> > >  +	dev->data->ports_dequeue_depth[port_id] =
> > >  +			port_conf->dequeue_depth;
> > >  +	dev->data->ports_enqueue_depth[port_id] =
> > >  +			port_conf->enqueue_depth;
> > >  +
> > >  +	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
> > >  +
> > >  +	/* Unlink all the queues from this port(default state after setup) */
> > >  +	if (!diag)
> > >  +		diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
> > >  +
> > >  +	if (diag < 0)
> > >  +		return diag;
> > >  +
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +uint8_t
> > >  +rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	return dev->data->ports_dequeue_depth[port_id];
> > >  +}
> > >  +
> > >  +uint8_t
> > >  +rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	return dev->data->ports_enqueue_depth[port_id];
> > >  +}
> > >  +
> > >  +uint8_t
> > >  +rte_event_port_count(uint8_t dev_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	return dev->data->nb_ports;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> > >  +		    struct rte_event_queue_link link[], uint16_t nb_links)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +	struct rte_event_queue_link all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
> > >  +	uint16_t *links_map;
> > >  +	int i, diag;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -ENOTSUP);
> > >  +
> > >  +	if (!is_valid_port(dev, port_id)) {
> > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	if (link == NULL) {
> > >  +		for (i = 0; i < dev->data->nb_queues; i++) {
> > >  +			all_queues[i].queue_id = i;
> > >  +			all_queues[i].priority =
> > >  +				RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
> > >  +		}
> > >  +		link = all_queues;
> > >  +		nb_links = dev->data->nb_queues;
> > >  +	}
> > >  +
> > >  +	for (i = 0; i < nb_links; i++)
> > >  +		if (link[i].queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
> > >  +			return -EINVAL;
> > >  +
> > >  +	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id], link,
> > >  +						 nb_links);
> > >  +	if (diag < 0)
> > >  +		return diag;
> > >  +
> > >  +	links_map = dev->data->links_map;
> > >  +	/* Point links_map to this port specific area */
> > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> > >  +	for (i = 0; i < diag; i++)
> > >  +		links_map[link[i].queue_id] = (uint8_t)link[i].priority;
> > >  +
> > >  +	return diag;
> > >  +}
> > >  +
> > >  +#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
> > >  +
> > >  +int
> > >  +rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
> > >  +		      uint8_t queues[], uint16_t nb_unlinks)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
> > >  +	int i, diag;
> > >  +	uint16_t *links_map;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -ENOTSUP);
> > >  +
> > >  +	if (!is_valid_port(dev, port_id)) {
> > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	if (queues == NULL) {
> > >  +		for (i = 0; i < dev->data->nb_queues; i++)
> > >  +			all_queues[i] = i;
> > >  +		queues = all_queues;
> > >  +		nb_unlinks = dev->data->nb_queues;
> > >  +	}
> > >  +
> > >  +	for (i = 0; i < nb_unlinks; i++)
> > >  +		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
> > >  +			return -EINVAL;
> > >  +
> > >  +	diag = (*dev->dev_ops->port_unlink)(dev->data->ports[port_id], queues,
> > >  +					nb_unlinks);
> > >  +
> > >  +	if (diag < 0)
> > >  +		return diag;
> > >  +
> > >  +	links_map = dev->data->links_map;
> > >  +	/* Point links_map to this port specific area */
> > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> > >  +	for (i = 0; i < diag; i++)
> > >  +		links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
> > >  +
> > >  +	return diag;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
> > >  +			struct rte_event_queue_link link[])
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +	uint16_t *links_map;
> > >  +	int i, count = 0;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	if (!is_valid_port(dev, port_id)) {
> > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> > >  +		return -EINVAL;
> > >  +	}
> > >  +
> > >  +	links_map = dev->data->links_map;
> > >  +	/* Point links_map to this port specific area */
> > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> > >  +	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
> > >  +		if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
> > >  +			link[count].queue_id = i;
> > >  +			link[count].priority = (uint8_t)links_map[i];
> > >  +			++count;
> > >  +		}
> > >  +	}
> > >  +	return count;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_dequeue_wait_time(uint8_t dev_id, uint64_t ns, uint64_t *wait_ticks)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->wait_time, -ENOTSUP);
> > >  +
> > >  +	if (wait_ticks == NULL)
> > >  +		return -EINVAL;
> > >  +
> > >  +	(*dev->dev_ops->wait_time)(dev, ns, wait_ticks);
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_dev_dump(uint8_t dev_id, FILE *f)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
> > >  +
> > >  +	(*dev->dev_ops->dump)(dev, f);
> > >  +	return 0;
> > >  +
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_dev_start(uint8_t dev_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +	int diag;
> > >  +
> > >  +	EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
> > >  +
> > >  +	if (dev->data->dev_started != 0) {
> > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
> > >  +			dev_id);
> > >  +		return 0;
> > >  +	}
> > >  +
> > >  +	diag = (*dev->dev_ops->dev_start)(dev);
> > >  +	if (diag == 0)
> > >  +		dev->data->dev_started = 1;
> > >  +	else
> > >  +		return diag;
> > >  +
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +void
> > >  +rte_event_dev_stop(uint8_t dev_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
> > >  +
> > >  +	if (dev->data->dev_started == 0) {
> > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
> > >  +			dev_id);
> > >  +		return;
> > >  +	}
> > >  +
> > >  +	dev->data->dev_started = 0;
> > >  +	(*dev->dev_ops->dev_stop)(dev);
> > >  +}
> > >  +
> > >  +int
> > >  +rte_event_dev_close(uint8_t dev_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
> > >  +
> > >  +	/* Device must be stopped before it can be closed */
> > >  +	if (dev->data->dev_started == 1) {
> > >  +		EDEV_LOG_ERR("Device %u must be stopped before closing",
> > >  +				dev_id);
> > >  +		return -EBUSY;
> > >  +	}
> > >  +
> > >  +	return (*dev->dev_ops->dev_close)(dev);
> > >  +}
> > >  +
> > >  +static inline int
> > >  +rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
> > >  +		int socket_id)
> > >  +{
> > >  +	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
> > >  +	const struct rte_memzone *mz;
> > >  +	int n;
> > >  +
> > >  +	/* Generate memzone name */
> > >  +	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
> > >  +	if (n >= (int)sizeof(mz_name))
> > >  +		return -EINVAL;
> > >  +
> > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > >  +		mz = rte_memzone_reserve(mz_name,
> > >  +				sizeof(struct rte_eventdev_data),
> > >  +				socket_id, 0);
> > >  +	} else
> > >  +		mz = rte_memzone_lookup(mz_name);
> > >  +
> > >  +	if (mz == NULL)
> > >  +		return -ENOMEM;
> > >  +
> > >  +	*data = mz->addr;
> > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> > >  +		memset(*data, 0, sizeof(struct rte_eventdev_data));
> > >  +
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +static uint8_t
> > >  +rte_eventdev_find_free_device_index(void)
> > >  +{
> > >  +	uint8_t dev_id;
> > >  +
> > >  +	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
> > >  +		if (rte_eventdevs[dev_id].attached ==
> > >  +				RTE_EVENTDEV_DETACHED)
> > >  +			return dev_id;
> > >  +	}
> > >  +	return RTE_EVENT_MAX_DEVS;
> > >  +}
> > >  +
> > >  +struct rte_eventdev *
> > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id)
> > >  +{
> > >  +	struct rte_eventdev *eventdev;
> > >  +	uint8_t dev_id;
> > >  +
> > >  +	if (rte_eventdev_pmd_get_named_dev(name) != NULL) {
> > >  +		EDEV_LOG_ERR("Event device with name %s already "
> > >  +				"allocated!", name);
> > >  +		return NULL;
> > >  +	}
> > >  +
> > >  +	dev_id = rte_eventdev_find_free_device_index();
> > >  +	if (dev_id == RTE_EVENT_MAX_DEVS) {
> > >  +		EDEV_LOG_ERR("Reached maximum number of event devices");
> > >  +		return NULL;
> > >  +	}
> > >  +
> > >  +	eventdev = &rte_eventdevs[dev_id];
> > >  +
> > >  +	if (eventdev->data == NULL) {
> > >  +		struct rte_eventdev_data *eventdev_data = NULL;
> > >  +
> > >  +		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
> > >  +				socket_id);
> > >  +
> > >  +		if (retval < 0 || eventdev_data == NULL)
> > >  +			return NULL;
> > >  +
> > >  +		eventdev->data = eventdev_data;
> > >  +
> > >  +		snprintf(eventdev->data->name, RTE_EVENTDEV_NAME_MAX_LEN,
> > >  +				"%s", name);
> > >  +
> > >  +		eventdev->data->dev_id = dev_id;
> > >  +		eventdev->data->socket_id = socket_id;
> > >  +		eventdev->data->dev_started = 0;
> > >  +
> > >  +		eventdev->attached = RTE_EVENTDEV_ATTACHED;
> > >  +
> > >  +		eventdev_globals.nb_devs++;
> > >  +	}
> > >  +
> > >  +	return eventdev;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev)
> > >  +{
> > >  +	int ret;
> > >  +
> > >  +	if (eventdev == NULL)
> > >  +		return -EINVAL;
> > >  +
> > >  +	ret = rte_event_dev_close(eventdev->data->dev_id);
> > >  +	if (ret < 0)
> > >  +		return ret;
> > >  +
> > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
> > >  +	eventdev_globals.nb_devs--;
> > >  +	eventdev->data = NULL;
> > >  +
> > >  +	return 0;
> > >  +}
> > >  +
> > >  +struct rte_eventdev *
> > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
> > >  +		int socket_id)
> > >  +{
> > >  +	struct rte_eventdev *eventdev;
> > >  +
> > >  +	/* Allocate device structure */
> > >  +	eventdev = rte_eventdev_pmd_allocate(name, socket_id);
> > >  +	if (eventdev == NULL)
> > >  +		return NULL;
> > >  +
> > >  +	/* Allocate private device structure */
> > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > >  +		eventdev->data->dev_private =
> > >  +				rte_zmalloc_socket("eventdev device private",
> > >  +						dev_private_size,
> > >  +						RTE_CACHE_LINE_SIZE,
> > >  +						socket_id);
> > >  +
> > >  +		if (eventdev->data->dev_private == NULL)
> > >  +			rte_panic("Cannot allocate memzone for private device"
> > >  +					" data");
> > >  +	}
> > >  +
> > >  +	return eventdev;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
> > >  +			struct rte_pci_device *pci_dev)
> > >  +{
> > >  +	struct rte_eventdev_driver *eventdrv;
> > >  +	struct rte_eventdev *eventdev;
> > >  +
> > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
> > >  +
> > >  +	int retval;
> > >  +
> > >  +	eventdrv = (struct rte_eventdev_driver *)pci_drv;
> > >  +	if (eventdrv == NULL)
> > >  +		return -ENODEV;
> > >  +
> > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
> > >  +			sizeof(eventdev_name));
> > >  +
> > >  +	eventdev = rte_eventdev_pmd_allocate(eventdev_name,
> > >  +			 pci_dev->device.numa_node);
> > >  +	if (eventdev == NULL)
> > >  +		return -ENOMEM;
> > >  +
> > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > >  +		eventdev->data->dev_private =
> > >  +				rte_zmalloc_socket(
> > >  +						"eventdev private structure",
> > >  +						eventdrv->dev_private_size,
> > >  +						RTE_CACHE_LINE_SIZE,
> > >  +						rte_socket_id());
> > >  +
> > >  +		if (eventdev->data->dev_private == NULL)
> > >  +			rte_panic("Cannot allocate memzone for private "
> > >  +					"device data");
> > >  +	}
> > >  +
> > >  +	eventdev->pci_dev = pci_dev;
> > >  +	eventdev->driver = eventdrv;
> > >  +
> > >  +	/* Invoke PMD device initialization function */
> > >  +	retval = (*eventdrv->eventdev_init)(eventdev);
> > >  +	if (retval == 0)
> > >  +		return 0;
> > >  +
> > >  +	EDEV_LOG_ERR("driver %s: event_dev_init(vendor_id=0x%x device_id=0x%x)"
> > >  +			" failed", pci_drv->driver.name,
> > >  +			(unsigned int) pci_dev->id.vendor_id,
> > >  +			(unsigned int) pci_dev->id.device_id);
> > >  +
> > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> > >  +		rte_free(eventdev->data->dev_private);
> > >  +
> > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
> > >  +	eventdev_globals.nb_devs--;
> > >  +
> > >  +	return -ENXIO;
> > >  +}
> > >  +
> > >  +int
> > >  +rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev)
> > >  +{
> > >  +	const struct rte_eventdev_driver *eventdrv;
> > >  +	struct rte_eventdev *eventdev;
> > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
> > >  +	int ret;
> > >  +
> > >  +	if (pci_dev == NULL)
> > >  +		return -EINVAL;
> > >  +
> > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
> > >  +			sizeof(eventdev_name));
> > >  +
> > >  +	eventdev = rte_eventdev_pmd_get_named_dev(eventdev_name);
> > >  +	if (eventdev == NULL)
> > >  +		return -ENODEV;
> > >  +
> > >  +	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
> > >  +	if (eventdrv == NULL)
> > >  +		return -ENODEV;
> > >  +
> > >  +	/* Invoke PMD device uninit function */
> > >  +	if (*eventdrv->eventdev_uninit) {
> > >  +		ret = (*eventdrv->eventdev_uninit)(eventdev);
> > >  +		if (ret)
> > >  +			return ret;
> > >  +	}
> > >  +
> > >  +	/* Free the private data before releasing the device, as
> > >  +	 * rte_eventdev_pmd_release() sets eventdev->data to NULL.
> > >  +	 */
> > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> > >  +		rte_free(eventdev->data->dev_private);
> > >  +
> > >  +	/* Free event device */
> > >  +	rte_eventdev_pmd_release(eventdev);
> > >  +
> > >  +	eventdev->pci_dev = NULL;
> > >  +	eventdev->driver = NULL;
> > >  +
> > >  +	return 0;
> > >  +}
> > >  diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h
> > >  b/lib/librte_eventdev/rte_eventdev_pmd.h
> > >  new file mode 100644
> > >  index 0000000..e9d9b83
> > >  --- /dev/null
> > >  +++ b/lib/librte_eventdev/rte_eventdev_pmd.h
> > >  @@ -0,0 +1,504 @@
> > >  +/*
> > >  + *
> > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
> > >  + *
> > >  + *   Redistribution and use in source and binary forms, with or without
> > >  + *   modification, are permitted provided that the following conditions
> > >  + *   are met:
> > >  + *
> > >  + *     * Redistributions of source code must retain the above copyright
> > >  + *       notice, this list of conditions and the following disclaimer.
> > >  + *     * Redistributions in binary form must reproduce the above copyright
> > >  + *       notice, this list of conditions and the following disclaimer in
> > >  + *       the documentation and/or other materials provided with the
> > >  + *       distribution.
> > >  + *     * Neither the name of Cavium networks nor the names of its
> > >  + *       contributors may be used to endorse or promote products derived
> > >  + *       from this software without specific prior written permission.
> > >  + *
> > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> > >  + */
> > >  +
> > >  +#ifndef _RTE_EVENTDEV_PMD_H_
> > >  +#define _RTE_EVENTDEV_PMD_H_
> > >  +
> > >  +/** @file
> > >  + * RTE Event PMD APIs
> > >  + *
> > >  + * @note
> > >  + * These APIs are for event PMDs only and user applications should not call
> > >  + * them directly.
> > >  + */
> > >  +
> > >  +#ifdef __cplusplus
> > >  +extern "C" {
> > >  +#endif
> > >  +
> > >  +#include <string.h>
> > >  +
> > >  +#include <rte_dev.h>
> > >  +#include <rte_pci.h>
> > >  +#include <rte_malloc.h>
> > >  +#include <rte_log.h>
> > >  +#include <rte_common.h>
> > >  +
> > >  +#include "rte_eventdev.h"
> > >  +
> > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> > >  +#define RTE_PMD_DEBUG_TRACE(...) \
> > >  +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
> > >  +#else
> > >  +#define RTE_PMD_DEBUG_TRACE(...)
> > >  +#endif
> > >  +
> > >  +/* Logging Macros */
> > >  +#define EDEV_LOG_ERR(fmt, args...) \
> > >  +	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
> > >  +			__func__, __LINE__, ## args)
> > >  +
> > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> > >  +#define EDEV_LOG_DEBUG(fmt, args...) \
> > >  +	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
> > >  +			__func__, __LINE__, ## args)
> > >  +#else
> > >  +#define EDEV_LOG_DEBUG(fmt, args...) (void)0
> > >  +#endif
> > >  +
> > >  +/* Macros to check for valid device */
> > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
> > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
> > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
> > >  +		return retval; \
> > >  +	} \
> > >  +} while (0)
> > >  +
> > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
> > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
> > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
> > >  +		return; \
> > >  +	} \
> > >  +} while (0)
> > >  +
> > >  +#define RTE_EVENTDEV_DETACHED  (0)
> > >  +#define RTE_EVENTDEV_ATTACHED  (1)
> > >  +
> > >  +/**
> > >  + * Initialisation function of an event driver invoked for each matching
> > >  + * event PCI device detected during the PCI probing phase.
> > >  + *
> > >  + * @param dev
> > >  + *   The dev pointer is the address of the *rte_eventdev* structure associated
> > >  + *   with the matching device and which has been [automatically] allocated in
> > >  + *   the *rte_event_devices* array.
> > >  + *
> > >  + * @return
> > >  + *   - 0: Success, the device is properly initialised by the driver.
> > >  + *        In particular, the driver MUST have set up the *dev_ops* pointer
> > >  + *        of the *dev* structure.
> > >  + *   - <0: Error code of the device initialisation failure.
> > >  + */
> > >  +typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
> > >  +
> > >  +/**
> > >  + * Finalisation function of a driver invoked for each matching
> > >  + * PCI device detected during the PCI closing phase.
> > >  + *
> > >  + * @param dev
> > >  + *   The dev pointer is the address of the *rte_eventdev* structure associated
> > >  + *   with the matching device and which	has been [automatically] allocated in
> > >  + *   the *rte_event_devices* array.
> > >  + *
> > >  + * @return
> > >  + *   - 0: Success, the device is properly finalised by the driver.
> > >  + *        In particular, the driver MUST free the *dev_ops* pointer
> > >  + *        of the *dev* structure.
> > >  + *   - <0: Error code of the device initialisation failure.
> > >  + */
> > >  +typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
> > >  +
> > >  +/**
> > >  + * The structure associated with a PMD driver.
> > >  + *
> > >  + * Each driver acts as a PCI driver and is represented by a generic
> > >  + * *event_driver* structure that holds:
> > >  + *
> > >  + * - An *rte_pci_driver* structure (which must be the first field).
> > >  + *
> > >  + * - The *eventdev_init* function invoked for each matching PCI device.
> > >  + *
> > >  + * - The size of the private data to allocate for each matching device.
> > >  + */
> > >  +struct rte_eventdev_driver {
> > >  +	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
> > >  +	unsigned int dev_private_size;	/**< Size of device private data. */
> > >  +
> > >  +	eventdev_init_t eventdev_init;	/**< Device init function. */
> > >  +	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
> > >  +};
> > >  +
> > >  +/** Global structure used for maintaining state of allocated event devices */
> > >  +struct rte_eventdev_global {
> > >  +	uint8_t nb_devs;	/**< Number of devices found */
> > >  +	uint8_t max_devs;	/**< Max number of devices */
> > >  +};
> > >  +
> > >  +extern struct rte_eventdev_global *rte_eventdev_globals;
> > >  +/** Pointer to global event devices data structure. */
> > >  +extern struct rte_eventdev *rte_eventdevs;
> > >  +/** The pool of rte_eventdev structures. */
> > >  +
> > >  +/**
> > >  + * Get the rte_eventdev structure device pointer for the named device.
> > >  + *
> > >  + * @param name
> > >  + *   device name to select the device structure.
> > >  + *
> > >  + * @return
> > >  + *   - The rte_eventdev structure pointer for the given device ID.
> > >  + */
> > >  +static inline struct rte_eventdev *
> > >  +rte_eventdev_pmd_get_named_dev(const char *name)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +	unsigned int i;
> > >  +
> > >  +	if (name == NULL)
> > >  +		return NULL;
> > >  +
> > >  +	for (i = 0; i < rte_eventdev_globals->max_devs; i++) {
> > >  +		dev = &rte_eventdevs[i];
> > >  +		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
> > >  +				(strcmp(dev->data->name, name) == 0))
> > >  +			return dev;
> > >  +	}
> > >  +
> > >  +	return NULL;
> > >  +}
> > >  +
> > >  +/**
> > >  + * Validate whether the event device index refers to a valid, attached event device.
> > >  + *
> > >  + * @param dev_id
> > >  + *   Event device index.
> > >  + *
> > >  + * @return
> > >  + *   - If the device index is valid (1) or not (0).
> > >  + */
> > >  +static inline unsigned
> > >  +rte_eventdev_pmd_is_valid_dev(uint8_t dev_id)
> > >  +{
> > >  +	struct rte_eventdev *dev;
> > >  +
> > >  +	if (dev_id >= rte_eventdev_globals->nb_devs)
> > >  +		return 0;
> > >  +
> > >  +	dev = &rte_eventdevs[dev_id];
> > >  +	if (dev->attached != RTE_EVENTDEV_ATTACHED)
> > >  +		return 0;
> > >  +	else
> > >  +		return 1;
> > >  +}
> > >  +
> > >  +/**
> > >  + * Definitions of all functions exported by a driver through the
> > >  + * the generic structure of type *event_dev_ops* supplied in the
> > >  + * *rte_eventdev* structure associated with a device.
> > >  + */
> > >  +
> > >  +/**
> > >  + * Get device information of a device.
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + * @param dev_info
> > >  + *   Event device information structure
> > >  + *
> > >  + * @return
> > >  + *   Returns 0 on success
> > >  + */
> > >  +typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
> > >  +		struct rte_event_dev_info *dev_info);
> > >  +
> > >  +/**
> > >  + * Configure a device.
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + *
> > >  + * @return
> > >  + *   Returns 0 on success
> > >  + */
> > >  +typedef int (*eventdev_configure_t)(struct rte_eventdev *dev);
> > >  +
> > >  +/**
> > >  + * Start a configured device.
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + *
> > >  + * @return
> > >  + *   Returns 0 on success
> > >  + */
> > >  +typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
> > >  +
> > >  +/**
> > >  + * Stop a configured device.
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + */
> > >  +typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
> > >  +
> > >  +/**
> > >  + * Close a configured device.
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + *
> > >  + * @return
> > >  + * - 0 on success
> > >  + * - (-EAGAIN) if can't close as device is busy
> > >  + */
> > >  +typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
> > >  +
> > >  +/**
> > >  + * Retrieve the default event queue configuration.
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + * @param queue_id
> > >  + *   Event queue index
> > >  + * @param[out] queue_conf
> > >  + *   Event queue configuration structure
> > >  + *
> > >  + */
> > >  +typedef void (*eventdev_queue_default_conf_get_t)(struct rte_eventdev *dev,
> > >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
> > >  +
> > >  +/**
> > >  + * Setup an event queue.
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + * @param queue_id
> > >  + *   Event queue index
> > >  + * @param queue_conf
> > >  + *   Event queue configuration structure
> > >  + *
> > >  + * @return
> > >  + *   Returns 0 on success.
> > >  + */
> > >  +typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
> > >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
> > >  +
> > >  +/**
> > >  + * Release memory resources allocated by given event queue.
> > >  + *
> > >  + * @param queue
> > >  + *   Event queue pointer
> > >  + *
> > >  + */
> > >  +typedef void (*eventdev_queue_release_t)(void *queue);
> > >  +
> > >  +/**
> > >  + * Retrieve the default event port configuration.
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + * @param port_id
> > >  + *   Event port index
> > >  + * @param[out] port_conf
> > >  + *   Event port configuration structure
> > >  + *
> > >  + */
> > >  +typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev *dev,
> > >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
> > >  +
> > >  +/**
> > >  + * Setup an event port.
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + * @param port_id
> > >  + *   Event port index
> > >  + * @param port_conf
> > >  + *   Event port configuration structure
> > >  + *
> > >  + * @return
> > >  + *   Returns 0 on success.
> > >  + */
> > >  +typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
> > >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
> > >  +
> > >  +/**
> > >  + * Release memory resources allocated by given event port.
> > >  + *
> > >  + * @param port
> > >  + *   Event port pointer
> > >  + *
> > >  + */
> > >  +typedef void (*eventdev_port_release_t)(void *port);
> > >  +
> > >  +/**
> > >  + * Link multiple source event queues to destination event port.
> > >  + *
> > >  + * @param port
> > >  + *   Event port pointer
> > >  + * @param link
> > >  + *   An array of *nb_links* pointers to *rte_event_queue_link* structure
> > >  + * @param nb_links
> > >  + *   The number of links to establish
> > >  + *
> > >  + * @return
> > >  + *   Returns 0 on success.
> > >  + *
> > >  + */
> > >  +typedef int (*eventdev_port_link_t)(void *port,
> > >  +		struct rte_event_queue_link link[], uint16_t nb_links);
> > >  +
> > >  +/**
> > >  + * Unlink multiple source event queues from destination event port.
> > >  + *
> > >  + * @param port
> > >  + *   Event port pointer
> > >  + * @param queues
> > >  + *   An array of *nb_unlinks* event queues to be unlinked from the event port.
> > >  + * @param nb_unlinks
> > >  + *   The number of unlinks to establish
> > >  + *
> > >  + * @return
> > >  + *   Returns 0 on success.
> > >  + *
> > >  + */
> > >  +typedef int (*eventdev_port_unlink_t)(void *port,
> > >  +		uint8_t queues[], uint16_t nb_unlinks);
> > >  +
> > >  +/**
> > >  + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + * @param ns
> > >  + *   Wait time in nanosecond
> > >  + * @param[out] wait_ticks
> > >  + *   Value for the *wait* parameter in rte_event_dequeue() function
> > >  + *
> > >  + */
> > >  +typedef void (*eventdev_dequeue_wait_time_t)(struct rte_eventdev *dev,
> > >  +		uint64_t ns, uint64_t *wait_ticks);
> > >  +
> > >  +/**
> > >  + * Dump internal information
> > >  + *
> > >  + * @param dev
> > >  + *   Event device pointer
> > >  + * @param f
> > >  + *   A pointer to a file for output
> > >  + *
> > >  + */
> > >  +typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
> > >  +
> > >  +/** Event device operations function pointer table */
> > >  +struct rte_eventdev_ops {
> > >  +	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
> > >  +	eventdev_configure_t dev_configure;	/**< Configure device. */
> > >  +	eventdev_start_t dev_start;		/**< Start device. */
> > >  +	eventdev_stop_t dev_stop;		/**< Stop device. */
> > >  +	eventdev_close_t dev_close;		/**< Close device. */
> > >  +
> > >  +	eventdev_queue_default_conf_get_t queue_def_conf;
> > >  +	/**< Get default queue configuration. */
> > >  +	eventdev_queue_setup_t queue_setup;
> > >  +	/**< Set up an event queue. */
> > >  +	eventdev_queue_release_t queue_release;
> > >  +	/**< Release an event queue. */
> > >  +
> > >  +	eventdev_port_default_conf_get_t port_def_conf;
> > >  +	/**< Get default port configuration. */
> > >  +	eventdev_port_setup_t port_setup;
> > >  +	/**< Set up an event port. */
> > >  +	eventdev_port_release_t port_release;
> > >  +	/**< Release an event port. */
> > >  +
> > >  +	eventdev_port_link_t port_link;
> > >  +	/**< Link event queues to an event port. */
> > >  +	eventdev_port_unlink_t port_unlink;
> > >  +	/**< Unlink event queues from an event port. */
> > >  +	eventdev_dequeue_wait_time_t wait_time;
> > >  +	/**< Converts nanoseconds to *wait* value for rte_event_dequeue() */
> > >  +	eventdev_dump_t dump;
> > >  +	/* Dump internal information */
> > >  +};
> > >  +
> > >  +/**
> > >  + * Allocates a new eventdev slot for an event device and returns the pointer
> > >  + * to that slot for the driver to use.
> > >  + *
> > >  + * @param name
> > >  + *   Unique identifier name for each device
> > >  + * @param socket_id
> > >  + *   Socket to allocate resources on.
> > >  + * @return
> > >  + *   - Slot in the rte_dev_devices array for a new device;
> > >  + */
> > >  +struct rte_eventdev *
> > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id);
> > >  +
> > >  +/**
> > >  + * Release the specified eventdev device.
> > >  + *
> > >  + * @param eventdev
> > >  + * The *eventdev* pointer is the address of the *rte_eventdev* structure.
> > >  + * @return
> > >  + *   - 0 on success, negative on error
> > >  + */
> > >  +int
> > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev);
> > >  +
> > >  +/**
> > >  + * Creates a new virtual event device and returns the pointer to that device.
> > >  + *
> > >  + * @param name
> > >  + *   PMD type name
> > >  + * @param dev_private_size
> > >  + *   Size of event PMDs private data
> > >  + * @param socket_id
> > >  + *   Socket to allocate resources on.
> > >  + *
> > >  + * @return
> > >  + *   - Eventdev pointer if device is successfully created.
> > >  + *   - NULL if device cannot be created.
> > >  + */
> > >  +struct rte_eventdev *
> > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
> > >  +		int socket_id);
> > >  +
> > >  +
> > >  +/**
> > >  + * Wrapper for use by pci drivers as a .probe function to attach to an event
> > >  + * interface.
> > >  + */
> > >  +int rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
> > >  +			    struct rte_pci_device *pci_dev);
> > >  +
> > >  +/**
> > >  + * Wrapper for use by pci drivers as a .remove function to detach an event
> > >  + * interface.
> > >  + */
> > >  +int rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev);
> > >  +
> > >  +#ifdef __cplusplus
> > >  +}
> > >  +#endif
> > >  +
> > >  +#endif /* _RTE_EVENTDEV_PMD_H_ */
> > >  diff --git a/lib/librte_eventdev/rte_eventdev_version.map
> > >  b/lib/librte_eventdev/rte_eventdev_version.map
> > >  new file mode 100644
> > >  index 0000000..ef40aae
> > >  --- /dev/null
> > >  +++ b/lib/librte_eventdev/rte_eventdev_version.map
> > >  @@ -0,0 +1,39 @@
> > >  +DPDK_17.02 {
> > >  +	global:
> > >  +
> > >  +	rte_eventdevs;
> > >  +
> > >  +	rte_event_dev_count;
> > >  +	rte_event_dev_get_dev_id;
> > >  +	rte_event_dev_socket_id;
> > >  +	rte_event_dev_info_get;
> > >  +	rte_event_dev_configure;
> > >  +	rte_event_dev_start;
> > >  +	rte_event_dev_stop;
> > >  +	rte_event_dev_close;
> > >  +	rte_event_dev_dump;
> > >  +
> > >  +	rte_event_port_default_conf_get;
> > >  +	rte_event_port_setup;
> > >  +	rte_event_port_dequeue_depth;
> > >  +	rte_event_port_enqueue_depth;
> > >  +	rte_event_port_count;
> > >  +	rte_event_port_link;
> > >  +	rte_event_port_unlink;
> > >  +	rte_event_port_links_get;
> > >  +
> > >  +	rte_event_queue_default_conf_get;
> > >  +	rte_event_queue_setup;
> > >  +	rte_event_queue_count;
> > >  +	rte_event_queue_priority;
> > >  +
> > >  +	rte_event_dequeue_wait_time;
> > >  +
> > >  +	rte_eventdev_pmd_allocate;
> > >  +	rte_eventdev_pmd_release;
> > >  +	rte_eventdev_pmd_vdev_init;
> > >  +	rte_eventdev_pmd_pci_probe;
> > >  +	rte_eventdev_pmd_pci_remove;
> > >  +
> > >  +	local: *;
> > >  +};
> > >  diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> > >  index f75f0e2..716725a 100644
> > >  --- a/mk/rte.app.mk
> > >  +++ b/mk/rte.app.mk
> > >  @@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -
> > >  lrte_mbuf
> > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
> > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
> > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
> > >  +_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
> > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
> > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
> > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
> > >  --
> > >  2.5.5
> > 

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 0/4] libeventdev API and northbound implementation
  2016-11-21  9:57         ` Bruce Richardson
@ 2016-11-22  0:11           ` Thomas Monjalon
  0 siblings, 0 replies; 109+ messages in thread
From: Thomas Monjalon @ 2016-11-22  0:11 UTC (permalink / raw)
  To: Bruce Richardson, Jerin Jacob
  Cc: dev, harry.van.haaren, hemant.agrawal, gage.eads

2016-11-21 09:57, Bruce Richardson:
> On Mon, Nov 21, 2016 at 10:40:50AM +0100, Thomas Monjalon wrote:
> > Are you asking for a temporary tree?
> > If yes, please tell its name and its committers, it will be done.
> 
> Yes, we are asking for a new tree, but I would not assume it is
> temporary - it might be, but it also might not be, given how other
> threads are discussing having an increasing number of subtrees giving
> pull requests. :-)
> 
> Name: dpdk-eventdev-next

Named dpdk-next-eventdev for consistency.

> Committers: Bruce Richardson & Jerin Jacob

Access granted. Jerin, could you send me a public SSH key please?

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 0/4] libeventdev API and northbound implementation
  2016-11-18 19:27     ` Jerin Jacob
  2016-11-21  9:40       ` Thomas Monjalon
@ 2016-11-22  2:00       ` Yuanhan Liu
  2016-11-22  9:05         ` Shreyansh Jain
  1 sibling, 1 reply; 109+ messages in thread
From: Yuanhan Liu @ 2016-11-22  2:00 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Bruce Richardson, dev, harry.van.haaren, hemant.agrawal,
	gage.eads, thomas.monjalon

On Sat, Nov 19, 2016 at 12:57:15AM +0530, Jerin Jacob wrote:
> On Fri, Nov 18, 2016 at 04:04:29PM +0000, Bruce Richardson wrote:
> > +Thomas
> > 
> > On Fri, Nov 18, 2016 at 03:25:18PM +0000, Bruce Richardson wrote:
> > > On Fri, Nov 18, 2016 at 11:14:58AM +0530, Jerin Jacob wrote:
> > > > As previously discussed in RFC v1 [1], RFC v2 [2], with changes
> > > > described in [3] (also pasted below), here is the first non-draft series
> > > > for this new API.
> > > > 
> > > > [1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
> > > > [2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
> > > > [3] http://dpdk.org/ml/archives/dev/2016-October/048196.html
> > > > 
> > > > Changes since RFC v2:
> > > > 
> > > > - Updated the documentation to define the need for this library[Jerin]
> > > > - Added RTE_EVENT_QUEUE_CFG_*_ONLY configuration parameters in
> > > >   struct rte_event_queue_conf to enable optimized sw implementation [Bruce]
> > > > - Introduced RTE_EVENT_OP* ops [Bruce]
> > > > - Added nb_event_queue_flows,nb_event_port_dequeue_depth, nb_event_port_enqueue_depth
> > > >   in rte_event_dev_configure() like ethdev and crypto library[Jerin]
> > > > - Removed rte_event_release() and replaced with RTE_EVENT_OP_RELEASE ops to
> > > >   reduce fast path APIs and it is redundant too[Jerin]
> > > > - In the view of better application portability, Removed pin_event
> > > >   from rte_event_enqueue as it is just hint and Intel/NXP can not support it[Jerin]
> > > > - Added rte_event_port_links_get()[Jerin]
> > > > - Added rte_event_dev_dump[Harry]
> > > > 
> > > > Notes:
> > > > 
> > > > - This patch set is check-patch clean with an exception that
> > > > 02/04 has one WARNING:MACRO_WITH_FLOW_CONTROL
> > > > - Looking forward to getting additional maintainers for libeventdev
> > > > 
> > > > 
> > > > Possible next steps:
> > > > 1) Review this patch set
> > > > 2) Integrate Intel's SW driver[http://dpdk.org/dev/patchwork/patch/17049/]
> > > > 3) Review proposed examples/eventdev_pipeline application[http://dpdk.org/dev/patchwork/patch/17053/]
> > > > 4) Review proposed functional tests[http://dpdk.org/dev/patchwork/patch/17051/]
> > > > 5) Cavium's HW based eventdev driver
> > > > 
> > > > I am planning to work on (3),(4) and (5)
> > > > 
> > > Thanks Jerin,
> > > 
> > > we'll review and get back to you with any comments or feedback (1), and
> > > obviously start working on item (2) also! :-)
> > > 
> > > I'm also wondering whether we should have a staging tree for this work to
> > > make interaction between us easier. Although this may not be
> > > finalised enough for 17.02 release, do you think having a
> > > dpdk-eventdev-next tree would be a help? My thinking is that once we get
> > > the eventdev library itself in reasonable shape following our review, we
> > > could commit that and make any changes thereafter as new patches, rather
> > > than constantly respinning the same set. It also gives us a clean git
> > > tree to base the respective driver implementations on from our two sides.
> > > 
> > > Thomas, any thoughts here on your end - or from anyone else?
> 
> I was thinking more or less along the same lines. To avoid re-spinning the
> same set, it is better to have the libeventdev library marked as EXPERIMENTAL
> and commit it somewhere on dpdk-eventdev-next or the main tree
> 
> I think, EXPERIMENTAL status can be changed only when
> - At least two event drivers available
> - Functional test applications fine with at least two drivers
> - Portable example application to showcase the features of the library
> - eventdev integration with another dpdk subsystem such as ethdev

I'm wondering whether we could have a staging tree for all features like
this one (with one branch for each feature)?

	--yliu

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 0/4] libeventdev API and northbound implementation
  2016-11-22  2:00       ` Yuanhan Liu
@ 2016-11-22  9:05         ` Shreyansh Jain
  0 siblings, 0 replies; 109+ messages in thread
From: Shreyansh Jain @ 2016-11-22  9:05 UTC (permalink / raw)
  To: Yuanhan Liu, Jerin Jacob
  Cc: Bruce Richardson, dev, harry.van.haaren, hemant.agrawal,
	gage.eads, thomas.monjalon

On Tuesday 22 November 2016 07:30 AM, Yuanhan Liu wrote:
> On Sat, Nov 19, 2016 at 12:57:15AM +0530, Jerin Jacob wrote:
>> On Fri, Nov 18, 2016 at 04:04:29PM +0000, Bruce Richardson wrote:
>>> +Thomas
>>>
>>> On Fri, Nov 18, 2016 at 03:25:18PM +0000, Bruce Richardson wrote:
>>>> On Fri, Nov 18, 2016 at 11:14:58AM +0530, Jerin Jacob wrote:
>>>>> As previously discussed in RFC v1 [1], RFC v2 [2], with changes
>>>>> described in [3] (also pasted below), here is the first non-draft series
>>>>> for this new API.
>>>>>
>>>>> [1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
>>>>> [2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
>>>>> [3] http://dpdk.org/ml/archives/dev/2016-October/048196.html
>>>>>
>>>>> Changes since RFC v2:
>>>>>
>>>>> - Updated the documentation to define the need for this library[Jerin]
>>>>> - Added RTE_EVENT_QUEUE_CFG_*_ONLY configuration parameters in
>>>>>   struct rte_event_queue_conf to enable optimized sw implementation [Bruce]
>>>>> - Introduced RTE_EVENT_OP* ops [Bruce]
>>>>> - Added nb_event_queue_flows,nb_event_port_dequeue_depth, nb_event_port_enqueue_depth
>>>>>   in rte_event_dev_configure() like ethdev and crypto library[Jerin]
>>>>> - Removed rte_event_release() and replaced with RTE_EVENT_OP_RELEASE ops to
>>>>>   reduce fast path APIs and it is redundant too[Jerin]
>>>>> - In the view of better application portability, Removed pin_event
>>>>>   from rte_event_enqueue as it is just hint and Intel/NXP can not support it[Jerin]
>>>>> - Added rte_event_port_links_get()[Jerin]
>>>>> - Added rte_event_dev_dump[Harry]
>>>>>
>>>>> Notes:
>>>>>
>>>>> - This patch set is check-patch clean with an exception that
>>>>> 02/04 has one WARNING:MACRO_WITH_FLOW_CONTROL
>>>>> - Looking forward to getting additional maintainers for libeventdev
>>>>>
>>>>>
>>>>> Possible next steps:
>>>>> 1) Review this patch set
>>>>> 2) Integrate Intel's SW driver[http://dpdk.org/dev/patchwork/patch/17049/]
>>>>> 3) Review proposed examples/eventdev_pipeline application[http://dpdk.org/dev/patchwork/patch/17053/]
>>>>> 4) Review proposed functional tests[http://dpdk.org/dev/patchwork/patch/17051/]
>>>>> 5) Cavium's HW based eventdev driver
>>>>>
>>>>> I am planning to work on (3),(4) and (5)
>>>>>
>>>> Thanks Jerin,
>>>>
>>>> we'll review and get back to you with any comments or feedback on (1), and
>>>> obviously start working on item (2) also! :-)
>>>>
>>>> I'm also wondering whether we should have a staging tree for this work to
>>>> make interaction between us easier. Although this may not be
>>>> finalised enough for the 17.02 release, do you think having a
>>>> dpdk-eventdev-next tree would be a help? My thinking is that once we get
>>>> the eventdev library itself in reasonable shape following our review, we
>>>> could commit that and make any changes thereafter as new patches, rather
>>>> than constantly respinning the same set. It also gives us a clean git
>>>> tree to base the respective driver implementations on from our two sides.
>>>>
>>>> Thomas, any thoughts here on your end - or from anyone else?
>>
>> I was thinking more or less along the same lines. To avoid re-spinning the
>> same set, it would be better to mark the libeventdev library as EXPERIMENTAL
>> and commit it somewhere on dpdk-eventdev-next or the main tree.
>>
>> I think the EXPERIMENTAL status can be changed only when:
>> - At least two event drivers are available
>> - Functional test applications pass with at least two drivers
>> - A portable example application showcases the features of the library
>> - eventdev is integrated with another DPDK subsystem such as ethdev
>
> I'm wondering whether we could have a staging tree for all features like
> this one (with one branch per feature)?
>
> 	--yliu
>

+1

It would help a lot of 'experimental' stuff reach a wider audience
without waiting for a complete upstreaming cycle.
Though, I am not sure how we would limit the branches - or whether that
is even required.

-- 
-
Shreyansh

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-21 19:31       ` Jerin Jacob
@ 2016-11-22 15:15         ` Eads, Gage
  2016-11-22 18:19           ` Jerin Jacob
  2016-11-23  9:57           ` Bruce Richardson
  0 siblings, 2 replies; 109+ messages in thread
From: Eads, Gage @ 2016-11-22 15:15 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal



>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  Sent: Monday, November 21, 2016 1:32 PM
>  To: Eads, Gage <gage.eads@intel.com>
>  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
>  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
>  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>  
>  On Tue, Nov 22, 2016 at 12:43:58AM +0530, Jerin Jacob wrote:
>  > On Mon, Nov 21, 2016 at 05:45:51PM +0000, Eads, Gage wrote:
>  > > Hi Jerin,
>  > >
>  > > I did a quick review and overall this implementation looks good. I noticed
>  just one issue in rte_event_queue_setup(): the check of
>  nb_atomic_order_sequences is being applied to atomic-type queues, but that
>  field applies to ordered-type queues.
>  >
>  > Thanks Gage. I will fix that in v2.
>  >
>  > >
>  > > One open issue I noticed is that the "typical workflow" description starting in
>  rte_eventdev.h:204 conflicts with the centralized software PMD that Harry
>  posted last week. Specifically, that PMD expects a single core to call the
>  schedule function. We could extend the documentation to account for this
>  alternative style of scheduler invocation, or discuss ways to make the software
>  PMD work with the documented workflow. I prefer the former, but either way I
>  think we ought to expose the scheduler's expected usage to the user -- perhaps
>  through an RTE_EVENT_DEV_CAP flag?
>  >
>  > I prefer the former too; you can propose the documentation change required
>  for the software PMD.

Sure, proposal follows. The "typical workflow" isn't optimal, of course, since it puts a conditional in the fast path, but it demonstrates the idea simply.

(line 204)
 * An event driven based application has following typical workflow on fastpath:
 * \code{.c}                                                                        
 *      while (1) {                                                                 
 *                                                                                  
 *              if (dev_info.event_dev_cap &                                        
 *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)                        
 *                      rte_event_schedule(dev_id);                                 
 *                                                                                  
 *              rte_event_dequeue(...);                                             
 *                                                                                  
 *              (event processing)                                                  
 *                                                                                  
 *              rte_event_enqueue(...);                                             
 *      }                                                                           
 * \endcode                                                                         
 *                                                                                  
 * The *schedule* operation is intended to do event scheduling, and the             
 * *dequeue* operation returns the scheduled events. An implementation              
 * is free to define the semantics between *schedule* and *dequeue*. For            
 * example, a system based on a hardware scheduler can define its                   
 * rte_event_schedule() to be a NOOP, whereas a software scheduler can use
 * the *schedule* operation to schedule events. The                                 
 * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates whether
 * rte_event_schedule() should be called by all cores or by a single (typically 
 * dedicated) core.

(line 308)
#define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL << 2)
/**< Event scheduling implementation is distributed and all cores must execute       
 *  rte_event_schedule(). If unset, the implementation is centralized and     
 *  a single core must execute the schedule operation.                        
 *                                                                              
 *  \see rte_event_schedule()                                                   
 */
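
To make the centralized case concrete, here is a minimal sketch of that style (illustrative only; it assumes the scheduler core is launched separately, e.g. via rte_eal_remote_launch(), and the worker-side dequeue/process/enqueue calls are elided since their exact signatures are not reproduced here):

/* Dedicated scheduler core: drives scheduling for the whole device. */
static int
schedule_loop(void *arg)
{
        uint8_t dev_id = *(const uint8_t *)arg;

        while (1)
                rte_event_schedule(dev_id);
        return 0;
}

/*
 * Worker cores run the dequeue/process/enqueue loop from the workflow
 * above; with RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED unset they never call
 * rte_event_schedule() themselves.
 */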

>  >
>  > On the same note, if the software PMD based workflow needs a separate core(s)
>  > for the schedule function, can we hide that from the API specification and pass
>  > an argument to the SW PMD to define the scheduling core(s)?
>  >
>  > Something like --vdev=eventsw0,schedule_cmask=0x2

An API for controlling the scheduler coremask instead of (or perhaps in addition to) the vdev argument would be good, to allow runtime control. I can imagine apps that scale the number of cores based on load, and in doing so may want to migrate the scheduler to a different core.
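
Purely to illustrate the shape of such a control -- these names are hypothetical and not part of this patch set:

/* Hypothetical runtime control (illustrative prototypes only): */
int rte_event_schedule_cmask_set(uint8_t dev_id, uint64_t cmask);
int rte_event_schedule_cmask_get(uint8_t dev_id, uint64_t *cmask);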

>  
>  Just a thought,
>  
>  Perhaps we could introduce a generic "service" cores concept in DPDK to hide
>  the requirement where the implementation needs a dedicated core to do certain
>  work. I guess it would be useful for other NPU integration in DPDK.
>  

That's an interesting idea. As you suggested in the other thread, this concept could be extended to the "producer" code in the example, for configurations where the NIC requires software to feed packets into the eventdev, and to the other subsystems mentioned in your original PDF (crypto and timer).

>  >
>  > >
>  > > Thanks,
>  > > Gage
>  > >
>  > > >  -----Original Message-----
>  > > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  > > >  Sent: Thursday, November 17, 2016 11:45 PM
>  > > >  To: dev@dpdk.org
>  > > >  Cc: Richardson, Bruce <bruce.richardson@intel.com>; Van Haaren, Harry
>  > > >  <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com; Eads, Gage
>  > > >  <gage.eads@intel.com>; Jerin Jacob
>  <jerin.jacob@caviumnetworks.com>
>  > > >  Subject: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound
>  APIs
>  > > >
>  > > >  This patch set defines the southbound driver interface
>  > > >  and implements the common code required for northbound
>  > > >  eventdev API interface.
>  > > >
>  > > >  Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>  > > >  ---
>  > > >   config/common_base                           |    6 +
>  > > >   lib/Makefile                                 |    1 +
>  > > >   lib/librte_eal/common/include/rte_log.h      |    1 +
>  > > >   lib/librte_eventdev/Makefile                 |   57 ++
>  > > >   lib/librte_eventdev/rte_eventdev.c           | 1211
>  > > >  ++++++++++++++++++++++++++
>  > > >   lib/librte_eventdev/rte_eventdev_pmd.h       |  504 +++++++++++
>  > > >   lib/librte_eventdev/rte_eventdev_version.map |   39 +
>  > > >   mk/rte.app.mk                                |    1 +
>  > > >   8 files changed, 1820 insertions(+)
>  > > >   create mode 100644 lib/librte_eventdev/Makefile
>  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev.c
>  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
>  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
>  > > >
>  > > >  diff --git a/config/common_base b/config/common_base
>  > > >  index 4bff83a..7a8814e 100644
>  > > >  --- a/config/common_base
>  > > >  +++ b/config/common_base
>  > > >  @@ -411,6 +411,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
>  > > >   CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
>  > > >
>  > > >   #
>  > > >  +# Compile generic event device library
>  > > >  +#
>  > > >  +CONFIG_RTE_LIBRTE_EVENTDEV=y
>  > > >  +CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
>  > > >  +CONFIG_RTE_EVENT_MAX_DEVS=16
>  > > >  +CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
>  > > >   # Compile librte_ring
>  > > >   #
>  > > >   CONFIG_RTE_LIBRTE_RING=y
>  > > >  diff --git a/lib/Makefile b/lib/Makefile
>  > > >  index 990f23a..1a067bf 100644
>  > > >  --- a/lib/Makefile
>  > > >  +++ b/lib/Makefile
>  > > >  @@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) +=
>  librte_cfgfile
>  > > >   DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
>  > > >   DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
>  > > >   DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
>  > > >  +DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
>  > > >   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
>  > > >   DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
>  > > >   DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
>  > > >  diff --git a/lib/librte_eal/common/include/rte_log.h
>  > > >  b/lib/librte_eal/common/include/rte_log.h
>  > > >  index 29f7d19..9a07d92 100644
>  > > >  --- a/lib/librte_eal/common/include/rte_log.h
>  > > >  +++ b/lib/librte_eal/common/include/rte_log.h
>  > > >  @@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
>  > > >   #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to
>  pipeline. */
>  > > >   #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf.
>  */
>  > > >   #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to
>  > > >  cryptodev. */
>  > > >  +#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to
>  eventdev.
>  > > >  */
>  > > >
>  > > >   /* these log types can be used in an application */
>  > > >   #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type
>  1. */
>  > > >  diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
>  > > >  new file mode 100644
>  > > >  index 0000000..dac0663
>  > > >  --- /dev/null
>  > > >  +++ b/lib/librte_eventdev/Makefile
>  > > >  @@ -0,0 +1,57 @@
>  > > >  +#   BSD LICENSE
>  > > >  +#
>  > > >  +#   Copyright(c) 2016 Cavium networks. All rights reserved.
>  > > >  +#
>  > > >  +#   Redistribution and use in source and binary forms, with or without
>  > > >  +#   modification, are permitted provided that the following conditions
>  > > >  +#   are met:
>  > > >  +#
>  > > >  +#     * Redistributions of source code must retain the above copyright
>  > > >  +#       notice, this list of conditions and the following disclaimer.
>  > > >  +#     * Redistributions in binary form must reproduce the above copyright
>  > > >  +#       notice, this list of conditions and the following disclaimer in
>  > > >  +#       the documentation and/or other materials provided with the
>  > > >  +#       distribution.
>  > > >  +#     * Neither the name of Cavium networks nor the names of its
>  > > >  +#       contributors may be used to endorse or promote products derived
>  > > >  +#       from this software without specific prior written permission.
>  > > >  +#
>  > > >  +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
>  > > >  CONTRIBUTORS
>  > > >  +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
>  BUT
>  > > >  NOT
>  > > >  +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
>  > > >  FITNESS FOR
>  > > >  +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
>  > > >  COPYRIGHT
>  > > >  +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
>  > > >  INCIDENTAL,
>  > > >  +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
>  BUT
>  > > >  NOT
>  > > >  +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
>  LOSS
>  > > >  OF USE,
>  > > >  +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
>  CAUSED AND
>  > > >  ON ANY
>  > > >  +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
>  OR
>  > > >  TORT
>  > > >  +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
>  OUT OF
>  > > >  THE USE
>  > > >  +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
>  > > >  DAMAGE.
>  > > >  +
>  > > >  +include $(RTE_SDK)/mk/rte.vars.mk
>  > > >  +
>  > > >  +# library name
>  > > >  +LIB = librte_eventdev.a
>  > > >  +
>  > > >  +# library version
>  > > >  +LIBABIVER := 1
>  > > >  +
>  > > >  +# build flags
>  > > >  +CFLAGS += -O3
>  > > >  +CFLAGS += $(WERROR_FLAGS)
>  > > >  +
>  > > >  +# library source files
>  > > >  +SRCS-y += rte_eventdev.c
>  > > >  +
>  > > >  +# export include files
>  > > >  +SYMLINK-y-include += rte_eventdev.h
>  > > >  +SYMLINK-y-include += rte_eventdev_pmd.h
>  > > >  +
>  > > >  +# versioning export map
>  > > >  +EXPORT_MAP := rte_eventdev_version.map
>  > > >  +
>  > > >  +# library dependencies
>  > > >  +DEPDIRS-y += lib/librte_eal
>  > > >  +DEPDIRS-y += lib/librte_mbuf
>  > > >  +
>  > > >  +include $(RTE_SDK)/mk/rte.lib.mk
>  > > >  diff --git a/lib/librte_eventdev/rte_eventdev.c
>  > > >  b/lib/librte_eventdev/rte_eventdev.c
>  > > >  new file mode 100644
>  > > >  index 0000000..17ce5c3
>  > > >  --- /dev/null
>  > > >  +++ b/lib/librte_eventdev/rte_eventdev.c
>  > > >  @@ -0,0 +1,1211 @@
>  > > >  +/*
>  > > >  + *   BSD LICENSE
>  > > >  + *
>  > > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
>  > > >  + *
>  > > >  + *   Redistribution and use in source and binary forms, with or without
>  > > >  + *   modification, are permitted provided that the following conditions
>  > > >  + *   are met:
>  > > >  + *
>  > > >  + *     * Redistributions of source code must retain the above copyright
>  > > >  + *       notice, this list of conditions and the following disclaimer.
>  > > >  + *     * Redistributions in binary form must reproduce the above
>  copyright
>  > > >  + *       notice, this list of conditions and the following disclaimer in
>  > > >  + *       the documentation and/or other materials provided with the
>  > > >  + *       distribution.
>  > > >  + *     * Neither the name of Cavium networks nor the names of its
>  > > >  + *       contributors may be used to endorse or promote products derived
>  > > >  + *       from this software without specific prior written permission.
>  > > >  + *
>  > > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
>  > > >  CONTRIBUTORS
>  > > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
>  BUT
>  > > >  NOT
>  > > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
>  > > >  FITNESS FOR
>  > > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
>  > > >  COPYRIGHT
>  > > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
>  > > >  INCIDENTAL,
>  > > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
>  BUT
>  > > >  NOT
>  > > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
>  LOSS
>  > > >  OF USE,
>  > > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
>  CAUSED
>  > > >  AND ON ANY
>  > > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
>  OR
>  > > >  TORT
>  > > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
>  OUT OF
>  > > >  THE USE
>  > > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
>  > > >  DAMAGE.
>  > > >  + */
>  > > >  +
>  > > >  +#include <ctype.h>
>  > > >  +#include <stdio.h>
>  > > >  +#include <stdlib.h>
>  > > >  +#include <string.h>
>  > > >  +#include <stdarg.h>
>  > > >  +#include <errno.h>
>  > > >  +#include <stdint.h>
>  > > >  +#include <inttypes.h>
>  > > >  +#include <sys/types.h>
>  > > >  +#include <sys/queue.h>
>  > > >  +
>  > > >  +#include <rte_byteorder.h>
>  > > >  +#include <rte_log.h>
>  > > >  +#include <rte_debug.h>
>  > > >  +#include <rte_dev.h>
>  > > >  +#include <rte_pci.h>
>  > > >  +#include <rte_memory.h>
>  > > >  +#include <rte_memcpy.h>
>  > > >  +#include <rte_memzone.h>
>  > > >  +#include <rte_eal.h>
>  > > >  +#include <rte_per_lcore.h>
>  > > >  +#include <rte_lcore.h>
>  > > >  +#include <rte_atomic.h>
>  > > >  +#include <rte_branch_prediction.h>
>  > > >  +#include <rte_common.h>
>  > > >  +#include <rte_malloc.h>
>  > > >  +#include <rte_errno.h>
>  > > >  +
>  > > >  +#include "rte_eventdev.h"
>  > > >  +#include "rte_eventdev_pmd.h"
>  > > >  +
>  > > >  +struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
>  > > >  +
>  > > >  +struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
>  > > >  +
>  > > >  +static struct rte_eventdev_global eventdev_globals = {
>  > > >  +	.nb_devs		= 0
>  > > >  +};
>  > > >  +
>  > > >  +struct rte_eventdev_global *rte_eventdev_globals =
>  &eventdev_globals;
>  > > >  +
>  > > >  +/* Event dev north bound API implementation */
>  > > >  +
>  > > >  +uint8_t
>  > > >  +rte_event_dev_count(void)
>  > > >  +{
>  > > >  +	return rte_eventdev_globals->nb_devs;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_dev_get_dev_id(const char *name)
>  > > >  +{
>  > > >  +	int i;
>  > > >  +
>  > > >  +	if (!name)
>  > > >  +		return -EINVAL;
>  > > >  +
>  > > >  +	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
>  > > >  +		if ((strcmp(rte_event_devices[i].data->name, name)
>  > > >  +				== 0) &&
>  > > >  +				(rte_event_devices[i].attached ==
>  > > >  +						RTE_EVENTDEV_ATTACHED))
>  > > >  +			return i;
>  > > >  +	return -ENODEV;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_dev_socket_id(uint8_t dev_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +
>  > > >  +	return dev->data->socket_id;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info
>  *dev_info)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +
>  > > >  +	if (dev_info == NULL)
>  > > >  +		return -EINVAL;
>  > > >  +
>  > > >  +	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
>  > > >  +
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -
>  > > >  ENOTSUP);
>  > > >  +	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
>  > > >  +
>  > > >  +	dev_info->pci_dev = dev->pci_dev;
>  > > >  +	if (dev->driver)
>  > > >  +		dev_info->driver_name = dev->driver->pci_drv.driver.name;
>  > > >  +	return 0;
>  > > >  +}
>  > > >  +
>  > > >  +static inline int
>  > > >  +rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t
>  nb_queues)
>  > > >  +{
>  > > >  +	uint8_t old_nb_queues = dev->data->nb_queues;
>  > > >  +	void **queues;
>  > > >  +	uint8_t *queues_prio;
>  > > >  +	unsigned int i;
>  > > >  +
>  > > >  +	EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
>  > > >  +			 dev->data->dev_id);
>  > > >  +
>  > > >  +	/* First time configuration */
>  > > >  +	if (dev->data->queues == NULL && nb_queues != 0) {
>  > > >  +		dev->data->queues = rte_zmalloc_socket("eventdev->data-
>  > > >  >queues",
>  > > >  +				sizeof(dev->data->queues[0]) * nb_queues,
>  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
>  > > >  >socket_id);
>  > > >  +		if (dev->data->queues == NULL) {
>  > > >  +			dev->data->nb_queues = 0;
>  > > >  +			EDEV_LOG_ERR("failed to get memory for queue meta
>  > > >  data,"
>  > > >  +					"nb_queues %u", nb_queues);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +		/* Allocate memory to store queue priority */
>  > > >  +		dev->data->queues_prio = rte_zmalloc_socket(
>  > > >  +				"eventdev->data->queues_prio",
>  > > >  +				sizeof(dev->data->queues_prio[0]) *
>  > > >  nb_queues,
>  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
>  > > >  >socket_id);
>  > > >  +		if (dev->data->queues_prio == NULL) {
>  > > >  +			dev->data->nb_queues = 0;
>  > > >  +			EDEV_LOG_ERR("failed to get memory for queue
>  > > >  priority,"
>  > > >  +					"nb_queues %u", nb_queues);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +
>  > > >  +	} else if (dev->data->queues != NULL && nb_queues != 0) {/* re-config
>  > > >  */
>  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  > > >  >queue_release, -ENOTSUP);
>  > > >  +
>  > > >  +		queues = dev->data->queues;
>  > > >  +		for (i = nb_queues; i < old_nb_queues; i++)
>  > > >  +			(*dev->dev_ops->queue_release)(queues[i]);
>  > > >  +
>  > > >  +		queues = rte_realloc(queues, sizeof(queues[0]) * nb_queues,
>  > > >  +				RTE_CACHE_LINE_SIZE);
>  > > >  +		if (queues == NULL) {
>  > > >  +			EDEV_LOG_ERR("failed to realloc queue meta data,"
>  > > >  +						" nb_queues %u",
>  > > >  nb_queues);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +		dev->data->queues = queues;
>  > > >  +
>  > > >  +		/* Re allocate memory to store queue priority */
>  > > >  +		queues_prio = dev->data->queues_prio;
>  > > >  +		queues_prio = rte_realloc(queues_prio,
>  > > >  +				sizeof(queues_prio[0]) * nb_queues,
>  > > >  +				RTE_CACHE_LINE_SIZE);
>  > > >  +		if (queues_prio == NULL) {
>  > > >  +			EDEV_LOG_ERR("failed to realloc queue priority,"
>  > > >  +						" nb_queues %u",
>  > > >  nb_queues);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +		dev->data->queues_prio = queues_prio;
>  > > >  +
>  > > >  +		if (nb_queues > old_nb_queues) {
>  > > >  +			uint8_t new_qs = nb_queues - old_nb_queues;
>  > > >  +
>  > > >  +			memset(queues + old_nb_queues, 0,
>  > > >  +				sizeof(queues[0]) * new_qs);
>  > > >  +			memset(queues_prio + old_nb_queues, 0,
>  > > >  +				sizeof(queues_prio[0]) * new_qs);
>  > > >  +		}
>  > > >  +	} else if (dev->data->queues != NULL && nb_queues == 0) {
>  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  > > >  >queue_release, -ENOTSUP);
>  > > >  +
>  > > >  +		queues = dev->data->queues;
>  > > >  +		for (i = nb_queues; i < old_nb_queues; i++)
>  > > >  +			(*dev->dev_ops->queue_release)(queues[i]);
>  > > >  +	}
>  > > >  +
>  > > >  +	dev->data->nb_queues = nb_queues;
>  > > >  +	return 0;
>  > > >  +}
>  > > >  +
>  > > >  +static inline int
>  > > >  +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
>  > > >  +{
>  > > >  +	uint8_t old_nb_ports = dev->data->nb_ports;
>  > > >  +	void **ports;
>  > > >  +	uint16_t *links_map;
>  > > >  +	uint8_t *ports_dequeue_depth;
>  > > >  +	uint8_t *ports_enqueue_depth;
>  > > >  +	unsigned int i;
>  > > >  +
>  > > >  +	EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
>  > > >  +			 dev->data->dev_id);
>  > > >  +
>  > > >  +	/* First time configuration */
>  > > >  +	if (dev->data->ports == NULL && nb_ports != 0) {
>  > > >  +		dev->data->ports = rte_zmalloc_socket("eventdev->data-
>  > > >  >ports",
>  > > >  +				sizeof(dev->data->ports[0]) * nb_ports,
>  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
>  > > >  >socket_id);
>  > > >  +		if (dev->data->ports == NULL) {
>  > > >  +			dev->data->nb_ports = 0;
>  > > >  +			EDEV_LOG_ERR("failed to get memory for port meta
>  > > >  data,"
>  > > >  +					"nb_ports %u", nb_ports);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +
>  > > >  +		/* Allocate memory to store ports dequeue depth */
>  > > >  +		dev->data->ports_dequeue_depth =
>  > > >  +			rte_zmalloc_socket("eventdev-
>  > > >  >ports_dequeue_depth",
>  > > >  +			sizeof(dev->data->ports_dequeue_depth[0]) *
>  > > >  nb_ports,
>  > > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
>  > > >  +		if (dev->data->ports_dequeue_depth == NULL) {
>  > > >  +			dev->data->nb_ports = 0;
>  > > >  +			EDEV_LOG_ERR("failed to get memory for port deq
>  > > >  meta,"
>  > > >  +					"nb_ports %u", nb_ports);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +
>  > > >  +		/* Allocate memory to store ports enqueue depth */
>  > > >  +		dev->data->ports_enqueue_depth =
>  > > >  +			rte_zmalloc_socket("eventdev-
>  > > >  >ports_enqueue_depth",
>  > > >  +			sizeof(dev->data->ports_enqueue_depth[0]) *
>  > > >  nb_ports,
>  > > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
>  > > >  +		if (dev->data->ports_enqueue_depth == NULL) {
>  > > >  +			dev->data->nb_ports = 0;
>  > > >  +			EDEV_LOG_ERR("failed to get memory for port enq
>  > > >  meta,"
>  > > >  +					"nb_ports %u", nb_ports);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +
>  > > >  +		/* Allocate memory to store queue to port link connection */
>  > > >  +		dev->data->links_map =
>  > > >  +			rte_zmalloc_socket("eventdev->links_map",
>  > > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
>  > > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
>  > > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
>  > > >  +		if (dev->data->links_map == NULL) {
>  > > >  +			dev->data->nb_ports = 0;
>  > > >  +			EDEV_LOG_ERR("failed to get memory for port_map
>  > > >  area,"
>  > > >  +					"nb_ports %u", nb_ports);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
>  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
>  > > >  -ENOTSUP);
>  > > >  +
>  > > >  +		ports = dev->data->ports;
>  > > >  +		ports_dequeue_depth = dev->data->ports_dequeue_depth;
>  > > >  +		ports_enqueue_depth = dev->data->ports_enqueue_depth;
>  > > >  +		links_map = dev->data->links_map;
>  > > >  +
>  > > >  +		for (i = nb_ports; i < old_nb_ports; i++)
>  > > >  +			(*dev->dev_ops->port_release)(ports[i]);
>  > > >  +
>  > > >  +		/* Realloc memory for ports */
>  > > >  +		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
>  > > >  +				RTE_CACHE_LINE_SIZE);
>  > > >  +		if (ports == NULL) {
>  > > >  +			EDEV_LOG_ERR("failed to realloc port meta data,"
>  > > >  +						" nb_ports %u", nb_ports);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +
>  > > >  +		/* Realloc memory for ports_dequeue_depth */
>  > > >  +		ports_dequeue_depth = rte_realloc(ports_dequeue_depth,
>  > > >  +			sizeof(ports_dequeue_depth[0]) * nb_ports,
>  > > >  +			RTE_CACHE_LINE_SIZE);
>  > > >  +		if (ports_dequeue_depth == NULL) {
>  > > >  +			EDEV_LOG_ERR("failed to realloc port deqeue meta
>  > > >  data,"
>  > > >  +						" nb_ports %u", nb_ports);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +
>  > > >  +		/* Realloc memory for ports_enqueue_depth */
>  > > >  +		ports_enqueue_depth = rte_realloc(ports_enqueue_depth,
>  > > >  +			sizeof(ports_enqueue_depth[0]) * nb_ports,
>  > > >  +			RTE_CACHE_LINE_SIZE);
>  > > >  +		if (ports_enqueue_depth == NULL) {
>  > > >  +			EDEV_LOG_ERR("failed to realloc port enqueue meta
>  > > >  data,"
>  > > >  +						" nb_ports %u", nb_ports);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +
>  > > >  +		/* Realloc memory to store queue to port link connection */
>  > > >  +		links_map = rte_realloc(links_map,
>  > > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
>  > > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
>  > > >  +			RTE_CACHE_LINE_SIZE);
>  > > >  +		if (links_map == NULL) {
>  > > >  +			dev->data->nb_ports = 0;
>  > > >  +			EDEV_LOG_ERR("failed to realloc mem for port_map
>  > > >  area,"
>  > > >  +					"nb_ports %u", nb_ports);
>  > > >  +			return -(ENOMEM);
>  > > >  +		}
>  > > >  +
>  > > >  +		if (nb_ports > old_nb_ports) {
>  > > >  +			uint8_t new_ps = nb_ports - old_nb_ports;
>  > > >  +
>  > > >  +			memset(ports + old_nb_ports, 0,
>  > > >  +				sizeof(ports[0]) * new_ps);
>  > > >  +			memset(ports_dequeue_depth + old_nb_ports, 0,
>  > > >  +				sizeof(ports_dequeue_depth[0]) * new_ps);
>  > > >  +			memset(ports_enqueue_depth + old_nb_ports, 0,
>  > > >  +				sizeof(ports_enqueue_depth[0]) * new_ps);
>  > > >  +			memset(links_map +
>  > > >  +				(old_nb_ports *
>  > > >  RTE_EVENT_MAX_QUEUES_PER_DEV),
>  > > >  +				0, sizeof(links_map[0]) * new_ps *
>  > > >  +				RTE_EVENT_MAX_QUEUES_PER_DEV);
>  > > >  +		}
>  > > >  +
>  > > >  +		dev->data->ports = ports;
>  > > >  +		dev->data->ports_dequeue_depth = ports_dequeue_depth;
>  > > >  +		dev->data->ports_enqueue_depth = ports_enqueue_depth;
>  > > >  +		dev->data->links_map = links_map;
>  > > >  +	} else if (dev->data->ports != NULL && nb_ports == 0) {
>  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
>  > > >  -ENOTSUP);
>  > > >  +
>  > > >  +		ports = dev->data->ports;
>  > > >  +		for (i = nb_ports; i < old_nb_ports; i++)
>  > > >  +			(*dev->dev_ops->port_release)(ports[i]);
>  > > >  +	}
>  > > >  +
>  > > >  +	dev->data->nb_ports = nb_ports;
>  > > >  +	return 0;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_dev_configure(uint8_t dev_id, struct rte_event_dev_config
>  > > >  *dev_conf)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +	struct rte_event_dev_info info;
>  > > >  +	int diag;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -
>  > > >  ENOTSUP);
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -
>  > > >  ENOTSUP);
>  > > >  +
>  > > >  +	if (dev->data->dev_started) {
>  > > >  +		EDEV_LOG_ERR(
>  > > >  +		    "device %d must be stopped to allow configuration",
>  > > >  dev_id);
>  > > >  +		return -EBUSY;
>  > > >  +	}
>  > > >  +
>  > > >  +	if (dev_conf == NULL)
>  > > >  +		return -EINVAL;
>  > > >  +
>  > > >  +	(*dev->dev_ops->dev_infos_get)(dev, &info);
>  > > >  +
>  > > >  +	/* Check dequeue_wait_ns value is in limit */
>  > > >  +	if (!(dev_conf->event_dev_cfg &
>  > > >  RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT)) {
>  > > >  +		if (dev_conf->dequeue_wait_ns < info.min_dequeue_wait_ns
>  > > >  ||
>  > > >  +			dev_conf->dequeue_wait_ns >
>  > > >  info.max_dequeue_wait_ns) {
>  > > >  +			EDEV_LOG_ERR("dev%d invalid dequeue_wait_ns=%d"
>  > > >  +			" min_dequeue_wait_ns=%d
>  > > >  max_dequeue_wait_ns=%d",
>  > > >  +			dev_id, dev_conf->dequeue_wait_ns,
>  > > >  +			info.min_dequeue_wait_ns,
>  > > >  +			info.max_dequeue_wait_ns);
>  > > >  +			return -EINVAL;
>  > > >  +		}
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check nb_events_limit is in limit */
>  > > >  +	if (dev_conf->nb_events_limit > info.max_num_events) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_events_limit=%d >
>  > > >  max_num_events=%d",
>  > > >  +		dev_id, dev_conf->nb_events_limit, info.max_num_events);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check nb_event_queues is in limit */
>  > > >  +	if (!dev_conf->nb_event_queues) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_event_queues cannot be zero",
>  > > >  dev_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +	if (dev_conf->nb_event_queues > info.max_event_queues) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_event_queues=%d >
>  > > >  max_event_queues=%d",
>  > > >  +		dev_id, dev_conf->nb_event_queues,
>  > > >  info.max_event_queues);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check nb_event_ports is in limit */
>  > > >  +	if (!dev_conf->nb_event_ports) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero",
>  > > >  dev_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +	if (dev_conf->nb_event_ports > info.max_event_ports) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_event_ports=%d >
>  > > >  max_event_ports= %d",
>  > > >  +		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check nb_event_queue_flows is in limit */
>  > > >  +	if (!dev_conf->nb_event_queue_flows) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_flows cannot be zero", dev_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +	if (dev_conf->nb_event_queue_flows > info.max_event_queue_flows)
>  > > >  {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_flows=%x > max_flows=%x",
>  > > >  +		dev_id, dev_conf->nb_event_queue_flows,
>  > > >  +		info.max_event_queue_flows);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check nb_event_port_dequeue_depth is in limit */
>  > > >  +	if (!dev_conf->nb_event_port_dequeue_depth) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be zero",
>  > > >  dev_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +	if (dev_conf->nb_event_port_dequeue_depth >
>  > > >  +			 info.max_event_port_dequeue_depth) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth=%d >
>  > > >  max_dequeue_depth=%d",
>  > > >  +		dev_id, dev_conf->nb_event_port_dequeue_depth,
>  > > >  +		info.max_event_port_dequeue_depth);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check nb_event_port_enqueue_depth is in limit */
>  > > >  +	if (!dev_conf->nb_event_port_enqueue_depth) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be zero",
>  > > >  dev_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +	if (dev_conf->nb_event_port_enqueue_depth >
>  > > >  +			 info.max_event_port_enqueue_depth) {
>  > > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth=%d >
>  > > >  max_enqueue_depth=%d",
>  > > >  +		dev_id, dev_conf->nb_event_port_enqueue_depth,
>  > > >  +		info.max_event_port_enqueue_depth);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Copy the dev_conf parameter into the dev structure */
>  > > >  +	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data-
>  > > >  >dev_conf));
>  > > >  +
>  > > >  +	/* Setup new number of queues and reconfigure device. */
>  > > >  +	diag = rte_event_dev_queue_config(dev, dev_conf-
>  > > >  >nb_event_queues);
>  > > >  +	if (diag != 0) {
>  > > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
>  > > >  +				dev_id, diag);
>  > > >  +		return diag;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Setup new number of ports and reconfigure device. */
>  > > >  +	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
>  > > >  +	if (diag != 0) {
>  > > >  +		rte_event_dev_queue_config(dev, 0);
>  > > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
>  > > >  +				dev_id, diag);
>  > > >  +		return diag;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Configure the device */
>  > > >  +	diag = (*dev->dev_ops->dev_configure)(dev);
>  > > >  +	if (diag != 0) {
>  > > >  +		EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
>  > > >  +		rte_event_dev_queue_config(dev, 0);
>  > > >  +		rte_event_dev_port_config(dev, 0);
>  > > >  +	}
>  > > >  +
>  > > >  +	dev->data->event_dev_cap = info.event_dev_cap;
>  > > >  +	return diag;
>  > > >  +}
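
As a usage note, the application-side sequence implied by the checks above would look roughly like this (illustrative sketch only; the field names are taken from their use in this file, and dev_id is assumed to come from rte_event_dev_get_dev_id()):

struct rte_event_dev_info info;
struct rte_event_dev_config conf;

rte_event_dev_info_get(dev_id, &info);

memset(&conf, 0, sizeof(conf));
conf.dequeue_wait_ns = info.min_dequeue_wait_ns;
conf.nb_events_limit = info.max_num_events;
conf.nb_event_queues = 2;       /* assumed <= info.max_event_queues */
conf.nb_event_ports = 2;        /* assumed <= info.max_event_ports */
conf.nb_event_queue_flows = info.max_event_queue_flows;
conf.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
conf.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;

if (rte_event_dev_configure(dev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "eventdev configure failed\n");
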
>  > > >  +
>  > > >  +static inline int
>  > > >  +is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
>  > > >  +{
>  > > >  +	if (queue_id < dev->data->nb_queues && queue_id <
>  > > >  +				RTE_EVENT_MAX_QUEUES_PER_DEV)
>  > > >  +		return 1;
>  > > >  +	else
>  > > >  +		return 0;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
>  > > >  +				 struct rte_event_queue_conf *queue_conf)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +
>  > > >  +	if (queue_conf == NULL)
>  > > >  +		return -EINVAL;
>  > > >  +
>  > > >  +	if (!is_valid_queue(dev, queue_id)) {
>  > > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf, -
>  > > >  ENOTSUP);
>  > > >  +	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
>  > > >  +	(*dev->dev_ops->queue_def_conf)(dev, queue_id, queue_conf);
>  > > >  +	return 0;
>  > > >  +}
>  > > >  +
>  > > >  +static inline int
>  > > >  +is_valid_atomic_queue_conf(struct rte_event_queue_conf
>  *queue_conf)
>  > > >  +{
>  > > >  +	if (queue_conf && (
>  > > >  +		((queue_conf->event_queue_cfg &
>  > > >  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
>  > > >  +			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
>  > > >  +		((queue_conf->event_queue_cfg &
>  > > >  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
>  > > >  +			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
>  > > >  +		))
>  > > >  +		return 1;
>  > > >  +	else
>  > > >  +		return 0;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>  > > >  +		      struct rte_event_queue_conf *queue_conf)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +	struct rte_event_queue_conf def_conf;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +
>  > > >  +	if (!is_valid_queue(dev, queue_id)) {
>  > > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check nb_atomic_flows limit */
>  > > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
>  > > >  +		if (queue_conf->nb_atomic_flows == 0 ||
>  > > >  +		    queue_conf->nb_atomic_flows >
>  > > >  +			dev->data->dev_conf.nb_event_queue_flows) {
>  > > >  +			EDEV_LOG_ERR(
>  > > >  +		"dev%d queue%d Invalid nb_atomic_flows=%d
>  > > >  max_flows=%d",
>  > > >  +			dev_id, queue_id, queue_conf->nb_atomic_flows,
>  > > >  +			dev->data->dev_conf.nb_event_queue_flows);
>  > > >  +			return -EINVAL;
>  > > >  +		}
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check nb_atomic_order_sequences limit */
>  > > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
>  > > >  +		if (queue_conf->nb_atomic_order_sequences == 0 ||
>  > > >  +		    queue_conf->nb_atomic_order_sequences >
>  > > >  +			dev->data->dev_conf.nb_event_queue_flows) {
>  > > >  +			EDEV_LOG_ERR(
>  > > >  +		"dev%d queue%d Invalid nb_atomic_order_seq=%d
>  > > >  max_flows=%d",
>  > > >  +			dev_id, queue_id, queue_conf-
>  > > >  >nb_atomic_order_sequences,
>  > > >  +			dev->data->dev_conf.nb_event_queue_flows);
>  > > >  +			return -EINVAL;
>  > > >  +		}
>  > > >  +	}
>  > > >  +
>  > > >  +	if (dev->data->dev_started) {
>  > > >  +		EDEV_LOG_ERR(
>  > > >  +		    "device %d must be stopped to allow queue setup", dev_id);
>  > > >  +		return -EBUSY;
>  > > >  +	}
>  > > >  +
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -
>  > > >  ENOTSUP);
>  > > >  +
>  > > >  +	if (queue_conf == NULL) {
>  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  > > >  >queue_def_conf,
>  > > >  +					-ENOTSUP);
>  > > >  +		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
>  > > >  +		def_conf.event_queue_cfg =
>  > > >  RTE_EVENT_QUEUE_CFG_DEFAULT;
>  > > >  +		queue_conf = &def_conf;
>  > > >  +	}
>  > > >  +
>  > > >  +	dev->data->queues_prio[queue_id] = queue_conf->priority;
>  > > >  +	return (*dev->dev_ops->queue_setup)(dev, queue_id, queue_conf);
>  > > >  +}
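
Usage-wise (illustrative only), a queue would typically be set up from the driver defaults, with a NULL queue_conf selecting the defaults entirely, as handled above:

struct rte_event_queue_conf qconf;

rte_event_queue_default_conf_get(dev_id, queue_id, &qconf);
qconf.nb_atomic_flows = 1024;   /* assumed <= nb_event_queue_flows */
rte_event_queue_setup(dev_id, queue_id, &qconf);

/* or take the driver defaults unchanged: */
rte_event_queue_setup(dev_id, queue_id, NULL);
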
>  > > >  +
>  > > >  +uint8_t
>  > > >  +rte_event_queue_count(uint8_t dev_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	return dev->data->nb_queues;
>  > > >  +}
>  > > >  +
>  > > >  +uint8_t
>  > > >  +rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
>  > > >  +		return dev->data->queues_prio[queue_id];
>  > > >  +	else
>  > > >  +		return RTE_EVENT_QUEUE_PRIORITY_NORMAL;
>  > > >  +}
>  > > >  +
>  > > >  +static inline int
>  > > >  +is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
>  > > >  +{
>  > > >  +	if (port_id < dev->data->nb_ports)
>  > > >  +		return 1;
>  > > >  +	else
>  > > >  +		return 0;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
>  > > >  +				 struct rte_event_port_conf *port_conf)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +
>  > > >  +	if (port_conf == NULL)
>  > > >  +		return -EINVAL;
>  > > >  +
>  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf, -
>  > > >  ENOTSUP);
>  > > >  +	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
>  > > >  +	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
>  > > >  +	return 0;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>  > > >  +		      struct rte_event_port_conf *port_conf)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +	struct rte_event_port_conf def_conf;
>  > > >  +	int diag;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +
>  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check new_event_threshold limit */
>  > > >  +	if ((port_conf && !port_conf->new_event_threshold) ||
>  > > >  +			(port_conf && port_conf->new_event_threshold >
>  > > >  +				 dev->data->dev_conf.nb_events_limit)) {
>  > > >  +		EDEV_LOG_ERR(
>  > > >  +		   "dev%d port%d Invalid event_threshold=%d
>  > > >  nb_events_limit=%d",
>  > > >  +			dev_id, port_id, port_conf->new_event_threshold,
>  > > >  +			dev->data->dev_conf.nb_events_limit);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check dequeue_depth limit */
>  > > >  +	if ((port_conf && !port_conf->dequeue_depth) ||
>  > > >  +			(port_conf && port_conf->dequeue_depth >
>  > > >  +		dev->data->dev_conf.nb_event_port_dequeue_depth)) {
>  > > >  +		EDEV_LOG_ERR(
>  > > >  +		   "dev%d port%d Invalid dequeue depth=%d
>  > > >  max_dequeue_depth=%d",
>  > > >  +			dev_id, port_id, port_conf->dequeue_depth,
>  > > >  +			dev->data-
>  > > >  >dev_conf.nb_event_port_dequeue_depth);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Check enqueue_depth limit */
>  > > >  +	if ((port_conf && !port_conf->enqueue_depth) ||
>  > > >  +			(port_conf && port_conf->enqueue_depth >
>  > > >  +		dev->data->dev_conf.nb_event_port_enqueue_depth)) {
>  > > >  +		EDEV_LOG_ERR(
>  > > >  +		   "dev%d port%d Invalid enqueue depth=%d
>  > > >  max_enqueue_depth=%d",
>  > > >  +			dev_id, port_id, port_conf->enqueue_depth,
>  > > >  +			dev->data-
>  > > >  >dev_conf.nb_event_port_enqueue_depth);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	if (dev->data->dev_started) {
>  > > >  +		EDEV_LOG_ERR(
>  > > >  +		    "device %d must be stopped to allow port setup", dev_id);
>  > > >  +		return -EBUSY;
>  > > >  +	}
>  > > >  +
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -
>  > > >  ENOTSUP);
>  > > >  +
>  > > >  +	if (port_conf == NULL) {
>  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  > > >  >port_def_conf,
>  > > >  +					-ENOTSUP);
>  > > >  +		(*dev->dev_ops->port_def_conf)(dev, port_id, &def_conf);
>  > > >  +		port_conf = &def_conf;
>  > > >  +	}
>  > > >  +
>  > > >  +	dev->data->ports_dequeue_depth[port_id] =
>  > > >  +			port_conf->dequeue_depth;
>  > > >  +	dev->data->ports_enqueue_depth[port_id] =
>  > > >  +			port_conf->enqueue_depth;
>  > > >  +
>  > > >  +	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
>  > > >  +
>  > > >  +	/* Unlink all the queues from this port(default state after setup) */
>  > > >  +	if (!diag)
>  > > >  +		diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
>  > > >  +
>  > > >  +	if (diag < 0)
>  > > >  +		return diag;
>  > > >  +
>  > > >  +	return 0;
>  > > >  +}
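
Similarly on the port side (illustrative only); note that setup leaves the port unlinked by default, per the rte_event_port_unlink() call above:

struct rte_event_port_conf pconf;

rte_event_port_default_conf_get(dev_id, port_id, &pconf);
/* new_event_threshold, dequeue_depth and enqueue_depth must stay within
 * the limits passed to rte_event_dev_configure(), as checked above. */
rte_event_port_setup(dev_id, port_id, &pconf);

/* or: rte_event_port_setup(dev_id, port_id, NULL); for driver defaults */
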
>  > > >  +
>  > > >  +uint8_t
>  > > >  +rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	return dev->data->ports_dequeue_depth[port_id];
>  > > >  +}
>  > > >  +
>  > > >  +uint8_t
>  > > >  +rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	return dev->data->ports_enqueue_depth[port_id];
>  > > >  +}
>  > > >  +
>  > > >  +uint8_t
>  > > >  +rte_event_port_count(uint8_t dev_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	return dev->data->nb_ports;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>  > > >  +		    struct rte_event_queue_link link[], uint16_t nb_links)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +	struct rte_event_queue_link
>  > > >  all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
>  > > >  +	uint16_t *links_map;
>  > > >  +	int i, diag;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -ENOTSUP);
>  > > >  +
>  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	if (link == NULL) {
>  > > >  +		for (i = 0; i < dev->data->nb_queues; i++) {
>  > > >  +			all_queues[i].queue_id = i;
>  > > >  +			all_queues[i].priority =
>  > > >  +
>  > > >  	RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
>  > > >  +		}
>  > > >  +		link = all_queues;
>  > > >  +		nb_links = dev->data->nb_queues;
>  > > >  +	}
>  > > >  +
>  > > >  +	for (i = 0; i < nb_links; i++)
>  > > >  +		if (link[i].queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
>  > > >  +			return -EINVAL;
>  > > >  +
>  > > >  +	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id], link,
>  > > >  +						 nb_links);
>  > > >  +	if (diag < 0)
>  > > >  +		return diag;
>  > > >  +
>  > > >  +	links_map = dev->data->links_map;
>  > > >  +	/* Point links_map to this port specific area */
>  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>  > > >  +	for (i = 0; i < diag; i++)
>  > > >  +		links_map[link[i].queue_id] = (uint8_t)link[i].priority;
>  > > >  +
>  > > >  +	return diag;
>  > > >  +}
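
For reference, the two ways an application would call this (illustrative only):

/* Link this port to every configured queue at normal service priority: */
rte_event_port_link(dev_id, port_id, NULL, 0);

/* Or link to one specific queue: */
struct rte_event_queue_link lnk = {
        .queue_id = 0,
        .priority = RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL,
};
rte_event_port_link(dev_id, port_id, &lnk, 1);
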
>  > > >  +
>  > > >  +#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
>  > > >  +		      uint8_t queues[], uint16_t nb_unlinks)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
>  > > >  +	int i, diag;
>  > > >  +	uint16_t *links_map;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -
>  > > >  ENOTSUP);
>  > > >  +
>  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	if (queues == NULL) {
>  > > >  +		for (i = 0; i < dev->data->nb_queues; i++)
>  > > >  +			all_queues[i] = i;
>  > > >  +		queues = all_queues;
>  > > >  +		nb_unlinks = dev->data->nb_queues;
>  > > >  +	}
>  > > >  +
>  > > >  +	for (i = 0; i < nb_unlinks; i++)
>  > > >  +		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
>  > > >  +			return -EINVAL;
>  > > >  +
>  > > >  +	diag = (*dev->dev_ops->port_unlink)(dev->data->ports[port_id],
>  > > >  queues,
>  > > >  +					nb_unlinks);
>  > > >  +
>  > > >  +	if (diag < 0)
>  > > >  +		return diag;
>  > > >  +
>  > > >  +	links_map = dev->data->links_map;
>  > > >  +	/* Point links_map to this port specific area */
>  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>  > > >  +	for (i = 0; i < diag; i++)
>  > > >  +		links_map[queues[i]] =
>  > > >  EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
>  > > >  +
>  > > >  +	return diag;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
>  > > >  +			struct rte_event_queue_link link[])
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +	uint16_t *links_map;
>  > > >  +	int i, count = 0;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > > >  +		return -EINVAL;
>  > > >  +	}
>  > > >  +
>  > > >  +	links_map = dev->data->links_map;
>  > > >  +	/* Point links_map to this port specific area */
>  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>  > > >  +	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
>  > > >  +		if (links_map[i] !=
>  > > >  EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
>  > > >  +			link[count].queue_id = i;
>  > > >  +			link[count].priority = (uint8_t)links_map[i];
>  > > >  +			++count;
>  > > >  +		}
>  > > >  +	}
>  > > >  +	return count;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_dequeue_wait_time(uint8_t dev_id, uint64_t ns, uint64_t
>  > > >  *wait_ticks)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->wait_time, -
>  > > >  ENOTSUP);
>  > > >  +
>  > > >  +	if (wait_ticks == NULL)
>  > > >  +		return -EINVAL;
>  > > >  +
>  > > >  +	(*dev->dev_ops->wait_time)(dev, ns, wait_ticks);
>  > > >  +	return 0;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_dev_dump(uint8_t dev_id, FILE *f)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
>  > > >  +
>  > > >  +	(*dev->dev_ops->dump)(dev, f);
>  > > >  +	return 0;
>  > > >  +
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_dev_start(uint8_t dev_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +	int diag;
>  > > >  +
>  > > >  +	EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
>  > > >  +
>  > > >  +	if (dev->data->dev_started != 0) {
>  > > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already
>  > > >  started",
>  > > >  +			dev_id);
>  > > >  +		return 0;
>  > > >  +	}
>  > > >  +
>  > > >  +	diag = (*dev->dev_ops->dev_start)(dev);
>  > > >  +	if (diag == 0)
>  > > >  +		dev->data->dev_started = 1;
>  > > >  +	else
>  > > >  +		return diag;
>  > > >  +
>  > > >  +	return 0;
>  > > >  +}
>  > > >  +
>  > > >  +void
>  > > >  +rte_event_dev_stop(uint8_t dev_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
>  > > >  +
>  > > >  +	if (dev->data->dev_started == 0) {
>  > > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already
>  > > >  stopped",
>  > > >  +			dev_id);
>  > > >  +		return;
>  > > >  +	}
>  > > >  +
>  > > >  +	dev->data->dev_started = 0;
>  > > >  +	(*dev->dev_ops->dev_stop)(dev);
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_event_dev_close(uint8_t dev_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -
>  > > >  ENOTSUP);
>  > > >  +
>  > > >  +	/* Device must be stopped before it can be closed */
>  > > >  +	if (dev->data->dev_started == 1) {
>  > > >  +		EDEV_LOG_ERR("Device %u must be stopped before closing",
>  > > >  +				dev_id);
>  > > >  +		return -EBUSY;
>  > > >  +	}
>  > > >  +
>  > > >  +	return (*dev->dev_ops->dev_close)(dev);
>  > > >  +}
>  > > >  +
>  > > >  +static inline int
>  > > >  +rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data
>  **data,
>  > > >  +		int socket_id)
>  > > >  +{
>  > > >  +	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
>  > > >  +	const struct rte_memzone *mz;
>  > > >  +	int n;
>  > > >  +
>  > > >  +	/* Generate memzone name */
>  > > >  +	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u",
>  > > >  dev_id);
>  > > >  +	if (n >= (int)sizeof(mz_name))
>  > > >  +		return -EINVAL;
>  > > >  +
>  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>  > > >  +		mz = rte_memzone_reserve(mz_name,
>  > > >  +				sizeof(struct rte_eventdev_data),
>  > > >  +				socket_id, 0);
>  > > >  +	} else
>  > > >  +		mz = rte_memzone_lookup(mz_name);
>  > > >  +
>  > > >  +	if (mz == NULL)
>  > > >  +		return -ENOMEM;
>  > > >  +
>  > > >  +	*data = mz->addr;
>  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>  > > >  +		memset(*data, 0, sizeof(struct rte_eventdev_data));
>  > > >  +
>  > > >  +	return 0;
>  > > >  +}
>  > > >  +
>  > > >  +static uint8_t
>  > > >  +rte_eventdev_find_free_device_index(void)
>  > > >  +{
>  > > >  +	uint8_t dev_id;
>  > > >  +
>  > > >  +	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
>  > > >  +		if (rte_eventdevs[dev_id].attached ==
>  > > >  +				RTE_EVENTDEV_DETACHED)
>  > > >  +			return dev_id;
>  > > >  +	}
>  > > >  +	return RTE_EVENT_MAX_DEVS;
>  > > >  +}
>  > > >  +
>  > > >  +struct rte_eventdev *
>  > > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *eventdev;
>  > > >  +	uint8_t dev_id;
>  > > >  +
>  > > >  +	if (rte_eventdev_pmd_get_named_dev(name) != NULL) {
>  > > >  +		EDEV_LOG_ERR("Event device with name %s already "
>  > > >  +				"allocated!", name);
>  > > >  +		return NULL;
>  > > >  +	}
>  > > >  +
>  > > >  +	dev_id = rte_eventdev_find_free_device_index();
>  > > >  +	if (dev_id == RTE_EVENT_MAX_DEVS) {
>  > > >  +		EDEV_LOG_ERR("Reached maximum number of event
>  > > >  devices");
>  > > >  +		return NULL;
>  > > >  +	}
>  > > >  +
>  > > >  +	eventdev = &rte_eventdevs[dev_id];
>  > > >  +
>  > > >  +	if (eventdev->data == NULL) {
>  > > >  +		struct rte_eventdev_data *eventdev_data = NULL;
>  > > >  +
>  > > >  +		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
>  > > >  +				socket_id);
>  > > >  +
>  > > >  +		if (retval < 0 || eventdev_data == NULL)
>  > > >  +			return NULL;
>  > > >  +
>  > > >  +		eventdev->data = eventdev_data;
>  > > >  +
>  > > >  +		snprintf(eventdev->data->name,
>  > > >  RTE_EVENTDEV_NAME_MAX_LEN,
>  > > >  +				"%s", name);
>  > > >  +
>  > > >  +		eventdev->data->dev_id = dev_id;
>  > > >  +		eventdev->data->socket_id = socket_id;
>  > > >  +		eventdev->data->dev_started = 0;
>  > > >  +
>  > > >  +		eventdev->attached = RTE_EVENTDEV_ATTACHED;
>  > > >  +
>  > > >  +		eventdev_globals.nb_devs++;
>  > > >  +	}
>  > > >  +
>  > > >  +	return eventdev;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev)
>  > > >  +{
>  > > >  +	int ret;
>  > > >  +
>  > > >  +	if (eventdev == NULL)
>  > > >  +		return -EINVAL;
>  > > >  +
>  > > >  +	ret = rte_event_dev_close(eventdev->data->dev_id);
>  > > >  +	if (ret < 0)
>  > > >  +		return ret;
>  > > >  +
>  > > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
>  > > >  +	eventdev_globals.nb_devs--;
>  > > >  +	eventdev->data = NULL;
>  > > >  +
>  > > >  +	return 0;
>  > > >  +}
>  > > >  +
>  > > >  +struct rte_eventdev *
>  > > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t
>  dev_private_size,
>  > > >  +		int socket_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *eventdev;
>  > > >  +
>  > > >  +	/* Allocate device structure */
>  > > >  +	eventdev = rte_eventdev_pmd_allocate(name, socket_id);
>  > > >  +	if (eventdev == NULL)
>  > > >  +		return NULL;
>  > > >  +
>  > > >  +	/* Allocate private device structure */
>  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>  > > >  +		eventdev->data->dev_private =
>  > > >  +				rte_zmalloc_socket("eventdev device private",
>  > > >  +						dev_private_size,
>  > > >  +						RTE_CACHE_LINE_SIZE,
>  > > >  +						socket_id);
>  > > >  +
>  > > >  +		if (eventdev->data->dev_private == NULL)
>  > > >  +			rte_panic("Cannot allocate memzone for private
>  > > >  device"
>  > > >  +					" data");
>  > > >  +	}
>  > > >  +
>  > > >  +	return eventdev;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>  > > >  +			struct rte_pci_device *pci_dev)
>  > > >  +{
>  > > >  +	struct rte_eventdev_driver *eventdrv;
>  > > >  +	struct rte_eventdev *eventdev;
>  > > >  +
>  > > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
>  > > >  +
>  > > >  +	int retval;
>  > > >  +
>  > > >  +	eventdrv = (struct rte_eventdev_driver *)pci_drv;
>  > > >  +	if (eventdrv == NULL)
>  > > >  +		return -ENODEV;
>  > > >  +
>  > > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
>  > > >  +			sizeof(eventdev_name));
>  > > >  +
>  > > >  +	eventdev = rte_eventdev_pmd_allocate(eventdev_name,
>  > > >  +			 pci_dev->device.numa_node);
>  > > >  +	if (eventdev == NULL)
>  > > >  +		return -ENOMEM;
>  > > >  +
>  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>  > > >  +		eventdev->data->dev_private =
>  > > >  +				rte_zmalloc_socket(
>  > > >  +						"eventdev private structure",
>  > > >  +						eventdrv->dev_private_size,
>  > > >  +						RTE_CACHE_LINE_SIZE,
>  > > >  +						rte_socket_id());
>  > > >  +
>  > > >  +		if (eventdev->data->dev_private == NULL)
>  > > >  +			rte_panic("Cannot allocate memzone for private "
>  > > >  +					"device data");
>  > > >  +	}
>  > > >  +
>  > > >  +	eventdev->pci_dev = pci_dev;
>  > > >  +	eventdev->driver = eventdrv;
>  > > >  +
>  > > >  +	/* Invoke PMD device initialization function */
>  > > >  +	retval = (*eventdrv->eventdev_init)(eventdev);
>  > > >  +	if (retval == 0)
>  > > >  +		return 0;
>  > > >  +
>  > > >  +	EDEV_LOG_ERR("driver %s: event_dev_init(vendor_id=0x%x
>  > > >  device_id=0x%x)"
>  > > >  +			" failed", pci_drv->driver.name,
>  > > >  +			(unsigned int) pci_dev->id.vendor_id,
>  > > >  +			(unsigned int) pci_dev->id.device_id);
>  > > >  +
>  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>  > > >  +		rte_free(eventdev->data->dev_private);
>  > > >  +
>  > > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
>  > > >  +	eventdev_globals.nb_devs--;
>  > > >  +
>  > > >  +	return -ENXIO;
>  > > >  +}
>  > > >  +
>  > > >  +int
>  > > >  +rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev)
>  > > >  +{
>  > > >  +	const struct rte_eventdev_driver *eventdrv;
>  > > >  +	struct rte_eventdev *eventdev;
>  > > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
>  > > >  +	int ret;
>  > > >  +
>  > > >  +	if (pci_dev == NULL)
>  > > >  +		return -EINVAL;
>  > > >  +
>  > > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
>  > > >  +			sizeof(eventdev_name));
>  > > >  +
>  > > >  +	eventdev = rte_eventdev_pmd_get_named_dev(eventdev_name);
>  > > >  +	if (eventdev == NULL)
>  > > >  +		return -ENODEV;
>  > > >  +
>  > > >  +	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
>  > > >  +	if (eventdrv == NULL)
>  > > >  +		return -ENODEV;
>  > > >  +
>  > > >  +	/* Invoke PMD device uninit function */
>  > > >  +	if (*eventdrv->eventdev_uninit) {
>  > > >  +		ret = (*eventdrv->eventdev_uninit)(eventdev);
>  > > >  +		if (ret)
>  > > >  +			return ret;
>  > > >  +	}
>  > > >  +
>  > > >  +	/* Free event device */
>  > > >  +	rte_eventdev_pmd_release(eventdev);
>  > > >  +
>  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>  > > >  +		rte_free(eventdev->data->dev_private);
>  > > >  +
>  > > >  +	eventdev->pci_dev = NULL;
>  > > >  +	eventdev->driver = NULL;
>  > > >  +
>  > > >  +	return 0;
>  > > >  +}
>  > > >  diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h
>  > > >  b/lib/librte_eventdev/rte_eventdev_pmd.h
>  > > >  new file mode 100644
>  > > >  index 0000000..e9d9b83
>  > > >  --- /dev/null
>  > > >  +++ b/lib/librte_eventdev/rte_eventdev_pmd.h
>  > > >  @@ -0,0 +1,504 @@
>  > > >  +/*
>  > > >  + *
>  > > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
>  > > >  + *
>  > > >  + *   Redistribution and use in source and binary forms, with or without
>  > > >  + *   modification, are permitted provided that the following conditions
>  > > >  + *   are met:
>  > > >  + *
>  > > >  + *     * Redistributions of source code must retain the above copyright
>  > > >  + *       notice, this list of conditions and the following disclaimer.
>  > > >  + *     * Redistributions in binary form must reproduce the above
>  copyright
>  > > >  + *       notice, this list of conditions and the following disclaimer in
>  > > >  + *       the documentation and/or other materials provided with the
>  > > >  + *       distribution.
>  > > >  + *     * Neither the name of Cavium networks nor the names of its
>  > > >  + *       contributors may be used to endorse or promote products derived
>  > > >  + *       from this software without specific prior written permission.
>  > > >  + *
>  > > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
>  > > >  CONTRIBUTORS
>  > > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
>  BUT
>  > > >  NOT
>  > > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
>  > > >  FITNESS FOR
>  > > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
>  > > >  COPYRIGHT
>  > > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
>  > > >  INCIDENTAL,
>  > > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
>  BUT
>  > > >  NOT
>  > > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
>  LOSS
>  > > >  OF USE,
>  > > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
>  CAUSED
>  > > >  AND ON ANY
>  > > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
>  OR
>  > > >  TORT
>  > > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
>  OUT OF
>  > > >  THE USE
>  > > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
>  > > >  DAMAGE.
>  > > >  + */
>  > > >  +
>  > > >  +#ifndef _RTE_EVENTDEV_PMD_H_
>  > > >  +#define _RTE_EVENTDEV_PMD_H_
>  > > >  +
>  > > >  +/** @file
>  > > >  + * RTE Event PMD APIs
>  > > >  + *
>  > > >  + * @note
>  > > >  + * These APIs are for event PMDs only and user applications should not
>  > > >  + * call them directly.
>  > > >  + */
>  > > >  +
>  > > >  +#ifdef __cplusplus
>  > > >  +extern "C" {
>  > > >  +#endif
>  > > >  +
>  > > >  +#include <string.h>
>  > > >  +
>  > > >  +#include <rte_dev.h>
>  > > >  +#include <rte_pci.h>
>  > > >  +#include <rte_malloc.h>
>  > > >  +#include <rte_log.h>
>  > > >  +#include <rte_common.h>
>  > > >  +
>  > > >  +#include "rte_eventdev.h"
>  > > >  +
>  > > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>  > > >  +#define RTE_PMD_DEBUG_TRACE(...) \
>  > > >  +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
>  > > >  +#else
>  > > >  +#define RTE_PMD_DEBUG_TRACE(...)
>  > > >  +#endif
>  > > >  +
>  > > >  +/* Logging Macros */
>  > > >  +#define EDEV_LOG_ERR(fmt, args...) \
>  > > >  +	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
>  > > >  +			__func__, __LINE__, ## args)
>  > > >  +
>  > > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>  > > >  +#define EDEV_LOG_DEBUG(fmt, args...) \
>  > > >  +	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
>  > > >  +			__func__, __LINE__, ## args)
>  > > >  +#else
>  > > >  +#define EDEV_LOG_DEBUG(fmt, args...) (void)0
>  > > >  +#endif
>  > > >  +
>  > > >  +/* Macros to check for valid device */
>  > > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do
>  { \
>  > > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
>  > > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
>  > > >  +		return retval; \
>  > > >  +	} \
>  > > >  +} while (0)
>  > > >  +
>  > > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
>  > > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
>  > > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
>  > > >  +		return; \
>  > > >  +	} \
>  > > >  +} while (0)
>  > > >  +
>  > > >  +#define RTE_EVENTDEV_DETACHED  (0)
>  > > >  +#define RTE_EVENTDEV_ATTACHED  (1)
>  > > >  +
>  > > >  +/**
>  > > >  + * Initialisation function of an event driver invoked for each matching
>  > > >  + * event PCI device detected during the PCI probing phase.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   The dev pointer is the address of the *rte_eventdev* structure
>  associated
>  > > >  + *   with the matching device and which has been [automatically]
>  allocated in
>  > > >  + *   the *rte_event_devices* array.
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   - 0: Success, the device is properly initialised by the driver.
>  > > >  + *        In particular, the driver MUST have set up the *dev_ops* pointer
>  > > >  + *        of the *dev* structure.
>  > > >  + *   - <0: Error code of the device initialisation failure.
>  > > >  + */
>  > > >  +typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
>  > > >  +
>  > > >  +/**
>  > > >  + * Finalisation function of a driver invoked for each matching
>  > > >  + * PCI device detected during the PCI closing phase.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   The dev pointer is the address of the *rte_eventdev* structure
>  associated
>  > > >  + *   with the matching device and which	has been [automatically]
>  allocated in
>  > > >  + *   the *rte_event_devices* array.
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   - 0: Success, the device is properly finalised by the driver.
>  > > >  + *        In particular, the driver MUST free the *dev_ops* pointer
>  > > >  + *        of the *dev* structure.
>  > > >  + *   - <0: Error code of the device finalisation failure.
>  > > >  + */
>  > > >  +typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
>  > > >  +
>  > > >  +/**
>  > > >  + * The structure associated with a PMD driver.
>  > > >  + *
>  > > >  + * Each driver acts as a PCI driver and is represented by a generic
>  > > >  + * *event_driver* structure that holds:
>  > > >  + *
>  > > >  + * - An *rte_pci_driver* structure (which must be the first field).
>  > > >  + *
>  > > >  + * - The *eventdev_init* function invoked for each matching PCI device.
>  > > >  + *
>  > > >  + * - The size of the private data to allocate for each matching device.
>  > > >  + */
>  > > >  +struct rte_eventdev_driver {
>  > > >  +	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
>  > > >  +	unsigned int dev_private_size;	/**< Size of device private data. */
>  > > >  +
>  > > >  +	eventdev_init_t eventdev_init;	/**< Device init function. */
>  > > >  +	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
>  > > >  +};
>  > > >  +
>  > > >  +/** Global structure used for maintaining state of allocated event
>  devices */
>  > > >  +struct rte_eventdev_global {
>  > > >  +	uint8_t nb_devs;	/**< Number of devices found */
>  > > >  +	uint8_t max_devs;	/**< Max number of devices */
>  > > >  +};
>  > > >  +
>  > > >  +extern struct rte_eventdev_global *rte_eventdev_globals;
>  > > >  +/** Pointer to global event devices data structure. */
>  > > >  +extern struct rte_eventdev *rte_eventdevs;
>  > > >  +/** The pool of rte_eventdev structures. */
>  > > >  +
>  > > >  +/**
>  > > >  + * Get the rte_eventdev structure device pointer for the named device.
>  > > >  + *
>  > > >  + * @param name
>  > > >  + *   device name to select the device structure.
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   - The rte_eventdev structure pointer for the given device ID.
>  > > >  + */
>  > > >  +static inline struct rte_eventdev *
>  > > >  +rte_eventdev_pmd_get_named_dev(const char *name)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +	unsigned int i;
>  > > >  +
>  > > >  +	if (name == NULL)
>  > > >  +		return NULL;
>  > > >  +
>  > > >  +	for (i = 0, dev = &rte_eventdevs[i];
>  > > >  +			i < rte_eventdev_globals->max_devs; i++) {
>  > > >  +		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
>  > > >  +				(strcmp(dev->data->name, name) == 0))
>  > > >  +			return dev;
>  > > >  +	}
>  > > >  +
>  > > >  +	return NULL;
>  > > >  +}
>  > > >  +
>  > > >  +/**
>  > > >  + * Validate if the event device index is valid attached event device.
>  > > >  + *
>  > > >  + * @param dev_id
>  > > >  + *   Event device index.
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   - If the device index is valid (1) or not (0).
>  > > >  + */
>  > > >  +static inline unsigned
>  > > >  +rte_eventdev_pmd_is_valid_dev(uint8_t dev_id)
>  > > >  +{
>  > > >  +	struct rte_eventdev *dev;
>  > > >  +
>  > > >  +	if (dev_id >= rte_eventdev_globals->nb_devs)
>  > > >  +		return 0;
>  > > >  +
>  > > >  +	dev = &rte_eventdevs[dev_id];
>  > > >  +	if (dev->attached != RTE_EVENTDEV_ATTACHED)
>  > > >  +		return 0;
>  > > >  +	else
>  > > >  +		return 1;
>  > > >  +}
>  > > >  +
>  > > >  +/**
>  > > >  + * Definitions of all functions exported by a driver through the
>  > > >  + * the generic structure of type *event_dev_ops* supplied in the
>  > > >  + * *rte_eventdev* structure associated with a device.
>  > > >  + */
>  > > >  +
>  > > >  +/**
>  > > >  + * Get device information of a device.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + * @param dev_info
>  > > >  + *   Event device information structure
>  > > >  + *
>  > > >  + */
>  > > >  +typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
>  > > >  +		struct rte_event_dev_info *dev_info);
>  > > >  +
>  > > >  +/**
>  > > >  + * Configure a device.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   Returns 0 on success
>  > > >  + */
>  > > >  +typedef int (*eventdev_configure_t)(struct rte_eventdev *dev);
>  > > >  +
>  > > >  +/**
>  > > >  + * Start a configured device.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   Returns 0 on success
>  > > >  + */
>  > > >  +typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
>  > > >  +
>  > > >  +/**
>  > > >  + * Stop a configured device.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + */
>  > > >  +typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
>  > > >  +
>  > > >  +/**
>  > > >  + * Close a configured device.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + *
>  > > >  + * @return
>  > > >  + * - 0 on success
>  > > >  + * - (-EAGAIN) if the device cannot be closed because it is busy
>  > > >  + */
>  > > >  +typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
>  > > >  +
>  > > >  +/**
>  > > >  + * Retrieve the default event queue configuration.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + * @param queue_id
>  > > >  + *   Event queue index
>  > > >  + * @param[out] queue_conf
>  > > >  + *   Event queue configuration structure
>  > > >  + *
>  > > >  + */
>  > > >  +typedef void (*eventdev_queue_default_conf_get_t)(struct
>  rte_eventdev
>  > > >  *dev,
>  > > >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
>  > > >  +
>  > > >  +/**
>  > > >  + * Setup an event queue.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + * @param queue_id
>  > > >  + *   Event queue index
>  > > >  + * @param queue_conf
>  > > >  + *   Event queue configuration structure
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   Returns 0 on success.
>  > > >  + */
>  > > >  +typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
>  > > >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
>  > > >  +
>  > > >  +/**
>  > > >  + * Release memory resources allocated by given event queue.
>  > > >  + *
>  > > >  + * @param queue
>  > > >  + *   Event queue pointer
>  > > >  + *
>  > > >  + */
>  > > >  +typedef void (*eventdev_queue_release_t)(void *queue);
>  > > >  +
>  > > >  +/**
>  > > >  + * Retrieve the default event port configuration.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + * @param port_id
>  > > >  + *   Event port index
>  > > >  + * @param[out] port_conf
>  > > >  + *   Event port configuration structure
>  > > >  + *
>  > > >  + */
>  > > >  +typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev
>  *dev,
>  > > >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
>  > > >  +
>  > > >  +/**
>  > > >  + * Setup an event port.
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + * @param port_id
>  > > >  + *   Event port index
>  > > >  + * @param port_conf
>  > > >  + *   Event port configuration structure
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   Returns 0 on success.
>  > > >  + */
>  > > >  +typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
>  > > >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
>  > > >  +
>  > > >  +/**
>  > > >  + * Release memory resources allocated by given event port.
>  > > >  + *
>  > > >  + * @param port
>  > > >  + *   Event port pointer
>  > > >  + *
>  > > >  + */
>  > > >  +typedef void (*eventdev_port_release_t)(void *port);
>  > > >  +
>  > > >  +/**
>  > > >  + * Link multiple source event queues to destination event port.
>  > > >  + *
>  > > >  + * @param port
>  > > >  + *   Event port pointer
>  > > >  + * @param link
>  > > >  + *   An array of *nb_links* pointers to *rte_event_queue_link* structure
>  > > >  + * @param nb_links
>  > > >  + *   The number of links to establish
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   Returns 0 on success.
>  > > >  + *
>  > > >  + */
>  > > >  +typedef int (*eventdev_port_link_t)(void *port,
>  > > >  +		struct rte_event_queue_link link[], uint16_t nb_links);
>  > > >  +
>  > > >  +/**
>  > > >  + * Unlink multiple source event queues from destination event port.
>  > > >  + *
>  > > >  + * @param port
>  > > >  + *   Event port pointer
>  > > >  + * @param queues
>  > > >  + *   An array of *nb_unlinks* event queues to be unlinked from the event
>  port.
>  > > >  + * @param nb_unlinks
>  > > >  + *   The number of unlinks to establish
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   Returns 0 on success.
>  > > >  + *
>  > > >  + */
>  > > >  +typedef int (*eventdev_port_unlink_t)(void *port,
>  > > >  +		uint8_t queues[], uint16_t nb_unlinks);
>  > > >  +
>  > > >  +/**
>  > > >  + * Converts nanoseconds to *wait* value for rte_event_dequeue()
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + * @param ns
>  > > >  + *   Wait time in nanosecond
>  > > >  + * @param[out] wait_ticks
>  > > >  + *   Value for the *wait* parameter in rte_event_dequeue() function
>  > > >  + *
>  > > >  + */
>  > > >  +typedef void (*eventdev_dequeue_wait_time_t)(struct rte_eventdev
>  *dev,
>  > > >  +		uint64_t ns, uint64_t *wait_ticks);
>  > > >  +
>  > > >  +/**
>  > > >  + * Dump internal information
>  > > >  + *
>  > > >  + * @param dev
>  > > >  + *   Event device pointer
>  > > >  + * @param f
>  > > >  + *   A pointer to a file for output
>  > > >  + *
>  > > >  + */
>  > > >  +typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
>  > > >  +
>  > > >  +/** Event device operations function pointer table */
>  > > >  +struct rte_eventdev_ops {
>  > > >  +	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
>  > > >  +	eventdev_configure_t dev_configure;	/**< Configure device. */
>  > > >  +	eventdev_start_t dev_start;		/**< Start device. */
>  > > >  +	eventdev_stop_t dev_stop;		/**< Stop device. */
>  > > >  +	eventdev_close_t dev_close;		/**< Close device. */
>  > > >  +
>  > > >  +	eventdev_queue_default_conf_get_t queue_def_conf;
>  > > >  +	/**< Get default queue configuration. */
>  > > >  +	eventdev_queue_setup_t queue_setup;
>  > > >  +	/**< Set up an event queue. */
>  > > >  +	eventdev_queue_release_t queue_release;
>  > > >  +	/**< Release an event queue. */
>  > > >  +
>  > > >  +	eventdev_port_default_conf_get_t port_def_conf;
>  > > >  +	/**< Get default port configuration. */
>  > > >  +	eventdev_port_setup_t port_setup;
>  > > >  +	/**< Set up an event port. */
>  > > >  +	eventdev_port_release_t port_release;
>  > > >  +	/**< Release an event port. */
>  > > >  +
>  > > >  +	eventdev_port_link_t port_link;
>  > > >  +	/**< Link event queues to an event port. */
>  > > >  +	eventdev_port_unlink_t port_unlink;
>  > > >  +	/**< Unlink event queues from an event port. */
>  > > >  +	eventdev_dequeue_wait_time_t wait_time;
>  > > >  +	/**< Converts nanoseconds to *wait* value for rte_event_dequeue()
>  > > >  */
>  > > >  +	eventdev_dump_t dump;
>  > > >  +	/**< Dump internal information. */
>  > > >  +};
>  > > >  +
>  > > >  +/**
>  > > >  + * Allocates a new eventdev slot for an event device and returns the
>  pointer
>  > > >  + * to that slot for the driver to use.
>  > > >  + *
>  > > >  + * @param name
>  > > >  + *   Unique identifier name for each device
>  > > >  + * @param socket_id
>  > > >  + *   Socket to allocate resources on.
>  > > >  + * @return
>  > > >  + *   - Slot in the rte_dev_devices array for a new device;
>  > > >  + */
>  > > >  +struct rte_eventdev *
>  > > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id);
>  > > >  +
>  > > >  +/**
>  > > >  + * Release the specified eventdev device.
>  > > >  + *
>  > > >  + * @param eventdev
>  > > >  + * The *eventdev* pointer is the address of the *rte_eventdev*
>  structure.
>  > > >  + * @return
>  > > >  + *   - 0 on success, negative on error
>  > > >  + */
>  > > >  +int
>  > > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev);
>  > > >  +
>  > > >  +/**
>  > > >  + * Creates a new virtual event device and returns the pointer to that
>  device.
>  > > >  + *
>  > > >  + * @param name
>  > > >  + *   PMD type name
>  > > >  + * @param dev_private_size
>  > > >  + *   Size of event PMDs private data
>  > > >  + * @param socket_id
>  > > >  + *   Socket to allocate resources on.
>  > > >  + *
>  > > >  + * @return
>  > > >  + *   - Eventdev pointer if device is successfully created.
>  > > >  + *   - NULL if device cannot be created.
>  > > >  + */
>  > > >  +struct rte_eventdev *
>  > > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t
>  dev_private_size,
>  > > >  +		int socket_id);
>  > > >  +
>  > > >  +
>  > > >  +/**
>  > > >  + * Wrapper for use by PCI drivers as a .probe function to attach to an
>  > > >  + * event interface.
>  > > >  + */
>  > > >  +int rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>  > > >  +			    struct rte_pci_device *pci_dev);
>  > > >  +
>  > > >  +/**
>  > > >  + * Wrapper for use by PCI drivers as a .remove function to detach an
>  > > >  + * event interface.
>  > > >  + */
>  > > >  +int rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev);
>  > > >  +
>  > > >  +#ifdef __cplusplus
>  > > >  +}
>  > > >  +#endif
>  > > >  +
>  > > >  +#endif /* _RTE_EVENTDEV_PMD_H_ */
>  > > >  diff --git a/lib/librte_eventdev/rte_eventdev_version.map
>  > > >  b/lib/librte_eventdev/rte_eventdev_version.map
>  > > >  new file mode 100644
>  > > >  index 0000000..ef40aae
>  > > >  --- /dev/null
>  > > >  +++ b/lib/librte_eventdev/rte_eventdev_version.map
>  > > >  @@ -0,0 +1,39 @@
>  > > >  +DPDK_17.02 {
>  > > >  +	global:
>  > > >  +
>  > > >  +	rte_eventdevs;
>  > > >  +
>  > > >  +	rte_event_dev_count;
>  > > >  +	rte_event_dev_get_dev_id;
>  > > >  +	rte_event_dev_socket_id;
>  > > >  +	rte_event_dev_info_get;
>  > > >  +	rte_event_dev_configure;
>  > > >  +	rte_event_dev_start;
>  > > >  +	rte_event_dev_stop;
>  > > >  +	rte_event_dev_close;
>  > > >  +	rte_event_dev_dump;
>  > > >  +
>  > > >  +	rte_event_port_default_conf_get;
>  > > >  +	rte_event_port_setup;
>  > > >  +	rte_event_port_dequeue_depth;
>  > > >  +	rte_event_port_enqueue_depth;
>  > > >  +	rte_event_port_count;
>  > > >  +	rte_event_port_link;
>  > > >  +	rte_event_port_unlink;
>  > > >  +	rte_event_port_links_get;
>  > > >  +
>  > > >  +	rte_event_queue_default_conf_get;
>  > > >  +	rte_event_queue_setup;
>  > > >  +	rte_event_queue_count;
>  > > >  +	rte_event_queue_priority;
>  > > >  +
>  > > >  +	rte_event_dequeue_wait_time;
>  > > >  +
>  > > >  +	rte_eventdev_pmd_allocate;
>  > > >  +	rte_eventdev_pmd_release;
>  > > >  +	rte_eventdev_pmd_vdev_init;
>  > > >  +	rte_eventdev_pmd_pci_probe;
>  > > >  +	rte_eventdev_pmd_pci_remove;
>  > > >  +
>  > > >  +	local: *;
>  > > >  +};
>  > > >  diff --git a/mk/rte.app.mk b/mk/rte.app.mk
>  > > >  index f75f0e2..716725a 100644
>  > > >  --- a/mk/rte.app.mk
>  > > >  +++ b/mk/rte.app.mk
>  > > >  @@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -
>  > > >  lrte_mbuf
>  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
>  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
>  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
>  > > >  +_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
>  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
>  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
>  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
>  > > >  --
>  > > >  2.5.5
>  > >

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-22 15:15         ` Eads, Gage
@ 2016-11-22 18:19           ` Jerin Jacob
  2016-11-22 19:43             ` Eads, Gage
  2016-11-23  9:57           ` Bruce Richardson
  1 sibling, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-22 18:19 UTC (permalink / raw)
  To: Eads, Gage; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal

On Tue, Nov 22, 2016 at 03:15:52PM +0000, Eads, Gage wrote:
> 
> 
> >  -----Original Message-----
> >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  Sent: Monday, November 21, 2016 1:32 PM
> >  To: Eads, Gage <gage.eads@intel.com>
> >  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
> >  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
> >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
> >  
> >  On Tue, Nov 22, 2016 at 12:43:58AM +0530, Jerin Jacob wrote:
> >  > On Mon, Nov 21, 2016 at 05:45:51PM +0000, Eads, Gage wrote:
> >  > > Hi Jerin,
> >  > >
> >  > > I did a quick review and overall this implementation looks good. I noticed
> >  just one issue in rte_event_queue_setup(): the check of
> >  nb_atomic_order_sequences is being applied to atomic-type queues, but that
> >  field applies to ordered-type queues.
> >  >
> >  > Thanks Gage. I will fix that in v2.
> >  >
> >  > >
> >  > > One open issue I noticed is the "typical workflow" description starting in
> >  rte_eventdev.h:204 conflicts with the centralized software PMD that Harry
> >  posted last week. Specifically, that PMD expects a single core to call the
> >  schedule function. We could extend the documentation to account for this
> >  alternative style of scheduler invocation, or discuss ways to make the software
> >  PMD work with the documented workflow. I prefer the former, but either way I
> >  think we ought to expose the scheduler's expected usage to the user -- perhaps
> >  through an RTE_EVENT_DEV_CAP flag?
> >  >
> >  > I prefer former too, you can propose the documentation change required for
> >  software PMD.
> 
> Sure, proposal follows. The "typical workflow" isn't optimal, of course, since it puts a conditional in the fast path, but it demonstrates the idea simply.
> 
> (line 204)
>  * An event driven based application has following typical workflow on fastpath:
>  * \code{.c}                                                                        
>  *      while (1) {                                                                 
>  *                                                                                  
>  *              if (dev_info.event_dev_cap &                                        
>  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)                        
>  *                      rte_event_schedule(dev_id);                                 

Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
It can be an input to the application/subsystem for deciding whether to
launch separate core(s) for the schedule function.
But I think the "dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED"
check can be moved inside the implementation (to make better decisions and
to avoid burning cycles on HW based schedulers); see the sketch below.
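
Roughly what I have in mind (only a sketch against the API proposed in this
series; the PMD callback names below are made up, and how the schedule hook
is wired into rte_event_schedule() is up to the implementation):

	/* A HW scheduler PMD can make its schedule handler a NOOP, so the
	 * application fast path never needs the capability conditional.
	 */
	static void
	hw_pmd_schedule(struct rte_eventdev *dev)	/* hypothetical HW PMD */
	{
		RTE_SET_USED(dev);	/* scheduling is done in hardware */
	}

	static void
	sw_pmd_schedule(struct rte_eventdev *dev)	/* hypothetical SW PMD */
	{
		/* Do the real work here: move events from queues to ports,
		 * honouring ATOMIC/ORDERED synchronization.
		 */
	}

	/* Application fast path stays unconditional */
	while (1) {
		rte_event_schedule(dev_id);	/* NOOP on HW, real work on SW */
		rte_event_dequeue(...);
		/* event processing */
		rte_event_enqueue(...);
	}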

>  *                                                                                  
>  *              rte_event_dequeue(...);                                             
>  *                                                                                  
>  *              (event processing)                                                  
>  *                                                                                  
>  *              rte_event_enqueue(...);                                             
>  *      }                                                                           
>  * \endcode                                                                         
>  *                                                                                  
>  * The *schedule* operation is intended to do event scheduling, and the             
>  * *dequeue* operation returns the scheduled events. An implementation              
>  * is free to define the semantics between *schedule* and *dequeue*. For            
>  * example, a system based on a hardware scheduler can define its                   
>  * rte_event_schedule() to be a NOOP, whereas a software scheduler can use
>  * the *schedule* operation to schedule events. The                                 
>  * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates whether
>  * rte_event_schedule() should be called by all cores or by a single (typically 
>  * dedicated) core.
> 
> (line 308)
> #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL << 2)
> /**< Event scheduling implementation is distributed and all cores must execute       
>  *  rte_event_schedule(). If unset, the implementation is centralized and     
>  *  a single core must execute the schedule operation.                        
>  *                                                                              
>  *  \see rte_event_schedule()                                                   
>  */
> 
> >  >
> >  > On the same note, if the software PMD based workflow needs separate core(s) for
> >  > the schedule function, can we hide that from the API specification and pass an
> >  > argument to the SW PMD to define the scheduling core(s)?
> >  >
> >  > Something like --vdev=eventsw0,schedule_cmask=0x2
> 
> An API for controlling the scheduler coremask instead of (or perhaps in addition to) the vdev argument would be good, to allow runtime control. I can imagine apps that scale the number of cores based on load, and in doing so may want to migrate the scheduler to a different core.

Yes, an API for the number of scheduler cores looks OK. But if we go with
the service core approach then we only need to specify it in one place,
since the application will not be creating the service functions itself.
A minimal sketch of the dedicated-scheduler-core arrangement follows.
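
To make that concrete, here is a small sketch using only existing EAL calls
(the quit flag, dev_id and sched_lcore_id are application-defined; nothing
below is part of this series):

	static volatile int quit;	/* application-defined exit flag */

	static int
	schedule_loop(void *arg)
	{
		uint8_t dev_id = *(const uint8_t *)arg;

		/* Run the centralized scheduler on this lcore only */
		while (!quit)
			rte_event_schedule(dev_id);
		return 0;
	}

	/* In the application init path, e.g. from main(). With a generic
	 * "service core" framework, this launch would be done by the
	 * framework rather than by the application.
	 */
	rte_eal_remote_launch(schedule_loop, &dev_id, sched_lcore_id);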

> 
> >  
> >  Just a thought,
> >  
> >  Perhaps we could introduce a generic "service cores" concept to DPDK to hide
> >  the requirement where the implementation needs a dedicated core to do certain
> >  work. I guess it would be useful for other NPU integration in DPDK too.
> >  
> 
> That's an interesting idea. As you suggested in the other thread, this concept could be extended to the "producer" code in the example for configurations where the NIC requires software to feed into the eventdev. And to the other subsystems mentioned in your original PDF, crypto and timer.

Yes. Producers should come under the service core category. I think that
enables better NPU integration (the same application code works for
NPU and non-NPU targets). A sketch of such a producer service follows.
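
For instance (sketch only; quit and port_id are application-defined, and
the rte_event construction/enqueue is left as a comment since it depends
on the final rte_event_enqueue() definition in this series), a producer
service for a NIC without a direct HW path into the event device could be:

	static int
	producer_loop(void *arg)
	{
		struct rte_mbuf *pkts[32];
		uint16_t nb_rx, i;

		RTE_SET_USED(arg);
		while (!quit) {
			nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
			for (i = 0; i < nb_rx; i++) {
				/* Wrap pkts[i] in a struct rte_event and
				 * rte_event_enqueue() it to the event device.
				 */
			}
		}
		return 0;
	}

On an NPU where the NIC feeds the scheduler directly in HW, this service is
simply not launched, and the rest of the application code stays the same.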

> 
> >  >
> >  > >
> >  > > Thanks,
> >  > > Gage
> >  > >
> >  > > >  -----Original Message-----
> >  > > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  > > >  Sent: Thursday, November 17, 2016 11:45 PM
> >  > > >  To: dev@dpdk.org
> >  > > >  Cc: Richardson, Bruce <bruce.richardson@intel.com>; Van Haaren, Harry
> >  > > >  <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com; Eads, Gage
> >  > > >  <gage.eads@intel.com>; Jerin Jacob
> >  <jerin.jacob@caviumnetworks.com>
> >  > > >  Subject: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound
> >  APIs
> >  > > >
> >  > > >  This patch set defines the southbound driver interface
> >  > > >  and implements the common code required for northbound
> >  > > >  eventdev API interface.
> >  > > >
> >  > > >  Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >  > > >  ---
> >  > > >   config/common_base                           |    6 +
> >  > > >   lib/Makefile                                 |    1 +
> >  > > >   lib/librte_eal/common/include/rte_log.h      |    1 +
> >  > > >   lib/librte_eventdev/Makefile                 |   57 ++
> >  > > >   lib/librte_eventdev/rte_eventdev.c           | 1211
> >  > > >  ++++++++++++++++++++++++++
> >  > > >   lib/librte_eventdev/rte_eventdev_pmd.h       |  504 +++++++++++
> >  > > >   lib/librte_eventdev/rte_eventdev_version.map |   39 +
> >  > > >   mk/rte.app.mk                                |    1 +
> >  > > >   8 files changed, 1820 insertions(+)
> >  > > >   create mode 100644 lib/librte_eventdev/Makefile
> >  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev.c
> >  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
> >  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
> >  > > >
> >  > > >  diff --git a/config/common_base b/config/common_base
> >  > > >  index 4bff83a..7a8814e 100644
> >  > > >  --- a/config/common_base
> >  > > >  +++ b/config/common_base
> >  > > >  @@ -411,6 +411,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
> >  > > >   CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
> >  > > >
> >  > > >   #
> >  > > >  +# Compile generic event device library
> >  > > >  +#
> >  > > >  +CONFIG_RTE_LIBRTE_EVENTDEV=y
> >  > > >  +CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
> >  > > >  +CONFIG_RTE_EVENT_MAX_DEVS=16
> >  > > >  +CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
> >  > > >   # Compile librte_ring
> >  > > >   #
> >  > > >   CONFIG_RTE_LIBRTE_RING=y
> >  > > >  diff --git a/lib/Makefile b/lib/Makefile
> >  > > >  index 990f23a..1a067bf 100644
> >  > > >  --- a/lib/Makefile
> >  > > >  +++ b/lib/Makefile
> >  > > >  @@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) +=
> >  librte_cfgfile
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
> >  > > >  +DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
> >  > > >  diff --git a/lib/librte_eal/common/include/rte_log.h
> >  > > >  b/lib/librte_eal/common/include/rte_log.h
> >  > > >  index 29f7d19..9a07d92 100644
> >  > > >  --- a/lib/librte_eal/common/include/rte_log.h
> >  > > >  +++ b/lib/librte_eal/common/include/rte_log.h
> >  > > >  @@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
> >  > > >   #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to
> >  pipeline. */
> >  > > >   #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf.
> >  */
> >  > > >   #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to
> >  > > >  cryptodev. */
> >  > > >  +#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to
> >  eventdev.
> >  > > >  */
> >  > > >
> >  > > >   /* these log types can be used in an application */
> >  > > >   #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type
> >  1. */
> >  > > >  diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
> >  > > >  new file mode 100644
> >  > > >  index 0000000..dac0663
> >  > > >  --- /dev/null
> >  > > >  +++ b/lib/librte_eventdev/Makefile
> >  > > >  @@ -0,0 +1,57 @@
> >  > > >  +#   BSD LICENSE
> >  > > >  +#
> >  > > >  +#   Copyright(c) 2016 Cavium networks. All rights reserved.
> >  > > >  +#
> >  > > >  +#   Redistribution and use in source and binary forms, with or without
> >  > > >  +#   modification, are permitted provided that the following conditions
> >  > > >  +#   are met:
> >  > > >  +#
> >  > > >  +#     * Redistributions of source code must retain the above copyright
> >  > > >  +#       notice, this list of conditions and the following disclaimer.
> >  > > >  +#     * Redistributions in binary form must reproduce the above copyright
> >  > > >  +#       notice, this list of conditions and the following disclaimer in
> >  > > >  +#       the documentation and/or other materials provided with the
> >  > > >  +#       distribution.
> >  > > >  +#     * Neither the name of Cavium networks nor the names of its
> >  > > >  +#       contributors may be used to endorse or promote products derived
> >  > > >  +#       from this software without specific prior written permission.
> >  > > >  +#
> >  > > >  +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> >  > > >  CONTRIBUTORS
> >  > > >  +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> >  > > >  FITNESS FOR
> >  > > >  +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> >  > > >  COPYRIGHT
> >  > > >  +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> >  > > >  INCIDENTAL,
> >  > > >  +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> >  LOSS
> >  > > >  OF USE,
> >  > > >  +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
> >  CAUSED AND
> >  > > >  ON ANY
> >  > > >  +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
> >  OR
> >  > > >  TORT
> >  > > >  +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> >  OUT OF
> >  > > >  THE USE
> >  > > >  +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> >  > > >  DAMAGE.
> >  > > >  +
> >  > > >  +include $(RTE_SDK)/mk/rte.vars.mk
> >  > > >  +
> >  > > >  +# library name
> >  > > >  +LIB = librte_eventdev.a
> >  > > >  +
> >  > > >  +# library version
> >  > > >  +LIBABIVER := 1
> >  > > >  +
> >  > > >  +# build flags
> >  > > >  +CFLAGS += -O3
> >  > > >  +CFLAGS += $(WERROR_FLAGS)
> >  > > >  +
> >  > > >  +# library source files
> >  > > >  +SRCS-y += rte_eventdev.c
> >  > > >  +
> >  > > >  +# export include files
> >  > > >  +SYMLINK-y-include += rte_eventdev.h
> >  > > >  +SYMLINK-y-include += rte_eventdev_pmd.h
> >  > > >  +
> >  > > >  +# versioning export map
> >  > > >  +EXPORT_MAP := rte_eventdev_version.map
> >  > > >  +
> >  > > >  +# library dependencies
> >  > > >  +DEPDIRS-y += lib/librte_eal
> >  > > >  +DEPDIRS-y += lib/librte_mbuf
> >  > > >  +
> >  > > >  +include $(RTE_SDK)/mk/rte.lib.mk
> >  > > >  diff --git a/lib/librte_eventdev/rte_eventdev.c
> >  > > >  b/lib/librte_eventdev/rte_eventdev.c
> >  > > >  new file mode 100644
> >  > > >  index 0000000..17ce5c3
> >  > > >  --- /dev/null
> >  > > >  +++ b/lib/librte_eventdev/rte_eventdev.c
> >  > > >  @@ -0,0 +1,1211 @@
> >  > > >  +/*
> >  > > >  + *   BSD LICENSE
> >  > > >  + *
> >  > > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
> >  > > >  + *
> >  > > >  + *   Redistribution and use in source and binary forms, with or without
> >  > > >  + *   modification, are permitted provided that the following conditions
> >  > > >  + *   are met:
> >  > > >  + *
> >  > > >  + *     * Redistributions of source code must retain the above copyright
> >  > > >  + *       notice, this list of conditions and the following disclaimer.
> >  > > >  + *     * Redistributions in binary form must reproduce the above
> >  copyright
> >  > > >  + *       notice, this list of conditions and the following disclaimer in
> >  > > >  + *       the documentation and/or other materials provided with the
> >  > > >  + *       distribution.
> >  > > >  + *     * Neither the name of Cavium networks nor the names of its
> >  > > >  + *       contributors may be used to endorse or promote products derived
> >  > > >  + *       from this software without specific prior written permission.
> >  > > >  + *
> >  > > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> >  > > >  CONTRIBUTORS
> >  > > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> >  > > >  FITNESS FOR
> >  > > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> >  > > >  COPYRIGHT
> >  > > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> >  > > >  INCIDENTAL,
> >  > > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> >  LOSS
> >  > > >  OF USE,
> >  > > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
> >  CAUSED
> >  > > >  AND ON ANY
> >  > > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
> >  OR
> >  > > >  TORT
> >  > > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> >  OUT OF
> >  > > >  THE USE
> >  > > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> >  > > >  DAMAGE.
> >  > > >  + */
> >  > > >  +
> >  > > >  +#include <ctype.h>
> >  > > >  +#include <stdio.h>
> >  > > >  +#include <stdlib.h>
> >  > > >  +#include <string.h>
> >  > > >  +#include <stdarg.h>
> >  > > >  +#include <errno.h>
> >  > > >  +#include <stdint.h>
> >  > > >  +#include <inttypes.h>
> >  > > >  +#include <sys/types.h>
> >  > > >  +#include <sys/queue.h>
> >  > > >  +
> >  > > >  +#include <rte_byteorder.h>
> >  > > >  +#include <rte_log.h>
> >  > > >  +#include <rte_debug.h>
> >  > > >  +#include <rte_dev.h>
> >  > > >  +#include <rte_pci.h>
> >  > > >  +#include <rte_memory.h>
> >  > > >  +#include <rte_memcpy.h>
> >  > > >  +#include <rte_memzone.h>
> >  > > >  +#include <rte_eal.h>
> >  > > >  +#include <rte_per_lcore.h>
> >  > > >  +#include <rte_lcore.h>
> >  > > >  +#include <rte_atomic.h>
> >  > > >  +#include <rte_branch_prediction.h>
> >  > > >  +#include <rte_common.h>
> >  > > >  +#include <rte_malloc.h>
> >  > > >  +#include <rte_errno.h>
> >  > > >  +
> >  > > >  +#include "rte_eventdev.h"
> >  > > >  +#include "rte_eventdev_pmd.h"
> >  > > >  +
> >  > > >  +struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
> >  > > >  +
> >  > > >  +struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
> >  > > >  +
> >  > > >  +static struct rte_eventdev_global eventdev_globals = {
> >  > > >  +	.nb_devs		= 0
> >  > > >  +};
> >  > > >  +
> >  > > >  +struct rte_eventdev_global *rte_eventdev_globals =
> >  &eventdev_globals;
> >  > > >  +
> >  > > >  +/* Event dev north bound API implementation */
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_dev_count(void)
> >  > > >  +{
> >  > > >  +	return rte_eventdev_globals->nb_devs;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_get_dev_id(const char *name)
> >  > > >  +{
> >  > > >  +	int i;
> >  > > >  +
> >  > > >  +	if (!name)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
> >  > > >  +		if ((strcmp(rte_event_devices[i].data->name, name)
> >  > > >  +				== 0) &&
> >  > > >  +				(rte_event_devices[i].attached ==
> >  > > >  +						RTE_EVENTDEV_ATTACHED))
> >  > > >  +			return i;
> >  > > >  +	return -ENODEV;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_socket_id(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	return dev->data->socket_id;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info
> >  *dev_info)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (dev_info == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -
> >  > > >  ENOTSUP);
> >  > > >  +	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
> >  > > >  +
> >  > > >  +	dev_info->pci_dev = dev->pci_dev;
> >  > > >  +	if (dev->driver)
> >  > > >  +		dev_info->driver_name = dev->driver->pci_drv.driver.name;
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t
> >  nb_queues)
> >  > > >  +{
> >  > > >  +	uint8_t old_nb_queues = dev->data->nb_queues;
> >  > > >  +	void **queues;
> >  > > >  +	uint8_t *queues_prio;
> >  > > >  +	unsigned int i;
> >  > > >  +
> >  > > >  +	EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
> >  > > >  +			 dev->data->dev_id);
> >  > > >  +
> >  > > >  +	/* First time configuration */
> >  > > >  +	if (dev->data->queues == NULL && nb_queues != 0) {
> >  > > >  +		dev->data->queues = rte_zmalloc_socket("eventdev->data-
> >  > > >  >queues",
> >  > > >  +				sizeof(dev->data->queues[0]) * nb_queues,
> >  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
> >  > > >  >socket_id);
> >  > > >  +		if (dev->data->queues == NULL) {
> >  > > >  +			dev->data->nb_queues = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for queue meta
> >  > > >  data,"
> >  > > >  +					"nb_queues %u", nb_queues);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +		/* Allocate memory to store queue priority */
> >  > > >  +		dev->data->queues_prio = rte_zmalloc_socket(
> >  > > >  +				"eventdev->data->queues_prio",
> >  > > >  +				sizeof(dev->data->queues_prio[0]) *
> >  > > >  nb_queues,
> >  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
> >  > > >  >socket_id);
> >  > > >  +		if (dev->data->queues_prio == NULL) {
> >  > > >  +			dev->data->nb_queues = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for queue
> >  > > >  priority,"
> >  > > >  +					"nb_queues %u", nb_queues);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +	} else if (dev->data->queues != NULL && nb_queues != 0) {/* re-config
> >  > > >  */
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >  > > >  >queue_release, -ENOTSUP);
> >  > > >  +
> >  > > >  +		queues = dev->data->queues;
> >  > > >  +		for (i = nb_queues; i < old_nb_queues; i++)
> >  > > >  +			(*dev->dev_ops->queue_release)(queues[i]);
> >  > > >  +
> >  > > >  +		queues = rte_realloc(queues, sizeof(queues[0]) * nb_queues,
> >  > > >  +				RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (queues == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc queue meta data,"
> >  > > >  +						" nb_queues %u",
> >  > > >  nb_queues);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +		dev->data->queues = queues;
> >  > > >  +
> >  > > >  +		/* Re allocate memory to store queue priority */
> >  > > >  +		queues_prio = dev->data->queues_prio;
> >  > > >  +		queues_prio = rte_realloc(queues_prio,
> >  > > >  +				sizeof(queues_prio[0]) * nb_queues,
> >  > > >  +				RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (queues_prio == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc queue priority,"
> >  > > >  +						" nb_queues %u",
> >  > > >  nb_queues);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +		dev->data->queues_prio = queues_prio;
> >  > > >  +
> >  > > >  +		if (nb_queues > old_nb_queues) {
> >  > > >  +			uint8_t new_qs = nb_queues - old_nb_queues;
> >  > > >  +
> >  > > >  +			memset(queues + old_nb_queues, 0,
> >  > > >  +				sizeof(queues[0]) * new_qs);
> >  > > >  +			memset(queues_prio + old_nb_queues, 0,
> >  > > >  +				sizeof(queues_prio[0]) * new_qs);
> >  > > >  +		}
> >  > > >  +	} else if (dev->data->queues != NULL && nb_queues == 0) {
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >  > > >  >queue_release, -ENOTSUP);
> >  > > >  +
> >  > > >  +		queues = dev->data->queues;
> >  > > >  +		for (i = nb_queues; i < old_nb_queues; i++)
> >  > > >  +			(*dev->dev_ops->queue_release)(queues[i]);
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->nb_queues = nb_queues;
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
> >  > > >  +{
> >  > > >  +	uint8_t old_nb_ports = dev->data->nb_ports;
> >  > > >  +	void **ports;
> >  > > >  +	uint16_t *links_map;
> >  > > >  +	uint8_t *ports_dequeue_depth;
> >  > > >  +	uint8_t *ports_enqueue_depth;
> >  > > >  +	unsigned int i;
> >  > > >  +
> >  > > >  +	EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
> >  > > >  +			 dev->data->dev_id);
> >  > > >  +
> >  > > >  +	/* First time configuration */
> >  > > >  +	if (dev->data->ports == NULL && nb_ports != 0) {
> >  > > >  +		dev->data->ports = rte_zmalloc_socket("eventdev->data-
> >  > > >  >ports",
> >  > > >  +				sizeof(dev->data->ports[0]) * nb_ports,
> >  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
> >  > > >  >socket_id);
> >  > > >  +		if (dev->data->ports == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for port meta
> >  > > >  data,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Allocate memory to store ports dequeue depth */
> >  > > >  +		dev->data->ports_dequeue_depth =
> >  > > >  +			rte_zmalloc_socket("eventdev-
> >  > > >  >ports_dequeue_depth",
> >  > > >  +			sizeof(dev->data->ports_dequeue_depth[0]) *
> >  > > >  nb_ports,
> >  > > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  > > >  +		if (dev->data->ports_dequeue_depth == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for port deq
> >  > > >  meta,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Allocate memory to store ports enqueue depth */
> >  > > >  +		dev->data->ports_enqueue_depth =
> >  > > >  +			rte_zmalloc_socket("eventdev-
> >  > > >  >ports_enqueue_depth",
> >  > > >  +			sizeof(dev->data->ports_enqueue_depth[0]) *
> >  > > >  nb_ports,
> >  > > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  > > >  +		if (dev->data->ports_enqueue_depth == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for port enq
> >  > > >  meta,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Allocate memory to store queue to port link connection */
> >  > > >  +		dev->data->links_map =
> >  > > >  +			rte_zmalloc_socket("eventdev->links_map",
> >  > > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
> >  > > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> >  > > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  > > >  +		if (dev->data->links_map == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for port_map
> >  > > >  area,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
> >  > > >  -ENOTSUP);
> >  > > >  +
> >  > > >  +		ports = dev->data->ports;
> >  > > >  +		ports_dequeue_depth = dev->data->ports_dequeue_depth;
> >  > > >  +		ports_enqueue_depth = dev->data->ports_enqueue_depth;
> >  > > >  +		links_map = dev->data->links_map;
> >  > > >  +
> >  > > >  +		for (i = nb_ports; i < old_nb_ports; i++)
> >  > > >  +			(*dev->dev_ops->port_release)(ports[i]);
> >  > > >  +
> >  > > >  +		/* Realloc memory for ports */
> >  > > >  +		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
> >  > > >  +				RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (ports == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc port meta data,"
> >  > > >  +						" nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Realloc memory for ports_dequeue_depth */
> >  > > >  +		ports_dequeue_depth = rte_realloc(ports_dequeue_depth,
> >  > > >  +			sizeof(ports_dequeue_depth[0]) * nb_ports,
> >  > > >  +			RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (ports_dequeue_depth == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc port deqeue meta
> >  > > >  data,"
> >  > > >  +						" nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Realloc memory for ports_enqueue_depth */
> >  > > >  +		ports_enqueue_depth = rte_realloc(ports_enqueue_depth,
> >  > > >  +			sizeof(ports_enqueue_depth[0]) * nb_ports,
> >  > > >  +			RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (ports_enqueue_depth == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc port enqueue meta
> >  > > >  data,"
> >  > > >  +						" nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Realloc memory to store queue to port link connection */
> >  > > >  +		links_map = rte_realloc(links_map,
> >  > > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
> >  > > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> >  > > >  +			RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (links_map == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to realloc mem for port_map
> >  > > >  area,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		if (nb_ports > old_nb_ports) {
> >  > > >  +			uint8_t new_ps = nb_ports - old_nb_ports;
> >  > > >  +
> >  > > >  +			memset(ports + old_nb_ports, 0,
> >  > > >  +				sizeof(ports[0]) * new_ps);
> >  > > >  +			memset(ports_dequeue_depth + old_nb_ports, 0,
> >  > > >  +				sizeof(ports_dequeue_depth[0]) * new_ps);
> >  > > >  +			memset(ports_enqueue_depth + old_nb_ports, 0,
> >  > > >  +				sizeof(ports_enqueue_depth[0]) * new_ps);
> >  > > >  +			memset(links_map +
> >  > > >  +				(old_nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV),
> >  > > >  +				0, sizeof(links_map[0]) * new_ps *
> >  > > >  +				RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		dev->data->ports = ports;
> >  > > >  +		dev->data->ports_dequeue_depth = ports_dequeue_depth;
> >  > > >  +		dev->data->ports_enqueue_depth = ports_enqueue_depth;
> >  > > >  +		dev->data->links_map = links_map;
> >  > > >  +	} else if (dev->data->ports != NULL && nb_ports == 0) {
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
> >  > > >  +
> >  > > >  +		ports = dev->data->ports;
> >  > > >  +		for (i = nb_ports; i < old_nb_ports; i++)
> >  > > >  +			(*dev->dev_ops->port_release)(ports[i]);
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->nb_ports = nb_ports;
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_configure(uint8_t dev_id, struct rte_event_dev_config *dev_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	struct rte_event_dev_info info;
> >  > > >  +	int diag;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		    "device %d must be stopped to allow configuration", dev_id);
> >  > > >  +		return -EBUSY;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (dev_conf == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	(*dev->dev_ops->dev_infos_get)(dev, &info);
> >  > > >  +
> >  > > >  +	/* Check dequeue_wait_ns value is in limit */
> >  > > >  +	if (!(dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT)) {
> >  > > >  +		if (dev_conf->dequeue_wait_ns < info.min_dequeue_wait_ns ||
> >  > > >  +			dev_conf->dequeue_wait_ns > info.max_dequeue_wait_ns) {
> >  > > >  +			EDEV_LOG_ERR("dev%d invalid dequeue_wait_ns=%d"
> >  > > >  +			" min_dequeue_wait_ns=%d max_dequeue_wait_ns=%d",
> >  > > >  +			dev_id, dev_conf->dequeue_wait_ns,
> >  > > >  +			info.min_dequeue_wait_ns,
> >  > > >  +			info.max_dequeue_wait_ns);
> >  > > >  +			return -EINVAL;
> >  > > >  +		}
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_events_limit is in limit */
> >  > > >  +	if (dev_conf->nb_events_limit > info.max_num_events) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_events_limit=%d > max_num_events=%d",
> >  > > >  +		dev_id, dev_conf->nb_events_limit, info.max_num_events);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_queues is in limit */
> >  > > >  +	if (!dev_conf->nb_event_queues) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_queues cannot be zero", dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_queues > info.max_event_queues) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_queues=%d > max_event_queues=%d",
> >  > > >  +		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_ports is in limit */
> >  > > >  +	if (!dev_conf->nb_event_ports) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_ports > info.max_event_ports) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_ports=%d > max_event_ports=%d",
> >  > > >  +		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_queue_flows is in limit */
> >  > > >  +	if (!dev_conf->nb_event_queue_flows) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_flows cannot be zero", dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_queue_flows > info.max_event_queue_flows) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_flows=%x > max_flows=%x",
> >  > > >  +		dev_id, dev_conf->nb_event_queue_flows,
> >  > > >  +		info.max_event_queue_flows);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_port_dequeue_depth is in limit */
> >  > > >  +	if (!dev_conf->nb_event_port_dequeue_depth) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be zero", dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_port_dequeue_depth >
> >  > > >  +			 info.max_event_port_dequeue_depth) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth=%d > max_dequeue_depth=%d",
> >  > > >  +		dev_id, dev_conf->nb_event_port_dequeue_depth,
> >  > > >  +		info.max_event_port_dequeue_depth);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_port_enqueue_depth is in limit */
> >  > > >  +	if (!dev_conf->nb_event_port_enqueue_depth) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be zero", dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_port_enqueue_depth >
> >  > > >  +			 info.max_event_port_enqueue_depth) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth=%d > max_enqueue_depth=%d",
> >  > > >  +		dev_id, dev_conf->nb_event_port_enqueue_depth,
> >  > > >  +		info.max_event_port_enqueue_depth);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Copy the dev_conf parameter into the dev structure */
> >  > > >  +	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
> >  > > >  +
> >  > > >  +	/* Setup new number of queues and reconfigure device. */
> >  > > >  +	diag = rte_event_dev_queue_config(dev, dev_conf->nb_event_queues);
> >  > > >  +	if (diag != 0) {
> >  > > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
> >  > > >  +				dev_id, diag);
> >  > > >  +		return diag;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Setup new number of ports and reconfigure device. */
> >  > > >  +	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
> >  > > >  +	if (diag != 0) {
> >  > > >  +		rte_event_dev_queue_config(dev, 0);
> >  > > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
> >  > > >  +				dev_id, diag);
> >  > > >  +		return diag;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Configure the device */
> >  > > >  +	diag = (*dev->dev_ops->dev_configure)(dev);
> >  > > >  +	if (diag != 0) {
> >  > > >  +		EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
> >  > > >  +		rte_event_dev_queue_config(dev, 0);
> >  > > >  +		rte_event_dev_port_config(dev, 0);
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->event_dev_cap = info.event_dev_cap;
> >  > > >  +	return diag;
> >  > > >  +}
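
For reference, the application-facing flow these checks assume looks roughly
like the sketch below. It is only an illustration written against the
structures in this patch; dev_id 0 and the resource counts are made up, and
error handling is trimmed.

	struct rte_event_dev_info info;
	struct rte_event_dev_config config;

	/* Query driver limits first, then build a config that respects them */
	rte_event_dev_info_get(0, &info);

	memset(&config, 0, sizeof(config));
	config.nb_event_queues = 2;
	config.nb_event_ports = 2;
	config.nb_events_limit = info.max_num_events;
	config.nb_event_queue_flows = info.max_event_queue_flows;
	config.dequeue_wait_ns = info.min_dequeue_wait_ns;
	config.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
	config.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;

	if (rte_event_dev_configure(0, &config) < 0)
		rte_exit(EXIT_FAILURE, "eventdev 0 configure failed\n");
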
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
> >  > > >  +{
> >  > > >  +	if (queue_id < dev->data->nb_queues && queue_id <
> >  > > >  +				RTE_EVENT_MAX_QUEUES_PER_DEV)
> >  > > >  +		return 1;
> >  > > >  +	else
> >  > > >  +		return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
> >  > > >  +				 struct rte_event_queue_conf *queue_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (queue_conf == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	if (!is_valid_queue(dev, queue_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf, -ENOTSUP);
> >  > > >  +	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
> >  > > >  +	(*dev->dev_ops->queue_def_conf)(dev, queue_id, queue_conf);
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +is_valid_atomic_queue_conf(struct rte_event_queue_conf *queue_conf)
> >  > > >  +{
> >  > > >  +	if (queue_conf && (
> >  > > >  +		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
> >  > > >  +			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
> >  > > >  +		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
> >  > > >  +			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
> >  > > >  +		))
> >  > > >  +		return 1;
> >  > > >  +	else
> >  > > >  +		return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
> >  > > >  +		      struct rte_event_queue_conf *queue_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	struct rte_event_queue_conf def_conf;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (!is_valid_queue(dev, queue_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_atomic_flows limit */
> >  > > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
> >  > > >  +		if (queue_conf->nb_atomic_flows == 0 ||
> >  > > >  +		    queue_conf->nb_atomic_flows >
> >  > > >  +			dev->data->dev_conf.nb_event_queue_flows) {
> >  > > >  +			EDEV_LOG_ERR(
> >  > > >  +		"dev%d queue%d Invalid nb_atomic_flows=%d max_flows=%d",
> >  > > >  +			dev_id, queue_id, queue_conf->nb_atomic_flows,
> >  > > >  +			dev->data->dev_conf.nb_event_queue_flows);
> >  > > >  +			return -EINVAL;
> >  > > >  +		}
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_atomic_order_sequences limit */
> >  > > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
> >  > > >  +		if (queue_conf->nb_atomic_order_sequences == 0 ||
> >  > > >  +		    queue_conf->nb_atomic_order_sequences >
> >  > > >  +			dev->data->dev_conf.nb_event_queue_flows) {
> >  > > >  +			EDEV_LOG_ERR(
> >  > > >  +		"dev%d queue%d Invalid nb_atomic_order_seq=%d max_flows=%d",
> >  > > >  +			dev_id, queue_id, queue_conf->nb_atomic_order_sequences,
> >  > > >  +			dev->data->dev_conf.nb_event_queue_flows);
> >  > > >  +			return -EINVAL;
> >  > > >  +		}
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		    "device %d must be stopped to allow queue setup", dev_id);
> >  > > >  +		return -EBUSY;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -ENOTSUP);
> >  > > >  +
> >  > > >  +	if (queue_conf == NULL) {
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf,
> >  > > >  +					-ENOTSUP);
> >  > > >  +		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
> >  > > >  +		def_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
> >  > > >  +		queue_conf = &def_conf;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->queues_prio[queue_id] = queue_conf->priority;
> >  > > >  +	return (*dev->dev_ops->queue_setup)(dev, queue_id, queue_conf);
> >  > > >  +}
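
Likewise, queue setup is expected to start from the driver defaults. A minimal
sketch, with arbitrary dev/queue ids and counts chosen only for illustration:

	struct rte_event_queue_conf qconf;

	/* Take the PMD's default queue config and tweak only what we need */
	rte_event_queue_default_conf_get(0, 0, &qconf);
	qconf.nb_atomic_flows = 1024;
	qconf.priority = RTE_EVENT_QUEUE_PRIORITY_NORMAL;

	if (rte_event_queue_setup(0, 0, &qconf) < 0)
		rte_exit(EXIT_FAILURE, "queue 0 setup failed\n");
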
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_queue_count(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	return dev->data->nb_queues;
> >  > > >  +}
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
> >  > > >  +		return dev->data->queues_prio[queue_id];
> >  > > >  +	else
> >  > > >  +		return RTE_EVENT_QUEUE_PRIORITY_NORMAL;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
> >  > > >  +{
> >  > > >  +	if (port_id < dev->data->nb_ports)
> >  > > >  +		return 1;
> >  > > >  +	else
> >  > > >  +		return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
> >  > > >  +				 struct rte_event_port_conf *port_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (port_conf == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf, -ENOTSUP);
> >  > > >  +	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
> >  > > >  +	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> >  > > >  +		      struct rte_event_port_conf *port_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	struct rte_event_port_conf def_conf;
> >  > > >  +	int diag;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check new_event_threshold limit */
> >  > > >  +	if ((port_conf && !port_conf->new_event_threshold) ||
> >  > > >  +			(port_conf && port_conf->new_event_threshold >
> >  > > >  +				 dev->data->dev_conf.nb_events_limit)) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		   "dev%d port%d Invalid event_threshold=%d nb_events_limit=%d",
> >  > > >  +			dev_id, port_id, port_conf->new_event_threshold,
> >  > > >  +			dev->data->dev_conf.nb_events_limit);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check dequeue_depth limit */
> >  > > >  +	if ((port_conf && !port_conf->dequeue_depth) ||
> >  > > >  +			(port_conf && port_conf->dequeue_depth >
> >  > > >  +		dev->data->dev_conf.nb_event_port_dequeue_depth)) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		   "dev%d port%d Invalid dequeue depth=%d max_dequeue_depth=%d",
> >  > > >  +			dev_id, port_id, port_conf->dequeue_depth,
> >  > > >  +			dev->data->dev_conf.nb_event_port_dequeue_depth);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check enqueue_depth limit */
> >  > > >  +	if ((port_conf && !port_conf->enqueue_depth) ||
> >  > > >  +			(port_conf && port_conf->enqueue_depth >
> >  > > >  +		dev->data->dev_conf.nb_event_port_enqueue_depth)) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		   "dev%d port%d Invalid enqueue depth=%d max_enqueue_depth=%d",
> >  > > >  +			dev_id, port_id, port_conf->enqueue_depth,
> >  > > >  +			dev->data->dev_conf.nb_event_port_enqueue_depth);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		    "device %d must be stopped to allow port setup", dev_id);
> >  > > >  +		return -EBUSY;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -ENOTSUP);
> >  > > >  +
> >  > > >  +	if (port_conf == NULL) {
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf,
> >  > > >  +					-ENOTSUP);
> >  > > >  +		(*dev->dev_ops->port_def_conf)(dev, port_id, &def_conf);
> >  > > >  +		port_conf = &def_conf;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->ports_dequeue_depth[port_id] =
> >  > > >  +			port_conf->dequeue_depth;
> >  > > >  +	dev->data->ports_enqueue_depth[port_id] =
> >  > > >  +			port_conf->enqueue_depth;
> >  > > >  +
> >  > > >  +	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
> >  > > >  +
> >  > > >  +	/* Unlink all the queues from this port (default state after setup) */
> >  > > >  +	if (!diag)
> >  > > >  +		diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
> >  > > >  +
> >  > > >  +	if (diag < 0)
> >  > > >  +		return diag;
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	return dev->data->ports_dequeue_depth[port_id];
> >  > > >  +}
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	return dev->data->ports_enqueue_depth[port_id];
> >  > > >  +}
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_port_count(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	return dev->data->nb_ports;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> >  > > >  +		    struct rte_event_queue_link link[], uint16_t nb_links)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	struct rte_event_queue_link all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
> >  > > >  +	uint16_t *links_map;
> >  > > >  +	int i, diag;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -ENOTSUP);
> >  > > >  +
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (link == NULL) {
> >  > > >  +		for (i = 0; i < dev->data->nb_queues; i++) {
> >  > > >  +			all_queues[i].queue_id = i;
> >  > > >  +			all_queues[i].priority =
> >  > > >  +				RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
> >  > > >  +		}
> >  > > >  +		link = all_queues;
> >  > > >  +		nb_links = dev->data->nb_queues;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	for (i = 0; i < nb_links; i++)
> >  > > >  +		if (link[i].queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
> >  > > >  +			return -EINVAL;
> >  > > >  +
> >  > > >  +	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id], link,
> >  > > >  +						 nb_links);
> >  > > >  +	if (diag < 0)
> >  > > >  +		return diag;
> >  > > >  +
> >  > > >  +	links_map = dev->data->links_map;
> >  > > >  +	/* Point links_map to this port specific area */
> >  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  > > >  +	for (i = 0; i < diag; i++)
> >  > > >  +		links_map[link[i].queue_id] = (uint8_t)link[i].priority;
> >  > > >  +
> >  > > >  +	return diag;
> >  > > >  +}
> >  > > >  +
> >  > > >  +#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
> >  > > >  +		      uint8_t queues[], uint16_t nb_unlinks)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
> >  > > >  +	int i, diag;
> >  > > >  +	uint16_t *links_map;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -ENOTSUP);
> >  > > >  +
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (queues == NULL) {
> >  > > >  +		for (i = 0; i < dev->data->nb_queues; i++)
> >  > > >  +			all_queues[i] = i;
> >  > > >  +		queues = all_queues;
> >  > > >  +		nb_unlinks = dev->data->nb_queues;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	for (i = 0; i < nb_unlinks; i++)
> >  > > >  +		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
> >  > > >  +			return -EINVAL;
> >  > > >  +
> >  > > >  +	diag = (*dev->dev_ops->port_unlink)(dev->data->ports[port_id], queues,
> >  > > >  +					nb_unlinks);
> >  > > >  +
> >  > > >  +	if (diag < 0)
> >  > > >  +		return diag;
> >  > > >  +
> >  > > >  +	links_map = dev->data->links_map;
> >  > > >  +	/* Point links_map to this port specific area */
> >  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  > > >  +	for (i = 0; i < diag; i++)
> >  > > >  +		links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
> >  > > >  +
> >  > > >  +	return diag;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
> >  > > >  +			struct rte_event_queue_link link[])
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	uint16_t *links_map;
> >  > > >  +	int i, count = 0;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	links_map = dev->data->links_map;
> >  > > >  +	/* Point links_map to this port specific area */
> >  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  > > >  +	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
> >  > > >  +		if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
> >  > > >  +			link[count].queue_id = i;
> >  > > >  +			link[count].priority = (uint8_t)links_map[i];
> >  > > >  +			++count;
> >  > > >  +		}
> >  > > >  +	}
> >  > > >  +	return count;
> >  > > >  +}
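
The NULL shortcuts in link/unlink make the common case compact. A usage sketch
with made-up dev/port/queue ids and no return-code checking, just to show how
these three calls fit together:

	struct rte_event_queue_link links[RTE_EVENT_MAX_QUEUES_PER_DEV];
	uint8_t q = 1;
	int nb;

	/* Link every configured queue to port 0 at normal service priority */
	rte_event_port_link(0, 0, NULL, 0);

	/* Read back the links currently established for port 0 */
	nb = rte_event_port_links_get(0, 0, links);

	/* Unlink only queue 1, leaving the remaining links in place */
	rte_event_port_unlink(0, 0, &q, 1);
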
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dequeue_wait_time(uint8_t dev_id, uint64_t ns, uint64_t *wait_ticks)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->wait_time, -ENOTSUP);
> >  > > >  +
> >  > > >  +	if (wait_ticks == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	(*dev->dev_ops->wait_time)(dev, ns, wait_ticks);
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_dump(uint8_t dev_id, FILE *f)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
> >  > > >  +
> >  > > >  +	(*dev->dev_ops->dump)(dev, f);
> >  > > >  +	return 0;
> >  > > >  +
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_start(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	int diag;
> >  > > >  +
> >  > > >  +	EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started != 0) {
> >  > > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
> >  > > >  +			dev_id);
> >  > > >  +		return 0;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	diag = (*dev->dev_ops->dev_start)(dev);
> >  > > >  +	if (diag == 0)
> >  > > >  +		dev->data->dev_started = 1;
> >  > > >  +	else
> >  > > >  +		return diag;
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +void
> >  > > >  +rte_event_dev_stop(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started == 0) {
> >  > > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
> >  > > >  +			dev_id);
> >  > > >  +		return;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->dev_started = 0;
> >  > > >  +	(*dev->dev_ops->dev_stop)(dev);
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_close(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
> >  > > >  +
> >  > > >  +	/* Device must be stopped before it can be closed */
> >  > > >  +	if (dev->data->dev_started == 1) {
> >  > > >  +		EDEV_LOG_ERR("Device %u must be stopped before closing",
> >  > > >  +				dev_id);
> >  > > >  +		return -EBUSY;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	return (*dev->dev_ops->dev_close)(dev);
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
> >  > > >  +		int socket_id)
> >  > > >  +{
> >  > > >  +	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
> >  > > >  +	const struct rte_memzone *mz;
> >  > > >  +	int n;
> >  > > >  +
> >  > > >  +	/* Generate memzone name */
> >  > > >  +	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
> >  > > >  +	if (n >= (int)sizeof(mz_name))
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  > > >  +		mz = rte_memzone_reserve(mz_name,
> >  > > >  +				sizeof(struct rte_eventdev_data),
> >  > > >  +				socket_id, 0);
> >  > > >  +	} else
> >  > > >  +		mz = rte_memzone_lookup(mz_name);
> >  > > >  +
> >  > > >  +	if (mz == NULL)
> >  > > >  +		return -ENOMEM;
> >  > > >  +
> >  > > >  +	*data = mz->addr;
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> >  > > >  +		memset(*data, 0, sizeof(struct rte_eventdev_data));
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static uint8_t
> >  > > >  +rte_eventdev_find_free_device_index(void)
> >  > > >  +{
> >  > > >  +	uint8_t dev_id;
> >  > > >  +
> >  > > >  +	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
> >  > > >  +		if (rte_eventdevs[dev_id].attached ==
> >  > > >  +				RTE_EVENTDEV_DETACHED)
> >  > > >  +			return dev_id;
> >  > > >  +	}
> >  > > >  +	return RTE_EVENT_MAX_DEVS;
> >  > > >  +}
> >  > > >  +
> >  > > >  +struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *eventdev;
> >  > > >  +	uint8_t dev_id;
> >  > > >  +
> >  > > >  +	if (rte_eventdev_pmd_get_named_dev(name) != NULL) {
> >  > > >  +		EDEV_LOG_ERR("Event device with name %s already "
> >  > > >  +				"allocated!", name);
> >  > > >  +		return NULL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev_id = rte_eventdev_find_free_device_index();
> >  > > >  +	if (dev_id == RTE_EVENT_MAX_DEVS) {
> >  > > >  +		EDEV_LOG_ERR("Reached maximum number of event devices");
> >  > > >  +		return NULL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	eventdev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (eventdev->data == NULL) {
> >  > > >  +		struct rte_eventdev_data *eventdev_data = NULL;
> >  > > >  +
> >  > > >  +		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
> >  > > >  +				socket_id);
> >  > > >  +
> >  > > >  +		if (retval < 0 || eventdev_data == NULL)
> >  > > >  +			return NULL;
> >  > > >  +
> >  > > >  +		eventdev->data = eventdev_data;
> >  > > >  +
> >  > > >  +		snprintf(eventdev->data->name, RTE_EVENTDEV_NAME_MAX_LEN,
> >  > > >  +				"%s", name);
> >  > > >  +
> >  > > >  +		eventdev->data->dev_id = dev_id;
> >  > > >  +		eventdev->data->socket_id = socket_id;
> >  > > >  +		eventdev->data->dev_started = 0;
> >  > > >  +
> >  > > >  +		eventdev->attached = RTE_EVENTDEV_ATTACHED;
> >  > > >  +
> >  > > >  +		eventdev_globals.nb_devs++;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	return eventdev;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev)
> >  > > >  +{
> >  > > >  +	int ret;
> >  > > >  +
> >  > > >  +	if (eventdev == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	ret = rte_event_dev_close(eventdev->data->dev_id);
> >  > > >  +	if (ret < 0)
> >  > > >  +		return ret;
> >  > > >  +
> >  > > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
> >  > > >  +	eventdev_globals.nb_devs--;
> >  > > >  +	eventdev->data = NULL;
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
> >  > > >  +		int socket_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *eventdev;
> >  > > >  +
> >  > > >  +	/* Allocate device structure */
> >  > > >  +	eventdev = rte_eventdev_pmd_allocate(name, socket_id);
> >  > > >  +	if (eventdev == NULL)
> >  > > >  +		return NULL;
> >  > > >  +
> >  > > >  +	/* Allocate private device structure */
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  > > >  +		eventdev->data->dev_private =
> >  > > >  +				rte_zmalloc_socket("eventdev device private",
> >  > > >  +						dev_private_size,
> >  > > >  +						RTE_CACHE_LINE_SIZE,
> >  > > >  +						socket_id);
> >  > > >  +
> >  > > >  +		if (eventdev->data->dev_private == NULL)
> >  > > >  +			rte_panic("Cannot allocate memzone for private device"
> >  > > >  +					" data");
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	return eventdev;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
> >  > > >  +			struct rte_pci_device *pci_dev)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev_driver *eventdrv;
> >  > > >  +	struct rte_eventdev *eventdev;
> >  > > >  +
> >  > > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
> >  > > >  +
> >  > > >  +	int retval;
> >  > > >  +
> >  > > >  +	eventdrv = (struct rte_eventdev_driver *)pci_drv;
> >  > > >  +	if (eventdrv == NULL)
> >  > > >  +		return -ENODEV;
> >  > > >  +
> >  > > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
> >  > > >  +			sizeof(eventdev_name));
> >  > > >  +
> >  > > >  +	eventdev = rte_eventdev_pmd_allocate(eventdev_name,
> >  > > >  +			 pci_dev->device.numa_node);
> >  > > >  +	if (eventdev == NULL)
> >  > > >  +		return -ENOMEM;
> >  > > >  +
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  > > >  +		eventdev->data->dev_private =
> >  > > >  +				rte_zmalloc_socket(
> >  > > >  +						"eventdev private structure",
> >  > > >  +						eventdrv->dev_private_size,
> >  > > >  +						RTE_CACHE_LINE_SIZE,
> >  > > >  +						rte_socket_id());
> >  > > >  +
> >  > > >  +		if (eventdev->data->dev_private == NULL)
> >  > > >  +			rte_panic("Cannot allocate memzone for private "
> >  > > >  +					"device data");
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	eventdev->pci_dev = pci_dev;
> >  > > >  +	eventdev->driver = eventdrv;
> >  > > >  +
> >  > > >  +	/* Invoke PMD device initialization function */
> >  > > >  +	retval = (*eventdrv->eventdev_init)(eventdev);
> >  > > >  +	if (retval == 0)
> >  > > >  +		return 0;
> >  > > >  +
> >  > > >  +	EDEV_LOG_ERR("driver %s: event_dev_init(vendor_id=0x%x device_id=0x%x)"
> >  > > >  +			" failed", pci_drv->driver.name,
> >  > > >  +			(unsigned int) pci_dev->id.vendor_id,
> >  > > >  +			(unsigned int) pci_dev->id.device_id);
> >  > > >  +
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> >  > > >  +		rte_free(eventdev->data->dev_private);
> >  > > >  +
> >  > > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
> >  > > >  +	eventdev_globals.nb_devs--;
> >  > > >  +
> >  > > >  +	return -ENXIO;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev)
> >  > > >  +{
> >  > > >  +	const struct rte_eventdev_driver *eventdrv;
> >  > > >  +	struct rte_eventdev *eventdev;
> >  > > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
> >  > > >  +	int ret;
> >  > > >  +
> >  > > >  +	if (pci_dev == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
> >  > > >  +			sizeof(eventdev_name));
> >  > > >  +
> >  > > >  +	eventdev = rte_eventdev_pmd_get_named_dev(eventdev_name);
> >  > > >  +	if (eventdev == NULL)
> >  > > >  +		return -ENODEV;
> >  > > >  +
> >  > > >  +	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
> >  > > >  +	if (eventdrv == NULL)
> >  > > >  +		return -ENODEV;
> >  > > >  +
> >  > > >  +	/* Invoke PMD device uninit function */
> >  > > >  +	if (*eventdrv->eventdev_uninit) {
> >  > > >  +		ret = (*eventdrv->eventdev_uninit)(eventdev);
> >  > > >  +		if (ret)
> >  > > >  +			return ret;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Free event device */
> >  > > >  +	rte_eventdev_pmd_release(eventdev);
> >  > > >  +
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> >  > > >  +		rte_free(eventdev->data->dev_private);
> >  > > >  +
> >  > > >  +	eventdev->pci_dev = NULL;
> >  > > >  +	eventdev->driver = NULL;
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
> >  > > >  new file mode 100644
> >  > > >  index 0000000..e9d9b83
> >  > > >  --- /dev/null
> >  > > >  +++ b/lib/librte_eventdev/rte_eventdev_pmd.h
> >  > > >  @@ -0,0 +1,504 @@
> >  > > >  +/*
> >  > > >  + *
> >  > > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
> >  > > >  + *
> >  > > >  + *   Redistribution and use in source and binary forms, with or without
> >  > > >  + *   modification, are permitted provided that the following conditions
> >  > > >  + *   are met:
> >  > > >  + *
> >  > > >  + *     * Redistributions of source code must retain the above copyright
> >  > > >  + *       notice, this list of conditions and the following disclaimer.
> >  > > >  + *     * Redistributions in binary form must reproduce the above
> >  copyright
> >  > > >  + *       notice, this list of conditions and the following disclaimer in
> >  > > >  + *       the documentation and/or other materials provided with the
> >  > > >  + *       distribution.
> >  > > >  + *     * Neither the name of Cavium networks nor the names of its
> >  > > >  + *       contributors may be used to endorse or promote products derived
> >  > > >  + *       from this software without specific prior written permission.
> >  > > >  + *
> >  > > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> >  > > >  CONTRIBUTORS
> >  > > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> >  > > >  FITNESS FOR
> >  > > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> >  > > >  COPYRIGHT
> >  > > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> >  > > >  INCIDENTAL,
> >  > > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> >  LOSS
> >  > > >  OF USE,
> >  > > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
> >  CAUSED
> >  > > >  AND ON ANY
> >  > > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
> >  OR
> >  > > >  TORT
> >  > > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> >  OUT OF
> >  > > >  THE USE
> >  > > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> >  > > >  DAMAGE.
> >  > > >  + */
> >  > > >  +
> >  > > >  +#ifndef _RTE_EVENTDEV_PMD_H_
> >  > > >  +#define _RTE_EVENTDEV_PMD_H_
> >  > > >  +
> >  > > >  +/** @file
> >  > > >  + * RTE Event PMD APIs
> >  > > >  + *
> >  > > >  + * @note
> >  > > >  + * These APIs are for event PMDs only and user applications should not call
> >  > > >  + * them directly.
> >  > > >  + */
> >  > > >  +
> >  > > >  +#ifdef __cplusplus
> >  > > >  +extern "C" {
> >  > > >  +#endif
> >  > > >  +
> >  > > >  +#include <string.h>
> >  > > >  +
> >  > > >  +#include <rte_dev.h>
> >  > > >  +#include <rte_pci.h>
> >  > > >  +#include <rte_malloc.h>
> >  > > >  +#include <rte_log.h>
> >  > > >  +#include <rte_common.h>
> >  > > >  +
> >  > > >  +#include "rte_eventdev.h"
> >  > > >  +
> >  > > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> >  > > >  +#define RTE_PMD_DEBUG_TRACE(...) \
> >  > > >  +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
> >  > > >  +#else
> >  > > >  +#define RTE_PMD_DEBUG_TRACE(...)
> >  > > >  +#endif
> >  > > >  +
> >  > > >  +/* Logging Macros */
> >  > > >  +#define EDEV_LOG_ERR(fmt, args...) \
> >  > > >  +	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
> >  > > >  +			__func__, __LINE__, ## args)
> >  > > >  +
> >  > > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> >  > > >  +#define EDEV_LOG_DEBUG(fmt, args...) \
> >  > > >  +	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
> >  > > >  +			__func__, __LINE__, ## args)
> >  > > >  +#else
> >  > > >  +#define EDEV_LOG_DEBUG(fmt, args...) (void)0
> >  > > >  +#endif
> >  > > >  +
> >  > > >  +/* Macros to check for valid device */
> >  > > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
> >  > > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
> >  > > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
> >  > > >  +		return retval; \
> >  > > >  +	} \
> >  > > >  +} while (0)
> >  > > >  +
> >  > > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
> >  > > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
> >  > > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
> >  > > >  +		return; \
> >  > > >  +	} \
> >  > > >  +} while (0)
> >  > > >  +
> >  > > >  +#define RTE_EVENTDEV_DETACHED  (0)
> >  > > >  +#define RTE_EVENTDEV_ATTACHED  (1)
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Initialisation function of an event driver invoked for each matching
> >  > > >  + * event PCI device detected during the PCI probing phase.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   The dev pointer is the address of the *rte_eventdev* structure associated
> >  > > >  + *   with the matching device and which has been [automatically] allocated in
> >  > > >  + *   the *rte_event_devices* array.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - 0: Success, the device is properly initialised by the driver.
> >  > > >  + *        In particular, the driver MUST have set up the *dev_ops* pointer
> >  > > >  + *        of the *dev* structure.
> >  > > >  + *   - <0: Error code of the device initialisation failure.
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Finalisation function of a driver invoked for each matching
> >  > > >  + * PCI device detected during the PCI closing phase.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   The dev pointer is the address of the *rte_eventdev* structure associated
> >  > > >  + *   with the matching device and which has been [automatically] allocated in
> >  > > >  + *   the *rte_event_devices* array.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - 0: Success, the device is properly finalised by the driver.
> >  > > >  + *        In particular, the driver MUST free the *dev_ops* pointer
> >  > > >  + *        of the *dev* structure.
> >  > > >  + *   - <0: Error code of the device finalisation failure.
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * The structure associated with a PMD driver.
> >  > > >  + *
> >  > > >  + * Each driver acts as a PCI driver and is represented by a generic
> >  > > >  + * *event_driver* structure that holds:
> >  > > >  + *
> >  > > >  + * - An *rte_pci_driver* structure (which must be the first field).
> >  > > >  + *
> >  > > >  + * - The *eventdev_init* function invoked for each matching PCI device.
> >  > > >  + *
> >  > > >  + * - The size of the private data to allocate for each matching device.
> >  > > >  + */
> >  > > >  +struct rte_eventdev_driver {
> >  > > >  +	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
> >  > > >  +	unsigned int dev_private_size;	/**< Size of device private data. */
> >  > > >  +
> >  > > >  +	eventdev_init_t eventdev_init;	/**< Device init function. */
> >  > > >  +	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
> >  > > >  +};
> >  > > >  +
> >  > > >  +/** Global structure used for maintaining state of allocated event devices */
> >  > > >  +struct rte_eventdev_global {
> >  > > >  +	uint8_t nb_devs;	/**< Number of devices found */
> >  > > >  +	uint8_t max_devs;	/**< Max number of devices */
> >  > > >  +};
> >  > > >  +
> >  > > >  +extern struct rte_eventdev_global *rte_eventdev_globals;
> >  > > >  +/** Pointer to global event devices data structure. */
> >  > > >  +extern struct rte_eventdev *rte_eventdevs;
> >  > > >  +/** The pool of rte_eventdev structures. */
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Get the rte_eventdev structure device pointer for the named device.
> >  > > >  + *
> >  > > >  + * @param name
> >  > > >  + *   device name to select the device structure.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - The rte_eventdev structure pointer for the given device ID.
> >  > > >  + */
> >  > > >  +static inline struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_get_named_dev(const char *name)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	unsigned int i;
> >  > > >  +
> >  > > >  +	if (name == NULL)
> >  > > >  +		return NULL;
> >  > > >  +
> >  > > >  +	for (i = 0; i < rte_eventdev_globals->max_devs; i++) {
> >  > > >  +		dev = &rte_eventdevs[i];
> >  > > >  +		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
> >  > > >  +				(strcmp(dev->data->name, name) == 0))
> >  > > >  +			return dev;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	return NULL;
> >  > > >  +}
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Check whether the event device index refers to a valid, attached event device.
> >  > > >  + *
> >  > > >  + * @param dev_id
> >  > > >  + *   Event device index.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - If the device index is valid (1) or not (0).
> >  > > >  + */
> >  > > >  +static inline unsigned
> >  > > >  +rte_eventdev_pmd_is_valid_dev(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	if (dev_id >= rte_eventdev_globals->nb_devs)
> >  > > >  +		return 0;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	if (dev->attached != RTE_EVENTDEV_ATTACHED)
> >  > > >  +		return 0;
> >  > > >  +	else
> >  > > >  +		return 1;
> >  > > >  +}
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Definitions of all functions exported by a driver through the
> >  > > >  + * generic structure of type *event_dev_ops* supplied in the
> >  > > >  + * *rte_eventdev* structure associated with a device.
> >  > > >  + */
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Get device information of a device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param dev_info
> >  > > >  + *   Event device information structure
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
> >  > > >  +		struct rte_event_dev_info *dev_info);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Configure a device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_configure_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Start a configured device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Stop a configured device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Close a configured device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + * - 0 on success
> >  > > >  + * - (-EAGAIN) if the device is busy and cannot be closed
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Retrieve the default event queue configuration.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param queue_id
> >  > > >  + *   Event queue index
> >  > > >  + * @param[out] queue_conf
> >  > > >  + *   Event queue configuration structure
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_queue_default_conf_get_t)(struct rte_eventdev *dev,
> >  > > >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Setup an event queue.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param queue_id
> >  > > >  + *   Event queue index
> >  > > >  + * @param queue_conf
> >  > > >  + *   Event queue configuration structure
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success.
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
> >  > > >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Release memory resources allocated by given event queue.
> >  > > >  + *
> >  > > >  + * @param queue
> >  > > >  + *   Event queue pointer
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_queue_release_t)(void *queue);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Retrieve the default event port configuration.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param port_id
> >  > > >  + *   Event port index
> >  > > >  + * @param[out] port_conf
> >  > > >  + *   Event port configuration structure
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev *dev,
> >  > > >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Setup an event port.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param port_id
> >  > > >  + *   Event port index
> >  > > >  + * @param port_conf
> >  > > >  + *   Event port configuration structure
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success.
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
> >  > > >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Release memory resources allocated by given event port.
> >  > > >  + *
> >  > > >  + * @param port
> >  > > >  + *   Event port pointer
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_port_release_t)(void *port);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Link multiple source event queues to destination event port.
> >  > > >  + *
> >  > > >  + * @param port
> >  > > >  + *   Event port pointer
> >  > > >  + * @param link
> >  > > >  + *   An array of *nb_links* pointers to *rte_event_queue_link* structure
> >  > > >  + * @param nb_links
> >  > > >  + *   The number of links to establish
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success.
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_port_link_t)(void *port,
> >  > > >  +		struct rte_event_queue_link link[], uint16_t nb_links);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Unlink multiple source event queues from destination event port.
> >  > > >  + *
> >  > > >  + * @param port
> >  > > >  + *   Event port pointer
> >  > > >  + * @param queues
> >  > > >  + *   An array of *nb_unlinks* event queues to be unlinked from the event
> >  port.
> >  > > >  + * @param nb_unlinks
> >  > > >  + *   The number of unlinks to establish
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success.
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_port_unlink_t)(void *port,
> >  > > >  +		uint8_t queues[], uint16_t nb_unlinks);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param ns
> >  > > >  + *   Wait time in nanosecond
> >  > > >  + * @param[out] wait_ticks
> >  > > >  + *   Value for the *wait* parameter in rte_event_dequeue() function
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_dequeue_wait_time_t)(struct rte_eventdev *dev,
> >  > > >  +		uint64_t ns, uint64_t *wait_ticks);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Dump internal information
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param f
> >  > > >  + *   A pointer to a file for output
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
> >  > > >  +
> >  > > >  +/** Event device operations function pointer table */
> >  > > >  +struct rte_eventdev_ops {
> >  > > >  +	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
> >  > > >  +	eventdev_configure_t dev_configure;	/**< Configure device. */
> >  > > >  +	eventdev_start_t dev_start;		/**< Start device. */
> >  > > >  +	eventdev_stop_t dev_stop;		/**< Stop device. */
> >  > > >  +	eventdev_close_t dev_close;		/**< Close device. */
> >  > > >  +
> >  > > >  +	eventdev_queue_default_conf_get_t queue_def_conf;
> >  > > >  +	/**< Get default queue configuration. */
> >  > > >  +	eventdev_queue_setup_t queue_setup;
> >  > > >  +	/**< Set up an event queue. */
> >  > > >  +	eventdev_queue_release_t queue_release;
> >  > > >  +	/**< Release an event queue. */
> >  > > >  +
> >  > > >  +	eventdev_port_default_conf_get_t port_def_conf;
> >  > > >  +	/**< Get default port configuration. */
> >  > > >  +	eventdev_port_setup_t port_setup;
> >  > > >  +	/**< Set up an event port. */
> >  > > >  +	eventdev_port_release_t port_release;
> >  > > >  +	/**< Release an event port. */
> >  > > >  +
> >  > > >  +	eventdev_port_link_t port_link;
> >  > > >  +	/**< Link event queues to an event port. */
> >  > > >  +	eventdev_port_unlink_t port_unlink;
> >  > > >  +	/**< Unlink event queues from an event port. */
> >  > > >  +	eventdev_dequeue_wait_time_t wait_time;
> >  > > >  +	/**< Converts nanoseconds to *wait* value for rte_event_dequeue() */
> >  > > >  +	eventdev_dump_t dump;
> >  > > >  +	/* Dump internal information */
> >  > > >  +};
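
To make the expected wiring concrete: a PMD would typically expose a static
ops table along these lines and point dev->dev_ops at it from its
eventdev_init_t callback. The my_pmd_* names below are placeholders for
driver-private functions, not symbols defined by this patch.

	static const struct rte_eventdev_ops my_pmd_ops = {
		.dev_infos_get  = my_pmd_info_get,
		.dev_configure  = my_pmd_configure,
		.dev_start      = my_pmd_start,
		.dev_stop       = my_pmd_stop,
		.dev_close      = my_pmd_close,
		.queue_def_conf = my_pmd_queue_def_conf,
		.queue_setup    = my_pmd_queue_setup,
		.queue_release  = my_pmd_queue_release,
		.port_def_conf  = my_pmd_port_def_conf,
		.port_setup     = my_pmd_port_setup,
		.port_release   = my_pmd_port_release,
		.port_link      = my_pmd_port_link,
		.port_unlink    = my_pmd_port_unlink,
		.wait_time      = my_pmd_wait_time,
		.dump           = my_pmd_dump,
	};
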
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Allocates a new eventdev slot for an event device and returns the pointer
> >  > > >  + * to that slot for the driver to use.
> >  > > >  + *
> >  > > >  + * @param name
> >  > > >  + *   Unique identifier name for each device
> >  > > >  + * @param socket_id
> >  > > >  + *   Socket to allocate resources on.
> >  > > >  + * @return
> >  > > >  + *   - Slot in the rte_dev_devices array for a new device;
> >  > > >  + */
> >  > > >  +struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Release the specified eventdev device.
> >  > > >  + *
> >  > > >  + * @param eventdev
> >  > > >  + * The *eventdev* pointer is the address of the *rte_eventdev*
> >  structure.
> >  > > >  + * @return
> >  > > >  + *   - 0 on success, negative on error
> >  > > >  + */
> >  > > >  +int
> >  > > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Creates a new virtual event device and returns the pointer to that
> >  device.
> >  > > >  + *
> >  > > >  + * @param name
> >  > > >  + *   PMD type name
> >  > > >  + * @param dev_private_size
> >  > > >  + *   Size of event PMDs private data
> >  > > >  + * @param socket_id
> >  > > >  + *   Socket to allocate resources on.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - Eventdev pointer if device is successfully created.
> >  > > >  + *   - NULL if device cannot be created.
> >  > > >  + */
> >  > > >  +struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
> >  > > >  +		int socket_id);
> >  > > >  +
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Wrapper for use by pci drivers as a .probe function to attach to an event
> >  > > >  + * interface.
> >  > > >  + */
> >  > > >  +int rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
> >  > > >  +			    struct rte_pci_device *pci_dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Wrapper for use by pci drivers as a .remove function to detach an event
> >  > > >  + * interface.
> >  > > >  + */
> >  > > >  +int rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev);
> >  > > >  +
> >  > > >  +#ifdef __cplusplus
> >  > > >  +}
> >  > > >  +#endif
> >  > > >  +
> >  > > >  +#endif /* _RTE_EVENTDEV_PMD_H_ */
> >  > > >  diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
> >  > > >  new file mode 100644
> >  > > >  index 0000000..ef40aae
> >  > > >  --- /dev/null
> >  > > >  +++ b/lib/librte_eventdev/rte_eventdev_version.map
> >  > > >  @@ -0,0 +1,39 @@
> >  > > >  +DPDK_17.02 {
> >  > > >  +	global:
> >  > > >  +
> >  > > >  +	rte_eventdevs;
> >  > > >  +
> >  > > >  +	rte_event_dev_count;
> >  > > >  +	rte_event_dev_get_dev_id;
> >  > > >  +	rte_event_dev_socket_id;
> >  > > >  +	rte_event_dev_info_get;
> >  > > >  +	rte_event_dev_configure;
> >  > > >  +	rte_event_dev_start;
> >  > > >  +	rte_event_dev_stop;
> >  > > >  +	rte_event_dev_close;
> >  > > >  +	rte_event_dev_dump;
> >  > > >  +
> >  > > >  +	rte_event_port_default_conf_get;
> >  > > >  +	rte_event_port_setup;
> >  > > >  +	rte_event_port_dequeue_depth;
> >  > > >  +	rte_event_port_enqueue_depth;
> >  > > >  +	rte_event_port_count;
> >  > > >  +	rte_event_port_link;
> >  > > >  +	rte_event_port_unlink;
> >  > > >  +	rte_event_port_links_get;
> >  > > >  +
> >  > > >  +	rte_event_queue_default_conf_get;
> >  > > >  +	rte_event_queue_setup;
> >  > > >  +	rte_event_queue_count;
> >  > > >  +	rte_event_queue_priority;
> >  > > >  +
> >  > > >  +	rte_event_dequeue_wait_time;
> >  > > >  +
> >  > > >  +	rte_eventdev_pmd_allocate;
> >  > > >  +	rte_eventdev_pmd_release;
> >  > > >  +	rte_eventdev_pmd_vdev_init;
> >  > > >  +	rte_eventdev_pmd_pci_probe;
> >  > > >  +	rte_eventdev_pmd_pci_remove;
> >  > > >  +
> >  > > >  +	local: *;
> >  > > >  +};
> >  > > >  diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> >  > > >  index f75f0e2..716725a 100644
> >  > > >  --- a/mk/rte.app.mk
> >  > > >  +++ b/mk/rte.app.mk
> >  > > >  @@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
> >  > > >  +_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
> >  > > >  --
> >  > > >  2.5.5
> >  > >

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-22 18:19           ` Jerin Jacob
@ 2016-11-22 19:43             ` Eads, Gage
  2016-11-22 20:00               ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Eads, Gage @ 2016-11-22 19:43 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal



>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  Sent: Tuesday, November 22, 2016 12:19 PM
>  To: Eads, Gage <gage.eads@intel.com>
>  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
>  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
>  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>  
>  On Tue, Nov 22, 2016 at 03:15:52PM +0000, Eads, Gage wrote:
>  >
>  >
>  > >  -----Original Message-----
>  > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  > >  Sent: Monday, November 21, 2016 1:32 PM
>  > >  To: Eads, Gage <gage.eads@intel.com>
>  > >  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
>  > >  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
>  > >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound
>  APIs
>  > >
>  > >  On Tue, Nov 22, 2016 at 12:43:58AM +0530, Jerin Jacob wrote:
>  > >  > On Mon, Nov 21, 2016 at 05:45:51PM +0000, Eads, Gage wrote:
>  > >  > > Hi Jerin,
>  > >  > >
>  > >  > > I did a quick review and overall this implementation looks good. I
>  noticed
>  > >  just one issue in rte_event_queue_setup(): the check of
>  > >  nb_atomic_order_sequences is being applied to atomic-type queues, but
>  that
>  > >  field applies to ordered-type queues.
>  > >  >
>  > >  > Thanks Gage. I will fix that in v2.
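
Presumably the v2 fix just guards the nb_atomic_order_sequences bound with an
ordered-queue predicate instead of the atomic one. A rough sketch only, not the
actual v2 change, and assuming an RTE_EVENT_QUEUE_CFG_ORDERED_ONLY flag
symmetric to the RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY one used below:

static inline int
is_valid_ordered_queue_conf(const struct rte_event_queue_conf *queue_conf)
{
	if (queue_conf && (
		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
		((queue_conf->event_queue_cfg & RTE_EVENT_QUEUE_CFG_TYPE_MASK)
			== RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
		))
		return 1;
	else
		return 0;
}

/* ...and rte_event_queue_setup() would then check nb_atomic_order_sequences
 * only when is_valid_ordered_queue_conf(queue_conf) holds. */
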
>  > >  >
>  > >  > >
>  > >  > > One open issue I noticed is the "typical workflow" description starting in
>  > >  rte_eventdev.h:204 conflicts with the centralized software PMD that Harry
>  > >  posted last week. Specifically, that PMD expects a single core to call the
>  > >  schedule function. We could extend the documentation to account for this
>  > >  alternative style of scheduler invocation, or discuss ways to make the
>  software
>  > >  PMD work with the documented workflow. I prefer the former, but either
>  way I
>  > >  think we ought to expose the scheduler's expected usage to the user --
>  perhaps
>  > >  through an RTE_EVENT_DEV_CAP flag?
>  > >  >
>  > >  > I prefer former too, you can propose the documentation change required
>  for
>  > >  software PMD.
>  >
>  > Sure, proposal follows. The "typical workflow" isn't optimal, of course,
>  > since it puts a conditional in the fast path, but it demonstrates the idea
>  > simply.
>  >
>  > (line 204)
>  >  * An event driven based application has following typical workflow on
>  fastpath:
>  >  * \code{.c}
>  >  *      while (1) {
>  >  *
>  >  *              if (dev_info.event_dev_cap &
>  >  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
>  >  *                      rte_event_schedule(dev_id);
>  
>  Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
>  It can be an input to the application/subsystem to
>  launch separate core(s) for the schedule function.
>  But, I think, the "dev_info.event_dev_cap &
>  RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED"
>  check can be moved inside the implementation (to make better decisions
>  and avoid consuming cycles on HW based schedulers).

How would this check work? Wouldn't it prevent any core from running the software scheduler in the centralized case?
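
To make the question concrete, here is a minimal sketch of what "the check
inside the implementation" could look like for a software PMD (the function
and helper names below are invented purely for illustration):

/* hypothetical SW PMD schedule implementation */
static void
sw_event_schedule(struct rte_eventdev *dev)
{
	/* If the capability check lives here, then in the centralized case
	 * (flag unset) the call becomes a no-op for every caller, including
	 * the one core that is supposed to be driving the scheduler. */
	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
		sw_do_schedule_work(dev);	/* invented helper */
}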

>  
>  >  *
>  >  *              rte_event_dequeue(...);
>  >  *
>  >  *              (event processing)
>  >  *
>  >  *              rte_event_enqueue(...);
>  >  *      }
>  >  * \endcode
>  >  *
>  >  * The *schedule* operation is intended to do event scheduling, and the
>  >  * *dequeue* operation returns the scheduled events. An implementation
>  >  * is free to define the semantics between *schedule* and *dequeue*. For
>  >  * example, a system based on a hardware scheduler can define its
>  >  * rte_event_schedule() to be a NOOP, whereas a software scheduler can use
>  >  * the *schedule* operation to schedule events. The
>  >  * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates
>  whether
>  >  * rte_event_schedule() should be called by all cores or by a single (typically
>  >  * dedicated) core.
>  >
>  > (line 308)
>  > #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL << 2)
>  > /**< Event scheduling implementation is distributed and all cores must
>  execute
>  >  *  rte_event_schedule(). If unset, the implementation is centralized and
>  >  *  a single core must execute the schedule operation.
>  >  *
>  >  *  \see rte_event_schedule()
>  >  */
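
For completeness, a rough sketch (not part of the patch; error handling
omitted, and dev_id/sched_lcore_id are assumed application state) of how an
application might act on this flag outside the worker loop:

static volatile int sched_run = 1;	/* application-owned run flag */

static int
schedule_loop(void *arg)
{
	uint8_t dev_id = *(uint8_t *)arg;

	/* dedicated scheduler core for the centralized case */
	while (sched_run)
		rte_event_schedule(dev_id);
	return 0;
}

	/* at init, after configuring and starting the device: */
	struct rte_event_dev_info dev_info;

	rte_event_dev_info_get(dev_id, &dev_info);
	if (!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))
		rte_eal_remote_launch(schedule_loop, &dev_id, sched_lcore_id);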
>  >
>  > >  >
>  > >  > On same note, If software PMD based workflow need  a separate core(s)
>  for
>  > >  > schedule function then, Can we hide that from API specification and pass
>  an
>  > >  > argument to SW pmd to define the scheduling core(s)?
>  > >  >
>  > >  > Something like --vdev=eventsw0,schedule_cmask=0x2
>  >
>  > An API for controlling the scheduler coremask instead of (or perhaps in
>  addition to) the vdev argument would be good, to allow runtime control. I can
>  imagine apps that scale the number of cores based on load, and in doing so
>  may want to migrate the scheduler to a different core.
>  
>  Yes, an API for the number of scheduler cores looks OK. But if we are going
>  to have the service core approach then we just need to specify it in one
>  place, as the application will not be creating the service functions.
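
For the runtime case, even something as small as the prototype below would do
(the name is purely invented for discussion, nothing like it exists in this
series), with a --vdev=eventsw0,schedule_cmask=0x2 style argument covering the
startup case:

	int rte_event_sw_schedule_cmask_set(uint8_t dev_id, uint64_t cmask);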
>  
>  >
>  > >
>  > >  Just a thought,
>  > >
>  > >  Perhaps, we could introduce a generic "service" cores concept in DPDK to
>  > >  hide the requirement where the implementation needs a dedicated core to
>  > >  do certain work. I guess it would be useful for other NPU integration in DPDK.
>  > >
>  >
>  > That's an interesting idea. As you suggested in the other thread, this
>  > concept could be extended to the "producer" code in the example for
>  > configurations where the NIC requires software to feed into the eventdev.
>  > And to the other subsystems mentioned in your original PDF, crypto and timer.
>  
>  Yes. Producers should come under the service core category. I think that
>  enables us to have better NPU integration (same application code for
>  NPU vs non-NPU).
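
As a purely illustrative sketch (all names invented, and assuming such a
service-core abstraction existed), one "service" function could then cover
both halves on a dedicated core:

struct app_service_ctx {
	uint8_t dev_id;
	volatile int run;
};

static int
eventdev_service_loop(void *arg)
{
	struct app_service_ctx *ctx = arg;

	while (ctx->run) {
		/* producer half: feed NIC rx (or crypto/timer) into the eventdev */
		app_feed_eventdev(ctx->dev_id);	/* invented helper */
		/* scheduler half: drive a centralized SW scheduler */
		rte_event_schedule(ctx->dev_id);
	}
	return 0;
}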
>  
>  >
>  > >  >
>  > >  > >
>  > >  > > Thanks,
>  > >  > > Gage
>  > >  > >
>  > >  > > >  -----Original Message-----
>  > >  > > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  > >  > > >  Sent: Thursday, November 17, 2016 11:45 PM
>  > >  > > >  To: dev@dpdk.org
>  > >  > > >  Cc: Richardson, Bruce <bruce.richardson@intel.com>; Van Haaren,
>  Harry
>  > >  > > >  <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com; Eads,
>  Gage
>  > >  > > >  <gage.eads@intel.com>; Jerin Jacob
>  > >  <jerin.jacob@caviumnetworks.com>
>  > >  > > >  Subject: [dpdk-dev] [PATCH 2/4] eventdev: implement the
>  northbound
>  > >  APIs
>  > >  > > >
>  > >  > > >  This patch set defines the southbound driver interface
>  > >  > > >  and implements the common code required for northbound
>  > >  > > >  eventdev API interface.
>  > >  > > >
>  > >  > > >  Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>  > >  > > >  ---
>  > >  > > >   config/common_base                           |    6 +
>  > >  > > >   lib/Makefile                                 |    1 +
>  > >  > > >   lib/librte_eal/common/include/rte_log.h      |    1 +
>  > >  > > >   lib/librte_eventdev/Makefile                 |   57 ++
>  > >  > > >   lib/librte_eventdev/rte_eventdev.c           | 1211
>  > >  > > >  ++++++++++++++++++++++++++
>  > >  > > >   lib/librte_eventdev/rte_eventdev_pmd.h       |  504 +++++++++++
>  > >  > > >   lib/librte_eventdev/rte_eventdev_version.map |   39 +
>  > >  > > >   mk/rte.app.mk                                |    1 +
>  > >  > > >   8 files changed, 1820 insertions(+)
>  > >  > > >   create mode 100644 lib/librte_eventdev/Makefile
>  > >  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev.c
>  > >  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
>  > >  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
>  > >  > > >
>  > >  > > >  diff --git a/config/common_base b/config/common_base
>  > >  > > >  index 4bff83a..7a8814e 100644
>  > >  > > >  --- a/config/common_base
>  > >  > > >  +++ b/config/common_base
>  > >  > > >  @@ -411,6 +411,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
>  > >  > > >   CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
>  > >  > > >
>  > >  > > >   #
>  > >  > > >  +# Compile generic event device library
>  > >  > > >  +#
>  > >  > > >  +CONFIG_RTE_LIBRTE_EVENTDEV=y
>  > >  > > >  +CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
>  > >  > > >  +CONFIG_RTE_EVENT_MAX_DEVS=16
>  > >  > > >  +CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
>  > >  > > >   # Compile librte_ring
>  > >  > > >   #
>  > >  > > >   CONFIG_RTE_LIBRTE_RING=y
>  > >  > > >  diff --git a/lib/Makefile b/lib/Makefile
>  > >  > > >  index 990f23a..1a067bf 100644
>  > >  > > >  --- a/lib/Makefile
>  > >  > > >  +++ b/lib/Makefile
>  > >  > > >  @@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) +=
>  > >  librte_cfgfile
>  > >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
>  > >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
>  > >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
>  > >  > > >  +DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
>  > >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
>  > >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
>  > >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
>  > >  > > >  diff --git a/lib/librte_eal/common/include/rte_log.h
>  > >  > > >  b/lib/librte_eal/common/include/rte_log.h
>  > >  > > >  index 29f7d19..9a07d92 100644
>  > >  > > >  --- a/lib/librte_eal/common/include/rte_log.h
>  > >  > > >  +++ b/lib/librte_eal/common/include/rte_log.h
>  > >  > > >  @@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
>  > >  > > >   #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to
>  > >  pipeline. */
>  > >  > > >   #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to
>  mbuf.
>  > >  */
>  > >  > > >   #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to
>  > >  > > >  cryptodev. */
>  > >  > > >  +#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to
>  > >  eventdev.
>  > >  > > >  */
>  > >  > > >
>  > >  > > >   /* these log types can be used in an application */
>  > >  > > >   #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log
>  type
>  > >  1. */
>  > >  > > >  diff --git a/lib/librte_eventdev/Makefile
>  b/lib/librte_eventdev/Makefile
>  > >  > > >  new file mode 100644
>  > >  > > >  index 0000000..dac0663
>  > >  > > >  --- /dev/null
>  > >  > > >  +++ b/lib/librte_eventdev/Makefile
>  > >  > > >  @@ -0,0 +1,57 @@
>  > >  > > >  +#   BSD LICENSE
>  > >  > > >  +#
>  > >  > > >  +#   Copyright(c) 2016 Cavium networks. All rights reserved.
>  > >  > > >  +#
>  > >  > > >  +#   Redistribution and use in source and binary forms, with or
>  without
>  > >  > > >  +#   modification, are permitted provided that the following
>  conditions
>  > >  > > >  +#   are met:
>  > >  > > >  +#
>  > >  > > >  +#     * Redistributions of source code must retain the above
>  copyright
>  > >  > > >  +#       notice, this list of conditions and the following disclaimer.
>  > >  > > >  +#     * Redistributions in binary form must reproduce the above
>  copyright
>  > >  > > >  +#       notice, this list of conditions and the following disclaimer in
>  > >  > > >  +#       the documentation and/or other materials provided with the
>  > >  > > >  +#       distribution.
>  > >  > > >  +#     * Neither the name of Cavium networks nor the names of its
>  > >  > > >  +#       contributors may be used to endorse or promote products
>  derived
>  > >  > > >  +#       from this software without specific prior written permission.
>  > >  > > >  +#
>  > >  > > >  +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
>  > >  > > >  CONTRIBUTORS
>  > >  > > >  +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
>  INCLUDING,
>  > >  BUT
>  > >  > > >  NOT
>  > >  > > >  +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
>  AND
>  > >  > > >  FITNESS FOR
>  > >  > > >  +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
>  THE
>  > >  > > >  COPYRIGHT
>  > >  > > >  +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
>  INDIRECT,
>  > >  > > >  INCIDENTAL,
>  > >  > > >  +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
>  (INCLUDING,
>  > >  BUT
>  > >  > > >  NOT
>  > >  > > >  +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
>  SERVICES;
>  > >  LOSS
>  > >  > > >  OF USE,
>  > >  > > >  +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
>  > >  CAUSED AND
>  > >  > > >  ON ANY
>  > >  > > >  +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
>  LIABILITY,
>  > >  OR
>  > >  > > >  TORT
>  > >  > > >  +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
>  > >  OUT OF
>  > >  > > >  THE USE
>  > >  > > >  +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
>  SUCH
>  > >  > > >  DAMAGE.
>  > >  > > >  +
>  > >  > > >  +include $(RTE_SDK)/mk/rte.vars.mk
>  > >  > > >  +
>  > >  > > >  +# library name
>  > >  > > >  +LIB = librte_eventdev.a
>  > >  > > >  +
>  > >  > > >  +# library version
>  > >  > > >  +LIBABIVER := 1
>  > >  > > >  +
>  > >  > > >  +# build flags
>  > >  > > >  +CFLAGS += -O3
>  > >  > > >  +CFLAGS += $(WERROR_FLAGS)
>  > >  > > >  +
>  > >  > > >  +# library source files
>  > >  > > >  +SRCS-y += rte_eventdev.c
>  > >  > > >  +
>  > >  > > >  +# export include files
>  > >  > > >  +SYMLINK-y-include += rte_eventdev.h
>  > >  > > >  +SYMLINK-y-include += rte_eventdev_pmd.h
>  > >  > > >  +
>  > >  > > >  +# versioning export map
>  > >  > > >  +EXPORT_MAP := rte_eventdev_version.map
>  > >  > > >  +
>  > >  > > >  +# library dependencies
>  > >  > > >  +DEPDIRS-y += lib/librte_eal
>  > >  > > >  +DEPDIRS-y += lib/librte_mbuf
>  > >  > > >  +
>  > >  > > >  +include $(RTE_SDK)/mk/rte.lib.mk
>  > >  > > >  diff --git a/lib/librte_eventdev/rte_eventdev.c
>  > >  > > >  b/lib/librte_eventdev/rte_eventdev.c
>  > >  > > >  new file mode 100644
>  > >  > > >  index 0000000..17ce5c3
>  > >  > > >  --- /dev/null
>  > >  > > >  +++ b/lib/librte_eventdev/rte_eventdev.c
>  > >  > > >  @@ -0,0 +1,1211 @@
>  > >  > > >  +/*
>  > >  > > >  + *   BSD LICENSE
>  > >  > > >  + *
>  > >  > > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
>  > >  > > >  + *
>  > >  > > >  + *   Redistribution and use in source and binary forms, with or
>  without
>  > >  > > >  + *   modification, are permitted provided that the following
>  conditions
>  > >  > > >  + *   are met:
>  > >  > > >  + *
>  > >  > > >  + *     * Redistributions of source code must retain the above
>  copyright
>  > >  > > >  + *       notice, this list of conditions and the following disclaimer.
>  > >  > > >  + *     * Redistributions in binary form must reproduce the above
>  > >  copyright
>  > >  > > >  + *       notice, this list of conditions and the following disclaimer in
>  > >  > > >  + *       the documentation and/or other materials provided with the
>  > >  > > >  + *       distribution.
>  > >  > > >  + *     * Neither the name of Cavium networks nor the names of its
>  > >  > > >  + *       contributors may be used to endorse or promote products
>  derived
>  > >  > > >  + *       from this software without specific prior written permission.
>  > >  > > >  + *
>  > >  > > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
>  > >  > > >  CONTRIBUTORS
>  > >  > > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
>  INCLUDING,
>  > >  BUT
>  > >  > > >  NOT
>  > >  > > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
>  AND
>  > >  > > >  FITNESS FOR
>  > >  > > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
>  THE
>  > >  > > >  COPYRIGHT
>  > >  > > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
>  INDIRECT,
>  > >  > > >  INCIDENTAL,
>  > >  > > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
>  (INCLUDING,
>  > >  BUT
>  > >  > > >  NOT
>  > >  > > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
>  SERVICES;
>  > >  LOSS
>  > >  > > >  OF USE,
>  > >  > > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
>  > >  CAUSED
>  > >  > > >  AND ON ANY
>  > >  > > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
>  LIABILITY,
>  > >  OR
>  > >  > > >  TORT
>  > >  > > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
>  WAY
>  > >  OUT OF
>  > >  > > >  THE USE
>  > >  > > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
>  SUCH
>  > >  > > >  DAMAGE.
>  > >  > > >  + */
>  > >  > > >  +
>  > >  > > >  +#include <ctype.h>
>  > >  > > >  +#include <stdio.h>
>  > >  > > >  +#include <stdlib.h>
>  > >  > > >  +#include <string.h>
>  > >  > > >  +#include <stdarg.h>
>  > >  > > >  +#include <errno.h>
>  > >  > > >  +#include <stdint.h>
>  > >  > > >  +#include <inttypes.h>
>  > >  > > >  +#include <sys/types.h>
>  > >  > > >  +#include <sys/queue.h>
>  > >  > > >  +
>  > >  > > >  +#include <rte_byteorder.h>
>  > >  > > >  +#include <rte_log.h>
>  > >  > > >  +#include <rte_debug.h>
>  > >  > > >  +#include <rte_dev.h>
>  > >  > > >  +#include <rte_pci.h>
>  > >  > > >  +#include <rte_memory.h>
>  > >  > > >  +#include <rte_memcpy.h>
>  > >  > > >  +#include <rte_memzone.h>
>  > >  > > >  +#include <rte_eal.h>
>  > >  > > >  +#include <rte_per_lcore.h>
>  > >  > > >  +#include <rte_lcore.h>
>  > >  > > >  +#include <rte_atomic.h>
>  > >  > > >  +#include <rte_branch_prediction.h>
>  > >  > > >  +#include <rte_common.h>
>  > >  > > >  +#include <rte_malloc.h>
>  > >  > > >  +#include <rte_errno.h>
>  > >  > > >  +
>  > >  > > >  +#include "rte_eventdev.h"
>  > >  > > >  +#include "rte_eventdev_pmd.h"
>  > >  > > >  +
>  > >  > > >  +struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
>  > >  > > >  +
>  > >  > > >  +struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
>  > >  > > >  +
>  > >  > > >  +static struct rte_eventdev_global eventdev_globals = {
>  > >  > > >  +	.nb_devs		= 0
>  > >  > > >  +};
>  > >  > > >  +
>  > >  > > >  +struct rte_eventdev_global *rte_eventdev_globals =
>  > >  &eventdev_globals;
>  > >  > > >  +
>  > >  > > >  +/* Event dev north bound API implementation */
>  > >  > > >  +
>  > >  > > >  +uint8_t
>  > >  > > >  +rte_event_dev_count(void)
>  > >  > > >  +{
>  > >  > > >  +	return rte_eventdev_globals->nb_devs;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_dev_get_dev_id(const char *name)
>  > >  > > >  +{
>  > >  > > >  +	int i;
>  > >  > > >  +
>  > >  > > >  +	if (!name)
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
>  > >  > > >  +		if ((strcmp(rte_event_devices[i].data->name, name)
>  > >  > > >  +				== 0) &&
>  > >  > > >  +				(rte_event_devices[i].attached ==
>  > >  > > >  +
>  	RTE_EVENTDEV_ATTACHED))
>  > >  > > >  +			return i;
>  > >  > > >  +	return -ENODEV;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_dev_socket_id(uint8_t dev_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +
>  > >  > > >  +	return dev->data->socket_id;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info
>  > >  *dev_info)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +
>  > >  > > >  +	if (dev_info == NULL)
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
>  > >  > > >  +
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >dev_infos_get, -
>  > >  > > >  ENOTSUP);
>  > >  > > >  +	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
>  > >  > > >  +
>  > >  > > >  +	dev_info->pci_dev = dev->pci_dev;
>  > >  > > >  +	if (dev->driver)
>  > >  > > >  +		dev_info->driver_name = dev->driver-
>  >pci_drv.driver.name;
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +static inline int
>  > >  > > >  +rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t
>  > >  nb_queues)
>  > >  > > >  +{
>  > >  > > >  +	uint8_t old_nb_queues = dev->data->nb_queues;
>  > >  > > >  +	void **queues;
>  > >  > > >  +	uint8_t *queues_prio;
>  > >  > > >  +	unsigned int i;
>  > >  > > >  +
>  > >  > > >  +	EDEV_LOG_DEBUG("Setup %d queues on device %u",
>  nb_queues,
>  > >  > > >  +			 dev->data->dev_id);
>  > >  > > >  +
>  > >  > > >  +	/* First time configuration */
>  > >  > > >  +	if (dev->data->queues == NULL && nb_queues != 0) {
>  > >  > > >  +		dev->data->queues = rte_zmalloc_socket("eventdev-
>  >data-
>  > >  > > >  >queues",
>  > >  > > >  +				sizeof(dev->data->queues[0]) *
>  nb_queues,
>  > >  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
>  > >  > > >  >socket_id);
>  > >  > > >  +		if (dev->data->queues == NULL) {
>  > >  > > >  +			dev->data->nb_queues = 0;
>  > >  > > >  +			EDEV_LOG_ERR("failed to get memory for
>  queue meta
>  > >  > > >  data,"
>  > >  > > >  +					"nb_queues %u", nb_queues);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +		/* Allocate memory to store queue priority */
>  > >  > > >  +		dev->data->queues_prio = rte_zmalloc_socket(
>  > >  > > >  +				"eventdev->data->queues_prio",
>  > >  > > >  +				sizeof(dev->data->queues_prio[0]) *
>  > >  > > >  nb_queues,
>  > >  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
>  > >  > > >  >socket_id);
>  > >  > > >  +		if (dev->data->queues_prio == NULL) {
>  > >  > > >  +			dev->data->nb_queues = 0;
>  > >  > > >  +			EDEV_LOG_ERR("failed to get memory for
>  queue
>  > >  > > >  priority,"
>  > >  > > >  +					"nb_queues %u", nb_queues);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +
>  > >  > > >  +	} else if (dev->data->queues != NULL && nb_queues != 0) {/*
>  re-config
>  > >  > > >  */
>  > >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  > >  > > >  >queue_release, -ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +		queues = dev->data->queues;
>  > >  > > >  +		for (i = nb_queues; i < old_nb_queues; i++)
>  > >  > > >  +			(*dev->dev_ops->queue_release)(queues[i]);
>  > >  > > >  +
>  > >  > > >  +		queues = rte_realloc(queues, sizeof(queues[0]) *
>  nb_queues,
>  > >  > > >  +				RTE_CACHE_LINE_SIZE);
>  > >  > > >  +		if (queues == NULL) {
>  > >  > > >  +			EDEV_LOG_ERR("failed to realloc queue meta
>  data,"
>  > >  > > >  +						" nb_queues %u",
>  > >  > > >  nb_queues);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +		dev->data->queues = queues;
>  > >  > > >  +
>  > >  > > >  +		/* Re allocate memory to store queue priority */
>  > >  > > >  +		queues_prio = dev->data->queues_prio;
>  > >  > > >  +		queues_prio = rte_realloc(queues_prio,
>  > >  > > >  +				sizeof(queues_prio[0]) * nb_queues,
>  > >  > > >  +				RTE_CACHE_LINE_SIZE);
>  > >  > > >  +		if (queues_prio == NULL) {
>  > >  > > >  +			EDEV_LOG_ERR("failed to realloc queue
>  priority,"
>  > >  > > >  +						" nb_queues %u",
>  > >  > > >  nb_queues);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +		dev->data->queues_prio = queues_prio;
>  > >  > > >  +
>  > >  > > >  +		if (nb_queues > old_nb_queues) {
>  > >  > > >  +			uint8_t new_qs = nb_queues - old_nb_queues;
>  > >  > > >  +
>  > >  > > >  +			memset(queues + old_nb_queues, 0,
>  > >  > > >  +				sizeof(queues[0]) * new_qs);
>  > >  > > >  +			memset(queues_prio + old_nb_queues, 0,
>  > >  > > >  +				sizeof(queues_prio[0]) * new_qs);
>  > >  > > >  +		}
>  > >  > > >  +	} else if (dev->data->queues != NULL && nb_queues == 0) {
>  > >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  > >  > > >  >queue_release, -ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +		queues = dev->data->queues;
>  > >  > > >  +		for (i = nb_queues; i < old_nb_queues; i++)
>  > >  > > >  +			(*dev->dev_ops->queue_release)(queues[i]);
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	dev->data->nb_queues = nb_queues;
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +static inline int
>  > >  > > >  +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t
>  nb_ports)
>  > >  > > >  +{
>  > >  > > >  +	uint8_t old_nb_ports = dev->data->nb_ports;
>  > >  > > >  +	void **ports;
>  > >  > > >  +	uint16_t *links_map;
>  > >  > > >  +	uint8_t *ports_dequeue_depth;
>  > >  > > >  +	uint8_t *ports_enqueue_depth;
>  > >  > > >  +	unsigned int i;
>  > >  > > >  +
>  > >  > > >  +	EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
>  > >  > > >  +			 dev->data->dev_id);
>  > >  > > >  +
>  > >  > > >  +	/* First time configuration */
>  > >  > > >  +	if (dev->data->ports == NULL && nb_ports != 0) {
>  > >  > > >  +		dev->data->ports = rte_zmalloc_socket("eventdev-
>  >data-
>  > >  > > >  >ports",
>  > >  > > >  +				sizeof(dev->data->ports[0]) *
>  nb_ports,
>  > >  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
>  > >  > > >  >socket_id);
>  > >  > > >  +		if (dev->data->ports == NULL) {
>  > >  > > >  +			dev->data->nb_ports = 0;
>  > >  > > >  +			EDEV_LOG_ERR("failed to get memory for
>  port meta
>  > >  > > >  data,"
>  > >  > > >  +					"nb_ports %u", nb_ports);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +
>  > >  > > >  +		/* Allocate memory to store ports dequeue depth */
>  > >  > > >  +		dev->data->ports_dequeue_depth =
>  > >  > > >  +			rte_zmalloc_socket("eventdev-
>  > >  > > >  >ports_dequeue_depth",
>  > >  > > >  +			sizeof(dev->data->ports_dequeue_depth[0]) *
>  > >  > > >  nb_ports,
>  > >  > > >  +			RTE_CACHE_LINE_SIZE, dev->data-
>  >socket_id);
>  > >  > > >  +		if (dev->data->ports_dequeue_depth == NULL) {
>  > >  > > >  +			dev->data->nb_ports = 0;
>  > >  > > >  +			EDEV_LOG_ERR("failed to get memory for
>  port deq
>  > >  > > >  meta,"
>  > >  > > >  +					"nb_ports %u", nb_ports);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +
>  > >  > > >  +		/* Allocate memory to store ports enqueue depth */
>  > >  > > >  +		dev->data->ports_enqueue_depth =
>  > >  > > >  +			rte_zmalloc_socket("eventdev-
>  > >  > > >  >ports_enqueue_depth",
>  > >  > > >  +			sizeof(dev->data->ports_enqueue_depth[0]) *
>  > >  > > >  nb_ports,
>  > >  > > >  +			RTE_CACHE_LINE_SIZE, dev->data-
>  >socket_id);
>  > >  > > >  +		if (dev->data->ports_enqueue_depth == NULL) {
>  > >  > > >  +			dev->data->nb_ports = 0;
>  > >  > > >  +			EDEV_LOG_ERR("failed to get memory for
>  port enq
>  > >  > > >  meta,"
>  > >  > > >  +					"nb_ports %u", nb_ports);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +
>  > >  > > >  +		/* Allocate memory to store queue to port link
>  connection */
>  > >  > > >  +		dev->data->links_map =
>  > >  > > >  +			rte_zmalloc_socket("eventdev->links_map",
>  > >  > > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
>  > >  > > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
>  > >  > > >  +			RTE_CACHE_LINE_SIZE, dev->data-
>  >socket_id);
>  > >  > > >  +		if (dev->data->links_map == NULL) {
>  > >  > > >  +			dev->data->nb_ports = 0;
>  > >  > > >  +			EDEV_LOG_ERR("failed to get memory for
>  port_map
>  > >  > > >  area,"
>  > >  > > >  +					"nb_ports %u", nb_ports);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-
>  config */
>  > >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >port_release,
>  > >  > > >  -ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +		ports = dev->data->ports;
>  > >  > > >  +		ports_dequeue_depth = dev->data-
>  >ports_dequeue_depth;
>  > >  > > >  +		ports_enqueue_depth = dev->data-
>  >ports_enqueue_depth;
>  > >  > > >  +		links_map = dev->data->links_map;
>  > >  > > >  +
>  > >  > > >  +		for (i = nb_ports; i < old_nb_ports; i++)
>  > >  > > >  +			(*dev->dev_ops->port_release)(ports[i]);
>  > >  > > >  +
>  > >  > > >  +		/* Realloc memory for ports */
>  > >  > > >  +		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
>  > >  > > >  +				RTE_CACHE_LINE_SIZE);
>  > >  > > >  +		if (ports == NULL) {
>  > >  > > >  +			EDEV_LOG_ERR("failed to realloc port meta
>  data,"
>  > >  > > >  +						" nb_ports %u",
>  nb_ports);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +
>  > >  > > >  +		/* Realloc memory for ports_dequeue_depth */
>  > >  > > >  +		ports_dequeue_depth =
>  rte_realloc(ports_dequeue_depth,
>  > >  > > >  +			sizeof(ports_dequeue_depth[0]) * nb_ports,
>  > >  > > >  +			RTE_CACHE_LINE_SIZE);
>  > >  > > >  +		if (ports_dequeue_depth == NULL) {
>  > >  > > >  +			EDEV_LOG_ERR("failed to realloc port dequeue meta data,"
>  > >  > > >  +						" nb_ports %u",
>  nb_ports);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +
>  > >  > > >  +		/* Realloc memory for ports_enqueue_depth */
>  > >  > > >  +		ports_enqueue_depth =
>  rte_realloc(ports_enqueue_depth,
>  > >  > > >  +			sizeof(ports_enqueue_depth[0]) * nb_ports,
>  > >  > > >  +			RTE_CACHE_LINE_SIZE);
>  > >  > > >  +		if (ports_enqueue_depth == NULL) {
>  > >  > > >  +			EDEV_LOG_ERR("failed to realloc port
>  enqueue meta
>  > >  > > >  data,"
>  > >  > > >  +						" nb_ports %u",
>  nb_ports);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +
>  > >  > > >  +		/* Realloc memory to store queue to port link
>  connection */
>  > >  > > >  +		links_map = rte_realloc(links_map,
>  > >  > > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
>  > >  > > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
>  > >  > > >  +			RTE_CACHE_LINE_SIZE);
>  > >  > > >  +		if (links_map == NULL) {
>  > >  > > >  +			dev->data->nb_ports = 0;
>  > >  > > >  +			EDEV_LOG_ERR("failed to realloc mem for port_map area,"
>  > >  > > >  +					"nb_ports %u", nb_ports);
>  > >  > > >  +			return -(ENOMEM);
>  > >  > > >  +		}
>  > >  > > >  +
>  > >  > > >  +		if (nb_ports > old_nb_ports) {
>  > >  > > >  +			uint8_t new_ps = nb_ports - old_nb_ports;
>  > >  > > >  +
>  > >  > > >  +			memset(ports + old_nb_ports, 0,
>  > >  > > >  +				sizeof(ports[0]) * new_ps);
>  > >  > > >  +			memset(ports_dequeue_depth +
>  old_nb_ports, 0,
>  > >  > > >  +				sizeof(ports_dequeue_depth[0]) *
>  new_ps);
>  > >  > > >  +			memset(ports_enqueue_depth +
>  old_nb_ports, 0,
>  > >  > > >  +				sizeof(ports_enqueue_depth[0]) *
>  new_ps);
>  > >  > > >  +			memset(links_map +
>  > >  > > >  +				(old_nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV),
>  > >  > > >  +				0, sizeof(links_map[0]) *
>  > >  > > >  +				RTE_EVENT_MAX_QUEUES_PER_DEV * new_ps);
>  > >  > > >  +		}
>  > >  > > >  +
>  > >  > > >  +		dev->data->ports = ports;
>  > >  > > >  +		dev->data->ports_dequeue_depth =
>  ports_dequeue_depth;
>  > >  > > >  +		dev->data->ports_enqueue_depth =
>  ports_enqueue_depth;
>  > >  > > >  +		dev->data->links_map = links_map;
>  > >  > > >  +	} else if (dev->data->ports != NULL && nb_ports == 0) {
>  > >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >port_release,
>  > >  > > >  -ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +		ports = dev->data->ports;
>  > >  > > >  +		for (i = nb_ports; i < old_nb_ports; i++)
>  > >  > > >  +			(*dev->dev_ops->port_release)(ports[i]);
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	dev->data->nb_ports = nb_ports;
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_dev_configure(uint8_t dev_id, struct
>  rte_event_dev_config
>  > >  > > >  *dev_conf)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +	struct rte_event_dev_info info;
>  > >  > > >  +	int diag;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >dev_infos_get, -
>  > >  > > >  ENOTSUP);
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >dev_configure, -
>  > >  > > >  ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +	if (dev->data->dev_started) {
>  > >  > > >  +		EDEV_LOG_ERR(
>  > >  > > >  +		    "device %d must be stopped to allow configuration",
>  > >  > > >  dev_id);
>  > >  > > >  +		return -EBUSY;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	if (dev_conf == NULL)
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	(*dev->dev_ops->dev_infos_get)(dev, &info);
>  > >  > > >  +
>  > >  > > >  +	/* Check dequeue_wait_ns value is in limit */
>  > >  > > >  +	if (!dev_conf->event_dev_cfg &
>  > >  > > >  RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT) {
>  > >  > > >  +		if (dev_conf->dequeue_wait_ns <
>  info.min_dequeue_wait_ns
>  > >  > > >  ||
>  > >  > > >  +			dev_conf->dequeue_wait_ns >
>  > >  > > >  info.max_dequeue_wait_ns) {
>  > >  > > >  +			EDEV_LOG_ERR("dev%d invalid
>  dequeue_wait_ns=%d"
>  > >  > > >  +			" min_dequeue_wait_ns=%d
>  > >  > > >  max_dequeue_wait_ns=%d",
>  > >  > > >  +			dev_id, dev_conf->dequeue_wait_ns,
>  > >  > > >  +			info.min_dequeue_wait_ns,
>  > >  > > >  +			info.max_dequeue_wait_ns);
>  > >  > > >  +			return -EINVAL;
>  > >  > > >  +		}
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check nb_events_limit is in limit */
>  > >  > > >  +	if (dev_conf->nb_events_limit > info.max_num_events) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_events_limit=%d >
>  > >  > > >  max_num_events=%d",
>  > >  > > >  +		dev_id, dev_conf->nb_events_limit,
>  info.max_num_events);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check nb_event_queues is in limit */
>  > >  > > >  +	if (!dev_conf->nb_event_queues) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_queues cannot be
>  zero",
>  > >  > > >  dev_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +	if (dev_conf->nb_event_queues > info.max_event_queues) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_queues=%d >
>  > >  > > >  max_event_queues=%d",
>  > >  > > >  +		dev_id, dev_conf->nb_event_queues,
>  > >  > > >  info.max_event_queues);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check nb_event_ports is in limit */
>  > >  > > >  +	if (!dev_conf->nb_event_ports) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_ports cannot be
>  zero",
>  > >  > > >  dev_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +	if (dev_conf->nb_event_ports > info.max_event_ports) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_ports=%d >
>  > >  > > >  max_event_ports= %d",
>  > >  > > >  +		dev_id, dev_conf->nb_event_ports,
>  info.max_event_ports);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check nb_event_queue_flows is in limit */
>  > >  > > >  +	if (!dev_conf->nb_event_queue_flows) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_flows cannot be zero",
>  dev_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +	if (dev_conf->nb_event_queue_flows >
>  info.max_event_queue_flows)
>  > >  > > >  {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_flows=%x >
>  max_flows=%x",
>  > >  > > >  +		dev_id, dev_conf->nb_event_queue_flows,
>  > >  > > >  +		info.max_event_queue_flows);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check nb_event_port_dequeue_depth is in limit */
>  > >  > > >  +	if (!dev_conf->nb_event_port_dequeue_depth) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be
>  zero",
>  > >  > > >  dev_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +	if (dev_conf->nb_event_port_dequeue_depth >
>  > >  > > >  +			 info.max_event_port_dequeue_depth) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth=%d >
>  > >  > > >  max_dequeue_depth=%d",
>  > >  > > >  +		dev_id, dev_conf->nb_event_port_dequeue_depth,
>  > >  > > >  +		info.max_event_port_dequeue_depth);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check nb_event_port_enqueue_depth is in limit */
>  > >  > > >  +	if (!dev_conf->nb_event_port_enqueue_depth) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be
>  zero",
>  > >  > > >  dev_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +	if (dev_conf->nb_event_port_enqueue_depth >
>  > >  > > >  +			 info.max_event_port_enqueue_depth) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth=%d >
>  > >  > > >  max_enqueue_depth=%d",
>  > >  > > >  +		dev_id, dev_conf->nb_event_port_enqueue_depth,
>  > >  > > >  +		info.max_event_port_enqueue_depth);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Copy the dev_conf parameter into the dev structure */
>  > >  > > >  +	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data-
>  > >  > > >  >dev_conf));
>  > >  > > >  +
>  > >  > > >  +	/* Setup new number of queues and reconfigure device. */
>  > >  > > >  +	diag = rte_event_dev_queue_config(dev, dev_conf-
>  > >  > > >  >nb_event_queues);
>  > >  > > >  +	if (diag != 0) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_queue_config
>  = %d",
>  > >  > > >  +				dev_id, diag);
>  > >  > > >  +		return diag;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Setup new number of ports and reconfigure device. */
>  > >  > > >  +	diag = rte_event_dev_port_config(dev, dev_conf-
>  >nb_event_ports);
>  > >  > > >  +	if (diag != 0) {
>  > >  > > >  +		rte_event_dev_queue_config(dev, 0);
>  > >  > > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_port_config =
>  %d",
>  > >  > > >  +				dev_id, diag);
>  > >  > > >  +		return diag;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Configure the device */
>  > >  > > >  +	diag = (*dev->dev_ops->dev_configure)(dev);
>  > >  > > >  +	if (diag != 0) {
>  > >  > > >  +		EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id,
>  diag);
>  > >  > > >  +		rte_event_dev_queue_config(dev, 0);
>  > >  > > >  +		rte_event_dev_port_config(dev, 0);
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	dev->data->event_dev_cap = info.event_dev_cap;
>  > >  > > >  +	return diag;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +static inline int
>  > >  > > >  +is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
>  > >  > > >  +{
>  > >  > > >  +	if (queue_id < dev->data->nb_queues && queue_id <
>  > >  > > >  +
>  	RTE_EVENT_MAX_QUEUES_PER_DEV)
>  > >  > > >  +		return 1;
>  > >  > > >  +	else
>  > >  > > >  +		return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t
>  queue_id,
>  > >  > > >  +				 struct rte_event_queue_conf
>  *queue_conf)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +
>  > >  > > >  +	if (queue_conf == NULL)
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	if (!is_valid_queue(dev, queue_id)) {
>  > >  > > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8,
>  queue_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >queue_def_conf, -
>  > >  > > >  ENOTSUP);
>  > >  > > >  +	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
>  > >  > > >  +	(*dev->dev_ops->queue_def_conf)(dev, queue_id,
>  queue_conf);
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +static inline int
>  > >  > > >  +is_valid_atomic_queue_conf(struct rte_event_queue_conf
>  > >  *queue_conf)
>  > >  > > >  +{
>  > >  > > >  +	if (queue_conf && (
>  > >  > > >  +		((queue_conf->event_queue_cfg &
>  > >  > > >  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
>  > >  > > >  +			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
>  > >  > > >  +		((queue_conf->event_queue_cfg &
>  > >  > > >  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
>  > >  > > >  +			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
>  > >  > > >  +		))
>  > >  > > >  +		return 1;
>  > >  > > >  +	else
>  > >  > > >  +		return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>  > >  > > >  +		      struct rte_event_queue_conf *queue_conf)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +	struct rte_event_queue_conf def_conf;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +
>  > >  > > >  +	if (!is_valid_queue(dev, queue_id)) {
>  > >  > > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8,
>  queue_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check nb_atomic_flows limit */
>  > >  > > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
>  > >  > > >  +		if (queue_conf->nb_atomic_flows == 0 ||
>  > >  > > >  +		    queue_conf->nb_atomic_flows >
>  > >  > > >  +			dev->data->dev_conf.nb_event_queue_flows)
>  {
>  > >  > > >  +			EDEV_LOG_ERR(
>  > >  > > >  +		"dev%d queue%d Invalid nb_atomic_flows=%d
>  > >  > > >  max_flows=%d",
>  > >  > > >  +			dev_id, queue_id, queue_conf-
>  >nb_atomic_flows,
>  > >  > > >  +			dev->data-
>  >dev_conf.nb_event_queue_flows);
>  > >  > > >  +			return -EINVAL;
>  > >  > > >  +		}
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check nb_atomic_order_sequences limit */
>  > >  > > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
>  > >  > > >  +		if (queue_conf->nb_atomic_order_sequences == 0 ||
>  > >  > > >  +		    queue_conf->nb_atomic_order_sequences >
>  > >  > > >  +			dev->data->dev_conf.nb_event_queue_flows)
>  {
>  > >  > > >  +			EDEV_LOG_ERR(
>  > >  > > >  +		"dev%d queue%d Invalid nb_atomic_order_seq=%d
>  > >  > > >  max_flows=%d",
>  > >  > > >  +			dev_id, queue_id, queue_conf-
>  > >  > > >  >nb_atomic_order_sequences,
>  > >  > > >  +			dev->data-
>  >dev_conf.nb_event_queue_flows);
>  > >  > > >  +			return -EINVAL;
>  > >  > > >  +		}
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	if (dev->data->dev_started) {
>  > >  > > >  +		EDEV_LOG_ERR(
>  > >  > > >  +		    "device %d must be stopped to allow queue setup",
>  dev_id);
>  > >  > > >  +		return -EBUSY;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup,
>  -
>  > >  > > >  ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +	if (queue_conf == NULL) {
>  > >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  > >  > > >  >queue_def_conf,
>  > >  > > >  +					-ENOTSUP);
>  > >  > > >  +		(*dev->dev_ops->queue_def_conf)(dev, queue_id,
>  &def_conf);
>  > >  > > >  +		def_conf.event_queue_cfg =
>  > >  > > >  RTE_EVENT_QUEUE_CFG_DEFAULT;
>  > >  > > >  +		queue_conf = &def_conf;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	dev->data->queues_prio[queue_id] = queue_conf->priority;
>  > >  > > >  +	return (*dev->dev_ops->queue_setup)(dev, queue_id,
>  queue_conf);
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +uint8_t
>  > >  > > >  +rte_event_queue_count(uint8_t dev_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	return dev->data->nb_queues;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +uint8_t
>  > >  > > >  +rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	if (dev->data->event_dev_cap &
>  RTE_EVENT_DEV_CAP_QUEUE_QOS)
>  > >  > > >  +		return dev->data->queues_prio[queue_id];
>  > >  > > >  +	else
>  > >  > > >  +		return RTE_EVENT_QUEUE_PRIORITY_NORMAL;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +static inline int
>  > >  > > >  +is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
>  > >  > > >  +{
>  > >  > > >  +	if (port_id < dev->data->nb_ports)
>  > >  > > >  +		return 1;
>  > >  > > >  +	else
>  > >  > > >  +		return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
>  > >  > > >  +				 struct rte_event_port_conf
>  *port_conf)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +
>  > >  > > >  +	if (port_conf == NULL)
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  >port_def_conf, -
>  > >  > > >  ENOTSUP);
>  > >  > > >  +	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
>  > >  > > >  +	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>  > >  > > >  +		      struct rte_event_port_conf *port_conf)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +	struct rte_event_port_conf def_conf;
>  > >  > > >  +	int diag;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +
>  > >  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check new_event_threshold limit */
>  > >  > > >  +	if ((port_conf && !port_conf->new_event_threshold) ||
>  > >  > > >  +			(port_conf && port_conf-
>  >new_event_threshold >
>  > >  > > >  +				 dev->data-
>  >dev_conf.nb_events_limit)) {
>  > >  > > >  +		EDEV_LOG_ERR(
>  > >  > > >  +		   "dev%d port%d Invalid event_threshold=%d
>  > >  > > >  nb_events_limit=%d",
>  > >  > > >  +			dev_id, port_id, port_conf-
>  >new_event_threshold,
>  > >  > > >  +			dev->data->dev_conf.nb_events_limit);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check dequeue_depth limit */
>  > >  > > >  +	if ((port_conf && !port_conf->dequeue_depth) ||
>  > >  > > >  +			(port_conf && port_conf->dequeue_depth >
>  > >  > > >  +		dev->data-
>  >dev_conf.nb_event_port_dequeue_depth)) {
>  > >  > > >  +		EDEV_LOG_ERR(
>  > >  > > >  +		   "dev%d port%d Invalid dequeue depth=%d
>  > >  > > >  max_dequeue_depth=%d",
>  > >  > > >  +			dev_id, port_id, port_conf->dequeue_depth,
>  > >  > > >  +			dev->data-
>  > >  > > >  >dev_conf.nb_event_port_dequeue_depth);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Check enqueue_depth limit */
>  > >  > > >  +	if ((port_conf && !port_conf->enqueue_depth) ||
>  > >  > > >  +			(port_conf && port_conf->enqueue_depth >
>  > >  > > >  +		dev->data-
>  >dev_conf.nb_event_port_enqueue_depth)) {
>  > >  > > >  +		EDEV_LOG_ERR(
>  > >  > > >  +		   "dev%d port%d Invalid enqueue depth=%d
>  > >  > > >  max_enqueue_depth=%d",
>  > >  > > >  +			dev_id, port_id, port_conf->enqueue_depth,
>  > >  > > >  +			dev->data-
>  > >  > > >  >dev_conf.nb_event_port_enqueue_depth);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	if (dev->data->dev_started) {
>  > >  > > >  +		EDEV_LOG_ERR(
>  > >  > > >  +		    "device %d must be stopped to allow port setup",
>  dev_id);
>  > >  > > >  +		return -EBUSY;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -
>  > >  > > >  ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +	if (port_conf == NULL) {
>  > >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
>  > >  > > >  >port_def_conf,
>  > >  > > >  +					-ENOTSUP);
>  > >  > > >  +		(*dev->dev_ops->port_def_conf)(dev, port_id,
>  &def_conf);
>  > >  > > >  +		port_conf = &def_conf;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	dev->data->ports_dequeue_depth[port_id] =
>  > >  > > >  +			port_conf->dequeue_depth;
>  > >  > > >  +	dev->data->ports_enqueue_depth[port_id] =
>  > >  > > >  +			port_conf->enqueue_depth;
>  > >  > > >  +
>  > >  > > >  +	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
>  > >  > > >  +
>  > >  > > >  +	/* Unlink all the queues from this port(default state after
>  setup) */
>  > >  > > >  +	if (!diag)
>  > >  > > >  +		diag = rte_event_port_unlink(dev_id, port_id, NULL,
>  0);
>  > >  > > >  +
>  > >  > > >  +	if (diag < 0)
>  > >  > > >  +		return diag;
>  > >  > > >  +
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +uint8_t
>  > >  > > >  +rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	return dev->data->ports_dequeue_depth[port_id];
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +uint8_t
>  > >  > > >  +rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	return dev->data->ports_enqueue_depth[port_id];
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +uint8_t
>  > >  > > >  +rte_event_port_count(uint8_t dev_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	return dev->data->nb_ports;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
>  > >  > > >  +		    struct rte_event_queue_link link[], uint16_t nb_links)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +	struct rte_event_queue_link
>  > >  > > >  all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
>  > >  > > >  +	uint16_t *links_map;
>  > >  > > >  +	int i, diag;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -
>  ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	if (link == NULL) {
>  > >  > > >  +		for (i = 0; i < dev->data->nb_queues; i++) {
>  > >  > > >  +			all_queues[i].queue_id = i;
>  > >  > > >  +			all_queues[i].priority =
>  > >  > > >  +
>  > >  > > >  	RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
>  > >  > > >  +		}
>  > >  > > >  +		link = all_queues;
>  > >  > > >  +		nb_links = dev->data->nb_queues;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	for (i = 0; i < nb_links; i++)
>  > >  > > >  +		if (link[i].queue_id >=
>  RTE_EVENT_MAX_QUEUES_PER_DEV)
>  > >  > > >  +			return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id],
>  link,
>  > >  > > >  +						 nb_links);
>  > >  > > >  +	if (diag < 0)
>  > >  > > >  +		return diag;
>  > >  > > >  +
>  > >  > > >  +	links_map = dev->data->links_map;
>  > >  > > >  +	/* Point links_map to this port specific area */
>  > >  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>  > >  > > >  +	for (i = 0; i < diag; i++)
>  > >  > > >  +		links_map[link[i].queue_id] = (uint8_t)link[i].priority;
>  > >  > > >  +
>  > >  > > >  +	return diag;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
>  > >  > > >  +		      uint8_t queues[], uint16_t nb_unlinks)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
>  > >  > > >  +	int i, diag;
>  > >  > > >  +	uint16_t *links_map;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -
>  > >  > > >  ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	if (queues == NULL) {
>  > >  > > >  +		for (i = 0; i < dev->data->nb_queues; i++)
>  > >  > > >  +			all_queues[i] = i;
>  > >  > > >  +		queues = all_queues;
>  > >  > > >  +		nb_unlinks = dev->data->nb_queues;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	for (i = 0; i < nb_unlinks; i++)
>  > >  > > >  +		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
>  > >  > > >  +			return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	diag = (*dev->dev_ops->port_unlink)(dev->data-
>  >ports[port_id],
>  > >  > > >  queues,
>  > >  > > >  +					nb_unlinks);
>  > >  > > >  +
>  > >  > > >  +	if (diag < 0)
>  > >  > > >  +		return diag;
>  > >  > > >  +
>  > >  > > >  +	links_map = dev->data->links_map;
>  > >  > > >  +	/* Point links_map to this port specific area */
>  > >  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>  > >  > > >  +	for (i = 0; i < diag; i++)
>  > >  > > >  +		links_map[queues[i]] =
>  > >  > > >  EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
>  > >  > > >  +
>  > >  > > >  +	return diag;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
>  > >  > > >  +			struct rte_event_queue_link link[])
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +	uint16_t *links_map;
>  > >  > > >  +	int i, count = 0;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	if (!is_valid_port(dev, port_id)) {
>  > >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	links_map = dev->data->links_map;
>  > >  > > >  +	/* Point links_map to this port specific area */
>  > >  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
>  > >  > > >  +	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
>  > >  > > >  +		if (links_map[i] !=
>  > >  > > >  EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
>  > >  > > >  +			link[count].queue_id = i;
>  > >  > > >  +			link[count].priority = (uint8_t)links_map[i];
>  > >  > > >  +			++count;
>  > >  > > >  +		}
>  > >  > > >  +	}
>  > >  > > >  +	return count;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_dequeue_wait_time(uint8_t dev_id, uint64_t ns, uint64_t
>  > >  > > >  *wait_ticks)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -
>  EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->wait_time, -
>  > >  > > >  ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +	if (wait_ticks == NULL)
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	(*dev->dev_ops->wait_time)(dev, ns, wait_ticks);
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_dev_dump(uint8_t dev_id, FILE *f)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +	(*dev->dev_ops->dump)(dev, f);
>  > >  > > >  +	return 0;
>  > >  > > >  +
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_dev_start(uint8_t dev_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +	int diag;
>  > >  > > >  +
>  > >  > > >  +	EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +	if (dev->data->dev_started != 0) {
>  > >  > > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
>  > >  > > >  +			dev_id);
>  > >  > > >  +		return 0;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	diag = (*dev->dev_ops->dev_start)(dev);
>  > >  > > >  +	if (diag == 0)
>  > >  > > >  +		dev->data->dev_started = 1;
>  > >  > > >  +	else
>  > >  > > >  +		return diag;
>  > >  > > >  +
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +void
>  > >  > > >  +rte_event_dev_stop(uint8_t dev_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
>  > >  > > >  +
>  > >  > > >  +	if (dev->data->dev_started == 0) {
>  > >  > > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
>  > >  > > >  +			dev_id);
>  > >  > > >  +		return;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	dev->data->dev_started = 0;
>  > >  > > >  +	(*dev->dev_ops->dev_stop)(dev);
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_event_dev_close(uint8_t dev_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
>  > >  > > >  +
>  > >  > > >  +	/* Device must be stopped before it can be closed */
>  > >  > > >  +	if (dev->data->dev_started == 1) {
>  > >  > > >  +		EDEV_LOG_ERR("Device %u must be stopped before closing",
>  > >  > > >  +				dev_id);
>  > >  > > >  +		return -EBUSY;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	return (*dev->dev_ops->dev_close)(dev);
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +static inline int
>  > >  > > >  +rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
>  > >  > > >  +		int socket_id)
>  > >  > > >  +{
>  > >  > > >  +	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
>  > >  > > >  +	const struct rte_memzone *mz;
>  > >  > > >  +	int n;
>  > >  > > >  +
>  > >  > > >  +	/* Generate memzone name */
>  > >  > > >  +	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
>  > >  > > >  +	if (n >= (int)sizeof(mz_name))
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>  > >  > > >  +		mz = rte_memzone_reserve(mz_name,
>  > >  > > >  +				sizeof(struct rte_eventdev_data),
>  > >  > > >  +				socket_id, 0);
>  > >  > > >  +	} else
>  > >  > > >  +		mz = rte_memzone_lookup(mz_name);
>  > >  > > >  +
>  > >  > > >  +	if (mz == NULL)
>  > >  > > >  +		return -ENOMEM;
>  > >  > > >  +
>  > >  > > >  +	*data = mz->addr;
>  > >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>  > >  > > >  +		memset(*data, 0, sizeof(struct rte_eventdev_data));
>  > >  > > >  +
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +static uint8_t
>  > >  > > >  +rte_eventdev_find_free_device_index(void)
>  > >  > > >  +{
>  > >  > > >  +	uint8_t dev_id;
>  > >  > > >  +
>  > >  > > >  +	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
>  > >  > > >  +		if (rte_eventdevs[dev_id].attached ==
>  > >  > > >  +				RTE_EVENTDEV_DETACHED)
>  > >  > > >  +			return dev_id;
>  > >  > > >  +	}
>  > >  > > >  +	return RTE_EVENT_MAX_DEVS;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +struct rte_eventdev *
>  > >  > > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *eventdev;
>  > >  > > >  +	uint8_t dev_id;
>  > >  > > >  +
>  > >  > > >  +	if (rte_eventdev_pmd_get_named_dev(name) != NULL) {
>  > >  > > >  +		EDEV_LOG_ERR("Event device with name %s already "
>  > >  > > >  +				"allocated!", name);
>  > >  > > >  +		return NULL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	dev_id = rte_eventdev_find_free_device_index();
>  > >  > > >  +	if (dev_id == RTE_EVENT_MAX_DEVS) {
>  > >  > > >  +		EDEV_LOG_ERR("Reached maximum number of event devices");
>  > >  > > >  +		return NULL;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	eventdev = &rte_eventdevs[dev_id];
>  > >  > > >  +
>  > >  > > >  +	if (eventdev->data == NULL) {
>  > >  > > >  +		struct rte_eventdev_data *eventdev_data = NULL;
>  > >  > > >  +
>  > >  > > >  +		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
>  > >  > > >  +				socket_id);
>  > >  > > >  +
>  > >  > > >  +		if (retval < 0 || eventdev_data == NULL)
>  > >  > > >  +			return NULL;
>  > >  > > >  +
>  > >  > > >  +		eventdev->data = eventdev_data;
>  > >  > > >  +
>  > >  > > >  +		snprintf(eventdev->data->name, RTE_EVENTDEV_NAME_MAX_LEN,
>  > >  > > >  +				"%s", name);
>  > >  > > >  +
>  > >  > > >  +		eventdev->data->dev_id = dev_id;
>  > >  > > >  +		eventdev->data->socket_id = socket_id;
>  > >  > > >  +		eventdev->data->dev_started = 0;
>  > >  > > >  +
>  > >  > > >  +		eventdev->attached = RTE_EVENTDEV_ATTACHED;
>  > >  > > >  +
>  > >  > > >  +		eventdev_globals.nb_devs++;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	return eventdev;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev)
>  > >  > > >  +{
>  > >  > > >  +	int ret;
>  > >  > > >  +
>  > >  > > >  +	if (eventdev == NULL)
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	ret = rte_event_dev_close(eventdev->data->dev_id);
>  > >  > > >  +	if (ret < 0)
>  > >  > > >  +		return ret;
>  > >  > > >  +
>  > >  > > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
>  > >  > > >  +	eventdev_globals.nb_devs--;
>  > >  > > >  +	eventdev->data = NULL;
>  > >  > > >  +
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +struct rte_eventdev *
>  > >  > > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
>  > >  > > >  +		int socket_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *eventdev;
>  > >  > > >  +
>  > >  > > >  +	/* Allocate device structure */
>  > >  > > >  +	eventdev = rte_eventdev_pmd_allocate(name, socket_id);
>  > >  > > >  +	if (eventdev == NULL)
>  > >  > > >  +		return NULL;
>  > >  > > >  +
>  > >  > > >  +	/* Allocate private device structure */
>  > >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>  > >  > > >  +		eventdev->data->dev_private =
>  > >  > > >  +				rte_zmalloc_socket("eventdev device private",
>  > >  > > >  +						dev_private_size,
>  > >  > > >  +						RTE_CACHE_LINE_SIZE,
>  > >  > > >  +						socket_id);
>  > >  > > >  +
>  > >  > > >  +		if (eventdev->data->dev_private == NULL)
>  > >  > > >  +			rte_panic("Cannot allocate memzone for private device"
>  > >  > > >  +					" data");
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	return eventdev;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>  > >  > > >  +			struct rte_pci_device *pci_dev)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev_driver *eventdrv;
>  > >  > > >  +	struct rte_eventdev *eventdev;
>  > >  > > >  +
>  > >  > > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
>  > >  > > >  +
>  > >  > > >  +	int retval;
>  > >  > > >  +
>  > >  > > >  +	eventdrv = (struct rte_eventdev_driver *)pci_drv;
>  > >  > > >  +	if (eventdrv == NULL)
>  > >  > > >  +		return -ENODEV;
>  > >  > > >  +
>  > >  > > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
>  > >  > > >  +			sizeof(eventdev_name));
>  > >  > > >  +
>  > >  > > >  +	eventdev = rte_eventdev_pmd_allocate(eventdev_name,
>  > >  > > >  +			 pci_dev->device.numa_node);
>  > >  > > >  +	if (eventdev == NULL)
>  > >  > > >  +		return -ENOMEM;
>  > >  > > >  +
>  > >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
>  > >  > > >  +		eventdev->data->dev_private =
>  > >  > > >  +				rte_zmalloc_socket(
>  > >  > > >  +						"eventdev private structure",
>  > >  > > >  +						eventdrv->dev_private_size,
>  > >  > > >  +						RTE_CACHE_LINE_SIZE,
>  > >  > > >  +						rte_socket_id());
>  > >  > > >  +
>  > >  > > >  +		if (eventdev->data->dev_private == NULL)
>  > >  > > >  +			rte_panic("Cannot allocate memzone for private "
>  > >  > > >  +					"device data");
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	eventdev->pci_dev = pci_dev;
>  > >  > > >  +	eventdev->driver = eventdrv;
>  > >  > > >  +
>  > >  > > >  +	/* Invoke PMD device initialization function */
>  > >  > > >  +	retval = (*eventdrv->eventdev_init)(eventdev);
>  > >  > > >  +	if (retval == 0)
>  > >  > > >  +		return 0;
>  > >  > > >  +
>  > >  > > >  +	EDEV_LOG_ERR("driver %s: event_dev_init(vendor_id=0x%x device_id=0x%x)"
>  > >  > > >  +			" failed", pci_drv->driver.name,
>  > >  > > >  +			(unsigned int) pci_dev->id.vendor_id,
>  > >  > > >  +			(unsigned int) pci_dev->id.device_id);
>  > >  > > >  +
>  > >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>  > >  > > >  +		rte_free(eventdev->data->dev_private);
>  > >  > > >  +
>  > >  > > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
>  > >  > > >  +	eventdev_globals.nb_devs--;
>  > >  > > >  +
>  > >  > > >  +	return -ENXIO;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +int
>  > >  > > >  +rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev)
>  > >  > > >  +{
>  > >  > > >  +	const struct rte_eventdev_driver *eventdrv;
>  > >  > > >  +	struct rte_eventdev *eventdev;
>  > >  > > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
>  > >  > > >  +	int ret;
>  > >  > > >  +
>  > >  > > >  +	if (pci_dev == NULL)
>  > >  > > >  +		return -EINVAL;
>  > >  > > >  +
>  > >  > > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
>  > >  > > >  +			sizeof(eventdev_name));
>  > >  > > >  +
>  > >  > > >  +	eventdev = rte_eventdev_pmd_get_named_dev(eventdev_name);
>  > >  > > >  +	if (eventdev == NULL)
>  > >  > > >  +		return -ENODEV;
>  > >  > > >  +
>  > >  > > >  +	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
>  > >  > > >  +	if (eventdrv == NULL)
>  > >  > > >  +		return -ENODEV;
>  > >  > > >  +
>  > >  > > >  +	/* Invoke PMD device uninit function */
>  > >  > > >  +	if (*eventdrv->eventdev_uninit) {
>  > >  > > >  +		ret = (*eventdrv->eventdev_uninit)(eventdev);
>  > >  > > >  +		if (ret)
>  > >  > > >  +			return ret;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	/* Free event device */
>  > >  > > >  +	rte_eventdev_pmd_release(eventdev);
>  > >  > > >  +
>  > >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
>  > >  > > >  +		rte_free(eventdev->data->dev_private);
>  > >  > > >  +
>  > >  > > >  +	eventdev->pci_dev = NULL;
>  > >  > > >  +	eventdev->driver = NULL;
>  > >  > > >  +
>  > >  > > >  +	return 0;
>  > >  > > >  +}
>  > >  > > >  diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h
>  > >  > > >  b/lib/librte_eventdev/rte_eventdev_pmd.h
>  > >  > > >  new file mode 100644
>  > >  > > >  index 0000000..e9d9b83
>  > >  > > >  --- /dev/null
>  > >  > > >  +++ b/lib/librte_eventdev/rte_eventdev_pmd.h
>  > >  > > >  @@ -0,0 +1,504 @@
>  > >  > > >  +/*
>  > >  > > >  + *
>  > >  > > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
>  > >  > > >  + *
>  > >  > > >  + *   Redistribution and use in source and binary forms, with or
>  without
>  > >  > > >  + *   modification, are permitted provided that the following
>  conditions
>  > >  > > >  + *   are met:
>  > >  > > >  + *
>  > >  > > >  + *     * Redistributions of source code must retain the above
>  copyright
>  > >  > > >  + *       notice, this list of conditions and the following disclaimer.
>  > >  > > >  + *     * Redistributions in binary form must reproduce the above
>  > >  copyright
>  > >  > > >  + *       notice, this list of conditions and the following disclaimer in
>  > >  > > >  + *       the documentation and/or other materials provided with the
>  > >  > > >  + *       distribution.
>  > >  > > >  + *     * Neither the name of Cavium networks nor the names of its
>  > >  > > >  + *       contributors may be used to endorse or promote products
>  derived
>  > >  > > >  + *       from this software without specific prior written permission.
>  > >  > > >  + *
>  > >  > > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
>  > >  > > >  CONTRIBUTORS
>  > >  > > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
>  INCLUDING,
>  > >  BUT
>  > >  > > >  NOT
>  > >  > > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
>  AND
>  > >  > > >  FITNESS FOR
>  > >  > > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
>  THE
>  > >  > > >  COPYRIGHT
>  > >  > > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
>  INDIRECT,
>  > >  > > >  INCIDENTAL,
>  > >  > > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
>  (INCLUDING,
>  > >  BUT
>  > >  > > >  NOT
>  > >  > > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
>  SERVICES;
>  > >  LOSS
>  > >  > > >  OF USE,
>  > >  > > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
>  > >  CAUSED
>  > >  > > >  AND ON ANY
>  > >  > > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
>  LIABILITY,
>  > >  OR
>  > >  > > >  TORT
>  > >  > > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
>  WAY
>  > >  OUT OF
>  > >  > > >  THE USE
>  > >  > > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
>  SUCH
>  > >  > > >  DAMAGE.
>  > >  > > >  + */
>  > >  > > >  +
>  > >  > > >  +#ifndef _RTE_EVENTDEV_PMD_H_
>  > >  > > >  +#define _RTE_EVENTDEV_PMD_H_
>  > >  > > >  +
>  > >  > > >  +/** @file
>  > >  > > >  + * RTE Event PMD APIs
>  > >  > > >  + *
>  > >  > > >  + * @note
>  > >  > > >  + * These API are from event PMD only and user applications should
>  not
>  > >  call
>  > >  > > >  + * them directly.
>  > >  > > >  + */
>  > >  > > >  +
>  > >  > > >  +#ifdef __cplusplus
>  > >  > > >  +extern "C" {
>  > >  > > >  +#endif
>  > >  > > >  +
>  > >  > > >  +#include <string.h>
>  > >  > > >  +
>  > >  > > >  +#include <rte_dev.h>
>  > >  > > >  +#include <rte_pci.h>
>  > >  > > >  +#include <rte_malloc.h>
>  > >  > > >  +#include <rte_log.h>
>  > >  > > >  +#include <rte_common.h>
>  > >  > > >  +
>  > >  > > >  +#include "rte_eventdev.h"
>  > >  > > >  +
>  > >  > > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>  > >  > > >  +#define RTE_PMD_DEBUG_TRACE(...) \
>  > >  > > >  +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
>  > >  > > >  +#else
>  > >  > > >  +#define RTE_PMD_DEBUG_TRACE(...)
>  > >  > > >  +#endif
>  > >  > > >  +
>  > >  > > >  +/* Logging Macros */
>  > >  > > >  +#define EDEV_LOG_ERR(fmt, args...) \
>  > >  > > >  +	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
>  > >  > > >  +			__func__, __LINE__, ## args)
>  > >  > > >  +
>  > >  > > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>  > >  > > >  +#define EDEV_LOG_DEBUG(fmt, args...) \
>  > >  > > >  +	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
>  > >  > > >  +			__func__, __LINE__, ## args)
>  > >  > > >  +#else
>  > >  > > >  +#define EDEV_LOG_DEBUG(fmt, args...) (void)0
>  > >  > > >  +#endif
>  > >  > > >  +
>  > >  > > >  +/* Macros to check for valid device */
>  > >  > > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
>  > >  > > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
>  > >  > > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
>  > >  > > >  +		return retval; \
>  > >  > > >  +	} \
>  > >  > > >  +} while (0)
>  > >  > > >  +
>  > >  > > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
>  > >  > > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
>  > >  > > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
>  > >  > > >  +		return; \
>  > >  > > >  +	} \
>  > >  > > >  +} while (0)
>  > >  > > >  +
>  > >  > > >  +#define RTE_EVENTDEV_DETACHED  (0)
>  > >  > > >  +#define RTE_EVENTDEV_ATTACHED  (1)
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Initialisation function of a event driver invoked for each matching
>  > >  > > >  + * event PCI device detected during the PCI probing phase.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   The dev pointer is the address of the *rte_eventdev* structure
>  > >  associated
>  > >  > > >  + *   with the matching device and which has been [automatically]
>  > >  allocated in
>  > >  > > >  + *   the *rte_event_devices* array.
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   - 0: Success, the device is properly initialised by the driver.
>  > >  > > >  + *        In particular, the driver MUST have set up the *dev_ops*
>  pointer
>  > >  > > >  + *        of the *dev* structure.
>  > >  > > >  + *   - <0: Error code of the device initialisation failure.
>  > >  > > >  + */
>  > >  > > >  +typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Finalisation function of a driver invoked for each matching
>  > >  > > >  + * PCI device detected during the PCI closing phase.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   The dev pointer is the address of the *rte_eventdev* structure
>  > >  associated
>  > >  > > >  + *   with the matching device and which	has been
>  [automatically]
>  > >  allocated in
>  > >  > > >  + *   the *rte_event_devices* array.
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   - 0: Success, the device is properly finalised by the driver.
>  > >  > > >  + *        In particular, the driver MUST free the *dev_ops* pointer
>  > >  > > >  + *        of the *dev* structure.
>  > >  > > >  + *   - <0: Error code of the device initialisation failure.
>  > >  > > >  + */
>  > >  > > >  +typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * The structure associated with a PMD driver.
>  > >  > > >  + *
>  > >  > > >  + * Each driver acts as a PCI driver and is represented by a generic
>  > >  > > >  + * *event_driver* structure that holds:
>  > >  > > >  + *
>  > >  > > >  + * - An *rte_pci_driver* structure (which must be the first field).
>  > >  > > >  + *
>  > >  > > >  + * - The *eventdev_init* function invoked for each matching PCI
>  device.
>  > >  > > >  + *
>  > >  > > >  + * - The size of the private data to allocate for each matching
>  device.
>  > >  > > >  + */
>  > >  > > >  +struct rte_eventdev_driver {
>  > >  > > >  +	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
>  > >  > > >  +	unsigned int dev_private_size;	/**< Size of device private data. */
>  > >  > > >  +
>  > >  > > >  +	eventdev_init_t eventdev_init;	/**< Device init function. */
>  > >  > > >  +	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
>  > >  > > >  +};
>  > >  > > >  +
>  > >  > > >  +/** Global structure used for maintaining state of allocated event
>  > >  devices */
>  > >  > > >  +struct rte_eventdev_global {
>  > >  > > >  +	uint8_t nb_devs;	/**< Number of devices found */
>  > >  > > >  +	uint8_t max_devs;	/**< Max number of devices */
>  > >  > > >  +};
>  > >  > > >  +
>  > >  > > >  +extern struct rte_eventdev_global *rte_eventdev_globals;
>  > >  > > >  +/** Pointer to global event devices data structure. */
>  > >  > > >  +extern struct rte_eventdev *rte_eventdevs;
>  > >  > > >  +/** The pool of rte_eventdev structures. */
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Get the rte_eventdev structure device pointer for the named
>  device.
>  > >  > > >  + *
>  > >  > > >  + * @param name
>  > >  > > >  + *   device name to select the device structure.
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   - The rte_eventdev structure pointer for the given device ID.
>  > >  > > >  + */
>  > >  > > >  +static inline struct rte_eventdev *
>  > >  > > >  +rte_eventdev_pmd_get_named_dev(const char *name)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +	unsigned int i;
>  > >  > > >  +
>  > >  > > >  +	if (name == NULL)
>  > >  > > >  +		return NULL;
>  > >  > > >  +
>  > >  > > >  +	for (i = 0, dev = &rte_eventdevs[i];
>  > >  > > >  +			i < rte_eventdev_globals->max_devs; i++) {
>  > >  > > >  +		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
>  > >  > > >  +				(strcmp(dev->data->name, name) ==
>  0))
>  > >  > > >  +			return dev;
>  > >  > > >  +	}
>  > >  > > >  +
>  > >  > > >  +	return NULL;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Validate if the event device index is valid attached event device.
>  > >  > > >  + *
>  > >  > > >  + * @param dev_id
>  > >  > > >  + *   Event device index.
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   - If the device index is valid (1) or not (0).
>  > >  > > >  + */
>  > >  > > >  +static inline unsigned
>  > >  > > >  +rte_eventdev_pmd_is_valid_dev(uint8_t dev_id)
>  > >  > > >  +{
>  > >  > > >  +	struct rte_eventdev *dev;
>  > >  > > >  +
>  > >  > > >  +	if (dev_id >= rte_eventdev_globals->nb_devs)
>  > >  > > >  +		return 0;
>  > >  > > >  +
>  > >  > > >  +	dev = &rte_eventdevs[dev_id];
>  > >  > > >  +	if (dev->attached != RTE_EVENTDEV_ATTACHED)
>  > >  > > >  +		return 0;
>  > >  > > >  +	else
>  > >  > > >  +		return 1;
>  > >  > > >  +}
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Definitions of all functions exported by a driver through the
>  > >  > > >  + * the generic structure of type *event_dev_ops* supplied in the
>  > >  > > >  + * *rte_eventdev* structure associated with a device.
>  > >  > > >  + */
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Get device information of a device.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + * @param dev_info
>  > >  > > >  + *   Event device information structure
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   Returns 0 on success
>  > >  > > >  + */
>  > >  > > >  +typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
>  > >  > > >  +		struct rte_event_dev_info *dev_info);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Configure a device.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   Returns 0 on success
>  > >  > > >  + */
>  > >  > > >  +typedef int (*eventdev_configure_t)(struct rte_eventdev *dev);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Start a configured device.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   Returns 0 on success
>  > >  > > >  + */
>  > >  > > >  +typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Stop a configured device.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + */
>  > >  > > >  +typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Close a configured device.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + * - 0 on success
>  > >  > > >  + * - (-EAGAIN) if can't close as device is busy
>  > >  > > >  + */
>  > >  > > >  +typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Retrieve the default event queue configuration.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + * @param queue_id
>  > >  > > >  + *   Event queue index
>  > >  > > >  + * @param[out] queue_conf
>  > >  > > >  + *   Event queue configuration structure
>  > >  > > >  + *
>  > >  > > >  + */
>  > >  > > >  +typedef void (*eventdev_queue_default_conf_get_t)(struct
>  > >  rte_eventdev
>  > >  > > >  *dev,
>  > >  > > >  +		uint8_t queue_id, struct rte_event_queue_conf
>  *queue_conf);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Setup an event queue.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + * @param queue_id
>  > >  > > >  + *   Event queue index
>  > >  > > >  + * @param queue_conf
>  > >  > > >  + *   Event queue configuration structure
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   Returns 0 on success.
>  > >  > > >  + */
>  > >  > > >  +typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
>  > >  > > >  +		uint8_t queue_id, struct rte_event_queue_conf
>  *queue_conf);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Release memory resources allocated by given event queue.
>  > >  > > >  + *
>  > >  > > >  + * @param queue
>  > >  > > >  + *   Event queue pointer
>  > >  > > >  + *
>  > >  > > >  + */
>  > >  > > >  +typedef void (*eventdev_queue_release_t)(void *queue);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Retrieve the default event port configuration.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + * @param port_id
>  > >  > > >  + *   Event port index
>  > >  > > >  + * @param[out] port_conf
>  > >  > > >  + *   Event port configuration structure
>  > >  > > >  + *
>  > >  > > >  + */
>  > >  > > >  +typedef void (*eventdev_port_default_conf_get_t)(struct
>  rte_eventdev
>  > >  *dev,
>  > >  > > >  +		uint8_t port_id, struct rte_event_port_conf
>  *port_conf);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Setup an event port.
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + * @param port_id
>  > >  > > >  + *   Event port index
>  > >  > > >  + * @param port_conf
>  > >  > > >  + *   Event port configuration structure
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   Returns 0 on success.
>  > >  > > >  + */
>  > >  > > >  +typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
>  > >  > > >  +		uint8_t port_id, struct rte_event_port_conf
>  *port_conf);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Release memory resources allocated by given event port.
>  > >  > > >  + *
>  > >  > > >  + * @param port
>  > >  > > >  + *   Event port pointer
>  > >  > > >  + *
>  > >  > > >  + */
>  > >  > > >  +typedef void (*eventdev_port_release_t)(void *port);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Link multiple source event queues to destination event port.
>  > >  > > >  + *
>  > >  > > >  + * @param port
>  > >  > > >  + *   Event port pointer
>  > >  > > >  + * @param link
>  > >  > > >  + *   An array of *nb_links* pointers to *rte_event_queue_link*
>  structure
>  > >  > > >  + * @param nb_links
>  > >  > > >  + *   The number of links to establish
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   Returns 0 on success.
>  > >  > > >  + *
>  > >  > > >  + */
>  > >  > > >  +typedef int (*eventdev_port_link_t)(void *port,
>  > >  > > >  +		struct rte_event_queue_link link[], uint16_t nb_links);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Unlink multiple source event queues from destination event port.
>  > >  > > >  + *
>  > >  > > >  + * @param port
>  > >  > > >  + *   Event port pointer
>  > >  > > >  + * @param queues
>  > >  > > >  + *   An array of *nb_unlinks* event queues to be unlinked from the
>  event
>  > >  port.
>  > >  > > >  + * @param nb_unlinks
>  > >  > > >  + *   The number of unlinks to establish
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   Returns 0 on success.
>  > >  > > >  + *
>  > >  > > >  + */
>  > >  > > >  +typedef int (*eventdev_port_unlink_t)(void *port,
>  > >  > > >  +		uint8_t queues[], uint16_t nb_unlinks);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Converts nanoseconds to *wait* value for rte_event_dequeue()
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + * @param ns
>  > >  > > >  + *   Wait time in nanosecond
>  > >  > > >  + * @param[out] wait_ticks
>  > >  > > >  + *   Value for the *wait* parameter in rte_event_dequeue() function
>  > >  > > >  + *
>  > >  > > >  + */
>  > >  > > >  +typedef void (*eventdev_dequeue_wait_time_t)(struct
>  rte_eventdev
>  > >  *dev,
>  > >  > > >  +		uint64_t ns, uint64_t *wait_ticks);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Dump internal information
>  > >  > > >  + *
>  > >  > > >  + * @param dev
>  > >  > > >  + *   Event device pointer
>  > >  > > >  + * @param f
>  > >  > > >  + *   A pointer to a file for output
>  > >  > > >  + *
>  > >  > > >  + */
>  > >  > > >  +typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE
>  *f);
>  > >  > > >  +
>  > >  > > >  +/** Event device operations function pointer table */
>  > >  > > >  +struct rte_eventdev_ops {
>  > >  > > >  +	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
>  > >  > > >  +	eventdev_configure_t dev_configure;	/**< Configure device. */
>  > >  > > >  +	eventdev_start_t dev_start;		/**< Start device. */
>  > >  > > >  +	eventdev_stop_t dev_stop;		/**< Stop device. */
>  > >  > > >  +	eventdev_close_t dev_close;		/**< Close device. */
>  > >  > > >  +
>  > >  > > >  +	eventdev_queue_default_conf_get_t queue_def_conf;
>  > >  > > >  +	/**< Get default queue configuration. */
>  > >  > > >  +	eventdev_queue_setup_t queue_setup;
>  > >  > > >  +	/**< Set up an event queue. */
>  > >  > > >  +	eventdev_queue_release_t queue_release;
>  > >  > > >  +	/**< Release an event queue. */
>  > >  > > >  +
>  > >  > > >  +	eventdev_port_default_conf_get_t port_def_conf;
>  > >  > > >  +	/**< Get default port configuration. */
>  > >  > > >  +	eventdev_port_setup_t port_setup;
>  > >  > > >  +	/**< Set up an event port. */
>  > >  > > >  +	eventdev_port_release_t port_release;
>  > >  > > >  +	/**< Release an event port. */
>  > >  > > >  +
>  > >  > > >  +	eventdev_port_link_t port_link;
>  > >  > > >  +	/**< Link event queues to an event port. */
>  > >  > > >  +	eventdev_port_unlink_t port_unlink;
>  > >  > > >  +	/**< Unlink event queues from an event port. */
>  > >  > > >  +	eventdev_dequeue_wait_time_t wait_time;
>  > >  > > >  +	/**< Converts nanoseconds to *wait* value for rte_event_dequeue() */
>  > >  > > >  +	eventdev_dump_t dump;
>  > >  > > >  +	/* Dump internal information */
>  > >  > > >  +};
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Allocates a new eventdev slot for an event device and returns the
>  > >  pointer
>  > >  > > >  + * to that slot for the driver to use.
>  > >  > > >  + *
>  > >  > > >  + * @param name
>  > >  > > >  + *   Unique identifier name for each device
>  > >  > > >  + * @param socket_id
>  > >  > > >  + *   Socket to allocate resources on.
>  > >  > > >  + * @return
>  > >  > > >  + *   - Slot in the rte_dev_devices array for a new device;
>  > >  > > >  + */
>  > >  > > >  +struct rte_eventdev *
>  > >  > > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Release the specified eventdev device.
>  > >  > > >  + *
>  > >  > > >  + * @param eventdev
>  > >  > > >  + * The *eventdev* pointer is the address of the *rte_eventdev*
>  > >  structure.
>  > >  > > >  + * @return
>  > >  > > >  + *   - 0 on success, negative on error
>  > >  > > >  + */
>  > >  > > >  +int
>  > >  > > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Creates a new virtual event device and returns the pointer to that
>  > >  device.
>  > >  > > >  + *
>  > >  > > >  + * @param name
>  > >  > > >  + *   PMD type name
>  > >  > > >  + * @param dev_private_size
>  > >  > > >  + *   Size of event PMDs private data
>  > >  > > >  + * @param socket_id
>  > >  > > >  + *   Socket to allocate resources on.
>  > >  > > >  + *
>  > >  > > >  + * @return
>  > >  > > >  + *   - Eventdev pointer if device is successfully created.
>  > >  > > >  + *   - NULL if device cannot be created.
>  > >  > > >  + */
>  > >  > > >  +struct rte_eventdev *
>  > >  > > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t
>  > >  dev_private_size,
>  > >  > > >  +		int socket_id);
>  > >  > > >  +
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Wrapper for use by pci drivers as a .probe function to attach to an event
>  > >  > > >  + * interface.
>  > >  > > >  + */
>  > >  > > >  +int rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
>  > >  > > >  +			    struct rte_pci_device *pci_dev);
>  > >  > > >  +
>  > >  > > >  +/**
>  > >  > > >  + * Wrapper for use by pci drivers as a .remove function to detach an event
>  > >  > > >  + * interface.
>  > >  > > >  + */
>  > >  > > >  +int rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev);
>  > >  > > >  +
>  > >  > > >  +#ifdef __cplusplus
>  > >  > > >  +}
>  > >  > > >  +#endif
>  > >  > > >  +
>  > >  > > >  +#endif /* _RTE_EVENTDEV_PMD_H_ */
>  > >  > > >  diff --git a/lib/librte_eventdev/rte_eventdev_version.map
>  > >  > > >  b/lib/librte_eventdev/rte_eventdev_version.map
>  > >  > > >  new file mode 100644
>  > >  > > >  index 0000000..ef40aae
>  > >  > > >  --- /dev/null
>  > >  > > >  +++ b/lib/librte_eventdev/rte_eventdev_version.map
>  > >  > > >  @@ -0,0 +1,39 @@
>  > >  > > >  +DPDK_17.02 {
>  > >  > > >  +	global:
>  > >  > > >  +
>  > >  > > >  +	rte_eventdevs;
>  > >  > > >  +
>  > >  > > >  +	rte_event_dev_count;
>  > >  > > >  +	rte_event_dev_get_dev_id;
>  > >  > > >  +	rte_event_dev_socket_id;
>  > >  > > >  +	rte_event_dev_info_get;
>  > >  > > >  +	rte_event_dev_configure;
>  > >  > > >  +	rte_event_dev_start;
>  > >  > > >  +	rte_event_dev_stop;
>  > >  > > >  +	rte_event_dev_close;
>  > >  > > >  +	rte_event_dev_dump;
>  > >  > > >  +
>  > >  > > >  +	rte_event_port_default_conf_get;
>  > >  > > >  +	rte_event_port_setup;
>  > >  > > >  +	rte_event_port_dequeue_depth;
>  > >  > > >  +	rte_event_port_enqueue_depth;
>  > >  > > >  +	rte_event_port_count;
>  > >  > > >  +	rte_event_port_link;
>  > >  > > >  +	rte_event_port_unlink;
>  > >  > > >  +	rte_event_port_links_get;
>  > >  > > >  +
>  > >  > > >  +	rte_event_queue_default_conf_get;
>  > >  > > >  +	rte_event_queue_setup;
>  > >  > > >  +	rte_event_queue_count;
>  > >  > > >  +	rte_event_queue_priority;
>  > >  > > >  +
>  > >  > > >  +	rte_event_dequeue_wait_time;
>  > >  > > >  +
>  > >  > > >  +	rte_eventdev_pmd_allocate;
>  > >  > > >  +	rte_eventdev_pmd_release;
>  > >  > > >  +	rte_eventdev_pmd_vdev_init;
>  > >  > > >  +	rte_eventdev_pmd_pci_probe;
>  > >  > > >  +	rte_eventdev_pmd_pci_remove;
>  > >  > > >  +
>  > >  > > >  +	local: *;
>  > >  > > >  +};
>  > >  > > >  diff --git a/mk/rte.app.mk b/mk/rte.app.mk
>  > >  > > >  index f75f0e2..716725a 100644
>  > >  > > >  --- a/mk/rte.app.mk
>  > >  > > >  +++ b/mk/rte.app.mk
>  > >  > > >  @@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
>  > >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
>  > >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
>  > >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
>  > >  > > >  +_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
>  > >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
>  > >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
>  > >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
>  > >  > > >  --
>  > >  > > >  2.5.5
>  > >  > >

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-22 19:43             ` Eads, Gage
@ 2016-11-22 20:00               ` Jerin Jacob
  2016-11-22 22:48                 ` Eads, Gage
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-22 20:00 UTC (permalink / raw)
  To: Eads, Gage; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal

On Tue, Nov 22, 2016 at 07:43:03PM +0000, Eads, Gage wrote:
> >  > >  > > One open issue I noticed is the "typical workflow" description starting in
> >  > >  rte_eventdev.h:204 conflicts with the centralized software PMD that Harry
> >  > >  posted last week. Specifically, that PMD expects a single core to call the
> >  > >  schedule function. We could extend the documentation to account for this
> >  > >  alternative style of scheduler invocation, or discuss ways to make the
> >  software
> >  > >  PMD work with the documented workflow. I prefer the former, but either
> >  way I
> >  > >  think we ought to expose the scheduler's expected usage to the user --
> >  perhaps
> >  > >  through an RTE_EVENT_DEV_CAP flag?
> >  > >  >
> >  > >  > I prefer former too, you can propose the documentation change required
> >  for
> >  > >  software PMD.
> >  >
> >  > Sure, proposal follows. The "typical workflow" isn't the most optimal by
> >  having a conditional in the fast-path, of course, but it demonstrates the idea
> >  simply.
> >  >
> >  > (line 204)
> >  >  * An event driven based application has following typical workflow on
> >  fastpath:
> >  >  * \code{.c}
> >  >  *      while (1) {
> >  >  *
> >  >  *              if (dev_info.event_dev_cap &
> >  >  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
> >  >  *                      rte_event_schedule(dev_id);
> >  
> >  Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
> >  It can be an input to the application/subsystem to
> >  launch separate core(s) for the schedule functions.
> >  But, I think, the "dev_info.event_dev_cap &
> >  RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED"
> >  check can be moved inside the implementation (to make better decisions
> >  and avoid consuming cycles on HW-based schedulers).
> 
> How would this check work? Wouldn't it prevent any core from running the software scheduler in the centralized case?

I guess you may not need RTE_EVENT_DEV_CAP here; instead, a flag for the device
configure stage is needed, something like:

#define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)

struct rte_event_dev_config config;
config.event_dev_cfg = RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
rte_event_dev_configure(.., &config);

On the driver side, at configure time:
if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
	eventdev->schedule = NULL;
else // centralized case
	eventdev->schedule = your_centralized_schedule_function;

Does that work?
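
A minimal sketch of what that could look like end to end, in the same pseudo-code
style as the workflow snippets above (the schedule callback member, its NULL check
and the dequeue/enqueue signatures are assumptions here, not code from the posted
patch):

/* library-side wrapper: becomes a no-op when the PMD left the
 * schedule callback unset (distributed/HW scheduler case)
 */
static inline void
rte_event_schedule(uint8_t dev_id)
{
	struct rte_eventdev *dev = &rte_eventdevs[dev_id];

	if (dev->schedule != NULL)
		(*dev->schedule)(dev); /* centralized SW scheduler only */
}

/* application fast path is then identical for both cases */
while (1) {
	rte_event_schedule(dev_id); /* no-op for distributed/HW schedulers */
	rte_event_dequeue(...);
	/* event processing */
	rte_event_enqueue(...);
}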

> 
> >  
> >  >  *
> >  >  *              rte_event_dequeue(...);
> >  >  *
> >  >  *              (event processing)
> >  >  *
> >  >  *              rte_event_enqueue(...);
> >  >  *      }
> >  >  * \endcode
> >  >  *
> >  >  * The *schedule* operation is intended to do event scheduling, and the
> >  >  * *dequeue* operation returns the scheduled events. An implementation
> >  >  * is free to define the semantics between *schedule* and *dequeue*. For
> >  >  * example, a system based on a hardware scheduler can define its
> >  >  * rte_event_schedule() to be an NOOP, whereas a software scheduler can
> >  use
> >  >  * the *schedule* operation to schedule events. The
> >  >  * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates
> >  whether
> >  >  * rte_event_schedule() should be called by all cores or by a single (typically
> >  >  * dedicated) core.
> >  >
> >  > (line 308)
> >  > #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL << 2)
> >  > /**< Event scheduling implementation is distributed and all cores must
> >  execute
> >  >  *  rte_event_schedule(). If unset, the implementation is centralized and
> >  >  *  a single core must execute the schedule operation.
> >  >  *
> >  >  *  \see rte_event_schedule()
> >  >  */
> >  >
> >  > >  >
> >  > >  > On same note, If software PMD based workflow need  a separate core(s)
> >  for
> >  > >  > schedule function then, Can we hide that from API specification and pass
> >  an
> >  > >  > argument to SW pmd to define the scheduling core(s)?
> >  > >  >
> >  > >  > Something like --vdev=eventsw0,schedule_cmask=0x2
> >  >
> >  > An API for controlling the scheduler coremask instead of (or perhaps in
> >  addition to) the vdev argument would be good, to allow runtime control. I can
> >  imagine apps that scale the number of cores based on load, and in doing so
> >  may want to migrate the scheduler to a different core.
> >  
> >  Yes, an API for the number of scheduler cores looks OK. But if we are going
> >  to have the service core approach then we just need to specify it in one
> >  place, as the application will not be creating the service functions.
> >  
> >  >
> >  > >
> >  > >  Just a thought,
> >  > >
> >  > >  Perhaps, We could introduce generic "service" cores concept to DPDK to
> >  hide
> >  > >  the
> >  > >  requirement where the implementation needs dedicated core to do certain
> >  > >  work. I guess it would useful for other NPU integration in DPDK.
> >  > >
> >  >
> >  > That's an interesting idea. As you suggested in the other thread, this concept
> >  could be extended to the "producer" code in the example for configurations
> >  where the NIC requires software to feed into the eventdev. And to the other
> >  subsystems mentioned in your original PDF, crypto and timer.
> >  
> >  Yes. Producers should come under the service core category. I think that
> >  enables us to have better NPU integration (same application code for
> >  NPU vs non-NPU).
> >  

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-22 20:00               ` Jerin Jacob
@ 2016-11-22 22:48                 ` Eads, Gage
  2016-11-22 23:43                   ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Eads, Gage @ 2016-11-22 22:48 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal



>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  Sent: Tuesday, November 22, 2016 2:00 PM
>  To: Eads, Gage <gage.eads@intel.com>
>  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
>  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
>  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>  
>  On Tue, Nov 22, 2016 at 07:43:03PM +0000, Eads, Gage wrote:
>  > >  > >  > > One open issue I noticed is the "typical workflow"
>  > > description starting in  > >  rte_eventdev.h:204 conflicts with the
>  > > centralized software PMD that Harry  > >  posted last week.
>  > > Specifically, that PMD expects a single core to call the  > >
>  > > schedule function. We could extend the documentation to account for
>  > > this  > >  alternative style of scheduler invocation, or discuss
>  > > ways to make the  software  > >  PMD work with the documented
>  > > workflow. I prefer the former, but either  way I  > >  think we
>  > > ought to expose the scheduler's expected usage to the user --
>  > > perhaps  > >  through an RTE_EVENT_DEV_CAP flag?
>  > >  > >  >
>  > >  > >  > I prefer former too, you can propose the documentation
>  > > change required  for  > >  software PMD.
>  > >  >
>  > >  > Sure, proposal follows. The "typical workflow" isn't the most
>  > > optimal by  having a conditional in the fast-path, of course, but it
>  > > demonstrates the idea  simply.
>  > >  >
>  > >  > (line 204)
>  > >  >  * An event driven based application has following typical
>  > > workflow on
>  > >  fastpath:
>  > >  >  * \code{.c}
>  > >  >  *      while (1) {
>  > >  >  *
>  > >  >  *              if (dev_info.event_dev_cap &
>  > >  >  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
>  > >  >  *                      rte_event_schedule(dev_id);
>  > >
>  > >  Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
>  > >  It  can be input to application/subsystem to  launch separate
>  > > core(s) for schedule functions.
>  > >  But, I think, the "dev_info.event_dev_cap &
>  > > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED"
>  > >  check can be moved inside the implementation(to make the better
>  > > decisions  and  avoiding consuming cycles on HW based schedulers.
>  >
>  > How would this check work? Wouldn't it prevent any core from running the
>  software scheduler in the centralized case?
>  
>  I guess you may not need RTE_EVENT_DEV_CAP here, instead need flag for
>  device configure here
>  
>  #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
>  
>  struct rte_event_dev_config config;
>  config.event_dev_cfg = RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
>  rte_event_dev_configure(.., &config);
>  
>  on the driver side on configure,
>  if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
>  	eventdev->schedule = NULL;
>  else // centralized case
>  	eventdev->schedule = your_centrized_schedule_function;
>  
>  Does that work?

Hm, I fear the API would give users the impression that they can select the scheduling behavior of a given eventdev, when a software scheduler is more likely to be either distributed or centralized -- not both.

What if we use the capability flag, and define rte_event_schedule() as the scheduling function for centralized schedulers and rte_event_dequeue() as the scheduling function for distributed schedulers? That way, the datapath could be the simple dequeue -> process -> enqueue. Applications would check the capability flag at configuration time to decide whether or not to launch an lcore that calls rte_event_schedule().
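
As a rough illustration of that usage (a sketch only; the capability flag name is
the one proposed above and the lcore-launch details are assumptions, not part of
the patch), the application would decide once at setup time:

struct rte_event_dev_info dev_info;

rte_event_dev_info_get(dev_id, &dev_info);

if (!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))
	/* centralized scheduler: dedicate one lcore to it */
	rte_eal_remote_launch(schedule_loop, &dev_id, sched_lcore_id);

/* worker lcores: plain dequeue -> process -> enqueue, no schedule call */
while (1) {
	rte_event_dequeue(...);
	/* event processing */
	rte_event_enqueue(...);
}

where schedule_loop() simply calls rte_event_schedule(dev_id) in a loop until the
application tells it to stop.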

>  
>  >
>  > >
>  > >  >  *
>  > >  >  *              rte_event_dequeue(...);
>  > >  >  *
>  > >  >  *              (event processing)
>  > >  >  *
>  > >  >  *              rte_event_enqueue(...);
>  > >  >  *      }
>  > >  >  * \endcode
>  > >  >  *
>  > >  >  * The *schedule* operation is intended to do event scheduling,
>  > > and the  >  * *dequeue* operation returns the scheduled events. An
>  > > implementation  >  * is free to define the semantics between
>  > > *schedule* and *dequeue*. For  >  * example, a system based on a
>  > > hardware scheduler can define its  >  * rte_event_schedule() to be
>  > > an NOOP, whereas a software scheduler can  use  >  * the *schedule*
>  > > operation to schedule events. The  >  *
>  > > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates
>  > > whether  >  * rte_event_schedule() should be called by all cores or
>  > > by a single (typically  >  * dedicated) core.
>  > >  >
>  > >  > (line 308)
>  > >  > #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL < 2)  > /**<
>  > > Event scheduling implementation is distributed and all cores must
>  > > execute  >  *  rte_event_schedule(). If unset, the implementation is
>  > > centralized and  >  *  a single core must execute the schedule
>  > > operation.
>  > >  >  *
>  > >  >  *  \see rte_event_schedule()
>  > >  >  */
>  > >  >
>  > >  > >  >
>  > >  > >  > On same note, If software PMD based workflow need  a
>  > > separate core(s)  for  > >  > schedule function then, Can we hide
>  > > that from API specification and pass  an  > >  > argument to SW pmd
>  > > to define the scheduling core(s)?
>  > >  > >  >
>  > >  > >  > Something like --vdev=eventsw0,schedule_cmask=0x2
>  > >  >
>  > >  > An API for controlling the scheduler coremask instead of (or
>  > > perhaps in  addition to) the vdev argument would be good, to allow
>  > > runtime control. I can  imagine apps that scale the number of cores
>  > > based on load, and in doing so  may want to migrate the scheduler to a
>  different core.
>  > >
>  > >  Yes, an API for number of scheduler core looks OK. But if we are
>  > > going to  have service core approach then we just need to specify at
>  > > one place as  application will not creating the service functions.
>  > >
>  > >  >
>  > >  > >
>  > >  > >  Just a thought,
>  > >  > >
>  > >  > >  Perhaps, We could introduce generic "service" cores concept to
>  > > DPDK to  hide  > >  the  > >  requirement where the implementation
>  > > needs dedicated core to do certain  > >  work. I guess it would
>  > > useful for other NPU integration in DPDK.
>  > >  > >
>  > >  >
>  > >  > That's an interesting idea. As you suggested in the other thread,
>  > > this concept  could be extended to the "producer" code in the
>  > > example for configurations  where the NIC requires software to feed
>  > > into the eventdev. And to the other  subsystems mentioned in your original
>  PDF, crypto and timer.
>  > >
>  > >  Yes. Producers should come in service core category. I think, that
>  > > enables us to have better NPU integration.(same application code for
>  > > NPU vs non NPU)
>  > >

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-22 22:48                 ` Eads, Gage
@ 2016-11-22 23:43                   ` Jerin Jacob
  2016-11-28 15:53                     ` Eads, Gage
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-22 23:43 UTC (permalink / raw)
  To: Eads, Gage; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal

On Tue, Nov 22, 2016 at 10:48:32PM +0000, Eads, Gage wrote:
> 
> 
> >  -----Original Message-----
> >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  Sent: Tuesday, November 22, 2016 2:00 PM
> >  To: Eads, Gage <gage.eads@intel.com>
> >  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
> >  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
> >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
> >  
> >  On Tue, Nov 22, 2016 at 07:43:03PM +0000, Eads, Gage wrote:
> >  > >  > >  > > One open issue I noticed is the "typical workflow"
> >  > > description starting in  > >  rte_eventdev.h:204 conflicts with the
> >  > > centralized software PMD that Harry  > >  posted last week.
> >  > > Specifically, that PMD expects a single core to call the  > >
> >  > > schedule function. We could extend the documentation to account for
> >  > > this  > >  alternative style of scheduler invocation, or discuss
> >  > > ways to make the  software  > >  PMD work with the documented
> >  > > workflow. I prefer the former, but either  way I  > >  think we
> >  > > ought to expose the scheduler's expected usage to the user --
> >  > > perhaps  > >  through an RTE_EVENT_DEV_CAP flag?
> >  > >  > >  >
> >  > >  > >  > I prefer former too, you can propose the documentation
> >  > > change required  for  > >  software PMD.
> >  > >  >
> >  > >  > Sure, proposal follows. The "typical workflow" isn't the most
> >  > > optimal by  having a conditional in the fast-path, of course, but it
> >  > > demonstrates the idea  simply.
> >  > >  >
> >  > >  > (line 204)
> >  > >  >  * An event driven based application has following typical
> >  > > workflow on
> >  > >  fastpath:
> >  > >  >  * \code{.c}
> >  > >  >  *      while (1) {
> >  > >  >  *
> >  > >  >  *              if (dev_info.event_dev_cap &
> >  > >  >  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
> >  > >  >  *                      rte_event_schedule(dev_id);
> >  > >
> >  > >  Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
> >  > >  It  can be input to application/subsystem to  launch separate
> >  > > core(s) for schedule functions.
> >  > >  But, I think, the "dev_info.event_dev_cap &
> >  > > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED"
> >  > >  check can be moved inside the implementation(to make the better
> >  > > decisions  and  avoiding consuming cycles on HW based schedulers.
> >  >
> >  > How would this check work? Wouldn't it prevent any core from running the
> >  software scheduler in the centralized case?
> >  
> >  I guess you may not need RTE_EVENT_DEV_CAP here, instead need flag for
> >  device configure here
> >  
> >  #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
> >  
> >  struct rte_event_dev_config config;
> >  config.event_dev_cfg = RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
> >  rte_event_dev_configure(.., &config);
> >  
> >  on the driver side on configure,
> >  if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
> >  	eventdev->schedule = NULL;
> >  else // centralized case
> >  	eventdev->schedule = your_centrized_schedule_function;
> >  
> >  Does that work?
> 
> Hm, I fear the API would give users the impression that they can select the scheduling behavior of a given eventdev, when a software scheduler is more likely to be either distributed or centralized -- not both.

Even if it is a capability flag, it is still per "device", right?
And a capability flag is more of a read-only attribute. Am I missing something here?

> 
> What if we use the capability flag, and define rte_event_schedule() as the scheduling function for centralized schedulers and rte_event_dequeue() as the scheduling function for distributed schedulers? That way, the datapath could be the simple dequeue -> process -> enqueue. Applications would check the capability flag at configuration time to decide whether or not to launch an lcore that calls rte_event_schedule().

I am all for the simple "dequeue -> process -> enqueue" flow.
rte_event_schedule() was added for the SW scheduler only; it may not make
sense to add one more fast-path check on top of "rte_event_schedule()" to see
whether it is really needed or not.
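
In other words (again a sketch only, not code from the patch), a HW or distributed
scheduler would simply register a no-op schedule callback, so the cost disappears
inside the PMD rather than being guarded by a capability check in the application:

/* hypothetical PMD-side callback; the name and the registration hook are
 * illustrative only
 */
static void
hw_eventdev_schedule_noop(struct rte_eventdev *dev)
{
	RTE_SET_USED(dev); /* events are scheduled by the hardware itself */
}

The application can then either call rte_event_schedule() unconditionally or, per
the capability-flag proposal above, skip it entirely on its worker lcores.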

> 
> >  
> >  >
> >  > >
> >  > >  >  *
> >  > >  >  *              rte_event_dequeue(...);
> >  > >  >  *
> >  > >  >  *              (event processing)
> >  > >  >  *
> >  > >  >  *              rte_event_enqueue(...);
> >  > >  >  *      }
> >  > >  >  * \endcode
> >  > >  >  *
> >  > >  >  * The *schedule* operation is intended to do event scheduling,
> >  > > and the  >  * *dequeue* operation returns the scheduled events. An
> >  > > implementation  >  * is free to define the semantics between
> >  > > *schedule* and *dequeue*. For  >  * example, a system based on a
> >  > > hardware scheduler can define its  >  * rte_event_schedule() to be
> >  > > an NOOP, whereas a software scheduler can  use  >  * the *schedule*
> >  > > operation to schedule events. The  >  *
> >  > > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates
> >  > > whether  >  * rte_event_schedule() should be called by all cores or
> >  > > by a single (typically  >  * dedicated) core.
> >  > >  >
> >  > >  > (line 308)
> >  > >  > #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL < 2)  > /**<
> >  > > Event scheduling implementation is distributed and all cores must
> >  > > execute  >  *  rte_event_schedule(). If unset, the implementation is
> >  > > centralized and  >  *  a single core must execute the schedule
> >  > > operation.
> >  > >  >  *
> >  > >  >  *  \see rte_event_schedule()
> >  > >  >  */
> >  > >  >
> >  > >  > >  >
> >  > >  > >  > On same note, If software PMD based workflow need a separate core(s) for
> >  > >  > >  > schedule function then, Can we hide that from API specification and pass an
> >  > >  > >  > argument to SW pmd to define the scheduling core(s)?
> >  > >  > >  >
> >  > >  > >  > Something like --vdev=eventsw0,schedule_cmask=0x2
> >  > >  >
> >  > >  > An API for controlling the scheduler coremask instead of (or perhaps in
> >  > > addition to) the vdev argument would be good, to allow runtime control. I can
> >  > > imagine apps that scale the number of cores based on load, and in doing so
> >  > > may want to migrate the scheduler to a different core.
> >  > >
> >  > >  Yes, an API for the number of scheduler cores looks OK. But if we are
> >  > > going to have the service core approach then we just need to specify it at
> >  > > one place, as the application will not be creating the service functions.
> >  > >
> >  > >  >
> >  > >  > >
> >  > >  > >  Just a thought,
> >  > >  > >
> >  > >  > >  Perhaps we could introduce a generic "service" cores concept to
> >  > > DPDK to hide the requirement where the implementation needs a dedicated
> >  > > core to do certain work. I guess it would be useful for other NPU
> >  > > integration in DPDK.
> >  > >  > >
> >  > >  >
> >  > >  > That's an interesting idea. As you suggested in the other thread,
> >  > > this concept could be extended to the "producer" code in the example
> >  > > for configurations where the NIC requires software to feed into the
> >  > > eventdev. And to the other subsystems mentioned in your original
> >  > > PDF, crypto and timer.
> >  > >
> >  > >  Yes. Producers should come under the service core category. I think that
> >  > > enables us to have better NPU integration (same application code for
> >  > > NPU vs non-NPU).
> >  > >

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-22 15:15         ` Eads, Gage
  2016-11-22 18:19           ` Jerin Jacob
@ 2016-11-23  9:57           ` Bruce Richardson
  1 sibling, 0 replies; 109+ messages in thread
From: Bruce Richardson @ 2016-11-23  9:57 UTC (permalink / raw)
  To: Eads, Gage; +Cc: Jerin Jacob, dev, Van Haaren, Harry, hemant.agrawal

Hi Gage,

just FYI, you can make it easier on your readers if you cut off the end
of the original email that you are not replying to. It saves us having
to scroll down to check for more comments. :-)

/Bruce

On Tue, Nov 22, 2016 at 03:15:52PM +0000, Eads, Gage wrote:
> 
> 
> >  -----Original Message-----
> >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  Sent: Monday, November 21, 2016 1:32 PM
> >  To: Eads, Gage <gage.eads@intel.com>
> >  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
> >  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
> >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
> >  
> >  On Tue, Nov 22, 2016 at 12:43:58AM +0530, Jerin Jacob wrote:
> >  > On Mon, Nov 21, 2016 at 05:45:51PM +0000, Eads, Gage wrote:
> >  > > Hi Jerin,
> >  > >
> >  > > I did a quick review and overall this implementation looks good. I noticed
> >  just one issue in rte_event_queue_setup(): the check of
> >  nb_atomic_order_sequences is being applied to atomic-type queues, but that
> >  field applies to ordered-type queues.
> >  >
> >  > Thanks Gage. I will fix that in v2.
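
A rough sketch of what such a v2 fix could look like, assuming an
RTE_EVENT_QUEUE_CFG_ORDERED_ONLY flag exists alongside the ATOMIC_ONLY one;
the helper below simply mirrors is_valid_atomic_queue_conf() from the patch
and is illustrative only, not the committed change:

        static inline int
        is_valid_ordered_queue_conf(struct rte_event_queue_conf *queue_conf)
        {
                if (queue_conf && (
                        ((queue_conf->event_queue_cfg &
                                RTE_EVENT_QUEUE_CFG_TYPE_MASK)
                                == RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
                        ((queue_conf->event_queue_cfg &
                                RTE_EVENT_QUEUE_CFG_TYPE_MASK)
                                == RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
                        ))
                        return 1;
                else
                        return 0;
        }

        /* in rte_event_queue_setup(), the nb_atomic_order_sequences range check
         * would then be guarded by the ordered helper instead of the atomic one */
        if (is_valid_ordered_queue_conf(queue_conf)) {
                if (queue_conf->nb_atomic_order_sequences == 0 ||
                    queue_conf->nb_atomic_order_sequences >
                        dev->data->dev_conf.nb_event_queue_flows)
                        return -EINVAL;
        }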
> >  >
> >  > >
> >  > > One open issue I noticed is the "typical workflow" description starting in
> >  rte_eventdev.h:204 conflicts with the centralized software PMD that Harry
> >  posted last week. Specifically, that PMD expects a single core to call the
> >  schedule function. We could extend the documentation to account for this
> >  alternative style of scheduler invocation, or discuss ways to make the software
> >  PMD work with the documented workflow. I prefer the former, but either way I
> >  think we ought to expose the scheduler's expected usage to the user -- perhaps
> >  through an RTE_EVENT_DEV_CAP flag?
> >  >
> >  > I prefer former too, you can propose the documentation change required for
> >  software PMD.
> 
> Sure, proposal follows. The "typical workflow" isn't optimal, of course, since it puts a conditional in the fast-path, but it demonstrates the idea simply.
> 
> (line 204)
>  * An event driven based application has following typical workflow on fastpath:
>  * \code{.c}                                                                        
>  *      while (1) {                                                                 
>  *                                                                                  
>  *              if (dev_info.event_dev_cap &                                        
>  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)                        
>  *                      rte_event_schedule(dev_id);                                 
>  *                                                                                  
>  *              rte_event_dequeue(...);                                             
>  *                                                                                  
>  *              (event processing)                                                  
>  *                                                                                  
>  *              rte_event_enqueue(...);                                             
>  *      }                                                                           
>  * \endcode                                                                         
>  *                                                                                  
>  * The *schedule* operation is intended to do event scheduling, and the             
>  * *dequeue* operation returns the scheduled events. An implementation              
>  * is free to define the semantics between *schedule* and *dequeue*. For            
>  * example, a system based on a hardware scheduler can define its                   
>  * rte_event_schedule() to be an NOOP, whereas a software scheduler can use         
>  * the *schedule* operation to schedule events. The                                 
>  * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates whether
>  * rte_event_schedule() should be called by all cores or by a single (typically 
>  * dedicated) core.
> 
> (line 308)
> #define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED (1ULL << 2)
> /**< Event scheduling implementation is distributed and all cores must execute       
>  *  rte_event_schedule(). If unset, the implementation is centralized and     
>  *  a single core must execute the schedule operation.                        
>  *                                                                              
>  *  \see rte_event_schedule()                                                   
>  */
> 
> >  >
> >  > On same note, If software PMD based workflow need  a separate core(s) for
> >  > schedule function then, Can we hide that from API specification and pass an
> >  > argument to SW pmd to define the scheduling core(s)?
> >  >
> >  > Something like --vdev=eventsw0,schedule_cmask=0x2
> 
> An API for controlling the scheduler coremask instead of (or perhaps in addition to) the vdev argument would be good, to allow runtime control. I can imagine apps that scale the number of cores based on load, and in doing so may want to migrate the scheduler to a different core.
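
Purely as an illustration of the kind of runtime control meant here; the
function name and prototype below are hypothetical, not an existing or
proposed API:

        /* hypothetical runtime control of the scheduler core(s) */
        int rte_event_schedule_set_lcores(uint8_t dev_id, uint64_t lcore_mask);

        /* e.g. migrate the scheduler from lcore 1 to lcore 2 under low load */
        rte_event_schedule_set_lcores(dev_id, 1ULL << 2);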
> 
> >  
> >  Just a thought,
> >  
> >  Perhaps we could introduce a generic "service" cores concept to DPDK to hide
> >  the requirement where the implementation needs a dedicated core to do certain
> >  work. I guess it would be useful for other NPU integration in DPDK.
> >  
> 
> That's an interesting idea. As you suggested in the other thread, this concept could be extended to the "producer" code in the example for configurations where the NIC requires software to feed into the eventdev. And to the other subsystems mentioned in your original PDF, crypto and timer.
> 
> >  >
> >  > >
> >  > > Thanks,
> >  > > Gage
> >  > >
> >  > > >  -----Original Message-----
> >  > > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  > > >  Sent: Thursday, November 17, 2016 11:45 PM
> >  > > >  To: dev@dpdk.org
> >  > > >  Cc: Richardson, Bruce <bruce.richardson@intel.com>; Van Haaren, Harry
> >  > > >  <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com; Eads, Gage
> >  > > >  <gage.eads@intel.com>; Jerin Jacob
> >  <jerin.jacob@caviumnetworks.com>
> >  > > >  Subject: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound
> >  APIs
> >  > > >
> >  > > >  This patch set defines the southbound driver interface
> >  > > >  and implements the common code required for northbound
> >  > > >  eventdev API interface.
> >  > > >
> >  > > >  Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >  > > >  ---
> >  > > >   config/common_base                           |    6 +
> >  > > >   lib/Makefile                                 |    1 +
> >  > > >   lib/librte_eal/common/include/rte_log.h      |    1 +
> >  > > >   lib/librte_eventdev/Makefile                 |   57 ++
> >  > > >   lib/librte_eventdev/rte_eventdev.c           | 1211
> >  > > >  ++++++++++++++++++++++++++
> >  > > >   lib/librte_eventdev/rte_eventdev_pmd.h       |  504 +++++++++++
> >  > > >   lib/librte_eventdev/rte_eventdev_version.map |   39 +
> >  > > >   mk/rte.app.mk                                |    1 +
> >  > > >   8 files changed, 1820 insertions(+)
> >  > > >   create mode 100644 lib/librte_eventdev/Makefile
> >  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev.c
> >  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
> >  > > >   create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
> >  > > >
> >  > > >  diff --git a/config/common_base b/config/common_base
> >  > > >  index 4bff83a..7a8814e 100644
> >  > > >  --- a/config/common_base
> >  > > >  +++ b/config/common_base
> >  > > >  @@ -411,6 +411,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
> >  > > >   CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
> >  > > >
> >  > > >   #
> >  > > >  +# Compile generic event device library
> >  > > >  +#
> >  > > >  +CONFIG_RTE_LIBRTE_EVENTDEV=y
> >  > > >  +CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
> >  > > >  +CONFIG_RTE_EVENT_MAX_DEVS=16
> >  > > >  +CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
> >  > > >   # Compile librte_ring
> >  > > >   #
> >  > > >   CONFIG_RTE_LIBRTE_RING=y
> >  > > >  diff --git a/lib/Makefile b/lib/Makefile
> >  > > >  index 990f23a..1a067bf 100644
> >  > > >  --- a/lib/Makefile
> >  > > >  +++ b/lib/Makefile
> >  > > >  @@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) +=
> >  librte_cfgfile
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
> >  > > >  +DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
> >  > > >   DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
> >  > > >  diff --git a/lib/librte_eal/common/include/rte_log.h
> >  > > >  b/lib/librte_eal/common/include/rte_log.h
> >  > > >  index 29f7d19..9a07d92 100644
> >  > > >  --- a/lib/librte_eal/common/include/rte_log.h
> >  > > >  +++ b/lib/librte_eal/common/include/rte_log.h
> >  > > >  @@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
> >  > > >   #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to
> >  pipeline. */
> >  > > >   #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf.
> >  */
> >  > > >   #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to
> >  > > >  cryptodev. */
> >  > > >  +#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to
> >  eventdev.
> >  > > >  */
> >  > > >
> >  > > >   /* these log types can be used in an application */
> >  > > >   #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type
> >  1. */
> >  > > >  diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
> >  > > >  new file mode 100644
> >  > > >  index 0000000..dac0663
> >  > > >  --- /dev/null
> >  > > >  +++ b/lib/librte_eventdev/Makefile
> >  > > >  @@ -0,0 +1,57 @@
> >  > > >  +#   BSD LICENSE
> >  > > >  +#
> >  > > >  +#   Copyright(c) 2016 Cavium networks. All rights reserved.
> >  > > >  +#
> >  > > >  +#   Redistribution and use in source and binary forms, with or without
> >  > > >  +#   modification, are permitted provided that the following conditions
> >  > > >  +#   are met:
> >  > > >  +#
> >  > > >  +#     * Redistributions of source code must retain the above copyright
> >  > > >  +#       notice, this list of conditions and the following disclaimer.
> >  > > >  +#     * Redistributions in binary form must reproduce the above copyright
> >  > > >  +#       notice, this list of conditions and the following disclaimer in
> >  > > >  +#       the documentation and/or other materials provided with the
> >  > > >  +#       distribution.
> >  > > >  +#     * Neither the name of Cavium networks nor the names of its
> >  > > >  +#       contributors may be used to endorse or promote products derived
> >  > > >  +#       from this software without specific prior written permission.
> >  > > >  +#
> >  > > >  +#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> >  > > >  CONTRIBUTORS
> >  > > >  +#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  +#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> >  > > >  FITNESS FOR
> >  > > >  +#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> >  > > >  COPYRIGHT
> >  > > >  +#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> >  > > >  INCIDENTAL,
> >  > > >  +#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  +#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> >  LOSS
> >  > > >  OF USE,
> >  > > >  +#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
> >  CAUSED AND
> >  > > >  ON ANY
> >  > > >  +#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
> >  OR
> >  > > >  TORT
> >  > > >  +#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> >  OUT OF
> >  > > >  THE USE
> >  > > >  +#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> >  > > >  DAMAGE.
> >  > > >  +
> >  > > >  +include $(RTE_SDK)/mk/rte.vars.mk
> >  > > >  +
> >  > > >  +# library name
> >  > > >  +LIB = librte_eventdev.a
> >  > > >  +
> >  > > >  +# library version
> >  > > >  +LIBABIVER := 1
> >  > > >  +
> >  > > >  +# build flags
> >  > > >  +CFLAGS += -O3
> >  > > >  +CFLAGS += $(WERROR_FLAGS)
> >  > > >  +
> >  > > >  +# library source files
> >  > > >  +SRCS-y += rte_eventdev.c
> >  > > >  +
> >  > > >  +# export include files
> >  > > >  +SYMLINK-y-include += rte_eventdev.h
> >  > > >  +SYMLINK-y-include += rte_eventdev_pmd.h
> >  > > >  +
> >  > > >  +# versioning export map
> >  > > >  +EXPORT_MAP := rte_eventdev_version.map
> >  > > >  +
> >  > > >  +# library dependencies
> >  > > >  +DEPDIRS-y += lib/librte_eal
> >  > > >  +DEPDIRS-y += lib/librte_mbuf
> >  > > >  +
> >  > > >  +include $(RTE_SDK)/mk/rte.lib.mk
> >  > > >  diff --git a/lib/librte_eventdev/rte_eventdev.c
> >  > > >  b/lib/librte_eventdev/rte_eventdev.c
> >  > > >  new file mode 100644
> >  > > >  index 0000000..17ce5c3
> >  > > >  --- /dev/null
> >  > > >  +++ b/lib/librte_eventdev/rte_eventdev.c
> >  > > >  @@ -0,0 +1,1211 @@
> >  > > >  +/*
> >  > > >  + *   BSD LICENSE
> >  > > >  + *
> >  > > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
> >  > > >  + *
> >  > > >  + *   Redistribution and use in source and binary forms, with or without
> >  > > >  + *   modification, are permitted provided that the following conditions
> >  > > >  + *   are met:
> >  > > >  + *
> >  > > >  + *     * Redistributions of source code must retain the above copyright
> >  > > >  + *       notice, this list of conditions and the following disclaimer.
> >  > > >  + *     * Redistributions in binary form must reproduce the above
> >  copyright
> >  > > >  + *       notice, this list of conditions and the following disclaimer in
> >  > > >  + *       the documentation and/or other materials provided with the
> >  > > >  + *       distribution.
> >  > > >  + *     * Neither the name of Cavium networks nor the names of its
> >  > > >  + *       contributors may be used to endorse or promote products derived
> >  > > >  + *       from this software without specific prior written permission.
> >  > > >  + *
> >  > > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
> >  > > >  CONTRIBUTORS
> >  > > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
> >  > > >  FITNESS FOR
> >  > > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
> >  > > >  COPYRIGHT
> >  > > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
> >  > > >  INCIDENTAL,
> >  > > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
> >  BUT
> >  > > >  NOT
> >  > > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
> >  LOSS
> >  > > >  OF USE,
> >  > > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
> >  CAUSED
> >  > > >  AND ON ANY
> >  > > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
> >  OR
> >  > > >  TORT
> >  > > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> >  OUT OF
> >  > > >  THE USE
> >  > > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
> >  > > >  DAMAGE.
> >  > > >  + */
> >  > > >  +
> >  > > >  +#include <ctype.h>
> >  > > >  +#include <stdio.h>
> >  > > >  +#include <stdlib.h>
> >  > > >  +#include <string.h>
> >  > > >  +#include <stdarg.h>
> >  > > >  +#include <errno.h>
> >  > > >  +#include <stdint.h>
> >  > > >  +#include <inttypes.h>
> >  > > >  +#include <sys/types.h>
> >  > > >  +#include <sys/queue.h>
> >  > > >  +
> >  > > >  +#include <rte_byteorder.h>
> >  > > >  +#include <rte_log.h>
> >  > > >  +#include <rte_debug.h>
> >  > > >  +#include <rte_dev.h>
> >  > > >  +#include <rte_pci.h>
> >  > > >  +#include <rte_memory.h>
> >  > > >  +#include <rte_memcpy.h>
> >  > > >  +#include <rte_memzone.h>
> >  > > >  +#include <rte_eal.h>
> >  > > >  +#include <rte_per_lcore.h>
> >  > > >  +#include <rte_lcore.h>
> >  > > >  +#include <rte_atomic.h>
> >  > > >  +#include <rte_branch_prediction.h>
> >  > > >  +#include <rte_common.h>
> >  > > >  +#include <rte_malloc.h>
> >  > > >  +#include <rte_errno.h>
> >  > > >  +
> >  > > >  +#include "rte_eventdev.h"
> >  > > >  +#include "rte_eventdev_pmd.h"
> >  > > >  +
> >  > > >  +struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
> >  > > >  +
> >  > > >  +struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
> >  > > >  +
> >  > > >  +static struct rte_eventdev_global eventdev_globals = {
> >  > > >  +	.nb_devs		= 0
> >  > > >  +};
> >  > > >  +
> >  > > >  +struct rte_eventdev_global *rte_eventdev_globals =
> >  &eventdev_globals;
> >  > > >  +
> >  > > >  +/* Event dev north bound API implementation */
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_dev_count(void)
> >  > > >  +{
> >  > > >  +	return rte_eventdev_globals->nb_devs;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_get_dev_id(const char *name)
> >  > > >  +{
> >  > > >  +	int i;
> >  > > >  +
> >  > > >  +	if (!name)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
> >  > > >  +		if ((strcmp(rte_event_devices[i].data->name, name)
> >  > > >  +				== 0) &&
> >  > > >  +				(rte_event_devices[i].attached ==
> >  > > >  +						RTE_EVENTDEV_ATTACHED))
> >  > > >  +			return i;
> >  > > >  +	return -ENODEV;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_socket_id(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	return dev->data->socket_id;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info
> >  *dev_info)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (dev_info == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -
> >  > > >  ENOTSUP);
> >  > > >  +	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
> >  > > >  +
> >  > > >  +	dev_info->pci_dev = dev->pci_dev;
> >  > > >  +	if (dev->driver)
> >  > > >  +		dev_info->driver_name = dev->driver->pci_drv.driver.name;
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t
> >  nb_queues)
> >  > > >  +{
> >  > > >  +	uint8_t old_nb_queues = dev->data->nb_queues;
> >  > > >  +	void **queues;
> >  > > >  +	uint8_t *queues_prio;
> >  > > >  +	unsigned int i;
> >  > > >  +
> >  > > >  +	EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
> >  > > >  +			 dev->data->dev_id);
> >  > > >  +
> >  > > >  +	/* First time configuration */
> >  > > >  +	if (dev->data->queues == NULL && nb_queues != 0) {
> >  > > >  +		dev->data->queues = rte_zmalloc_socket("eventdev->data-
> >  > > >  >queues",
> >  > > >  +				sizeof(dev->data->queues[0]) * nb_queues,
> >  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
> >  > > >  >socket_id);
> >  > > >  +		if (dev->data->queues == NULL) {
> >  > > >  +			dev->data->nb_queues = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for queue meta
> >  > > >  data,"
> >  > > >  +					"nb_queues %u", nb_queues);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +		/* Allocate memory to store queue priority */
> >  > > >  +		dev->data->queues_prio = rte_zmalloc_socket(
> >  > > >  +				"eventdev->data->queues_prio",
> >  > > >  +				sizeof(dev->data->queues_prio[0]) *
> >  > > >  nb_queues,
> >  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
> >  > > >  >socket_id);
> >  > > >  +		if (dev->data->queues_prio == NULL) {
> >  > > >  +			dev->data->nb_queues = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for queue
> >  > > >  priority,"
> >  > > >  +					"nb_queues %u", nb_queues);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +	} else if (dev->data->queues != NULL && nb_queues != 0) {/* re-config
> >  > > >  */
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >  > > >  >queue_release, -ENOTSUP);
> >  > > >  +
> >  > > >  +		queues = dev->data->queues;
> >  > > >  +		for (i = nb_queues; i < old_nb_queues; i++)
> >  > > >  +			(*dev->dev_ops->queue_release)(queues[i]);
> >  > > >  +
> >  > > >  +		queues = rte_realloc(queues, sizeof(queues[0]) * nb_queues,
> >  > > >  +				RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (queues == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc queue meta data,"
> >  > > >  +						" nb_queues %u",
> >  > > >  nb_queues);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +		dev->data->queues = queues;
> >  > > >  +
> >  > > >  +		/* Re allocate memory to store queue priority */
> >  > > >  +		queues_prio = dev->data->queues_prio;
> >  > > >  +		queues_prio = rte_realloc(queues_prio,
> >  > > >  +				sizeof(queues_prio[0]) * nb_queues,
> >  > > >  +				RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (queues_prio == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc queue priority,"
> >  > > >  +						" nb_queues %u",
> >  > > >  nb_queues);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +		dev->data->queues_prio = queues_prio;
> >  > > >  +
> >  > > >  +		if (nb_queues > old_nb_queues) {
> >  > > >  +			uint8_t new_qs = nb_queues - old_nb_queues;
> >  > > >  +
> >  > > >  +			memset(queues + old_nb_queues, 0,
> >  > > >  +				sizeof(queues[0]) * new_qs);
> >  > > >  +			memset(queues_prio + old_nb_queues, 0,
> >  > > >  +				sizeof(queues_prio[0]) * new_qs);
> >  > > >  +		}
> >  > > >  +	} else if (dev->data->queues != NULL && nb_queues == 0) {
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >  > > >  >queue_release, -ENOTSUP);
> >  > > >  +
> >  > > >  +		queues = dev->data->queues;
> >  > > >  +		for (i = nb_queues; i < old_nb_queues; i++)
> >  > > >  +			(*dev->dev_ops->queue_release)(queues[i]);
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->nb_queues = nb_queues;
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
> >  > > >  +{
> >  > > >  +	uint8_t old_nb_ports = dev->data->nb_ports;
> >  > > >  +	void **ports;
> >  > > >  +	uint16_t *links_map;
> >  > > >  +	uint8_t *ports_dequeue_depth;
> >  > > >  +	uint8_t *ports_enqueue_depth;
> >  > > >  +	unsigned int i;
> >  > > >  +
> >  > > >  +	EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
> >  > > >  +			 dev->data->dev_id);
> >  > > >  +
> >  > > >  +	/* First time configuration */
> >  > > >  +	if (dev->data->ports == NULL && nb_ports != 0) {
> >  > > >  +		dev->data->ports = rte_zmalloc_socket("eventdev->data-
> >  > > >  >ports",
> >  > > >  +				sizeof(dev->data->ports[0]) * nb_ports,
> >  > > >  +				RTE_CACHE_LINE_SIZE, dev->data-
> >  > > >  >socket_id);
> >  > > >  +		if (dev->data->ports == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for port meta
> >  > > >  data,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Allocate memory to store ports dequeue depth */
> >  > > >  +		dev->data->ports_dequeue_depth =
> >  > > >  +			rte_zmalloc_socket("eventdev-
> >  > > >  >ports_dequeue_depth",
> >  > > >  +			sizeof(dev->data->ports_dequeue_depth[0]) *
> >  > > >  nb_ports,
> >  > > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  > > >  +		if (dev->data->ports_dequeue_depth == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for port deq
> >  > > >  meta,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Allocate memory to store ports enqueue depth */
> >  > > >  +		dev->data->ports_enqueue_depth =
> >  > > >  +			rte_zmalloc_socket("eventdev-
> >  > > >  >ports_enqueue_depth",
> >  > > >  +			sizeof(dev->data->ports_enqueue_depth[0]) *
> >  > > >  nb_ports,
> >  > > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  > > >  +		if (dev->data->ports_enqueue_depth == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for port enq
> >  > > >  meta,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Allocate memory to store queue to port link connection */
> >  > > >  +		dev->data->links_map =
> >  > > >  +			rte_zmalloc_socket("eventdev->links_map",
> >  > > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
> >  > > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> >  > > >  +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> >  > > >  +		if (dev->data->links_map == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to get memory for port_map
> >  > > >  area,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
> >  > > >  -ENOTSUP);
> >  > > >  +
> >  > > >  +		ports = dev->data->ports;
> >  > > >  +		ports_dequeue_depth = dev->data->ports_dequeue_depth;
> >  > > >  +		ports_enqueue_depth = dev->data->ports_enqueue_depth;
> >  > > >  +		links_map = dev->data->links_map;
> >  > > >  +
> >  > > >  +		for (i = nb_ports; i < old_nb_ports; i++)
> >  > > >  +			(*dev->dev_ops->port_release)(ports[i]);
> >  > > >  +
> >  > > >  +		/* Realloc memory for ports */
> >  > > >  +		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
> >  > > >  +				RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (ports == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc port meta data,"
> >  > > >  +						" nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Realloc memory for ports_dequeue_depth */
> >  > > >  +		ports_dequeue_depth = rte_realloc(ports_dequeue_depth,
> >  > > >  +			sizeof(ports_dequeue_depth[0]) * nb_ports,
> >  > > >  +			RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (ports_dequeue_depth == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc port deqeue meta
> >  > > >  data,"
> >  > > >  +						" nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Realloc memory for ports_enqueue_depth */
> >  > > >  +		ports_enqueue_depth = rte_realloc(ports_enqueue_depth,
> >  > > >  +			sizeof(ports_enqueue_depth[0]) * nb_ports,
> >  > > >  +			RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (ports_enqueue_depth == NULL) {
> >  > > >  +			EDEV_LOG_ERR("failed to realloc port enqueue meta
> >  > > >  data,"
> >  > > >  +						" nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		/* Realloc memory to store queue to port link connection */
> >  > > >  +		links_map = rte_realloc(links_map,
> >  > > >  +			sizeof(dev->data->links_map[0]) * nb_ports *
> >  > > >  +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> >  > > >  +			RTE_CACHE_LINE_SIZE);
> >  > > >  +		if (dev->data->links_map == NULL) {
> >  > > >  +			dev->data->nb_ports = 0;
> >  > > >  +			EDEV_LOG_ERR("failed to realloc mem for port_map
> >  > > >  area,"
> >  > > >  +					"nb_ports %u", nb_ports);
> >  > > >  +			return -(ENOMEM);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		if (nb_ports > old_nb_ports) {
> >  > > >  +			uint8_t new_ps = nb_ports - old_nb_ports;
> >  > > >  +
> >  > > >  +			memset(ports + old_nb_ports, 0,
> >  > > >  +				sizeof(ports[0]) * new_ps);
> >  > > >  +			memset(ports_dequeue_depth + old_nb_ports, 0,
> >  > > >  +				sizeof(ports_dequeue_depth[0]) * new_ps);
> >  > > >  +			memset(ports_enqueue_depth + old_nb_ports, 0,
> >  > > >  +				sizeof(ports_enqueue_depth[0]) * new_ps);
> >  > > >  +			memset(links_map +
> >  > > >  +				(old_nb_ports *
> >  > > >  RTE_EVENT_MAX_QUEUES_PER_DEV),
> >  > > >  +				0, sizeof(ports_enqueue_depth[0]) * new_ps);
> >  > > >  +		}
> >  > > >  +
> >  > > >  +		dev->data->ports = ports;
> >  > > >  +		dev->data->ports_dequeue_depth = ports_dequeue_depth;
> >  > > >  +		dev->data->ports_enqueue_depth = ports_enqueue_depth;
> >  > > >  +		dev->data->links_map = links_map;
> >  > > >  +	} else if (dev->data->ports != NULL && nb_ports == 0) {
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release,
> >  > > >  -ENOTSUP);
> >  > > >  +
> >  > > >  +		ports = dev->data->ports;
> >  > > >  +		for (i = nb_ports; i < old_nb_ports; i++)
> >  > > >  +			(*dev->dev_ops->port_release)(ports[i]);
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->nb_ports = nb_ports;
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_configure(uint8_t dev_id, struct rte_event_dev_config
> >  > > >  *dev_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	struct rte_event_dev_info info;
> >  > > >  +	int diag;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -
> >  > > >  ENOTSUP);
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -
> >  > > >  ENOTSUP);
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		    "device %d must be stopped to allow configuration",
> >  > > >  dev_id);
> >  > > >  +		return -EBUSY;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (dev_conf == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	(*dev->dev_ops->dev_infos_get)(dev, &info);
> >  > > >  +
> >  > > >  +	/* Check dequeue_wait_ns value is in limit */
> >  > > >  +	if (!dev_conf->event_dev_cfg &
> >  > > >  RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT) {
> >  > > >  +		if (dev_conf->dequeue_wait_ns < info.min_dequeue_wait_ns
> >  > > >  ||
> >  > > >  +			dev_conf->dequeue_wait_ns >
> >  > > >  info.max_dequeue_wait_ns) {
> >  > > >  +			EDEV_LOG_ERR("dev%d invalid dequeue_wait_ns=%d"
> >  > > >  +			" min_dequeue_wait_ns=%d
> >  > > >  max_dequeue_wait_ns=%d",
> >  > > >  +			dev_id, dev_conf->dequeue_wait_ns,
> >  > > >  +			info.min_dequeue_wait_ns,
> >  > > >  +			info.max_dequeue_wait_ns);
> >  > > >  +			return -EINVAL;
> >  > > >  +		}
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_events_limit is in limit */
> >  > > >  +	if (dev_conf->nb_events_limit > info.max_num_events) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_events_limit=%d >
> >  > > >  max_num_events=%d",
> >  > > >  +		dev_id, dev_conf->nb_events_limit, info.max_num_events);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_queues is in limit */
> >  > > >  +	if (!dev_conf->nb_event_queues) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_queues cannot be zero",
> >  > > >  dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_queues > info.max_event_queues) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_queues=%d >
> >  > > >  max_event_queues=%d",
> >  > > >  +		dev_id, dev_conf->nb_event_queues,
> >  > > >  info.max_event_queues);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_ports is in limit */
> >  > > >  +	if (!dev_conf->nb_event_ports) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero",
> >  > > >  dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_ports > info.max_event_ports) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_event_ports=%d >
> >  > > >  max_event_ports= %d",
> >  > > >  +		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_queue_flows is in limit */
> >  > > >  +	if (!dev_conf->nb_event_queue_flows) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_flows cannot be zero", dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_queue_flows > info.max_event_queue_flows)
> >  > > >  {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_flows=%x > max_flows=%x",
> >  > > >  +		dev_id, dev_conf->nb_event_queue_flows,
> >  > > >  +		info.max_event_queue_flows);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_port_dequeue_depth is in limit */
> >  > > >  +	if (!dev_conf->nb_event_port_dequeue_depth) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be zero",
> >  > > >  dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_port_dequeue_depth >
> >  > > >  +			 info.max_event_port_dequeue_depth) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_dequeue_depth=%d >
> >  > > >  max_dequeue_depth=%d",
> >  > > >  +		dev_id, dev_conf->nb_event_port_dequeue_depth,
> >  > > >  +		info.max_event_port_dequeue_depth);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_event_port_enqueue_depth is in limit */
> >  > > >  +	if (!dev_conf->nb_event_port_enqueue_depth) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be zero",
> >  > > >  dev_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +	if (dev_conf->nb_event_port_enqueue_depth >
> >  > > >  +			 info.max_event_port_enqueue_depth) {
> >  > > >  +		EDEV_LOG_ERR("dev%d nb_enqueue_depth=%d >
> >  > > >  max_enqueue_depth=%d",
> >  > > >  +		dev_id, dev_conf->nb_event_port_enqueue_depth,
> >  > > >  +		info.max_event_port_enqueue_depth);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Copy the dev_conf parameter into the dev structure */
> >  > > >  +	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data-
> >  > > >  >dev_conf));
> >  > > >  +
> >  > > >  +	/* Setup new number of queues and reconfigure device. */
> >  > > >  +	diag = rte_event_dev_queue_config(dev, dev_conf-
> >  > > >  >nb_event_queues);
> >  > > >  +	if (diag != 0) {
> >  > > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
> >  > > >  +				dev_id, diag);
> >  > > >  +		return diag;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Setup new number of ports and reconfigure device. */
> >  > > >  +	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
> >  > > >  +	if (diag != 0) {
> >  > > >  +		rte_event_dev_queue_config(dev, 0);
> >  > > >  +		EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
> >  > > >  +				dev_id, diag);
> >  > > >  +		return diag;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Configure the device */
> >  > > >  +	diag = (*dev->dev_ops->dev_configure)(dev);
> >  > > >  +	if (diag != 0) {
> >  > > >  +		EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
> >  > > >  +		rte_event_dev_queue_config(dev, 0);
> >  > > >  +		rte_event_dev_port_config(dev, 0);
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->event_dev_cap = info.event_dev_cap;
> >  > > >  +	return diag;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
> >  > > >  +{
> >  > > >  +	if (queue_id < dev->data->nb_queues && queue_id <
> >  > > >  +				RTE_EVENT_MAX_QUEUES_PER_DEV)
> >  > > >  +		return 1;
> >  > > >  +	else
> >  > > >  +		return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
> >  > > >  +				 struct rte_event_queue_conf *queue_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (queue_conf == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	if (!is_valid_queue(dev, queue_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf, -
> >  > > >  ENOTSUP);
> >  > > >  +	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
> >  > > >  +	(*dev->dev_ops->queue_def_conf)(dev, queue_id, queue_conf);
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +is_valid_atomic_queue_conf(struct rte_event_queue_conf
> >  *queue_conf)
> >  > > >  +{
> >  > > >  +	if (queue_conf && (
> >  > > >  +		((queue_conf->event_queue_cfg &
> >  > > >  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
> >  > > >  +			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
> >  > > >  +		((queue_conf->event_queue_cfg &
> >  > > >  RTE_EVENT_QUEUE_CFG_TYPE_MASK)
> >  > > >  +			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
> >  > > >  +		))
> >  > > >  +		return 1;
> >  > > >  +	else
> >  > > >  +		return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
> >  > > >  +		      struct rte_event_queue_conf *queue_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	struct rte_event_queue_conf def_conf;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (!is_valid_queue(dev, queue_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_atomic_flows limit */
> >  > > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
> >  > > >  +		if (queue_conf->nb_atomic_flows == 0 ||
> >  > > >  +		    queue_conf->nb_atomic_flows >
> >  > > >  +			dev->data->dev_conf.nb_event_queue_flows) {
> >  > > >  +			EDEV_LOG_ERR(
> >  > > >  +		"dev%d queue%d Invalid nb_atomic_flows=%d
> >  > > >  max_flows=%d",
> >  > > >  +			dev_id, queue_id, queue_conf->nb_atomic_flows,
> >  > > >  +			dev->data->dev_conf.nb_event_queue_flows);
> >  > > >  +			return -EINVAL;
> >  > > >  +		}
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check nb_atomic_order_sequences limit */
> >  > > >  +	if (is_valid_atomic_queue_conf(queue_conf)) {
> >  > > >  +		if (queue_conf->nb_atomic_order_sequences == 0 ||
> >  > > >  +		    queue_conf->nb_atomic_order_sequences >
> >  > > >  +			dev->data->dev_conf.nb_event_queue_flows) {
> >  > > >  +			EDEV_LOG_ERR(
> >  > > >  +		"dev%d queue%d Invalid nb_atomic_order_seq=%d
> >  > > >  max_flows=%d",
> >  > > >  +			dev_id, queue_id, queue_conf-
> >  > > >  >nb_atomic_order_sequences,
> >  > > >  +			dev->data->dev_conf.nb_event_queue_flows);
> >  > > >  +			return -EINVAL;
> >  > > >  +		}
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		    "device %d must be stopped to allow queue setup", dev_id);
> >  > > >  +		return -EBUSY;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -
> >  > > >  ENOTSUP);
> >  > > >  +
> >  > > >  +	if (queue_conf == NULL) {
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >  > > >  >queue_def_conf,
> >  > > >  +					-ENOTSUP);
> >  > > >  +		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
> >  > > >  +		def_conf.event_queue_cfg =
> >  > > >  RTE_EVENT_QUEUE_CFG_DEFAULT;
> >  > > >  +		queue_conf = &def_conf;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->queues_prio[queue_id] = queue_conf->priority;
> >  > > >  +	return (*dev->dev_ops->queue_setup)(dev, queue_id, queue_conf);
> >  > > >  +}
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_queue_count(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	return dev->data->nb_queues;
> >  > > >  +}
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
> >  > > >  +		return dev->data->queues_prio[queue_id];
> >  > > >  +	else
> >  > > >  +		return RTE_EVENT_QUEUE_PRIORITY_NORMAL;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
> >  > > >  +{
> >  > > >  +	if (port_id < dev->data->nb_ports)
> >  > > >  +		return 1;
> >  > > >  +	else
> >  > > >  +		return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
> >  > > >  +				 struct rte_event_port_conf *port_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (port_conf == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf, -
> >  > > >  ENOTSUP);
> >  > > >  +	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
> >  > > >  +	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> >  > > >  +		      struct rte_event_port_conf *port_conf)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	struct rte_event_port_conf def_conf;
> >  > > >  +	int diag;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check new_event_threshold limit */
> >  > > >  +	if ((port_conf && !port_conf->new_event_threshold) ||
> >  > > >  +			(port_conf && port_conf->new_event_threshold >
> >  > > >  +				 dev->data->dev_conf.nb_events_limit)) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		   "dev%d port%d Invalid event_threshold=%d
> >  > > >  nb_events_limit=%d",
> >  > > >  +			dev_id, port_id, port_conf->new_event_threshold,
> >  > > >  +			dev->data->dev_conf.nb_events_limit);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check dequeue_depth limit */
> >  > > >  +	if ((port_conf && !port_conf->dequeue_depth) ||
> >  > > >  +			(port_conf && port_conf->dequeue_depth >
> >  > > >  +		dev->data->dev_conf.nb_event_port_dequeue_depth)) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		   "dev%d port%d Invalid dequeue depth=%d
> >  > > >  max_dequeue_depth=%d",
> >  > > >  +			dev_id, port_id, port_conf->dequeue_depth,
> >  > > >  +			dev->data-
> >  > > >  >dev_conf.nb_event_port_dequeue_depth);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Check enqueue_depth limit */
> >  > > >  +	if ((port_conf && !port_conf->enqueue_depth) ||
> >  > > >  +			(port_conf && port_conf->enqueue_depth >
> >  > > >  +		dev->data->dev_conf.nb_event_port_enqueue_depth)) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		   "dev%d port%d Invalid enqueue depth=%d
> >  > > >  max_enqueue_depth=%d",
> >  > > >  +			dev_id, port_id, port_conf->enqueue_depth,
> >  > > >  +			dev->data-
> >  > > >  >dev_conf.nb_event_port_enqueue_depth);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started) {
> >  > > >  +		EDEV_LOG_ERR(
> >  > > >  +		    "device %d must be stopped to allow port setup", dev_id);
> >  > > >  +		return -EBUSY;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -
> >  > > >  ENOTSUP);
> >  > > >  +
> >  > > >  +	if (port_conf == NULL) {
> >  > > >  +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops-
> >  > > >  >port_def_conf,
> >  > > >  +					-ENOTSUP);
> >  > > >  +		(*dev->dev_ops->port_def_conf)(dev, port_id, &def_conf);
> >  > > >  +		port_conf = &def_conf;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->ports_dequeue_depth[port_id] =
> >  > > >  +			port_conf->dequeue_depth;
> >  > > >  +	dev->data->ports_enqueue_depth[port_id] =
> >  > > >  +			port_conf->enqueue_depth;
> >  > > >  +
> >  > > >  +	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
> >  > > >  +
> >  > > >  +	/* Unlink all the queues from this port(default state after setup) */
> >  > > >  +	if (!diag)
> >  > > >  +		diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
> >  > > >  +
> >  > > >  +	if (diag < 0)
> >  > > >  +		return diag;
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	return dev->data->ports_dequeue_depth[port_id];
> >  > > >  +}
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	return dev->data->ports_enqueue_depth[port_id];
> >  > > >  +}
> >  > > >  +
> >  > > >  +uint8_t
> >  > > >  +rte_event_port_count(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	return dev->data->nb_ports;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> >  > > >  +		    struct rte_event_queue_link link[], uint16_t nb_links)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	struct rte_event_queue_link
> >  > > >  all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
> >  > > >  +	uint16_t *links_map;
> >  > > >  +	int i, diag;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -ENOTSUP);
> >  > > >  +
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (link == NULL) {
> >  > > >  +		for (i = 0; i < dev->data->nb_queues; i++) {
> >  > > >  +			all_queues[i].queue_id = i;
> >  > > >  +			all_queues[i].priority =
> >  > > >  +
> >  > > >  	RTE_EVENT_QUEUE_SERVICE_PRIORITY_NORMAL;
> >  > > >  +		}
> >  > > >  +		link = all_queues;
> >  > > >  +		nb_links = dev->data->nb_queues;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	for (i = 0; i < nb_links; i++)
> >  > > >  +		if (link[i].queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
> >  > > >  +			return -EINVAL;
> >  > > >  +
> >  > > >  +	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id], link,
> >  > > >  +						 nb_links);
> >  > > >  +	if (diag < 0)
> >  > > >  +		return diag;
> >  > > >  +
> >  > > >  +	links_map = dev->data->links_map;
> >  > > >  +	/* Point links_map to this port specific area */
> >  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  > > >  +	for (i = 0; i < diag; i++)
> >  > > >  +		links_map[link[i].queue_id] = (uint8_t)link[i].priority;
> >  > > >  +
> >  > > >  +	return diag;
> >  > > >  +}
> >  > > >  +
> >  > > >  +#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
> >  > > >  +		      uint8_t queues[], uint16_t nb_unlinks)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
> >  > > >  +	int i, diag;
> >  > > >  +	uint16_t *links_map;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -
> >  > > >  ENOTSUP);
> >  > > >  +
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	if (queues == NULL) {
> >  > > >  +		for (i = 0; i < dev->data->nb_queues; i++)
> >  > > >  +			all_queues[i] = i;
> >  > > >  +		queues = all_queues;
> >  > > >  +		nb_unlinks = dev->data->nb_queues;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	for (i = 0; i < nb_unlinks; i++)
> >  > > >  +		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
> >  > > >  +			return -EINVAL;
> >  > > >  +
> >  > > >  +	diag = (*dev->dev_ops->port_unlink)(dev->data->ports[port_id],
> >  > > >  queues,
> >  > > >  +					nb_unlinks);
> >  > > >  +
> >  > > >  +	if (diag < 0)
> >  > > >  +		return diag;
> >  > > >  +
> >  > > >  +	links_map = dev->data->links_map;
> >  > > >  +	/* Point links_map to this port specific area */
> >  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  > > >  +	for (i = 0; i < diag; i++)
> >  > > >  +		links_map[queues[i]] =
> >  > > >  EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
> >  > > >  +
> >  > > >  +	return diag;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
> >  > > >  +			struct rte_event_queue_link link[])
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	uint16_t *links_map;
> >  > > >  +	int i, count = 0;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	if (!is_valid_port(dev, port_id)) {
> >  > > >  +		EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> >  > > >  +		return -EINVAL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	links_map = dev->data->links_map;
> >  > > >  +	/* Point links_map to this port specific area */
> >  > > >  +	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
> >  > > >  +	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
> >  > > >  +		if (links_map[i] !=
> >  > > >  EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
> >  > > >  +			link[count].queue_id = i;
> >  > > >  +			link[count].priority = (uint8_t)links_map[i];
> >  > > >  +			++count;
> >  > > >  +		}
> >  > > >  +	}
> >  > > >  +	return count;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dequeue_wait_time(uint8_t dev_id, uint64_t ns, uint64_t
> >  > > >  *wait_ticks)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->wait_time, -
> >  > > >  ENOTSUP);
> >  > > >  +
> >  > > >  +	if (wait_ticks == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	(*dev->dev_ops->wait_time)(dev, ns, wait_ticks);
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_dump(uint8_t dev_id, FILE *f)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
> >  > > >  +
> >  > > >  +	(*dev->dev_ops->dump)(dev, f);
> >  > > >  +	return 0;
> >  > > >  +
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_start(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	int diag;
> >  > > >  +
> >  > > >  +	EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started != 0) {
> >  > > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already
> >  > > >  started",
> >  > > >  +			dev_id);
> >  > > >  +		return 0;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	diag = (*dev->dev_ops->dev_start)(dev);
> >  > > >  +	if (diag == 0)
> >  > > >  +		dev->data->dev_started = 1;
> >  > > >  +	else
> >  > > >  +		return diag;
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +void
> >  > > >  +rte_event_dev_stop(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
> >  > > >  +
> >  > > >  +	if (dev->data->dev_started == 0) {
> >  > > >  +		EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already
> >  > > >  stopped",
> >  > > >  +			dev_id);
> >  > > >  +		return;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev->data->dev_started = 0;
> >  > > >  +	(*dev->dev_ops->dev_stop)(dev);
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_event_dev_close(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -
> >  > > >  ENOTSUP);
> >  > > >  +
> >  > > >  +	/* Device must be stopped before it can be closed */
> >  > > >  +	if (dev->data->dev_started == 1) {
> >  > > >  +		EDEV_LOG_ERR("Device %u must be stopped before closing",
> >  > > >  +				dev_id);
> >  > > >  +		return -EBUSY;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	return (*dev->dev_ops->dev_close)(dev);
> >  > > >  +}
> >  > > >  +
> >  > > >  +static inline int
> >  > > >  +rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data
> >  **data,
> >  > > >  +		int socket_id)
> >  > > >  +{
> >  > > >  +	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
> >  > > >  +	const struct rte_memzone *mz;
> >  > > >  +	int n;
> >  > > >  +
> >  > > >  +	/* Generate memzone name */
> >  > > >  +	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u",
> >  > > >  dev_id);
> >  > > >  +	if (n >= (int)sizeof(mz_name))
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  > > >  +		mz = rte_memzone_reserve(mz_name,
> >  > > >  +				sizeof(struct rte_eventdev_data),
> >  > > >  +				socket_id, 0);
> >  > > >  +	} else
> >  > > >  +		mz = rte_memzone_lookup(mz_name);
> >  > > >  +
> >  > > >  +	if (mz == NULL)
> >  > > >  +		return -ENOMEM;
> >  > > >  +
> >  > > >  +	*data = mz->addr;
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> >  > > >  +		memset(*data, 0, sizeof(struct rte_eventdev_data));
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +static uint8_t
> >  > > >  +rte_eventdev_find_free_device_index(void)
> >  > > >  +{
> >  > > >  +	uint8_t dev_id;
> >  > > >  +
> >  > > >  +	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
> >  > > >  +		if (rte_eventdevs[dev_id].attached ==
> >  > > >  +				RTE_EVENTDEV_DETACHED)
> >  > > >  +			return dev_id;
> >  > > >  +	}
> >  > > >  +	return RTE_EVENT_MAX_DEVS;
> >  > > >  +}
> >  > > >  +
> >  > > >  +struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *eventdev;
> >  > > >  +	uint8_t dev_id;
> >  > > >  +
> >  > > >  +	if (rte_eventdev_pmd_get_named_dev(name) != NULL) {
> >  > > >  +		EDEV_LOG_ERR("Event device with name %s already "
> >  > > >  +				"allocated!", name);
> >  > > >  +		return NULL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	dev_id = rte_eventdev_find_free_device_index();
> >  > > >  +	if (dev_id == RTE_EVENT_MAX_DEVS) {
> >  > > >  +		EDEV_LOG_ERR("Reached maximum number of event devices");
> >  > > >  +		return NULL;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	eventdev = &rte_eventdevs[dev_id];
> >  > > >  +
> >  > > >  +	if (eventdev->data == NULL) {
> >  > > >  +		struct rte_eventdev_data *eventdev_data = NULL;
> >  > > >  +
> >  > > >  +		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
> >  > > >  +				socket_id);
> >  > > >  +
> >  > > >  +		if (retval < 0 || eventdev_data == NULL)
> >  > > >  +			return NULL;
> >  > > >  +
> >  > > >  +		eventdev->data = eventdev_data;
> >  > > >  +
> >  > > >  +		snprintf(eventdev->data->name, RTE_EVENTDEV_NAME_MAX_LEN,
> >  > > >  +				"%s", name);
> >  > > >  +
> >  > > >  +		eventdev->data->dev_id = dev_id;
> >  > > >  +		eventdev->data->socket_id = socket_id;
> >  > > >  +		eventdev->data->dev_started = 0;
> >  > > >  +
> >  > > >  +		eventdev->attached = RTE_EVENTDEV_ATTACHED;
> >  > > >  +
> >  > > >  +		eventdev_globals.nb_devs++;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	return eventdev;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev)
> >  > > >  +{
> >  > > >  +	int ret;
> >  > > >  +
> >  > > >  +	if (eventdev == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	ret = rte_event_dev_close(eventdev->data->dev_id);
> >  > > >  +	if (ret < 0)
> >  > > >  +		return ret;
> >  > > >  +
> >  > > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
> >  > > >  +	eventdev_globals.nb_devs--;
> >  > > >  +	eventdev->data = NULL;
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  +
> >  > > >  +struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
> >  > > >  +		int socket_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *eventdev;
> >  > > >  +
> >  > > >  +	/* Allocate device structure */
> >  > > >  +	eventdev = rte_eventdev_pmd_allocate(name, socket_id);
> >  > > >  +	if (eventdev == NULL)
> >  > > >  +		return NULL;
> >  > > >  +
> >  > > >  +	/* Allocate private device structure */
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  > > >  +		eventdev->data->dev_private =
> >  > > >  +				rte_zmalloc_socket("eventdev device private",
> >  > > >  +						dev_private_size,
> >  > > >  +						RTE_CACHE_LINE_SIZE,
> >  > > >  +						socket_id);
> >  > > >  +
> >  > > >  +		if (eventdev->data->dev_private == NULL)
> >  > > >  +			rte_panic("Cannot allocate memzone for private device"
> >  > > >  +					" data");
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	return eventdev;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
> >  > > >  +			struct rte_pci_device *pci_dev)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev_driver *eventdrv;
> >  > > >  +	struct rte_eventdev *eventdev;
> >  > > >  +
> >  > > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
> >  > > >  +
> >  > > >  +	int retval;
> >  > > >  +
> >  > > >  +	eventdrv = (struct rte_eventdev_driver *)pci_drv;
> >  > > >  +	if (eventdrv == NULL)
> >  > > >  +		return -ENODEV;
> >  > > >  +
> >  > > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
> >  > > >  +			sizeof(eventdev_name));
> >  > > >  +
> >  > > >  +	eventdev = rte_eventdev_pmd_allocate(eventdev_name,
> >  > > >  +			 pci_dev->device.numa_node);
> >  > > >  +	if (eventdev == NULL)
> >  > > >  +		return -ENOMEM;
> >  > > >  +
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> >  > > >  +		eventdev->data->dev_private =
> >  > > >  +				rte_zmalloc_socket(
> >  > > >  +						"eventdev private structure",
> >  > > >  +						eventdrv->dev_private_size,
> >  > > >  +						RTE_CACHE_LINE_SIZE,
> >  > > >  +						rte_socket_id());
> >  > > >  +
> >  > > >  +		if (eventdev->data->dev_private == NULL)
> >  > > >  +			rte_panic("Cannot allocate memzone for private "
> >  > > >  +					"device data");
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	eventdev->pci_dev = pci_dev;
> >  > > >  +	eventdev->driver = eventdrv;
> >  > > >  +
> >  > > >  +	/* Invoke PMD device initialization function */
> >  > > >  +	retval = (*eventdrv->eventdev_init)(eventdev);
> >  > > >  +	if (retval == 0)
> >  > > >  +		return 0;
> >  > > >  +
> >  > > >  +	EDEV_LOG_ERR("driver %s: event_dev_init(vendor_id=0x%x device_id=0x%x)"
> >  > > >  +			" failed", pci_drv->driver.name,
> >  > > >  +			(unsigned int) pci_dev->id.vendor_id,
> >  > > >  +			(unsigned int) pci_dev->id.device_id);
> >  > > >  +
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> >  > > >  +		rte_free(eventdev->data->dev_private);
> >  > > >  +
> >  > > >  +	eventdev->attached = RTE_EVENTDEV_DETACHED;
> >  > > >  +	eventdev_globals.nb_devs--;
> >  > > >  +
> >  > > >  +	return -ENXIO;
> >  > > >  +}
> >  > > >  +
> >  > > >  +int
> >  > > >  +rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev)
> >  > > >  +{
> >  > > >  +	const struct rte_eventdev_driver *eventdrv;
> >  > > >  +	struct rte_eventdev *eventdev;
> >  > > >  +	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
> >  > > >  +	int ret;
> >  > > >  +
> >  > > >  +	if (pci_dev == NULL)
> >  > > >  +		return -EINVAL;
> >  > > >  +
> >  > > >  +	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
> >  > > >  +			sizeof(eventdev_name));
> >  > > >  +
> >  > > >  +	eventdev = rte_eventdev_pmd_get_named_dev(eventdev_name);
> >  > > >  +	if (eventdev == NULL)
> >  > > >  +		return -ENODEV;
> >  > > >  +
> >  > > >  +	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
> >  > > >  +	if (eventdrv == NULL)
> >  > > >  +		return -ENODEV;
> >  > > >  +
> >  > > >  +	/* Invoke PMD device uninit function */
> >  > > >  +	if (*eventdrv->eventdev_uninit) {
> >  > > >  +		ret = (*eventdrv->eventdev_uninit)(eventdev);
> >  > > >  +		if (ret)
> >  > > >  +			return ret;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	/* Free event device */
> >  > > >  +	rte_eventdev_pmd_release(eventdev);
> >  > > >  +
> >  > > >  +	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> >  > > >  +		rte_free(eventdev->data->dev_private);
> >  > > >  +
> >  > > >  +	eventdev->pci_dev = NULL;
> >  > > >  +	eventdev->driver = NULL;
> >  > > >  +
> >  > > >  +	return 0;
> >  > > >  +}
> >  > > >  diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
> >  > > >  new file mode 100644
> >  > > >  index 0000000..e9d9b83
> >  > > >  --- /dev/null
> >  > > >  +++ b/lib/librte_eventdev/rte_eventdev_pmd.h
> >  > > >  @@ -0,0 +1,504 @@
> >  > > >  +/*
> >  > > >  + *
> >  > > >  + *   Copyright(c) 2016 Cavium networks. All rights reserved.
> >  > > >  + *
> >  > > >  + *   Redistribution and use in source and binary forms, with or without
> >  > > >  + *   modification, are permitted provided that the following conditions
> >  > > >  + *   are met:
> >  > > >  + *
> >  > > >  + *     * Redistributions of source code must retain the above copyright
> >  > > >  + *       notice, this list of conditions and the following disclaimer.
> >  > > >  + *     * Redistributions in binary form must reproduce the above copyright
> >  > > >  + *       notice, this list of conditions and the following disclaimer in
> >  > > >  + *       the documentation and/or other materials provided with the
> >  > > >  + *       distribution.
> >  > > >  + *     * Neither the name of Cavium networks nor the names of its
> >  > > >  + *       contributors may be used to endorse or promote products derived
> >  > > >  + *       from this software without specific prior written permission.
> >  > > >  + *
> >  > > >  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> >  > > >  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> >  > > >  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> >  > > >  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> >  > > >  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> >  > > >  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> >  > > >  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> >  > > >  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> >  > > >  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> >  > > >  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> >  > > >  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> >  > > >  + */
> >  > > >  +
> >  > > >  +#ifndef _RTE_EVENTDEV_PMD_H_
> >  > > >  +#define _RTE_EVENTDEV_PMD_H_
> >  > > >  +
> >  > > >  +/** @file
> >  > > >  + * RTE Event PMD APIs
> >  > > >  + *
> >  > > >  + * @note
> >  > > >  + * These APIs are for event PMDs only and user applications should not call
> >  > > >  + * them directly.
> >  > > >  + */
> >  > > >  +
> >  > > >  +#ifdef __cplusplus
> >  > > >  +extern "C" {
> >  > > >  +#endif
> >  > > >  +
> >  > > >  +#include <string.h>
> >  > > >  +
> >  > > >  +#include <rte_dev.h>
> >  > > >  +#include <rte_pci.h>
> >  > > >  +#include <rte_malloc.h>
> >  > > >  +#include <rte_log.h>
> >  > > >  +#include <rte_common.h>
> >  > > >  +
> >  > > >  +#include "rte_eventdev.h"
> >  > > >  +
> >  > > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> >  > > >  +#define RTE_PMD_DEBUG_TRACE(...) \
> >  > > >  +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
> >  > > >  +#else
> >  > > >  +#define RTE_PMD_DEBUG_TRACE(...)
> >  > > >  +#endif
> >  > > >  +
> >  > > >  +/* Logging Macros */
> >  > > >  +#define EDEV_LOG_ERR(fmt, args...) \
> >  > > >  +	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
> >  > > >  +			__func__, __LINE__, ## args)
> >  > > >  +
> >  > > >  +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> >  > > >  +#define EDEV_LOG_DEBUG(fmt, args...) \
> >  > > >  +	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
> >  > > >  +			__func__, __LINE__, ## args)
> >  > > >  +#else
> >  > > >  +#define EDEV_LOG_DEBUG(fmt, args...) (void)0
> >  > > >  +#endif
> >  > > >  +
> >  > > >  +/* Macros to check for valid device */
> >  > > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
> >  > > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
> >  > > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
> >  > > >  +		return retval; \
> >  > > >  +	} \
> >  > > >  +} while (0)
> >  > > >  +
> >  > > >  +#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
> >  > > >  +	if (!rte_eventdev_pmd_is_valid_dev((dev_id))) { \
> >  > > >  +		EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
> >  > > >  +		return; \
> >  > > >  +	} \
> >  > > >  +} while (0)
> >  > > >  +
> >  > > >  +#define RTE_EVENTDEV_DETACHED  (0)
> >  > > >  +#define RTE_EVENTDEV_ATTACHED  (1)
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Initialisation function of an event driver invoked for each matching
> >  > > >  + * event PCI device detected during the PCI probing phase.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   The dev pointer is the address of the *rte_eventdev* structure associated
> >  > > >  + *   with the matching device and which has been [automatically] allocated in
> >  > > >  + *   the *rte_event_devices* array.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - 0: Success, the device is properly initialised by the driver.
> >  > > >  + *        In particular, the driver MUST have set up the *dev_ops* pointer
> >  > > >  + *        of the *dev* structure.
> >  > > >  + *   - <0: Error code of the device initialisation failure.
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Finalisation function of a driver invoked for each matching
> >  > > >  + * PCI device detected during the PCI closing phase.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   The dev pointer is the address of the *rte_eventdev* structure associated
> >  > > >  + *   with the matching device and which has been [automatically] allocated in
> >  > > >  + *   the *rte_event_devices* array.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - 0: Success, the device is properly finalised by the driver.
> >  > > >  + *        In particular, the driver MUST free the *dev_ops* pointer
> >  > > >  + *        of the *dev* structure.
> >  > > >  + *   - <0: Error code of the device initialisation failure.
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * The structure associated with a PMD driver.
> >  > > >  + *
> >  > > >  + * Each driver acts as a PCI driver and is represented by a generic
> >  > > >  + * *event_driver* structure that holds:
> >  > > >  + *
> >  > > >  + * - An *rte_pci_driver* structure (which must be the first field).
> >  > > >  + *
> >  > > >  + * - The *eventdev_init* function invoked for each matching PCI device.
> >  > > >  + *
> >  > > >  + * - The size of the private data to allocate for each matching device.
> >  > > >  + */
> >  > > >  +struct rte_eventdev_driver {
> >  > > >  +	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
> >  > > >  +	unsigned int dev_private_size;	/**< Size of device private data. */
> >  > > >  +
> >  > > >  +	eventdev_init_t eventdev_init;	/**< Device init function. */
> >  > > >  +	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
> >  > > >  +};
> >  > > >  +
> >  > > >  +/** Global structure used for maintaining state of allocated event devices */
> >  > > >  +struct rte_eventdev_global {
> >  > > >  +	uint8_t nb_devs;	/**< Number of devices found */
> >  > > >  +	uint8_t max_devs;	/**< Max number of devices */
> >  > > >  +};
> >  > > >  +
> >  > > >  +extern struct rte_eventdev_global *rte_eventdev_globals;
> >  > > >  +/** Pointer to global event devices data structure. */
> >  > > >  +extern struct rte_eventdev *rte_eventdevs;
> >  > > >  +/** The pool of rte_eventdev structures. */
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Get the rte_eventdev structure device pointer for the named device.
> >  > > >  + *
> >  > > >  + * @param name
> >  > > >  + *   device name to select the device structure.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - The rte_eventdev structure pointer for the given device ID.
> >  > > >  + */
> >  > > >  +static inline struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_get_named_dev(const char *name)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +	unsigned int i;
> >  > > >  +
> >  > > >  +	if (name == NULL)
> >  > > >  +		return NULL;
> >  > > >  +
> >  > > >  +	for (i = 0, dev = &rte_eventdevs[i];
> >  > > >  +			i < rte_eventdev_globals->max_devs; i++) {
> >  > > >  +		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
> >  > > >  +				(strcmp(dev->data->name, name) == 0))
> >  > > >  +			return dev;
> >  > > >  +	}
> >  > > >  +
> >  > > >  +	return NULL;
> >  > > >  +}
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Validate if the event device index is valid attached event device.
> >  > > >  + *
> >  > > >  + * @param dev_id
> >  > > >  + *   Event device index.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - If the device index is valid (1) or not (0).
> >  > > >  + */
> >  > > >  +static inline unsigned
> >  > > >  +rte_eventdev_pmd_is_valid_dev(uint8_t dev_id)
> >  > > >  +{
> >  > > >  +	struct rte_eventdev *dev;
> >  > > >  +
> >  > > >  +	if (dev_id >= rte_eventdev_globals->nb_devs)
> >  > > >  +		return 0;
> >  > > >  +
> >  > > >  +	dev = &rte_eventdevs[dev_id];
> >  > > >  +	if (dev->attached != RTE_EVENTDEV_ATTACHED)
> >  > > >  +		return 0;
> >  > > >  +	else
> >  > > >  +		return 1;
> >  > > >  +}
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Definitions of all functions exported by a driver through the
> >  > > >  + * generic structure of type *event_dev_ops* supplied in the
> >  > > >  + * *rte_eventdev* structure associated with a device.
> >  > > >  + */
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Get device information of a device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param dev_info
> >  > > >  + *   Event device information structure
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
> >  > > >  +		struct rte_event_dev_info *dev_info);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Configure a device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_configure_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Start a configured device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Stop a configured device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Close a configured device.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + * - 0 on success
> >  > > >  + * - (-EAGAIN) if can't close as device is busy
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Retrieve the default event queue configuration.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param queue_id
> >  > > >  + *   Event queue index
> >  > > >  + * @param[out] queue_conf
> >  > > >  + *   Event queue configuration structure
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_queue_default_conf_get_t)(struct rte_eventdev *dev,
> >  > > >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Setup an event queue.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param queue_id
> >  > > >  + *   Event queue index
> >  > > >  + * @param queue_conf
> >  > > >  + *   Event queue configuration structure
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success.
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
> >  > > >  +		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Release memory resources allocated by given event queue.
> >  > > >  + *
> >  > > >  + * @param queue
> >  > > >  + *   Event queue pointer
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_queue_release_t)(void *queue);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Retrieve the default event port configuration.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param port_id
> >  > > >  + *   Event port index
> >  > > >  + * @param[out] port_conf
> >  > > >  + *   Event port configuration structure
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev *dev,
> >  > > >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Setup an event port.
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param port_id
> >  > > >  + *   Event port index
> >  > > >  + * @param port_conf
> >  > > >  + *   Event port configuration structure
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success.
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
> >  > > >  +		uint8_t port_id, struct rte_event_port_conf *port_conf);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Release memory resources allocated by given event port.
> >  > > >  + *
> >  > > >  + * @param port
> >  > > >  + *   Event port pointer
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_port_release_t)(void *port);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Link multiple source event queues to destination event port.
> >  > > >  + *
> >  > > >  + * @param port
> >  > > >  + *   Event port pointer
> >  > > >  + * @param link
> >  > > >  + *   An array of *nb_links* pointers to *rte_event_queue_link* structure
> >  > > >  + * @param nb_links
> >  > > >  + *   The number of links to establish
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success.
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_port_link_t)(void *port,
> >  > > >  +		struct rte_event_queue_link link[], uint16_t nb_links);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Unlink multiple source event queues from destination event port.
> >  > > >  + *
> >  > > >  + * @param port
> >  > > >  + *   Event port pointer
> >  > > >  + * @param queues
> >  > > >  + *   An array of *nb_unlinks* event queues to be unlinked from the event port.
> >  > > >  + * @param nb_unlinks
> >  > > >  + *   The number of unlinks to establish
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   Returns 0 on success.
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef int (*eventdev_port_unlink_t)(void *port,
> >  > > >  +		uint8_t queues[], uint16_t nb_unlinks);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param ns
> >  > > >  + *   Wait time in nanosecond
> >  > > >  + * @param[out] wait_ticks
> >  > > >  + *   Value for the *wait* parameter in rte_event_dequeue() function
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_dequeue_wait_time_t)(struct rte_eventdev *dev,
> >  > > >  +		uint64_t ns, uint64_t *wait_ticks);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Dump internal information
> >  > > >  + *
> >  > > >  + * @param dev
> >  > > >  + *   Event device pointer
> >  > > >  + * @param f
> >  > > >  + *   A pointer to a file for output
> >  > > >  + *
> >  > > >  + */
> >  > > >  +typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
> >  > > >  +
> >  > > >  +/** Event device operations function pointer table */
> >  > > >  +struct rte_eventdev_ops {
> >  > > >  +	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
> >  > > >  +	eventdev_configure_t dev_configure;	/**< Configure device. */
> >  > > >  +	eventdev_start_t dev_start;		/**< Start device. */
> >  > > >  +	eventdev_stop_t dev_stop;		/**< Stop device. */
> >  > > >  +	eventdev_close_t dev_close;		/**< Close device. */
> >  > > >  +
> >  > > >  +	eventdev_queue_default_conf_get_t queue_def_conf;
> >  > > >  +	/**< Get default queue configuration. */
> >  > > >  +	eventdev_queue_setup_t queue_setup;
> >  > > >  +	/**< Set up an event queue. */
> >  > > >  +	eventdev_queue_release_t queue_release;
> >  > > >  +	/**< Release an event queue. */
> >  > > >  +
> >  > > >  +	eventdev_port_default_conf_get_t port_def_conf;
> >  > > >  +	/**< Get default port configuration. */
> >  > > >  +	eventdev_port_setup_t port_setup;
> >  > > >  +	/**< Set up an event port. */
> >  > > >  +	eventdev_port_release_t port_release;
> >  > > >  +	/**< Release an event port. */
> >  > > >  +
> >  > > >  +	eventdev_port_link_t port_link;
> >  > > >  +	/**< Link event queues to an event port. */
> >  > > >  +	eventdev_port_unlink_t port_unlink;
> >  > > >  +	/**< Unlink event queues from an event port. */
> >  > > >  +	eventdev_dequeue_wait_time_t wait_time;
> >  > > >  +	/**< Converts nanoseconds to *wait* value for rte_event_dequeue() */
> >  > > >  +	eventdev_dump_t dump;
> >  > > >  +	/* Dump internal information */
> >  > > >  +};
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Allocates a new eventdev slot for an event device and returns the pointer
> >  > > >  + * to that slot for the driver to use.
> >  > > >  + *
> >  > > >  + * @param name
> >  > > >  + *   Unique identifier name for each device
> >  > > >  + * @param socket_id
> >  > > >  + *   Socket to allocate resources on.
> >  > > >  + * @return
> >  > > >  + *   - Slot in the rte_dev_devices array for a new device;
> >  > > >  + */
> >  > > >  +struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_allocate(const char *name, int socket_id);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Release the specified eventdev device.
> >  > > >  + *
> >  > > >  + * @param eventdev
> >  > > >  + * The *eventdev* pointer is the address of the *rte_eventdev* structure.
> >  > > >  + * @return
> >  > > >  + *   - 0 on success, negative on error
> >  > > >  + */
> >  > > >  +int
> >  > > >  +rte_eventdev_pmd_release(struct rte_eventdev *eventdev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Creates a new virtual event device and returns the pointer to that device.
> >  > > >  + *
> >  > > >  + * @param name
> >  > > >  + *   PMD type name
> >  > > >  + * @param dev_private_size
> >  > > >  + *   Size of event PMDs private data
> >  > > >  + * @param socket_id
> >  > > >  + *   Socket to allocate resources on.
> >  > > >  + *
> >  > > >  + * @return
> >  > > >  + *   - Eventdev pointer if device is successfully created.
> >  > > >  + *   - NULL if device cannot be created.
> >  > > >  + */
> >  > > >  +struct rte_eventdev *
> >  > > >  +rte_eventdev_pmd_vdev_init(const char *name, size_t dev_private_size,
> >  > > >  +		int socket_id);
> >  > > >  +
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Wrapper for use by pci drivers as a .probe function to attach to an event
> >  > > >  + * interface.
> >  > > >  + */
> >  > > >  +int rte_eventdev_pmd_pci_probe(struct rte_pci_driver *pci_drv,
> >  > > >  +			    struct rte_pci_device *pci_dev);
> >  > > >  +
> >  > > >  +/**
> >  > > >  + * Wrapper for use by pci drivers as a .remove function to detach an event
> >  > > >  + * interface.
> >  > > >  + */
> >  > > >  +int rte_eventdev_pmd_pci_remove(struct rte_pci_device *pci_dev);
> >  > > >  +
> >  > > >  +#ifdef __cplusplus
> >  > > >  +}
> >  > > >  +#endif
> >  > > >  +
> >  > > >  +#endif /* _RTE_EVENTDEV_PMD_H_ */
> >  > > >  diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
> >  > > >  new file mode 100644
> >  > > >  index 0000000..ef40aae
> >  > > >  --- /dev/null
> >  > > >  +++ b/lib/librte_eventdev/rte_eventdev_version.map
> >  > > >  @@ -0,0 +1,39 @@
> >  > > >  +DPDK_17.02 {
> >  > > >  +	global:
> >  > > >  +
> >  > > >  +	rte_eventdevs;
> >  > > >  +
> >  > > >  +	rte_event_dev_count;
> >  > > >  +	rte_event_dev_get_dev_id;
> >  > > >  +	rte_event_dev_socket_id;
> >  > > >  +	rte_event_dev_info_get;
> >  > > >  +	rte_event_dev_configure;
> >  > > >  +	rte_event_dev_start;
> >  > > >  +	rte_event_dev_stop;
> >  > > >  +	rte_event_dev_close;
> >  > > >  +	rte_event_dev_dump;
> >  > > >  +
> >  > > >  +	rte_event_port_default_conf_get;
> >  > > >  +	rte_event_port_setup;
> >  > > >  +	rte_event_port_dequeue_depth;
> >  > > >  +	rte_event_port_enqueue_depth;
> >  > > >  +	rte_event_port_count;
> >  > > >  +	rte_event_port_link;
> >  > > >  +	rte_event_port_unlink;
> >  > > >  +	rte_event_port_links_get;
> >  > > >  +
> >  > > >  +	rte_event_queue_default_conf_get;
> >  > > >  +	rte_event_queue_setup;
> >  > > >  +	rte_event_queue_count;
> >  > > >  +	rte_event_queue_priority;
> >  > > >  +
> >  > > >  +	rte_event_dequeue_wait_time;
> >  > > >  +
> >  > > >  +	rte_eventdev_pmd_allocate;
> >  > > >  +	rte_eventdev_pmd_release;
> >  > > >  +	rte_eventdev_pmd_vdev_init;
> >  > > >  +	rte_eventdev_pmd_pci_probe;
> >  > > >  +	rte_eventdev_pmd_pci_remove;
> >  > > >  +
> >  > > >  +	local: *;
> >  > > >  +};
> >  > > >  diff --git a/mk/rte.app.mk b/mk/rte.app.mk
> >  > > >  index f75f0e2..716725a 100644
> >  > > >  --- a/mk/rte.app.mk
> >  > > >  +++ b/mk/rte.app.mk
> >  > > >  @@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
> >  > > >  +_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
> >  > > >   _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
> >  > > >  --
> >  > > >  2.5.5
> >  > >

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-18  5:44 ` [PATCH 1/4] eventdev: introduce event driven programming model Jerin Jacob
@ 2016-11-23 18:39   ` Thomas Monjalon
  2016-11-24  1:59     ` Jerin Jacob
  2016-11-24 16:24   ` Bruce Richardson
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
  2 siblings, 1 reply; 109+ messages in thread
From: Thomas Monjalon @ 2016-11-23 18:39 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads

Hi Jerin,

Thanks for bringing a big new piece in DPDK.

I made some comments below.

2016-11-18 11:14, Jerin Jacob:
> +Eventdev API - EXPERIMENTAL
> +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> +F: lib/librte_eventdev/

OK to mark it experimental.
What is the plan to remove the experimental word?

> + * RTE event device drivers do not use interrupts for enqueue or dequeue
> + * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
> + * functions to applications.

To the question "what makes DPDK different" it could be answered
that DPDK event drivers implement polling functions :)

> +#include <stdbool.h>
> +
> +#include <rte_pci.h>
> +#include <rte_dev.h>
> +#include <rte_memory.h>

Is it possible to remove some of these includes from the API?

> +
> +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> +/**< Skeleton event device PMD name */

I do not understand this #define.
And it is not properly prefixed.

> +struct rte_event_dev_info {
> +	const char *driver_name;	/**< Event driver name */
> +	struct rte_pci_device *pci_dev;	/**< PCI information */

There is some work in progress to remove PCI information from ethdev.
Please do not add any PCI related structure in eventdev.
The generic structure is rte_device.

> +struct rte_event_dev_config {
> +	uint32_t dequeue_wait_ns;
> +	/**< rte_event_dequeue() wait for *dequeue_wait_ns* ns on this device.

Please explain exactly when the wait occurs and why.

> +	 * This value should be in the range of *min_dequeue_wait_ns* and
> +	 * *max_dequeue_wait_ns* which previously provided in
> +	 * rte_event_dev_info_get()
> +	 * \see RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT

I think the @see syntax would be more consistent than \see.

> +	uint8_t nb_event_port_dequeue_depth;
> +	/**< Number of dequeue queue depth for any event port on this device.

I think it deserves more explanations.

> +	uint32_t event_dev_cfg;
> +	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/

How this field differs from others in the struct?
Should it be named flags?

> +	uint32_t event_queue_cfg; /**< Queue config flags(EVENT_QUEUE_CFG_) */

Same comment about the naming of this field for the event_queue config struct.

> +/** Event port configuration structure */
> +struct rte_event_port_conf {
> +	int32_t new_event_threshold;
> +	/**< A backpressure threshold for new event enqueues on this port.
> +	 * Use for *closed system* event dev where event capacity is limited,
> +	 * and cannot exceed the capacity of the event dev.
> +	 * Configuring ports with different thresholds can make higher priority
> +	 * traffic less likely to  be backpressured.
> +	 * For example, a port used to inject NIC Rx packets into the event dev
> +	 * can have a lower threshold so as not to overwhelm the device,
> +	 * while ports used for worker pools can have a higher threshold.
> +	 * This value cannot exceed the *nb_events_limit*
> +	 * which previously supplied to rte_event_dev_configure()
> +	 */
> +	uint8_t dequeue_depth;
> +	/**< Configure number of bulk dequeues for this event port.
> +	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> +	 * which previously supplied to rte_event_dev_configure()
> +	 */
> +	uint8_t enqueue_depth;
> +	/**< Configure number of bulk enqueues for this event port.
> +	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> +	 * which previously supplied to rte_event_dev_configure()
> +	 */
> +};

The depth configuration is not clear to me.

> +/* Event types to classify the event source */

Why this classification is needed?

> +#define RTE_EVENT_TYPE_ETHDEV           0x0
> +/**< The event generated from ethdev subsystem */
> +#define RTE_EVENT_TYPE_CRYPTODEV        0x1
> +/**< The event generated from crypodev subsystem */
> +#define RTE_EVENT_TYPE_TIMERDEV         0x2
> +/**< The event generated from timerdev subsystem */
> +#define RTE_EVENT_TYPE_CORE             0x3
> +/**< The event generated from core.

What is core?

> +/* Event enqueue operations */

I feel a longer explanation is needed here to describe
what is an operation and where this data is useful.

> +#define RTE_EVENT_OP_NEW                0
> +/**< New event without previous context */
> +#define RTE_EVENT_OP_FORWARD            1
> +/**< Re-enqueue previously dequeued event */
> +#define RTE_EVENT_OP_RELEASE            2

There is no comment for the release operation.

> +/**
> + * Release the flow context associated with the schedule type.
> + *
[...]
> + */

There is no function declaration below this comment.

> +/**
> + * The generic *rte_event* structure to hold the event attributes
> + * for dequeue and enqueue operation
> + */
> +struct rte_event {
> +	/** WORD0 */
> +	RTE_STD_C11
> +	union {
> +		uint64_t event;
[...]
> +	};
> +	/** WORD1 */
> +	RTE_STD_C11
> +	union {
> +		uintptr_t event_ptr;

I wonder if it can be a problem to have the size of this field
not constant across machines.

> +		/**< Opaque event pointer */
> +		struct rte_mbuf *mbuf;
> +		/**< mbuf pointer if dequeued event is associated with mbuf */

How do we know that an event is associated with mbuf?
Does it mean that such events are always converted into mbuf even if the
application does not need it?

> +struct rte_eventdev_driver;
> +struct rte_eventdev_ops;

I think it is better to split API and driver interface in two files.
(we should do this split in ethdev)

> +/**
> + * Enqueue the event object supplied in the *rte_event* structure on an
> + * event device designated by its *dev_id* through the event port specified by
> + * *port_id*. The event object specifies the event queue on which this
> + * event will be enqueued.
> + *
> + * @param dev_id
> + *   Event device identifier.
> + * @param port_id
> + *   The identifier of the event port.
> + * @param ev
> + *   Pointer to struct rte_event
> + *
> + * @return
> + *  - 0 on success
> + *  - <0 on failure. Failure can occur if the event port's output queue is
> + *     backpressured, for instance.
> + */
> +static inline int
> +rte_event_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)

Is it really needed to have non-burst variant of enqueue/dequeue?

> +/**
> + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> + *
> + * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> + * application can use this function to convert wait value in nanoseconds to
> + * implementations specific wait value supplied in rte_event_dequeue()

Why is it implementation-specific?
Why this conversion is not internal in the driver?

End of review for this patch ;)

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-18  5:45 ` [PATCH 2/4] eventdev: implement the northbound APIs Jerin Jacob
  2016-11-21 17:45   ` Eads, Gage
@ 2016-11-23 19:18   ` Thomas Monjalon
  2016-11-25  4:17     ` Jerin Jacob
  1 sibling, 1 reply; 109+ messages in thread
From: Thomas Monjalon @ 2016-11-23 19:18 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads

2016-11-18 11:15, Jerin Jacob:
> This patch set defines the southbound driver interface
> and implements the common code required for northbound
> eventdev API interface.

Please make two separate patches.

> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> +#define RTE_PMD_DEBUG_TRACE(...) \
> +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
> +#else
> +#define RTE_PMD_DEBUG_TRACE(...)
> +#endif

I would like to discuss the need for a debug option as there is
already a log level.

> +/* Logging Macros */
> +#define EDEV_LOG_ERR(fmt, args...) \

Every symbol and macro in an exported header must be prefixed by RTE_.

> +/* Macros to check for valid device */
> +#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \

Sometimes you use RTE_EVENT_DEV_ and sometimes RTE_EVENTDEV.
(I prefer the latter).

> +struct rte_eventdev_driver {
> +	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */

It must not be directly linked to the underlying bus.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-23 18:39   ` Thomas Monjalon
@ 2016-11-24  1:59     ` Jerin Jacob
  2016-11-24 12:26       ` Bruce Richardson
  2016-11-24 15:35       ` Thomas Monjalon
  0 siblings, 2 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-24  1:59 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads

On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> Hi Jerin,

Hi Thomas,

> 
> Thanks for bringing a big new piece in DPDK.
> 
> I made some comments below.

Thanks for the review.

> 
> 2016-11-18 11:14, Jerin Jacob:
> > +Eventdev API - EXPERIMENTAL
> > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > +F: lib/librte_eventdev/
> 
> OK to mark it experimental.
> What is the plan to remove the experimental word?

IMO, EXPERIMENTAL status can be changed when
- At least two event drivers available (Intel and Cavium are working on
  SW and HW event drivers)
- Functional test applications are fine with at least two drivers
- Portable example application to showcase the features of the library
- eventdev integration with another dpdk subsystem such as ethdev

Thoughts? I am not sure of the criteria that were used in the cryptodev case.


> 
> > + * RTE event device drivers do not use interrupts for enqueue or dequeue
> > + * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
> > + * functions to applications.
> 
> To the question "what makes DPDK different" it could be answered
> that DPDK event drivers implement polling functions :)

Mostly taken from ethdev API header file :-)

> 
> > +#include <stdbool.h>
> > +
> > +#include <rte_pci.h>
> > +#include <rte_dev.h>
> > +#include <rte_memory.h>
> 
> Is it possible to remove some of these includes from the API?

OK. I will scan through all the header files and remove the ones that are
not required.

> 
> > +
> > +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> > +/**< Skeleton event device PMD name */
> 
> I do not understand this #define.

Applications can explicitly request a specific driver through its driver
name. This will go as the argument to rte_event_dev_get_dev_id(const char *name).
The reason for keeping this #define in rte_eventdev.h is that the
application then needs to include only rte_eventdev.h, not rte_eventdev_pmd.h.

I will remove the definition from this patch and add it in the
skeleton driver patch (patch 03/04).
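
A rough usage sketch of what I mean (assuming the skeleton PMD registers
itself under the name "event_skeleton"; that exact name string is an
assumption here, not something fixed by this patch set):

	/* select the skeleton PMD by its driver name */
	int dev_id = rte_event_dev_get_dev_id("event_skeleton");

	if (dev_id < 0)
		rte_exit(EXIT_FAILURE, "event_skeleton device not found\n");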

> And it is not properly prefixed.

OK. I will prefix with RTE_ in v2.

> 
> > +struct rte_event_dev_info {
> > +	const char *driver_name;	/**< Event driver name */
> > +	struct rte_pci_device *pci_dev;	/**< PCI information */
> 
> There is some work in progress to remove PCI information from ethdev.
> Please do not add any PCI related structure in eventdev.
> The generic structure is rte_device.

OK. Makes sense. A grep of "rte_device" shows that none of the subsystems
implement it yet and that the work is in progress. I will change to rte_device
when it is mainline. The skeleton eventdev driver, based on the PCI bus, needs
this for the moment.


> 
> > +struct rte_event_dev_config {
> > +	uint32_t dequeue_wait_ns;
> > +	/**< rte_event_dequeue() wait for *dequeue_wait_ns* ns on this device.
> 
> Please explain exactly when the wait occurs and why.

Here is the explanation from rte_event_dequeue() API definition,
-
@param wait
0 - no-wait, returns immediately if there is no event.
>0 - wait for the event, if the device is configured with
RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT then this function will wait until
the event available or *wait* time.
if the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
then this function will wait until the event available or *dequeue_wait_ns*
                                                      ^^^^^^^^^^^^^^^^^^^^^^
ns which was previously supplied to rte_event_dev_configure()
-
This provides the application with control over how long the
implementation should wait if an event is not available.

Let me know what exact changes are required if details are not enough in
rte_event_dequeue() API definition.
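
A minimal sketch of the intended flow (the rte_event_dev_configure() and
rte_event_dequeue() call forms used below are assumptions, since the final
signatures are still under discussion; dev_id/port_id are set up elsewhere):

	struct rte_event_dev_config cfg = {
		.dequeue_wait_ns = 10000, /* wait up to 10 us when no event is ready */
		/* other fields omitted for brevity */
	};
	struct rte_event ev;

	rte_event_dev_configure(dev_id, &cfg);
	/* ... queue/port setup and rte_event_dev_start() ... */

	/* without RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT, a non-zero wait means the
	 * call may block up to the device-level dequeue_wait_ns set above */
	rte_event_dequeue(dev_id, port_id, &ev, 1);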

> 
> > +	 * This value should be in the range of *min_dequeue_wait_ns* and
> > +	 * *max_dequeue_wait_ns* which previously provided in
> > +	 * rte_event_dev_info_get()
> > +	 * \see RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
> 
> I think the @see syntax would be more consistent than \see.

OK. I will change to @see

> 
> > +	uint8_t nb_event_port_dequeue_depth;
> > +	/**< Number of dequeue queue depth for any event port on this device.
> 
> I think it deserves more explanations.

see below

> 
> > +	uint32_t event_dev_cfg;
> > +	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
> 
> How this field differs from others in the struct?
> Should it be named flags?

OK. I will change to flags

> 
> > +	uint32_t event_queue_cfg; /**< Queue config flags(EVENT_QUEUE_CFG_) */
> 
> Same comment about the naming of this field for the event_queue config struct.

OK. I will change to flags

> 
> > +/** Event port configuration structure */
> > +struct rte_event_port_conf {
> > +	int32_t new_event_threshold;
> > +	/**< A backpressure threshold for new event enqueues on this port.
> > +	 * Use for *closed system* event dev where event capacity is limited,
> > +	 * and cannot exceed the capacity of the event dev.
> > +	 * Configuring ports with different thresholds can make higher priority
> > +	 * traffic less likely to  be backpressured.
> > +	 * For example, a port used to inject NIC Rx packets into the event dev
> > +	 * can have a lower threshold so as not to overwhelm the device,
> > +	 * while ports used for worker pools can have a higher threshold.
> > +	 * This value cannot exceed the *nb_events_limit*
> > +	 * which previously supplied to rte_event_dev_configure()
> > +	 */
> > +	uint8_t dequeue_depth;
> > +	/**< Configure number of bulk dequeues for this event port.
> > +	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> > +	 * which previously supplied to rte_event_dev_configure()
> > +	 */
> > +	uint8_t enqueue_depth;
> > +	/**< Configure number of bulk enqueues for this event port.
> > +	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> > +	 * which previously supplied to rte_event_dev_configure()
> > +	 */
> > +};
> 
> The depth configuration is not clear to me.

Basically, it is the maximum number of events that can be enqueued/dequeued at
a time from a given event port. A depth of one == non-burst mode.
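
For example (a sketch only; the rte_event_port_setup() call form is an
assumption, the struct fields are the ones from this patch):

	/* burst-capable worker port: up to 16 events per enqueue/dequeue call */
	struct rte_event_port_conf worker_conf = {
		.new_event_threshold = 4096,
		.dequeue_depth = 16,
		.enqueue_depth = 16,
	};
	/* non-burst port: one event per call */
	struct rte_event_port_conf single_conf = {
		.new_event_threshold = 1024,
		.dequeue_depth = 1,
		.enqueue_depth = 1,
	};

	rte_event_port_setup(dev_id, 0, &worker_conf);
	rte_event_port_setup(dev_id, 1, &single_conf);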

> 
> > +/* Event types to classify the event source */
> 
> Why this classification is needed?

This is for application pipelining and for cases where the application wants
to know which subsystem generated the event.

example packet forwarding loop on the worker cores:
while(1) {
	ev = dequeue()
	// event from ethdev subsystem
	if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
		- swap the mac address
		- push to atomic queue for ingress flow order maintenance
		  by CORE
	/* events from core */
	} else if (ev.event_type == RTE_EVENT_TYPE_CORE) {

	}
	enqueue(ev);
}

> 
> > +#define RTE_EVENT_TYPE_ETHDEV           0x0
> > +/**< The event generated from ethdev subsystem */
> > +#define RTE_EVENT_TYPE_CRYPTODEV        0x1
> > +/**< The event generated from crypodev subsystem */
> > +#define RTE_EVENT_TYPE_TIMERDEV         0x2
> > +/**< The event generated from timerdev subsystem */
> > +#define RTE_EVENT_TYPE_CORE             0x3
> > +/**< The event generated from core.
> 
> What is core?

The events are generated by an lcore for pipelining. Any suggestion for a
better name? lcore?

> 
> > +/* Event enqueue operations */
> 
> I feel a longer explanation is needed here to describe
> what is an operation and where this data is useful.

I will try to add it. The v1 has a lengthy description for release
because it is not self-explanatory.

> 
> > +#define RTE_EVENT_OP_NEW                0
> > +/**< New event without previous context */
> > +#define RTE_EVENT_OP_FORWARD            1
> > +/**< Re-enqueue previously dequeued event */
> > +#define RTE_EVENT_OP_RELEASE            2
> 
> There is no comment for the release operation.

It's there. See the next comment.

> 
> > +/**
> > + * Release the flow context associated with the schedule type.
> > + *
> [...]
> > + */
> 
> There is no function declaration below this comment.

This comment was for the previous RTE_EVENT_OP_RELEASE. I will fix the doxygen
formatting issue.

> 
> > +/**
> > + * The generic *rte_event* structure to hold the event attributes
> > + * for dequeue and enqueue operation
> > + */
> > +struct rte_event {
> > +	/** WORD0 */
> > +	RTE_STD_C11
> > +	union {
> > +		uint64_t event;
> [...]
> > +	};
> > +	/** WORD1 */
> > +	RTE_STD_C11
> > +	union {
> > +		uintptr_t event_ptr;
> 
> I wonder if it can be a problem to have the size of this field
> not constant across machines.

OK. Maybe I can make it "uint64_t u64" to reserve space, or I can
remove it.

> 
> > +		/**< Opaque event pointer */
> > +		struct rte_mbuf *mbuf;
> > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> 
> How do we know that an event is associated with mbuf?

By looking at the event source/type RTE_EVENT_TYPE_*

> Does it mean that such events are always converted into mbuf even if the
> application does not need it?

Hardware depends on getting the physical address of the event, so any
struct that has "phys_addr_t buf_physaddr" works.

> 
> > +struct rte_eventdev_driver;
> > +struct rte_eventdev_ops;
> 
> I think it is better to split API and driver interface in two files.
> (we should do this split in ethdev)

I thought so, but then the "static inline" versions of the northbound
API (like rte_event_enqueue) will go into another file (since the
implementation needs to dereference "dev->data->ports[port_id]"). Do you want it that way?
I would like to keep all the northbound API in rte_eventdev.h and none of it
in rte_eventdev_pmd.h.

Any suggestions?

> 
> > +/**
> > + * Enqueue the event object supplied in the *rte_event* structure on an
> > + * event device designated by its *dev_id* through the event port specified by
> > + * *port_id*. The event object specifies the event queue on which this
> > + * event will be enqueued.
> > + *
> > + * @param dev_id
> > + *   Event device identifier.
> > + * @param port_id
> > + *   The identifier of the event port.
> > + * @param ev
> > + *   Pointer to struct rte_event
> > + *
> > + * @return
> > + *  - 0 on success
> > + *  - <0 on failure. Failure can occur if the event port's output queue is
> > + *     backpressured, for instance.
> > + */
> > +static inline int
> > +rte_event_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
> 
> Is it really needed to have non-burst variant of enqueue/dequeue?

Yes. Certain HW can work only with non-burst variants.
> 
> > +/**
> > + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> > + *
> > + * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> > + * application can use this function to convert wait value in nanoseconds to
> > + * implementations specific wait value supplied in rte_event_dequeue()
> 
> Why is it implementation-specific?
> Why this conversion is not internal in the driver?

This is for performance optimization; otherwise drivers would need to
convert ns to ticks in the fast path.
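
Something along these lines (a sketch only; the rte_event_dequeue_wait_time()
call form is an assumption, and it also assumes the non-burst dequeue returns
the number of events retrieved, which is not settled yet):

	uint64_t wait_ticks = 0;

	/* convert ns to the implementation-specific tick value once, at setup */
	rte_event_dequeue_wait_time(dev_id, 10000 /* ns */, &wait_ticks);

	for (;;) {
		struct rte_event ev;

		/* fast path reuses the pre-computed value, no per-call conversion */
		if (rte_event_dequeue(dev_id, port_id, &ev, wait_ticks) == 0)
			continue;
		/* ... process ev ... */
	}
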

> 
> End of review for this patch ;)

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-24  1:59     ` Jerin Jacob
@ 2016-11-24 12:26       ` Bruce Richardson
  2016-11-24 15:35       ` Thomas Monjalon
  1 sibling, 0 replies; 109+ messages in thread
From: Bruce Richardson @ 2016-11-24 12:26 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Thomas Monjalon, dev, harry.van.haaren, hemant.agrawal, gage.eads

On Thu, Nov 24, 2016 at 07:29:13AM +0530, Jerin Jacob wrote:
> On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:

Just some comments of mine triggered by Thomas' comments.

<snip>
> > + */
> > > +static inline int
> > > +rte_event_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
> > 
> > Is it really needed to have non-burst variant of enqueue/dequeue?
> 
> Yes. certain HW can work only with non burst variants.

In those cases, is it not acceptable just to have the dequeue_burst
function return 1 all the time? It would allow apps to be more portable
between burst and non-burst variants, would it not?
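
For illustration (assuming a burst prototype along the lines of
rte_event_dequeue_burst(dev_id, port_id, ev[], nb_events, wait), which is
not part of this patch set):

	struct rte_event evs[16];
	uint16_t i, nb;

	/* an app coded only against the burst variant stays portable: a
	 * non-burst-capable PMD can simply return at most one event per call */
	nb = rte_event_dequeue_burst(dev_id, port_id, evs, 16, 0);
	for (i = 0; i < nb; i++) {
		/* ... process evs[i] ... */
	}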

> > 
> > > +/**
> > > + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> > > + *
> > > + * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> > > + * application can use this function to convert wait value in nanoseconds to
> > > + * implementations specific wait value supplied in rte_event_dequeue()
> > 
> > Why is it implementation-specific?
> > Why this conversion is not internal in the driver?
> 
> This is for performance optimization, otherwise in drivers
> need to convert ns to ticks in "fast path"
> 
> > 
Is that really likely to be a performance bottleneck? I would expect
modern cores to fly through basic arithmetic in a negligible number of
cycles.

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-24  1:59     ` Jerin Jacob
  2016-11-24 12:26       ` Bruce Richardson
@ 2016-11-24 15:35       ` Thomas Monjalon
  2016-11-25  0:23         ` Jerin Jacob
  1 sibling, 1 reply; 109+ messages in thread
From: Thomas Monjalon @ 2016-11-24 15:35 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads

2016-11-24 07:29, Jerin Jacob:
> On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > 2016-11-18 11:14, Jerin Jacob:
> > > +Eventdev API - EXPERIMENTAL
> > > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > +F: lib/librte_eventdev/
> > 
> > OK to mark it experimental.
> > What is the plan to remove the experimental word?
> 
> IMO, EXPERIMENTAL status can be changed when
> - At least two event drivers available(Intel and Cavium are working on
>   SW and HW event drivers)
> - Functional test applications are fine with at least two drivers
> - Portable example application to showcase the features of the library
> - eventdev integration with another dpdk subsystem such as ethdev
> 
> Thoughts?. I am not sure the criteria used in cryptodev case.

Sounds good.
We will be more confident when drivers and tests will be implemented.

I think the roadmap for the SW driver targets the release 17.05.
Do you still plan 17.02 for this API and the Cavium driver?

> > > +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> > > +/**< Skeleton event device PMD name */
> > 
> > I do not understand this #define.
> 
> Applications can explicitly request the a specific driver though driver
> name. This will go as argument to rte_event_dev_get_dev_id(const char *name).
> The reason for keeping this #define in rte_eventdev.h is that,
> application needs to include only rte_eventdev.h not rte_eventdev_pmd.h.

So each driver must register its name in the API?
Is it really needed?

> > > +struct rte_event_dev_config {
> > > +	uint32_t dequeue_wait_ns;
> > > +	/**< rte_event_dequeue() wait for *dequeue_wait_ns* ns on this device.
> > 
> > Please explain exactly when the wait occurs and why.
> 
> Here is the explanation from rte_event_dequeue() API definition,
> -
> @param wait
> 0 - no-wait, returns immediately if there is no event.
> >0 - wait for the event, if the device is configured with
> RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT then this function will wait until
> the event available or *wait* time.
> if the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
> then this function will wait until the event available or *dequeue_wait_ns*
>                                                       ^^^^^^^^^^^^^^^^^^^^^^
> ns which was previously supplied to rte_event_dev_configure()
> -
> This is provides the application to have control over, how long the
> implementation should wait if event is not available.
> 
> Let me know what exact changes are required if details are not enough in
> rte_event_dequeue() API definition.

Maybe "timeout" would be a better name.
It waits only if there is nothing in the queue.
It would be worth highlighting in this comment that this parameter
makes the dequeue function a blocking call.

> > > +/** Event port configuration structure */
> > > +struct rte_event_port_conf {
> > > +	int32_t new_event_threshold;
> > > +	/**< A backpressure threshold for new event enqueues on this port.
> > > +	 * Use for *closed system* event dev where event capacity is limited,
> > > +	 * and cannot exceed the capacity of the event dev.
> > > +	 * Configuring ports with different thresholds can make higher priority
> > > +	 * traffic less likely to  be backpressured.
> > > +	 * For example, a port used to inject NIC Rx packets into the event dev
> > > +	 * can have a lower threshold so as not to overwhelm the device,
> > > +	 * while ports used for worker pools can have a higher threshold.
> > > +	 * This value cannot exceed the *nb_events_limit*
> > > +	 * which previously supplied to rte_event_dev_configure()
> > > +	 */
> > > +	uint8_t dequeue_depth;
> > > +	/**< Configure number of bulk dequeues for this event port.
> > > +	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> > > +	 * which previously supplied to rte_event_dev_configure()
> > > +	 */
> > > +	uint8_t enqueue_depth;
> > > +	/**< Configure number of bulk enqueues for this event port.
> > > +	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> > > +	 * which previously supplied to rte_event_dev_configure()
> > > +	 */
> > > +};
> > 
> > The depth configuration is not clear to me.
> 
> Basically, the maximum number of events that can be enqueued/dequeued at a time
> from a given event port. A depth of one == non-burst mode.

OK so depth is the queue size. Please could you reword?

> > > +/* Event types to classify the event source */
> > 
> > Why this classification is needed?
> 
> This is for application pipelining and for cases where the application wants to know which
> subsystem generated the event.
> 
> example packet forwarding loop on the worker cores:
> while(1) {
> 	ev = dequeue()
> 	// event from ethdev subsystem
> 	if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
> 		- swap the mac address
> 		- push to atomic queue for ingress flow order maintenance
> 		  by CORE
> 	/* events from core */
> 	} else if (ev.event_type == RTE_EVENT_TYPE_CORE) {
> 
> 	}
> 	enqueue(ev);
> }

I don't know why but I feel this classification is weak.
You need to track the source of the event. Does it make sense to go beyond
and identify the source device?

> > > +#define RTE_EVENT_TYPE_ETHDEV           0x0
> > > +/**< The event generated from ethdev subsystem */
> > > +#define RTE_EVENT_TYPE_CRYPTODEV        0x1
> > > +/**< The event generated from crypodev subsystem */
> > > +#define RTE_EVENT_TYPE_TIMERDEV         0x2
> > > +/**< The event generated from timerdev subsystem */
> > > +#define RTE_EVENT_TYPE_CORE             0x3
> > > +/**< The event generated from core.
> > 
> > What is core?
> 
> The events are generated by an lcore for pipelining. Any suggestion for a
> better name? lcore?

What about CPU or SW?

> > > +		/**< Opaque event pointer */
> > > +		struct rte_mbuf *mbuf;
> > > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> > 
> > How do we know that an event is associated with mbuf?
> 
> By looking at the event source/type RTE_EVENT_TYPE_*
> 
> > Does it mean that such events are always converted into mbuf even if the
> > application does not need it?
> 
> Hardware depends on getting the physical address of the event, so any
> struct that has "phys_addr_t buf_physaddr" works.

I do not understand.

I thought that decoding the event would be the responsibility of the app
by calling a function like
rte_eventdev_convert_to_mbuf(struct rte_event *, struct rte_mbuf *).

> > > +struct rte_eventdev_driver;
> > > +struct rte_eventdev_ops;
> > 
> > I think it is better to split API and driver interface in two files.
> > (we should do this split in ethdev)
> 
> I thought so, but then the "static inline" versions of the northbound
> API (like rte_event_enqueue) would go into another file (due to the fact that the
> implementation needs to dereference "dev->data->ports[port_id]"). Do you want it that way?
> I would like to keep all the northbound APIs in rte_eventdev.h and not any of them
> in rte_eventdev_pmd.h.

My comment was confusing.
You are doing 2 files, one for API (what you call northbound I think)
and the other one for driver interface (what you call southbound I think),
it's very fine.

> > > +/**
> > > + * Enqueue the event object supplied in the *rte_event* structure on an
> > > + * event device designated by its *dev_id* through the event port specified by
> > > + * *port_id*. The event object specifies the event queue on which this
> > > + * event will be enqueued.
> > > + *
> > > + * @param dev_id
> > > + *   Event device identifier.
> > > + * @param port_id
> > > + *   The identifier of the event port.
> > > + * @param ev
> > > + *   Pointer to struct rte_event
> > > + *
> > > + * @return
> > > + *  - 0 on success
> > > + *  - <0 on failure. Failure can occur if the event port's output queue is
> > > + *     backpressured, for instance.
> > > + */
> > > +static inline int
> > > +rte_event_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
> > 
> > Is it really needed to have non-burst variant of enqueue/dequeue?
> 
> Yes. Certain HW can work only with non-burst variants.

Same comment as Bruce, we must keep only the burst variant.
We cannot have different API for different HW.

> > > +/**
> > > + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> > > + *
> > > + * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> > > + * application can use this function to convert wait value in nanoseconds to
> > > + * implementations specific wait value supplied in rte_event_dequeue()
> > 
> > Why is it implementation-specific?
> > Why this conversion is not internal in the driver?
> 
> This is a performance optimization; otherwise drivers would
> need to convert ns to ticks in the "fast path".

So why not defining the unit of this timeout as CPU cycles like the ones
returned by rte_get_timer_cycles()?

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-18  5:44 ` [PATCH 1/4] eventdev: introduce event driven programming model Jerin Jacob
  2016-11-23 18:39   ` Thomas Monjalon
@ 2016-11-24 16:24   ` Bruce Richardson
  2016-11-24 19:30     ` Jerin Jacob
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
  2 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-11-24 16:24 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, harry.van.haaren, hemant.agrawal, gage.eads

On Fri, Nov 18, 2016 at 11:14:59AM +0530, Jerin Jacob wrote:
> In a polling model, lcores poll ethdev ports and associated
> rx queues directly to look for packets. In an event driven model,
> by contrast, lcores call the scheduler that selects packets for
> them based on programmer-specified criteria. The eventdev library
> adds support for an event driven programming model, which offers
> applications automatic multicore scaling, dynamic load balancing,
> pipelining, packet ingress order maintenance and
> synchronization services to simplify application packet processing.
> 
> By introducing event driven programming model, DPDK can support
> both polling and event driven programming models for packet processing,
> and applications are free to choose whatever model
> (or combination of the two) that best suits their needs.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---

Hi Jerin,

Thanks for the patchset. A few minor comments in general on the API that
we found from working with it (thus far - more may follow :-) ).

1. Priorities: priorities are used in a number of places in the API, but
   all are uint8_t types and have their own MAX/NORMAL/MIN values. I think
   it would be simpler for the user just to have one priority type in the
   library, and use that everywhere. I suggest using RTE_EVENT_PRIORITY_*
   and drop the separate defines for SERVICE_PRIORITY, and QUEUE_PRIORITY
   etc. Ideally, I'd see things like this converted to enums too, rather
   than defines, but I'm not sure it's possible in this case.

2. Functions for config and setup can have their structure parameter
   types as const as they don't/shouldn't change the values internally.
   So add "const" to parameters to:
     rte_event_dev_configure()
     rte_event_queue_setup()
     rte_event_port_setup()
     rte_event_port_link()

3. In the event schedule() function, the dev->schedule() function needs the
   dev instance pointer passed in as a parameter.

4. The event op values and the event type values would be better as
   enums rather than as a set of #defines.

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-24 16:24   ` Bruce Richardson
@ 2016-11-24 19:30     ` Jerin Jacob
  0 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-24 19:30 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, harry.van.haaren, hemant.agrawal, gage.eads

On Thu, Nov 24, 2016 at 04:24:11PM +0000, Bruce Richardson wrote:
> On Fri, Nov 18, 2016 at 11:14:59AM +0530, Jerin Jacob wrote:
> > In a polling model, lcores poll ethdev ports and associated
> > rx queues directly to look for packet. In an event driven model,
> > by contrast, lcores call the scheduler that selects packets for
> > them based on programmer-specified criteria. Eventdev library
> > adds support for event driven programming model, which offer
> > applications automatic multicore scaling, dynamic load balancing,
> > pipelining, packet ingress order maintenance and
> > synchronization services to simplify application packet processing.
> > 
> > By introducing event driven programming model, DPDK can support
> > both polling and event driven programming models for packet processing,
> > and applications are free to choose whatever model
> > (or combination of the two) that best suits their needs.
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > ---
> 
> Hi Jerin,
> 
> Thanks for the patchset. A few minor comments in general on the API that
> we found from working with it (thus far - more may follow :-) ).

Thanks Bruce.

> 
> 1. Priorities: priorities are used in a number of places in the API, but
>    all are uint8_t types and have their own MAX/NORMAL/MIN values. I think
>    it would be simpler for the user just to have one priority type in the
>    library, and use that everywhere. I suggest using RTE_EVENT_PRIORITY_*
>    and drop the separate defines for SERVICE_PRIORITY, and QUEUE_PRIORITY
>    etc. Ideally, I'd see things like this converted to enums too, rather
>    than defines, but I'm not sure it's possible in this case.

OK. I will address it in v2
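
For example, something along these lines (names and values are illustrative
only, not a final proposal):

	#define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
	#define RTE_EVENT_DEV_PRIORITY_NORMAL  128
	#define RTE_EVENT_DEV_PRIORITY_LOWEST  255

	/* the same 0..255 range would then be used for the queue, flow and
	 * event priority fields instead of separate per-object defines */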

> 
> 2. Functions for config and setup can have their structure parameter
>    types as const as they don't/shouldn't change the values internally.
>    So add "const" to parameters to:
>      rte_event_dev_configure()
>      rte_event_queue_setup()
>      rte_event_port_setup()
>      rte_event_port_link()
> 

OK. I will address it in v2

> 3. in event schedule() function, the dev->schedule() function needs the
>    dev instance pointer passed in as parameter.

OK. I will address it in v2

> 
> 4. The event op values and the event type values would be better as
>    enums rather than as a set of #defines.

OK. I will address it in v2

I will reply to your other comments in Thomas's email.

> 
> Regards,
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-24 15:35       ` Thomas Monjalon
@ 2016-11-25  0:23         ` Jerin Jacob
  2016-11-25 11:00           ` Bruce Richardson
  2016-11-25 11:59           ` Van Haaren, Harry
  0 siblings, 2 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-25  0:23 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads

On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> 2016-11-24 07:29, Jerin Jacob:
> > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > 2016-11-18 11:14, Jerin Jacob:
> > > > +Eventdev API - EXPERIMENTAL
> > > > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > +F: lib/librte_eventdev/
> > > 
> > > OK to mark it experimental.
> > > What is the plan to remove the experimental word?
> > 
> > IMO, EXPERIMENTAL status can be changed when
> > - At least two event drivers available(Intel and Cavium are working on
> >   SW and HW event drivers)
> > - Functional test applications are fine with at least two drivers
> > - Portable example application to showcase the features of the library
> > - eventdev integration with another dpdk subsystem such as ethdev
> > 
> > Thoughts?. I am not sure the criteria used in cryptodev case.
> 
> Sounds good.
> We will be more confident when drivers and tests will be implemented.
> 
> I think the roadmap for the SW driver targets the release 17.05.
> Do you still plan 17.02 for this API and the Cavium driver?

No. 17.02 is too short for up-streaming the Cavium driver. However, I think the API and
skeleton event driver can go in 17.02 if there are no objections.

> 
> > > > +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> > > > +/**< Skeleton event device PMD name */
> > > 
> > > I do not understand this #define.
> > 
> > Applications can explicitly request the a specific driver though driver
> > name. This will go as argument to rte_event_dev_get_dev_id(const char *name).
> > The reason for keeping this #define in rte_eventdev.h is that,
> > application needs to include only rte_eventdev.h not rte_eventdev_pmd.h.
> 
> So each driver must register its name in the API?
> Is it really needed?

Otherwise, how does the application know the name of the driver?
A similar scheme is used in cryptodev.
http://dpdk.org/browse/dpdk/tree/lib/librte_cryptodev/rte_cryptodev.h#n53
No strong opinion here. Open for suggestions.
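
For illustration, the intended usage from the application side is roughly the
following (a sketch only; RTE_STR() stringification of the name macro and a
negative return from rte_event_dev_get_dev_id() on lookup failure are
assumptions here):

	int dev_id;

	/* look up the instance created by a specific PMD via its registered name */
	dev_id = rte_event_dev_get_dev_id(RTE_STR(EVENTDEV_NAME_SKELETON_PMD));
	if (dev_id < 0)
		rte_exit(EXIT_FAILURE, "skeleton eventdev not found\n");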

> 
> > > > +struct rte_event_dev_config {
> > > > +	uint32_t dequeue_wait_ns;
> > > > +	/**< rte_event_dequeue() wait for *dequeue_wait_ns* ns on this device.
> > > 
> > > Please explain exactly when the wait occurs and why.
> > 
> > Here is the explanation from rte_event_dequeue() API definition,
> > -
> > @param wait
> > 0 - no-wait, returns immediately if there is no event.
> > >0 - wait for the event, if the device is configured with
> > RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT then this function will wait until
> > the event available or *wait* time.
> > if the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
> > then this function will wait until the event available or *dequeue_wait_ns*
> >                                                       ^^^^^^^^^^^^^^^^^^^^^^
> > ns which was previously supplied to rte_event_dev_configure()
> > -
> > This is provides the application to have control over, how long the
> > implementation should wait if event is not available.
> > 
> > Let me know what exact changes are required if details are not enough in
> > rte_event_dequeue() API definition.
> 
> Maybe that timeout would be a better name.
> It waits only if there is nothing in the queue.
> It can be interesting to highlight in this comment that this parameter
> makes the dequeue function a blocking call.

OK. I will change to timeout then

> 
> > > > +/** Event port configuration structure */
> > > > +struct rte_event_port_conf {
> > > > +	int32_t new_event_threshold;
> > > > +	/**< A backpressure threshold for new event enqueues on this port.
> > > > +	 * Use for *closed system* event dev where event capacity is limited,
> > > > +	 * and cannot exceed the capacity of the event dev.
> > > > +	 * Configuring ports with different thresholds can make higher priority
> > > > +	 * traffic less likely to  be backpressured.
> > > > +	 * For example, a port used to inject NIC Rx packets into the event dev
> > > > +	 * can have a lower threshold so as not to overwhelm the device,
> > > > +	 * while ports used for worker pools can have a higher threshold.
> > > > +	 * This value cannot exceed the *nb_events_limit*
> > > > +	 * which previously supplied to rte_event_dev_configure()
> > > > +	 */
> > > > +	uint8_t dequeue_depth;
> > > > +	/**< Configure number of bulk dequeues for this event port.
> > > > +	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> > > > +	 * which previously supplied to rte_event_dev_configure()
> > > > +	 */
> > > > +	uint8_t enqueue_depth;
> > > > +	/**< Configure number of bulk enqueues for this event port.
> > > > +	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> > > > +	 * which previously supplied to rte_event_dev_configure()
> > > > +	 */
> > > > +};
> > > 
> > > The depth configuration is not clear to me.
> > 
> > Basically the maximum number of events can be enqueued/dequeued at time
> > from a given event port. depth of one == non burst mode.
> 
> OK so depth is the queue size. Please could you reword?

OK
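
For example, the intent is roughly the following (field names as in this
patch; the values are purely illustrative):

	struct rte_event_port_conf conf = {
		.new_event_threshold = 1024, /* <= nb_events_limit given to configure */
		.dequeue_depth = 16,  /* max events returned by one dequeue call */
		.enqueue_depth = 16,  /* max events accepted by one enqueue call */
	};

	rte_event_port_setup(dev_id, port_id, &conf);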

> 
> > > > +/* Event types to classify the event source */
> > > 
> > > Why this classification is needed?
> > 
> > This for application pipeling and the cases like, if application wants to know which
> > subsystem generated the event.
> > 
> > example packet forwarding loop on the worker cores:
> > while(1) {
> > 	ev = dequeue()
> > 	// event from ethdev subsystem
> > 	if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
> > 		- swap the mac address
> > 		- push to atomic queue for ingress flow order maintenance
> > 		  by CORE
> > 	/* events from core */
> > 	} else if (ev.event_type == RTE_EVENT_TYPE_CORE) {
> > 
> > 	}
> > 	enqueue(ev);
> > }
> 
> I don't know why but I feel this classification is weak.
> You need to track the source of the event. Does it make sense to go beyond
> and identify the source device?

No, dequeue has a dev_id argument, so the event comes only from that device.

> 
> > > > +#define RTE_EVENT_TYPE_ETHDEV           0x0
> > > > +/**< The event generated from ethdev subsystem */
> > > > +#define RTE_EVENT_TYPE_CRYPTODEV        0x1
> > > > +/**< The event generated from crypodev subsystem */
> > > > +#define RTE_EVENT_TYPE_TIMERDEV         0x2
> > > > +/**< The event generated from timerdev subsystem */
> > > > +#define RTE_EVENT_TYPE_CORE             0x3
> > > > +/**< The event generated from core.
> > > 
> > > What is core?
> > 
> > The event are generated by lcore for pipeling. Any suggestion for
> > better name? lcore?
> 
> What about CPU or SW?

No strong opinion here. I will go with CPU then

> 
> > > > +		/**< Opaque event pointer */
> > > > +		struct rte_mbuf *mbuf;
> > > > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> > > 
> > > How do we know that an event is associated with mbuf?
> > 
> > By looking at the event source/type RTE_EVENT_TYPE_*
> > 
> > > Does it mean that such events are always converted into mbuf even if the
> > > application does not need it?
> > 
> > Hardware has dependency on getting physical address of the event, so any
> > struct that has "phys_addr_t buf_physaddr" works.
> 
> I do not understand.

In HW based implementations, the event pointer will be submitted to HW.
Since HW can't understand a virtual address and it needs to be
converted to a physical address, any DPDK object that provides phys_addr_t,
such as an mbuf, can be used with libeventdev.

> 
> I tought that decoding the event would be the responsibility of the app
> by calling a function like
> rte_eventdev_convert_to_mbuf(struct rte_event *, struct rte_mbuf *).

It can be, but it is costly, i.e. yet another function pointer based
driver interface on the fast path. Instead, the driver itself can
convert to an mbuf (in the case of an ETHDEV device) and tag the source/event type
as RTE_EVENT_TYPE_ETHDEV.
IMO the proposed scheme also helps the SW based implementation, as there is no real
mbuf conversion. Something we can revisit in the ethdev integration if
required.
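
To make the intent concrete, a rough sketch of the worker-side dispatch
(field and type names as in this series; the dequeue call is abbreviated and
its exact signature may change):

	struct rte_event ev;

	if (rte_event_dequeue(dev_id, port_id, &ev, wait)) {
		if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
			/* the driver tagged the source, so ev.mbuf is valid here */
			struct rte_mbuf *m = ev.mbuf;
			/* ... process m, e.g. swap MAC addresses ... */
		} else if (ev.event_type == RTE_EVENT_TYPE_CORE) {
			/* event injected by another lcore for pipelining */
		}
		rte_event_enqueue(dev_id, port_id, &ev);
	}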

> 
> > > > +struct rte_eventdev_driver;
> > > > +struct rte_eventdev_ops;
> > > 
> > > I think it is better to split API and driver interface in two files.
> > > (we should do this split in ethdev)
> > 
> > I thought so, but then the "static inline" versions of northbound
> > API(like rte_event_enqueue) will go another file(due to the fact that
> > implementation need to deference "dev->data->ports[port_id]"). Do you want that way?
> > I would like to keep all northbound API in rte_eventdev.h and not any of them
> > in rte_eventdev_pmd.h.
> 
> My comment was confusing.
> You are doing 2 files, one for API (what you call northbound I think)
> and the other one for driver interface (what you call southbound I think),
> it's very fine.
> 
> > > > +/**
> > > > + * Enqueue the event object supplied in the *rte_event* structure on an
> > > > + * event device designated by its *dev_id* through the event port specified by
> > > > + * *port_id*. The event object specifies the event queue on which this
> > > > + * event will be enqueued.
> > > > + *
> > > > + * @param dev_id
> > > > + *   Event device identifier.
> > > > + * @param port_id
> > > > + *   The identifier of the event port.
> > > > + * @param ev
> > > > + *   Pointer to struct rte_event
> > > > + *
> > > > + * @return
> > > > + *  - 0 on success
> > > > + *  - <0 on failure. Failure can occur if the event port's output queue is
> > > > + *     backpressured, for instance.
> > > > + */
> > > > +static inline int
> > > > +rte_event_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
> > > 
> > > Is it really needed to have non-burst variant of enqueue/dequeue?
> > 
> > Yes. certain HW can work only with non burst variants.
> 
> Same comment as Bruce, we must keep only the burst variant.
> We cannot have different API for different HW.

I don't think there is any portability issue here, I can explain.

At the application level, we have two more use cases to deal with for the
non-burst variant:

- latency critical work
- on dequeue, if the application wants to deal with only one flow (i.e. to
  avoid processing two different application flows and thus cache thrashing)

Selection of the burst variants will be based on
rte_event_dev_info_get() and rte_event_dev_configure() (see max_event_port_dequeue_depth,
max_event_port_enqueue_depth, nb_event_port_dequeue_depth, nb_event_port_enqueue_depth).
So I don't think there is a portability issue here, and I don't want to waste
CPU cycles on the for loop if the application is known to be working with the
non-burst variant, like below:

nb_events = rte_event_dequeue_burst();
for (i = 0; i < nb_events; i++) {
	/* process ev[i] */
}

And most importantly, the NPU can get almost the same throughput
without the burst variant, so why not?

> 
> > > > +/**
> > > > + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> > > > + *
> > > > + * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> > > > + * application can use this function to convert wait value in nanoseconds to
> > > > + * implementations specific wait value supplied in rte_event_dequeue()
> > > 
> > > Why is it implementation-specific?
> > > Why this conversion is not internal in the driver?
> > 
> > This is for performance optimization, otherwise in drivers
> > need to convert ns to ticks in "fast path"
> 
> So why not defining the unit of this timeout as CPU cycles like the ones
> returned by rte_get_timer_cycles()?

Because the HW co-processor can run in a different clock domain. It need not be at
the CPU frequency.
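
A rough sketch of the intended usage (the conversion function exists in this
series, but its exact signature is assumed here and may differ):

	uint64_t wait_ticks;

	/* convert once at setup time, outside the fast path */
	wait_ticks = rte_event_dequeue_wait_time(dev_id, 100000 /* ns */);

	while (!done) {
		struct rte_event ev;

		if (rte_event_dequeue(dev_id, port_id, &ev, wait_ticks))
			; /* process ev */
	}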

> 
> 

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-23 19:18   ` Thomas Monjalon
@ 2016-11-25  4:17     ` Jerin Jacob
  2016-11-25  9:55       ` Richardson, Bruce
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-25  4:17 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, bruce.richardson, harry.van.haaren, hemant.agrawal, gage.eads

On Wed, Nov 23, 2016 at 08:18:09PM +0100, Thomas Monjalon wrote:
> 2016-11-18 11:15, Jerin Jacob:
> > This patch set defines the southbound driver interface
> > and implements the common code required for northbound
> > eventdev API interface.
> 
> Please make two separate patches.

OK

> 
> > +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> > +#define RTE_PMD_DEBUG_TRACE(...) \
> > +	rte_pmd_debug_trace(__func__, __VA_ARGS__)
> > +#else
> > +#define RTE_PMD_DEBUG_TRACE(...)
> > +#endif
> 
> I would like to discuss the need for a debug option as there is
> already a log level.

IMO, we don't need this. However, RTE_FUNC_PTR_OR_ERR_RET needs the
definition of RTE_PMD_DEBUG_TRACE in order to compile. I think we can
remove it when it gets fixed in the EAL layer.

> 
> > +/* Logging Macros */
> > +#define EDEV_LOG_ERR(fmt, args...) \
> 
> Every symbols and macros in an exported header must be prefixed by RTE_.
> 
OK. I will fix it

> > +/* Macros to check for valid device */
> > +#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
> 
> Sometimes you use RTE_EVENT_DEV_ and sometimes RTE_EVENTDEV.
> (I prefer the latter).

I chose the naming convention based on the interface. On the API side it
is rte_event_* and on the driver side it is rte_eventdev_*

rte_event_dev_count;
rte_event_dev_get_dev_id
rte_event_dev_socket_id;
rte_event_dev_info_get;
rte_event_dev_configure;
rte_event_dev_start;
rte_event_dev_stop;
rte_event_dev_close;
rte_event_dev_dump;

rte_event_port_default_conf_get;
rte_event_port_setup;
rte_event_port_dequeue_depth;
rte_event_port_enqueue_depth;
rte_event_port_count;
rte_event_port_link;
rte_event_port_unlink;
rte_event_port_links_get;

rte_event_queue_default_conf_get
rte_event_queue_setup;
rte_event_queue_count;
rte_event_queue_priority;

rte_event_dequeue_wait_time;

rte_eventdev_pmd_allocate;
rte_eventdev_pmd_release;
rte_eventdev_pmd_vdev_init;
rte_eventdev_pmd_pci_probe;
rte_eventdev_pmd_pci_remove;

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-25  4:17     ` Jerin Jacob
@ 2016-11-25  9:55       ` Richardson, Bruce
  2016-11-25 23:08         ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Richardson, Bruce @ 2016-11-25  9:55 UTC (permalink / raw)
  To: Jerin Jacob, Thomas Monjalon
  Cc: dev, Van Haaren, Harry, hemant.agrawal, Eads, Gage


> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Friday, November 25, 2016 4:18 AM
> To: Thomas Monjalon <thomas.monjalon@6wind.com>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
> Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com; Eads,
> Gage <gage.eads@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound
> APIs
> 
> On Wed, Nov 23, 2016 at 08:18:09PM +0100, Thomas Monjalon wrote:
> > 2016-11-18 11:15, Jerin Jacob:
> > > This patch set defines the southbound driver interface and
> > > implements the common code required for northbound eventdev API
> > > interface.
> >
> > Please make two separate patches.
> 
> OK
> 
> >
> > > +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> > > +#define RTE_PMD_DEBUG_TRACE(...) \
> > > +	rte_pmd_debug_trace(__func__, __VA_ARGS__) #else #define
> > > +RTE_PMD_DEBUG_TRACE(...) #endif
> >
> > I would like to discuss the need for a debug option as there is
> > already a log level.
> 
> IMO, we don't need this. However, RTE_FUNC_PTR_OR_ERR_RET needs the
> definition of RTE_PMD_DEBUG_TRACE inorder to compile. I think we can
> remove it when it get fixed in EAL layer.
> 
> >
> > > +/* Logging Macros */
> > > +#define EDEV_LOG_ERR(fmt, args...) \
> >
> > Every symbols and macros in an exported header must be prefixed by RTE_.
> >
> OK. I will fix it
> 
> > > +/* Macros to check for valid device */ #define
> > > +RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
> >
> > Sometimes you use RTE_EVENT_DEV_ and sometimes RTE_EVENTDEV.
> > (I prefer the latter).
> 
> I choose the naming conversion based on the interface. API side it is
> rte_event_ and driver side it is rte_eventdev_*
> 
> rte_event_dev_count;
> rte_event_dev_get_dev_id
> rte_event_dev_socket_id;
> rte_event_dev_info_get;
> rte_event_dev_configure;
> rte_event_dev_start;
> rte_event_dev_stop;
> rte_event_dev_close;
> rte_event_dev_dump;
> 
> rte_event_port_default_conf_get;
> rte_event_port_setup;
> rte_event_port_dequeue_depth;
> rte_event_port_enqueue_depth;
> rte_event_port_count;
> rte_event_port_link;
> rte_event_port_unlink;
> rte_event_port_links_get;
> 
> rte_event_queue_default_conf_get
> rte_event_queue_setup;
> rte_event_queue_count;
> rte_event_queue_priority;
> 
> rte_event_dequeue_wait_time;
> 
> rte_eventdev_pmd_allocate;
> rte_eventdev_pmd_release;
> rte_eventdev_pmd_vdev_init;
> rte_eventdev_pmd_pci_probe;
> rte_eventdev_pmd_pci_remove;

For this last set, you are probably OK prefixing with just "rte_event_pmd_" and dropping the "dev" as unnecessary. That makes everything have a prefix of "rte_event_", followed by dev, port, queue, or pmd as appropriate.
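
For illustration, that last set would then read:

	rte_event_pmd_allocate;
	rte_event_pmd_release;
	rte_event_pmd_vdev_init;
	rte_event_pmd_pci_probe;
	rte_event_pmd_pci_remove;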

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-25  0:23         ` Jerin Jacob
@ 2016-11-25 11:00           ` Bruce Richardson
  2016-11-25 13:09             ` Thomas Monjalon
  2016-11-26  2:54             ` Jerin Jacob
  2016-11-25 11:59           ` Van Haaren, Harry
  1 sibling, 2 replies; 109+ messages in thread
From: Bruce Richardson @ 2016-11-25 11:00 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Thomas Monjalon, dev, harry.van.haaren, hemant.agrawal, gage.eads

On Fri, Nov 25, 2016 at 05:53:34AM +0530, Jerin Jacob wrote:
> On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > 2016-11-24 07:29, Jerin Jacob:
> > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > +Eventdev API - EXPERIMENTAL
> > > > > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > +F: lib/librte_eventdev/
> > > > 
> > > > OK to mark it experimental.
> > > > What is the plan to remove the experimental word?
> > > 
> > > IMO, EXPERIMENTAL status can be changed when
> > > - At least two event drivers available(Intel and Cavium are working on
> > >   SW and HW event drivers)
> > > - Functional test applications are fine with at least two drivers
> > > - Portable example application to showcase the features of the library
> > > - eventdev integration with another dpdk subsystem such as ethdev
> > > 
> > > Thoughts?. I am not sure the criteria used in cryptodev case.
> > 
> > Sounds good.
> > We will be more confident when drivers and tests will be implemented.
> > 
> > I think the roadmap for the SW driver targets the release 17.05.
> > Do you still plan 17.02 for this API and the Cavium driver?
> 
> No. 17.02 too short for up-streaming the Cavium driver.However, I think API and
> skeleton event driver can go in 17.02 if there are no objections.
> 
> > 
> > > > > +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> > > > > +/**< Skeleton event device PMD name */
> > > > 
> > > > I do not understand this #define.
> > > 
> > > Applications can explicitly request the a specific driver though driver
> > > name. This will go as argument to rte_event_dev_get_dev_id(const char *name).
> > > The reason for keeping this #define in rte_eventdev.h is that,
> > > application needs to include only rte_eventdev.h not rte_eventdev_pmd.h.
> > 
> > So each driver must register its name in the API?
> > Is it really needed?
> 
> Otherwise how application knows the name of the driver.
> The similar scheme used in cryptodev.
> http://dpdk.org/browse/dpdk/tree/lib/librte_cryptodev/rte_cryptodev.h#n53
> No strong opinion here. Open for suggestions.
> 

I like having a name registered. I think we need a scheme where an app
can find and use an implementation using a specific driver.

> > 
> > > > > +struct rte_event_dev_config {
> > > > > +	uint32_t dequeue_wait_ns;
> > > > > +	/**< rte_event_dequeue() wait for *dequeue_wait_ns* ns on this device.
> > > > 
> > > > Please explain exactly when the wait occurs and why.
> > > 
> > > Here is the explanation from rte_event_dequeue() API definition,
> > > -
> > > @param wait
> > > 0 - no-wait, returns immediately if there is no event.
> > > >0 - wait for the event, if the device is configured with
> > > RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT then this function will wait until
> > > the event available or *wait* time.
> > > if the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
> > > then this function will wait until the event available or *dequeue_wait_ns*
> > >                                                       ^^^^^^^^^^^^^^^^^^^^^^
> > > ns which was previously supplied to rte_event_dev_configure()
> > > -
> > > This is provides the application to have control over, how long the
> > > implementation should wait if event is not available.
> > > 
> > > Let me know what exact changes are required if details are not enough in
> > > rte_event_dequeue() API definition.
> > 
> > Maybe that timeout would be a better name.
> > It waits only if there is nothing in the queue.
> > It can be interesting to highlight in this comment that this parameter
> > makes the dequeue function a blocking call.
> 
> OK. I will change to timeout then
> 
> > 
> > > > > +/** Event port configuration structure */
> > > > > +struct rte_event_port_conf {
> > > > > +	int32_t new_event_threshold;
> > > > > +	/**< A backpressure threshold for new event enqueues on this port.
> > > > > +	 * Use for *closed system* event dev where event capacity is limited,
> > > > > +	 * and cannot exceed the capacity of the event dev.
> > > > > +	 * Configuring ports with different thresholds can make higher priority
> > > > > +	 * traffic less likely to  be backpressured.
> > > > > +	 * For example, a port used to inject NIC Rx packets into the event dev
> > > > > +	 * can have a lower threshold so as not to overwhelm the device,
> > > > > +	 * while ports used for worker pools can have a higher threshold.
> > > > > +	 * This value cannot exceed the *nb_events_limit*
> > > > > +	 * which previously supplied to rte_event_dev_configure()
> > > > > +	 */
> > > > > +	uint8_t dequeue_depth;
> > > > > +	/**< Configure number of bulk dequeues for this event port.
> > > > > +	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> > > > > +	 * which previously supplied to rte_event_dev_configure()
> > > > > +	 */
> > > > > +	uint8_t enqueue_depth;
> > > > > +	/**< Configure number of bulk enqueues for this event port.
> > > > > +	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> > > > > +	 * which previously supplied to rte_event_dev_configure()
> > > > > +	 */
> > > > > +};
> > > > 
> > > > The depth configuration is not clear to me.
> > > 
> > > Basically the maximum number of events can be enqueued/dequeued at time
> > > from a given event port. depth of one == non burst mode.
> > 
> > OK so depth is the queue size. Please could you reword?
> 
> OK
> 
> > 
> > > > > +/* Event types to classify the event source */
> > > > 
> > > > Why this classification is needed?
> > > 
> > > This for application pipeling and the cases like, if application wants to know which
> > > subsystem generated the event.
> > > 
> > > example packet forwarding loop on the worker cores:
> > > while(1) {
> > > 	ev = dequeue()
> > > 	// event from ethdev subsystem
> > > 	if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
> > > 		- swap the mac address
> > > 		- push to atomic queue for ingress flow order maintenance
> > > 		  by CORE
> > > 	/* events from core */
> > > 	} else if (ev.event_type == RTE_EVENT_TYPE_CORE) {
> > > 
> > > 	}
> > > 	enqueue(ev);
> > > }
> > 
> > I don't know why but I feel this classification is weak.
> > You need to track the source of the event. Does it make sense to go beyond
> > and identify the source device?
> 
> No, dequeue has dev_id argument, so event comes only from that device
> 
> > 
> > > > > +#define RTE_EVENT_TYPE_ETHDEV           0x0
> > > > > +/**< The event generated from ethdev subsystem */
> > > > > +#define RTE_EVENT_TYPE_CRYPTODEV        0x1
> > > > > +/**< The event generated from crypodev subsystem */
> > > > > +#define RTE_EVENT_TYPE_TIMERDEV         0x2
> > > > > +/**< The event generated from timerdev subsystem */
> > > > > +#define RTE_EVENT_TYPE_CORE             0x3
> > > > > +/**< The event generated from core.
> > > > 
> > > > What is core?
> > > 
> > > The event are generated by lcore for pipeling. Any suggestion for
> > > better name? lcore?
> > 
> > What about CPU or SW?
> 
> No strong opinion here. I will go with CPU then

If you have no strong opinion, I think I'd prefer SW to CPU, as the main
difference to my mind is that this comes from another SW entity rather
than a hardware block.

> 
> > 
> > > > > +		/**< Opaque event pointer */
> > > > > +		struct rte_mbuf *mbuf;
> > > > > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> > > > 
> > > > How do we know that an event is associated with mbuf?
> > > 
> > > By looking at the event source/type RTE_EVENT_TYPE_*
> > > 
> > > > Does it mean that such events are always converted into mbuf even if the
> > > > application does not need it?
> > > 
> > > Hardware has dependency on getting physical address of the event, so any
> > > struct that has "phys_addr_t buf_physaddr" works.
> > 
> > I do not understand.
> 
> In HW based implementations, the event pointer will be submitted to HW.
> As you know, since HW can't understand the virtual address and it needs
> to converted to the physical address, any DPDK object that provides phys_addr_t
> such as mbuf can be used with libeventdev.
> 
> > 
> > I tought that decoding the event would be the responsibility of the app
> > by calling a function like
> > rte_eventdev_convert_to_mbuf(struct rte_event *, struct rte_mbuf *).
> 
> It can be. But it is costly.i.e Yet another function pointer based
> driver interface on fastpath. Instead, if the driver itself can
> convert to mbuf(in case of ETHDEV device) and tag the source/event type
> as RTE_EVENT_TYPE_ETHDEV.
> IMO the proposed schemed helps in SW based implementation as their no real
> mbuf conversation. Something we can revisit in ethdev integration if
> required.
> 
> > 
> > > > > +struct rte_eventdev_driver;
> > > > > +struct rte_eventdev_ops;
> > > > 
> > > > I think it is better to split API and driver interface in two files.
> > > > (we should do this split in ethdev)
> > > 
> > > I thought so, but then the "static inline" versions of northbound
> > > API(like rte_event_enqueue) will go another file(due to the fact that
> > > implementation need to deference "dev->data->ports[port_id]"). Do you want that way?
> > > I would like to keep all northbound API in rte_eventdev.h and not any of them
> > > in rte_eventdev_pmd.h.
> > 
> > My comment was confusing.
> > You are doing 2 files, one for API (what you call northbound I think)
> > and the other one for driver interface (what you call southbound I think),
> > it's very fine.
> > 
> > > > > +/**
> > > > > + * Enqueue the event object supplied in the *rte_event* structure on an
> > > > > + * event device designated by its *dev_id* through the event port specified by
> > > > > + * *port_id*. The event object specifies the event queue on which this
> > > > > + * event will be enqueued.
> > > > > + *
> > > > > + * @param dev_id
> > > > > + *   Event device identifier.
> > > > > + * @param port_id
> > > > > + *   The identifier of the event port.
> > > > > + * @param ev
> > > > > + *   Pointer to struct rte_event
> > > > > + *
> > > > > + * @return
> > > > > + *  - 0 on success
> > > > > + *  - <0 on failure. Failure can occur if the event port's output queue is
> > > > > + *     backpressured, for instance.
> > > > > + */
> > > > > +static inline int
> > > > > +rte_event_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
> > > > 
> > > > Is it really needed to have non-burst variant of enqueue/dequeue?
> > > 
> > > Yes. certain HW can work only with non burst variants.
> > 
> > Same comment as Bruce, we must keep only the burst variant.
> > We cannot have different API for different HW.
> 
> I don't think there is any portability issue here, I can explain.
> 
> The application level, we have two more use case to deal with non burst
> variant
> 
> - latency critical work
> - on dequeue, if application wants to deal with only one flow(i.e to
>   avoid processing two different application flows to avoid cache trashing)
> 
> Selection of the burst variants will be based on
> rte_event_dev_info_get() and rte_event_dev_configure()(see, max_event_port_dequeue_depth,
> max_event_port_enqueue_depth, nb_event_port_dequeue_depth, nb_event_port_enqueue_depth )
> So I don't think their is portability issue here and I don't want to waste my
> CPU cycles on the for loop if application known to be working with non
> bursts variant like below
> 

If the application is known to be working with non-burst variants, then
it can always request a burst size of 1 and skip the loop completely.
There is no extra performance hit in that case in either the app or the
driver (since the non-burst driver always returns 1, irrespective of the
number requested).
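
For example (sketch only; the burst signature follows this series and may
change):

	struct rte_event ev;

	/* a burst size of 1 collapses the loop to a single call */
	if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, wait) == 1)
		; /* process ev */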

> nb_events = rte_event_dequeue_burst();
> for(i=0; i < nb_events; i++){
> 	process ev[i]
> }
> 
> And mostly importantly the NPU can get almost same throughput
> without burst variant so why not?
> 
> > 
> > > > > +/**
> > > > > + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> > > > > + *
> > > > > + * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> > > > > + * application can use this function to convert wait value in nanoseconds to
> > > > > + * implementations specific wait value supplied in rte_event_dequeue()
> > > > 
> > > > Why is it implementation-specific?
> > > > Why this conversion is not internal in the driver?
> > > 
> > > This is for performance optimization, otherwise in drivers
> > > need to convert ns to ticks in "fast path"
> > 
> > So why not defining the unit of this timeout as CPU cycles like the ones
> > returned by rte_get_timer_cycles()?
> 
> Because HW co-processor can run in different clock domain. Need not be at
> CPU frequency.
> 
While I've no huge objection to this API, since it will not be
implemented by our SW implementation, I'm just curious as to how much
having this will save. How complicated is the arithmetic that needs to
be done, and how many cycles on your platform is that going to take?

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-25  0:23         ` Jerin Jacob
  2016-11-25 11:00           ` Bruce Richardson
@ 2016-11-25 11:59           ` Van Haaren, Harry
  2016-11-25 12:09             ` Richardson, Bruce
  1 sibling, 1 reply; 109+ messages in thread
From: Van Haaren, Harry @ 2016-11-25 11:59 UTC (permalink / raw)
  To: Jerin Jacob, Thomas Monjalon
  Cc: dev, Richardson, Bruce, hemant.agrawal, Eads, Gage

Hi All,

> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Friday, November 25, 2016 12:24 AM
> To: Thomas Monjalon <thomas.monjalon@6wind.com>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van Haaren, Harry
> <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com; Eads, Gage <gage.eads@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 1/4] eventdev: introduce event driven programming model
> 
> On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > 2016-11-24 07:29, Jerin Jacob:
> > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > +Eventdev API - EXPERIMENTAL
> > > > > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > +F: lib/librte_eventdev/
> > > >
> > > > OK to mark it experimental.
> > > > What is the plan to remove the experimental word?
> > >
> > > IMO, EXPERIMENTAL status can be changed when
> > > - At least two event drivers available(Intel and Cavium are working on
> > >   SW and HW event drivers)
> > > - Functional test applications are fine with at least two drivers
> > > - Portable example application to showcase the features of the library
> > > - eventdev integration with another dpdk subsystem such as ethdev
> > >
> > > Thoughts?. I am not sure the criteria used in cryptodev case.
> >
> > Sounds good.
> > We will be more confident when drivers and tests will be implemented.
> >
> > I think the roadmap for the SW driver targets the release 17.05.
> > Do you still plan 17.02 for this API and the Cavium driver?
> 
> No. 17.02 too short for up-streaming the Cavium driver.However, I think API and
> skeleton event driver can go in 17.02 if there are no objections.
> 
> >
> > > > > +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> > > > > +/**< Skeleton event device PMD name */
> > > >
> > > > I do not understand this #define.
> > >
> > > Applications can explicitly request the a specific driver though driver
> > > name. This will go as argument to rte_event_dev_get_dev_id(const char *name).
> > > The reason for keeping this #define in rte_eventdev.h is that,
> > > application needs to include only rte_eventdev.h not rte_eventdev_pmd.h.
> >
> > So each driver must register its name in the API?
> > Is it really needed?
> 
> Otherwise how application knows the name of the driver.
> The similar scheme used in cryptodev.
> http://dpdk.org/browse/dpdk/tree/lib/librte_cryptodev/rte_cryptodev.h#n53
> No strong opinion here. Open for suggestions.
> 
> >
> > > > > +struct rte_event_dev_config {
> > > > > +	uint32_t dequeue_wait_ns;
> > > > > +	/**< rte_event_dequeue() wait for *dequeue_wait_ns* ns on this device.
> > > >
> > > > Please explain exactly when the wait occurs and why.
> > >
> > > Here is the explanation from rte_event_dequeue() API definition,
> > > -
> > > @param wait
> > > 0 - no-wait, returns immediately if there is no event.
> > > >0 - wait for the event, if the device is configured with
> > > RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT then this function will wait until
> > > the event available or *wait* time.
> > > if the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT
> > > then this function will wait until the event available or *dequeue_wait_ns*
> > >                                                       ^^^^^^^^^^^^^^^^^^^^^^
> > > ns which was previously supplied to rte_event_dev_configure()
> > > -
> > > This is provides the application to have control over, how long the
> > > implementation should wait if event is not available.
> > >
> > > Let me know what exact changes are required if details are not enough in
> > > rte_event_dequeue() API definition.
> >
> > Maybe that timeout would be a better name.
> > It waits only if there is nothing in the queue.
> > It can be interesting to highlight in this comment that this parameter
> > makes the dequeue function a blocking call.
> 
> OK. I will change to timeout then
> 
> >
> > > > > +/** Event port configuration structure */
> > > > > +struct rte_event_port_conf {
> > > > > +	int32_t new_event_threshold;
> > > > > +	/**< A backpressure threshold for new event enqueues on this port.
> > > > > +	 * Use for *closed system* event dev where event capacity is limited,
> > > > > +	 * and cannot exceed the capacity of the event dev.
> > > > > +	 * Configuring ports with different thresholds can make higher priority
> > > > > +	 * traffic less likely to  be backpressured.
> > > > > +	 * For example, a port used to inject NIC Rx packets into the event dev
> > > > > +	 * can have a lower threshold so as not to overwhelm the device,
> > > > > +	 * while ports used for worker pools can have a higher threshold.
> > > > > +	 * This value cannot exceed the *nb_events_limit*
> > > > > +	 * which previously supplied to rte_event_dev_configure()
> > > > > +	 */
> > > > > +	uint8_t dequeue_depth;
> > > > > +	/**< Configure number of bulk dequeues for this event port.
> > > > > +	 * This value cannot exceed the *nb_event_port_dequeue_depth*
> > > > > +	 * which previously supplied to rte_event_dev_configure()
> > > > > +	 */
> > > > > +	uint8_t enqueue_depth;
> > > > > +	/**< Configure number of bulk enqueues for this event port.
> > > > > +	 * This value cannot exceed the *nb_event_port_enqueue_depth*
> > > > > +	 * which previously supplied to rte_event_dev_configure()
> > > > > +	 */
> > > > > +};
> > > >
> > > > The depth configuration is not clear to me.
> > >
> > > Basically the maximum number of events can be enqueued/dequeued at time
> > > from a given event port. depth of one == non burst mode.
> >
> > OK so depth is the queue size. Please could you reword?
> 
> OK
> 
> >
> > > > > +/* Event types to classify the event source */
> > > >
> > > > Why this classification is needed?
> > >
> > > This for application pipeling and the cases like, if application wants to know which
> > > subsystem generated the event.
> > >
> > > example packet forwarding loop on the worker cores:
> > > while(1) {
> > > 	ev = dequeue()
> > > 	// event from ethdev subsystem
> > > 	if (ev.event_type == RTE_EVENT_TYPE_ETHDEV) {
> > > 		- swap the mac address
> > > 		- push to atomic queue for ingress flow order maintenance
> > > 		  by CORE
> > > 	/* events from core */
> > > 	} else if (ev.event_type == RTE_EVENT_TYPE_CORE) {
> > >
> > > 	}
> > > 	enqueue(ev);
> > > }
> >
> > I don't know why but I feel this classification is weak.
> > You need to track the source of the event. Does it make sense to go beyond
> > and identify the source device?
> 
> No, dequeue has dev_id argument, so event comes only from that device
> 
> >
> > > > > +#define RTE_EVENT_TYPE_ETHDEV           0x0
> > > > > +/**< The event generated from ethdev subsystem */
> > > > > +#define RTE_EVENT_TYPE_CRYPTODEV        0x1
> > > > > +/**< The event generated from crypodev subsystem */
> > > > > +#define RTE_EVENT_TYPE_TIMERDEV         0x2
> > > > > +/**< The event generated from timerdev subsystem */
> > > > > +#define RTE_EVENT_TYPE_CORE             0x3
> > > > > +/**< The event generated from core.
> > > >
> > > > What is core?
> > >
> > > The event are generated by lcore for pipeling. Any suggestion for
> > > better name? lcore?
> >
> > What about CPU or SW?
> 
> No strong opinion here. I will go with CPU then


+1 for CPU (as SW is the software PMD name).


> > > > > +		/**< Opaque event pointer */
> > > > > +		struct rte_mbuf *mbuf;
> > > > > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> > > >
> > > > How do we know that an event is associated with mbuf?
> > >
> > > By looking at the event source/type RTE_EVENT_TYPE_*
> > >
> > > > Does it mean that such events are always converted into mbuf even if the
> > > > application does not need it?
> > >
> > > Hardware has dependency on getting physical address of the event, so any
> > > struct that has "phys_addr_t buf_physaddr" works.
> >
> > I do not understand.
> 
> In HW based implementations, the event pointer will be submitted to HW.
> As you know, since HW can't understand the virtual address and it needs
> to converted to the physical address, any DPDK object that provides phys_addr_t
> such as mbuf can be used with libeventdev.
> 
> >
> > I tought that decoding the event would be the responsibility of the app
> > by calling a function like
> > rte_eventdev_convert_to_mbuf(struct rte_event *, struct rte_mbuf *).
> 
> It can be. But it is costly.i.e Yet another function pointer based
> driver interface on fastpath. Instead, if the driver itself can
> convert to mbuf(in case of ETHDEV device) and tag the source/event type
> as RTE_EVENT_TYPE_ETHDEV.
> IMO the proposed schemed helps in SW based implementation as their no real
> mbuf conversation. Something we can revisit in ethdev integration if
> required.
> 
> >
> > > > > +struct rte_eventdev_driver;
> > > > > +struct rte_eventdev_ops;
> > > >
> > > > I think it is better to split API and driver interface in two files.
> > > > (we should do this split in ethdev)
> > >
> > > I thought so, but then the "static inline" versions of northbound
> > > API(like rte_event_enqueue) will go another file(due to the fact that
> > > implementation need to deference "dev->data->ports[port_id]"). Do you want that way?
> > > I would like to keep all northbound API in rte_eventdev.h and not any of them
> > > in rte_eventdev_pmd.h.
> >
> > My comment was confusing.
> > You are doing 2 files, one for API (what you call northbound I think)
> > and the other one for driver interface (what you call southbound I think),
> > it's very fine.
> >
> > > > > +/**
> > > > > + * Enqueue the event object supplied in the *rte_event* structure on an
> > > > > + * event device designated by its *dev_id* through the event port specified by
> > > > > + * *port_id*. The event object specifies the event queue on which this
> > > > > + * event will be enqueued.
> > > > > + *
> > > > > + * @param dev_id
> > > > > + *   Event device identifier.
> > > > > + * @param port_id
> > > > > + *   The identifier of the event port.
> > > > > + * @param ev
> > > > > + *   Pointer to struct rte_event
> > > > > + *
> > > > > + * @return
> > > > > + *  - 0 on success
> > > > > + *  - <0 on failure. Failure can occur if the event port's output queue is
> > > > > + *     backpressured, for instance.
> > > > > + */
> > > > > +static inline int
> > > > > +rte_event_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event *ev)
> > > >
> > > > Is it really needed to have non-burst variant of enqueue/dequeue?
> > >
> > > Yes. certain HW can work only with non burst variants.
> >
> > Same comment as Bruce, we must keep only the burst variant.
> > We cannot have different API for different HW.
> 
> I don't think there is any portability issue here, I can explain.
> 
> The application level, we have two more use case to deal with non burst
> variant
> 
> - latency critical work
> - on dequeue, if application wants to deal with only one flow(i.e to
>   avoid processing two different application flows to avoid cache trashing)
> 
> Selection of the burst variants will be based on
> rte_event_dev_info_get() and rte_event_dev_configure()(see, max_event_port_dequeue_depth,
> max_event_port_enqueue_depth, nb_event_port_dequeue_depth, nb_event_port_enqueue_depth )
> So I don't think their is portability issue here and I don't want to waste my
> CPU cycles on the for loop if application known to be working with non
> bursts variant like below
> 
> nb_events = rte_event_dequeue_burst();
> for(i=0; i < nb_events; i++){
> 	process ev[i]
> }
> 
> And mostly importantly the NPU can get almost same throughput
> without burst variant so why not?


Perhaps I'm misunderstanding, but can you not just dequeue 1 event from the burst() function?

struct rte_event ev;
rte_event_dequeue_burst(dev, port, &ev, 1, wait);
process( &ev );

I mean, if an application *demands* not to use bursts, the above allows it. Of course it won't scale to other implementations that would benefit from bursts - but that's the application author's choice?


> > > > > +/**
> > > > > + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> > > > > + *
> > > > > + * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> > > > > + * application can use this function to convert wait value in nanoseconds to
> > > > > + * implementations specific wait value supplied in rte_event_dequeue()
> > > >
> > > > Why is it implementation-specific?
> > > > Why this conversion is not internal in the driver?
> > >
> > > This is for performance optimization, otherwise in drivers
> > > need to convert ns to ticks in "fast path"
> >
> > So why not defining the unit of this timeout as CPU cycles like the ones
> > returned by rte_get_timer_cycles()?
> 
> Because HW co-processor can run in different clock domain. Need not be at
> CPU frequency.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-25 11:59           ` Van Haaren, Harry
@ 2016-11-25 12:09             ` Richardson, Bruce
  0 siblings, 0 replies; 109+ messages in thread
From: Richardson, Bruce @ 2016-11-25 12:09 UTC (permalink / raw)
  To: Van Haaren, Harry, Jerin Jacob, Thomas Monjalon
  Cc: dev, hemant.agrawal, Eads, Gage



> -----Original Message-----
> From: Van Haaren, Harry
> Sent: Friday, November 25, 2016 11:59 AM
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>; Thomas Monjalon
> <thomas.monjalon@6wind.com>
> Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>;
> hemant.agrawal@nxp.com; Eads, Gage <gage.eads@intel.com>
> Subject: RE: [dpdk-dev] [PATCH 1/4] eventdev: introduce event driven
> programming model
> 
> Hi All,
> 
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Friday, November 25, 2016 12:24 AM
> > To: Thomas Monjalon <thomas.monjalon@6wind.com>
> > Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
> > Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com;
> > Eads, Gage <gage.eads@intel.com>
> > Subject: Re: [dpdk-dev] [PATCH 1/4] eventdev: introduce event driven
> > programming model
> >
> > On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > > 2016-11-24 07:29, Jerin Jacob:
> > > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > > 2016-11-18 11:14, Jerin Jacob:
>
> > > > > > +#define RTE_EVENT_TYPE_ETHDEV           0x0
> > > > > > +/**< The event generated from ethdev subsystem */
> > > > > > +#define RTE_EVENT_TYPE_CRYPTODEV        0x1
> > > > > > +/**< The event generated from crypodev subsystem */
> > > > > > +#define RTE_EVENT_TYPE_TIMERDEV         0x2
> > > > > > +/**< The event generated from timerdev subsystem */
> > > > > > +#define RTE_EVENT_TYPE_CORE             0x3
> > > > > > +/**< The event generated from core.
> > > > >
> > > > > What is core?
> > > >
> > > > The event are generated by lcore for pipeling. Any suggestion for
> > > > better name? lcore?
> > >
> > > What about CPU or SW?
> >
> > No strong opinion here. I will go with CPU then
> 
> 
> +1 for CPU (as SW is the software PMD name).
> 

Fine, I'm outvoted. I'll learn to live with it. :-)

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-25 11:00           ` Bruce Richardson
@ 2016-11-25 13:09             ` Thomas Monjalon
  2016-11-26  0:57               ` Jerin Jacob
  2016-11-26  2:54             ` Jerin Jacob
  1 sibling, 1 reply; 109+ messages in thread
From: Thomas Monjalon @ 2016-11-25 13:09 UTC (permalink / raw)
  To: Bruce Richardson, Jerin Jacob
  Cc: dev, harry.van.haaren, hemant.agrawal, gage.eads

2016-11-25 11:00, Bruce Richardson:
> On Fri, Nov 25, 2016 at 05:53:34AM +0530, Jerin Jacob wrote:
> > On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > > 2016-11-24 07:29, Jerin Jacob:
> > > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > > +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> > > > > > +/**< Skeleton event device PMD name */
> > > > > 
> > > > > I do not understand this #define.
> > > > 
> > > > Applications can explicitly request the a specific driver though driver
> > > > name. This will go as argument to rte_event_dev_get_dev_id(const char *name).
> > > > The reason for keeping this #define in rte_eventdev.h is that,
> > > > application needs to include only rte_eventdev.h not rte_eventdev_pmd.h.
> > > 
> > > So each driver must register its name in the API?
> > > Is it really needed?
> > 
> > Otherwise how application knows the name of the driver.
> > The similar scheme used in cryptodev.
> > http://dpdk.org/browse/dpdk/tree/lib/librte_cryptodev/rte_cryptodev.h#n53
> > No strong opinion here. Open for suggestions.
> > 
> 
> I like having a name registered. I think we need a scheme where an app
> can find and use an implementation using a specific driver.

I do not like having the driver names in the API.
An API should not know its drivers.
If an application does some driver-specific processing, it knows
the driver name as well. The driver name is written in the driver.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-25  9:55       ` Richardson, Bruce
@ 2016-11-25 23:08         ` Jerin Jacob
  0 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-25 23:08 UTC (permalink / raw)
  To: Richardson, Bruce
  Cc: Thomas Monjalon, dev, Van Haaren, Harry, hemant.agrawal, Eads, Gage

On Fri, Nov 25, 2016 at 09:55:39AM +0000, Richardson, Bruce wrote:
> > > > +/* Macros to check for valid device */ #define
> > > > +RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
> > >
> > > Sometimes you use RTE_EVENT_DEV_ and sometimes RTE_EVENTDEV.
> > > (I prefer the latter).
> > 
> > I choose the naming conversion based on the interface. API side it is
> > rte_event_ and driver side it is rte_eventdev_*
> > 
> > rte_event_dev_count;
> > rte_event_dev_get_dev_id
> > rte_event_dev_socket_id;
> > rte_event_dev_info_get;
> > rte_event_dev_configure;
> > rte_event_dev_start;
> > rte_event_dev_stop;
> > rte_event_dev_close;
> > rte_event_dev_dump;
> > 
> > rte_event_port_default_conf_get;
> > rte_event_port_setup;
> > rte_event_port_dequeue_depth;
> > rte_event_port_enqueue_depth;
> > rte_event_port_count;
> > rte_event_port_link;
> > rte_event_port_unlink;
> > rte_event_port_links_get;
> > 
> > rte_event_queue_default_conf_get
> > rte_event_queue_setup;
> > rte_event_queue_count;
> > rte_event_queue_priority;
> > 
> > rte_event_dequeue_wait_time;
> > 
> > rte_eventdev_pmd_allocate;
> > rte_eventdev_pmd_release;
> > rte_eventdev_pmd_vdev_init;
> > rte_eventdev_pmd_pci_probe;
> > rte_eventdev_pmd_pci_remove;
> 
> For this last set, you probably are ok prefixing with just "rte_event_pmd_", and drop the "dev" as unnecessary. That makes everything have a prefix of "rte_event_" and thereafter dev, port, queue, or pmd as appropriate.

OK. I will change the last set to rte_event_pmd_*

> 
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-25 13:09             ` Thomas Monjalon
@ 2016-11-26  0:57               ` Jerin Jacob
  2016-11-28  9:10                 ` Bruce Richardson
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-26  0:57 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Bruce Richardson, dev, harry.van.haaren, hemant.agrawal, gage.eads

On Fri, Nov 25, 2016 at 02:09:22PM +0100, Thomas Monjalon wrote:
> 2016-11-25 11:00, Bruce Richardson:
> > On Fri, Nov 25, 2016 at 05:53:34AM +0530, Jerin Jacob wrote:
> > > On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > > > 2016-11-24 07:29, Jerin Jacob:
> > > > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > > > +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> > > > > > > +/**< Skeleton event device PMD name */
> > > > > > 
> > > > > > I do not understand this #define.
> > > > > 
> > > > > Applications can explicitly request the a specific driver though driver
> > > > > name. This will go as argument to rte_event_dev_get_dev_id(const char *name).
> > > > > The reason for keeping this #define in rte_eventdev.h is that,
> > > > > application needs to include only rte_eventdev.h not rte_eventdev_pmd.h.
> > > > 
> > > > So each driver must register its name in the API?
> > > > Is it really needed?
> > > 
> > > Otherwise how application knows the name of the driver.
> > > The similar scheme used in cryptodev.
> > > http://dpdk.org/browse/dpdk/tree/lib/librte_cryptodev/rte_cryptodev.h#n53
> > > No strong opinion here. Open for suggestions.
> > > 
> > 
> > I like having a name registered. I think we need a scheme where an app
> > can find and use an implementation using a specific driver.
> 
> I do not like having the driver names in the API.
> An API should not know its drivers.
> If an application do some driver-specific processing, it knows
> the driver name as well. The driver name is written in the driver.

If Bruce doesn't have further objections, then I will go with Thomas's
suggestion.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-25 11:00           ` Bruce Richardson
  2016-11-25 13:09             ` Thomas Monjalon
@ 2016-11-26  2:54             ` Jerin Jacob
  2016-11-28  9:16               ` Bruce Richardson
  1 sibling, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-26  2:54 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Thomas Monjalon, dev, harry.van.haaren, hemant.agrawal, gage.eads

On Fri, Nov 25, 2016 at 11:00:53AM +0000, Bruce Richardson wrote:
> On Fri, Nov 25, 2016 at 05:53:34AM +0530, Jerin Jacob wrote:
> > On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > > 2016-11-24 07:29, Jerin Jacob:
> > > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > > +Eventdev API - EXPERIMENTAL
> > > > > > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > > +F: lib/librte_eventdev/
> > > > > 
> > 
> > I don't think there is any portability issue here, I can explain.
> > 
> > The application level, we have two more use case to deal with non burst
> > variant
> > 
> > - latency critical work
> > - on dequeue, if application wants to deal with only one flow(i.e to
> >   avoid processing two different application flows to avoid cache trashing)
> > 
> > Selection of the burst variants will be based on
> > rte_event_dev_info_get() and rte_event_dev_configure()(see, max_event_port_dequeue_depth,
> > max_event_port_enqueue_depth, nb_event_port_dequeue_depth, nb_event_port_enqueue_depth )
> > So I don't think their is portability issue here and I don't want to waste my
> > CPU cycles on the for loop if application known to be working with non
> > bursts variant like below
> > 
> 
> If the application is known to be working on non-burst varients, then
> they always request a burst-size of 1, and skip the loop completely.
> There is no extra performance hit in that case in either the app or the
> driver (since the non-burst driver always returns 1, irrespective of the
> number requested).

Hmm. I am afraid there is.
On the app side, the const "1" cannot be optimized away by the compiler,
because the call goes through the function-pointer-based driver interface.
On the driver side, the implementation would be for-loop based instead
of a plain access (the compiler can never see the const "1" across the
driver interface).

We are planning to implement burst mode as a kind of emulation mode and
have different schemes for burst and non-burst. We took a similar approach
when introducing rte_event_schedule(): split the responsibility so that the
SW driver can work without additional performance overhead and with a neat
driver interface.

If you are concerned about the usability part and a regression on the SW
driver, that's not the case; the application will use the non-burst variant
only if dequeue_depth == 1 and/or in the explicit case where latency matters.

On the portability side, we support both cases, and an application written
based on dequeue_depth will perform well on both implementations. IMO there
is no other shortcut for a performance-optimized application running on
different models. I don't think it is an issue because, in the event model,
each core is identical and the main loop can be changed based on
dequeue_depth if performance is needed (the main loop will be function
pointer based anyway).
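
A minimal sketch of that dequeue_depth-based main loop selection (the worker
function names and the port config field access below are illustrative only,
not from the patch):

/* illustrative: pick the worker loop once, based on the configured depth */
if (port_conf.dequeue_depth == 1)
	rte_eal_remote_launch(worker_single_event, &args, lcore_id);
else
	rte_eal_remote_launch(worker_burst, &args, lcore_id);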

> 
> > nb_events = rte_event_dequeue_burst();
> > for(i=0; i < nb_events; i++){
> > 	process ev[i]
> > }
> > 
> > And mostly importantly the NPU can get almost same throughput
> > without burst variant so why not?
> > 
> > > 
> > > > > > +/**
> > > > > > + * Converts nanoseconds to *wait* value for rte_event_dequeue()
> > > > > > + *
> > > > > > + * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_WAIT flag then
> > > > > > + * application can use this function to convert wait value in nanoseconds to
> > > > > > + * implementations specific wait value supplied in rte_event_dequeue()
> > > > > 
> > > > > Why is it implementation-specific?
> > > > > Why this conversion is not internal in the driver?
> > > > 
> > > > This is for performance optimization, otherwise in drivers
> > > > need to convert ns to ticks in "fast path"
> > > 
> > > So why not defining the unit of this timeout as CPU cycles like the ones
> > > returned by rte_get_timer_cycles()?
> > 
> > Because HW co-processor can run in different clock domain. Need not be at
> > CPU frequency.
> > 
> While I've no huge objection to this API, since it will not be
> implemented by our SW implementation, I'm just curious as to how much
> having this will save. How complicated is the arithmetic that needs to
> be done, and how many cycles on your platform is that going to take?

One load plus a division and/or multiplication of (floating-point) numbers.
It could be 6-ish cycles or more, but it matters when the burst size is
small (worst case 1). I think the software implementation could use
rte_get_timer_cycles() here if required. There is no harm in moving some
work to the slow path when it can be moved, as in this case.
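
A rough sketch of that slow-path/fast-path split, assuming a prototype along
the lines of the rte_event_dequeue_wait_time() symbol listed earlier in this
thread (process() and done are placeholders):

/* slow path (setup): convert nanoseconds to the device-specific wait value once */
uint64_t wait = rte_event_dequeue_wait_time(dev_id, 100 * 1000 /* 100 us */);

/* fast path: reuse the pre-computed wait value on every dequeue */
while (!done) {
	struct rte_event ev[16];
	uint16_t i, nb;

	nb = rte_event_dequeue_burst(dev_id, port_id, ev, 16, wait);
	for (i = 0; i < nb; i++)
		process(&ev[i]);
}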

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-26  0:57               ` Jerin Jacob
@ 2016-11-28  9:10                 ` Bruce Richardson
  0 siblings, 0 replies; 109+ messages in thread
From: Bruce Richardson @ 2016-11-28  9:10 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Thomas Monjalon, dev, harry.van.haaren, hemant.agrawal, gage.eads

On Sat, Nov 26, 2016 at 06:27:57AM +0530, Jerin Jacob wrote:
> On Fri, Nov 25, 2016 at 02:09:22PM +0100, Thomas Monjalon wrote:
> > 2016-11-25 11:00, Bruce Richardson:
> > > On Fri, Nov 25, 2016 at 05:53:34AM +0530, Jerin Jacob wrote:
> > > > On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > > > > 2016-11-24 07:29, Jerin Jacob:
> > > > > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > > > > +#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
> > > > > > > > +/**< Skeleton event device PMD name */
> > > > > > > 
> > > > > > > I do not understand this #define.
> > > > > > 
> > > > > > Applications can explicitly request the a specific driver though driver
> > > > > > name. This will go as argument to rte_event_dev_get_dev_id(const char *name).
> > > > > > The reason for keeping this #define in rte_eventdev.h is that,
> > > > > > application needs to include only rte_eventdev.h not rte_eventdev_pmd.h.
> > > > > 
> > > > > So each driver must register its name in the API?
> > > > > Is it really needed?
> > > > 
> > > > Otherwise how application knows the name of the driver.
> > > > The similar scheme used in cryptodev.
> > > > http://dpdk.org/browse/dpdk/tree/lib/librte_cryptodev/rte_cryptodev.h#n53
> > > > No strong opinion here. Open for suggestions.
> > > > 
> > > 
> > > I like having a name registered. I think we need a scheme where an app
> > > can find and use an implementation using a specific driver.
> > 
> > I do not like having the driver names in the API.
> > An API should not know its drivers.
> > If an application do some driver-specific processing, it knows
> > the driver name as well. The driver name is written in the driver.
> 
> If Bruce don't have further objection, Then I will go with Thomas's
> suggestion.
>
Go with it.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-26  2:54             ` Jerin Jacob
@ 2016-11-28  9:16               ` Bruce Richardson
  2016-11-28 11:30                 ` Thomas Monjalon
  2016-11-29  4:01                 ` Jerin Jacob
  0 siblings, 2 replies; 109+ messages in thread
From: Bruce Richardson @ 2016-11-28  9:16 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Thomas Monjalon, dev, harry.van.haaren, hemant.agrawal, gage.eads

On Sat, Nov 26, 2016 at 08:24:55AM +0530, Jerin Jacob wrote:
> On Fri, Nov 25, 2016 at 11:00:53AM +0000, Bruce Richardson wrote:
> > On Fri, Nov 25, 2016 at 05:53:34AM +0530, Jerin Jacob wrote:
> > > On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > > > 2016-11-24 07:29, Jerin Jacob:
> > > > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > > > +Eventdev API - EXPERIMENTAL
> > > > > > > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > > > +F: lib/librte_eventdev/
> > > > > > 
> > > 
> > > I don't think there is any portability issue here, I can explain.
> > > 
> > > The application level, we have two more use case to deal with non burst
> > > variant
> > > 
> > > - latency critical work
> > > - on dequeue, if application wants to deal with only one flow(i.e to
> > >   avoid processing two different application flows to avoid cache trashing)
> > > 
> > > Selection of the burst variants will be based on
> > > rte_event_dev_info_get() and rte_event_dev_configure()(see, max_event_port_dequeue_depth,
> > > max_event_port_enqueue_depth, nb_event_port_dequeue_depth, nb_event_port_enqueue_depth )
> > > So I don't think their is portability issue here and I don't want to waste my
> > > CPU cycles on the for loop if application known to be working with non
> > > bursts variant like below
> > > 
> > 
> > If the application is known to be working on non-burst varients, then
> > they always request a burst-size of 1, and skip the loop completely.
> > There is no extra performance hit in that case in either the app or the
> > driver (since the non-burst driver always returns 1, irrespective of the
> > number requested).
> 
> Hmm. I am afraid, There is.
> On the app side, the const "1" can not be optimized by the compiler as
> on downside it is function pointer based driver interface
> On the driver side, the implementation would be for loop based instead
> of plain access.
> (compiler never can see the const "1" in driver interface)
> 
> We are planning to implement burst mode as kind of emulation mode and
> have a different scheme for burst and nonburst. The similar approach we have
> taken in introducing rte_event_schedule() and split the responsibility so
> that SW driver can work without additional performance overhead and neat
> driver interface.
> 
> If you are concerned about the usability part and regression on the SW
> driver, then it's not the case, application will use nonburst variant only if
> dequeue_depth == 1 and/or explicit case where latency matters.
> 
> On the portability side, we support both case and application if written based
> on dequeue_depth it will perform well in both implementations.IMO, There is
> no another shortcut for performance optimized application running on different
> set of model.I think it is not an issue as, in event model as each cores
> identical and main loop can be changed based on dequeue_depth
> if needs performance(anyway mainloop will be function pointer based).
> 

Ok, I think I see your point now. Here is an alternative suggestion.

1. Keep the single user API.
2. Have both single and burst function pointers in the driver
3. Call appropriately in the eventdev layer based on parameters. For
example:

rte_event_dequeue_burst(..., int num)
{
	if (num == 1 && single_dequeue_fn != NULL)
		return single_dequeue_fn(...);
	return burst_dequeue_fn(...);
}

This way drivers can optionally special-case the single dequeue case -
the function pointer check will definitely be predictable in HW making
that a near-zero-cost check - while not forcing all drivers to do so.
It also reduces the public API surface, and gives us a single enqueue
and dequeue function.
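
On the driver side, a minimal sketch of what registering both callbacks might
look like (the struct field and function names here are illustrative, not
taken from the patch):

/* illustrative: a PMD with a cheap single-event path fills in both
 * callbacks; one without it leaves the single pointer NULL and the
 * eventdev layer falls back to the burst callback */
static int
my_pmd_setup(struct rte_eventdev *dev)
{
	dev->dequeue_burst = my_pmd_dequeue_burst;
	dev->dequeue = my_pmd_dequeue_single;	/* optional */
	return 0;
}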

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-28  9:16               ` Bruce Richardson
@ 2016-11-28 11:30                 ` Thomas Monjalon
  2016-11-29  4:01                 ` Jerin Jacob
  1 sibling, 0 replies; 109+ messages in thread
From: Thomas Monjalon @ 2016-11-28 11:30 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Jerin Jacob, dev, harry.van.haaren, hemant.agrawal, gage.eads

2016-11-28 09:16, Bruce Richardson:
> On Sat, Nov 26, 2016 at 08:24:55AM +0530, Jerin Jacob wrote:
> > On Fri, Nov 25, 2016 at 11:00:53AM +0000, Bruce Richardson wrote:
> > > On Fri, Nov 25, 2016 at 05:53:34AM +0530, Jerin Jacob wrote:
> > > > On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > > > > 2016-11-24 07:29, Jerin Jacob:
> > > > > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > > > > +Eventdev API - EXPERIMENTAL
> > > > > > > > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > > > > +F: lib/librte_eventdev/
> > > > > > > 
> > > > 
> > > > I don't think there is any portability issue here, I can explain.
> > > > 
> > > > The application level, we have two more use case to deal with non burst
> > > > variant
> > > > 
> > > > - latency critical work
> > > > - on dequeue, if application wants to deal with only one flow(i.e to
> > > >   avoid processing two different application flows to avoid cache trashing)
> > > > 
> > > > Selection of the burst variants will be based on
> > > > rte_event_dev_info_get() and rte_event_dev_configure()(see, max_event_port_dequeue_depth,
> > > > max_event_port_enqueue_depth, nb_event_port_dequeue_depth, nb_event_port_enqueue_depth )
> > > > So I don't think their is portability issue here and I don't want to waste my
> > > > CPU cycles on the for loop if application known to be working with non
> > > > bursts variant like below
> > > > 
> > > 
> > > If the application is known to be working on non-burst varients, then
> > > they always request a burst-size of 1, and skip the loop completely.
> > > There is no extra performance hit in that case in either the app or the
> > > driver (since the non-burst driver always returns 1, irrespective of the
> > > number requested).
> > 
> > Hmm. I am afraid, There is.
> > On the app side, the const "1" can not be optimized by the compiler as
> > on downside it is function pointer based driver interface
> > On the driver side, the implementation would be for loop based instead
> > of plain access.
> > (compiler never can see the const "1" in driver interface)
> > 
> > We are planning to implement burst mode as kind of emulation mode and
> > have a different scheme for burst and nonburst. The similar approach we have
> > taken in introducing rte_event_schedule() and split the responsibility so
> > that SW driver can work without additional performance overhead and neat
> > driver interface.
> > 
> > If you are concerned about the usability part and regression on the SW
> > driver, then it's not the case, application will use nonburst variant only if
> > dequeue_depth == 1 and/or explicit case where latency matters.
> > 
> > On the portability side, we support both case and application if written based
> > on dequeue_depth it will perform well in both implementations.IMO, There is
> > no another shortcut for performance optimized application running on different
> > set of model.I think it is not an issue as, in event model as each cores
> > identical and main loop can be changed based on dequeue_depth
> > if needs performance(anyway mainloop will be function pointer based).
> > 
> 
> Ok, I think I see your point now. Here is an alternative suggestion.
> 
> 1. Keep the single user API.
> 2. Have both single and burst function pointers in the driver
> 3. Call appropriately in the eventdev layer based on parameters. For
> example:
> 
> rte_event_dequeue_burst(..., int num)
> {
> 	if (num == 1 && single_dequeue_fn != NULL)
> 		return single_dequeue_fn(...);
> 	return burst_dequeue_fn(...);
> }
> 
> This way drivers can optionally special-case the single dequeue case -
> the function pointer check will definitely be predictable in HW making
> that a near-zero-cost check - while not forcing all drivers to do so.
> It also reduces the public API surface, and gives us a single enqueue
> and dequeue function.

+1

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-22 23:43                   ` Jerin Jacob
@ 2016-11-28 15:53                     ` Eads, Gage
  2016-11-29  2:01                       ` Jerin Jacob
  2016-11-29  3:43                       ` Jerin Jacob
  0 siblings, 2 replies; 109+ messages in thread
From: Eads, Gage @ 2016-11-28 15:53 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal

(Bruce's advice heeded :))

>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  Sent: Tuesday, November 22, 2016 5:44 PM
>  To: Eads, Gage <gage.eads@intel.com>
>  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
>  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
>  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>  
>  On Tue, Nov 22, 2016 at 10:48:32PM +0000, Eads, Gage wrote:
>  >
>  >
>  > >  -----Original Message-----
>  > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  > >  Sent: Tuesday, November 22, 2016 2:00 PM
>  > >  To: Eads, Gage <gage.eads@intel.com>
>  > >  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>;
>  > > Van  Haaren, Harry <harry.van.haaren@intel.com>;
>  > > hemant.agrawal@nxp.com
>  > >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the
>  > > northbound APIs
>  > >
>  > >  On Tue, Nov 22, 2016 at 07:43:03PM +0000, Eads, Gage wrote:
>  > >  > >  > >  > > One open issue I noticed is the "typical workflow"
>  > >  > > description starting in  > >  rte_eventdev.h:204 conflicts with
>  > > the  > > centralized software PMD that Harry  > >  posted last week.
>  > >  > > Specifically, that PMD expects a single core to call the  > >
>  > > > > schedule function. We could extend the documentation to account
>  > > for  > > this  > >  alternative style of scheduler invocation, or
>  > > discuss  > > ways to make the  software  > >  PMD work with the
>  > > documented  > > workflow. I prefer the former, but either  way I  >
>  > > >  think we  > > ought to expose the scheduler's expected usage to
>  > > the user --  > > perhaps  > >  through an RTE_EVENT_DEV_CAP flag?
>  > >  > >  > >  >
>  > >  > >  > >  > I prefer former too, you can propose the documentation
>  > > > > change required  for  > >  software PMD.
>  > >  > >  >
>  > >  > >  > Sure, proposal follows. The "typical workflow" isn't the
>  > > most  > > optimal by  having a conditional in the fast-path, of
>  > > course, but it  > > demonstrates the idea  simply.
>  > >  > >  >
>  > >  > >  > (line 204)
>  > >  > >  >  * An event driven based application has following typical
>  > > > > workflow on  > >  fastpath:
>  > >  > >  >  * \code{.c}
>  > >  > >  >  *      while (1) {
>  > >  > >  >  *
>  > >  > >  >  *              if (dev_info.event_dev_cap &
>  > >  > >  >  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
>  > >  > >  >  *                      rte_event_schedule(dev_id);
>  > >  > >
>  > >  > >  Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
>  > >  > >  It  can be input to application/subsystem to  launch separate
>  > > > > core(s) for schedule functions.
>  > >  > >  But, I think, the "dev_info.event_dev_cap &  > >
>  > > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED"
>  > >  > >  check can be moved inside the implementation(to make the
>  > > better  > > decisions  and  avoiding consuming cycles on HW based
>  schedulers.
>  > >  >
>  > >  > How would this check work? Wouldn't it prevent any core from
>  > > running the  software scheduler in the centralized case?
>  > >
>  > >  I guess you may not need RTE_EVENT_DEV_CAP here, instead need flag
>  > > for  device configure here
>  > >
>  > >  #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
>  > >
>  > >  struct rte_event_dev_config config;  config.event_dev_cfg =
>  > > RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
>  > >  rte_event_dev_configure(.., &config);
>  > >
>  > >  on the driver side on configure,
>  > >  if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
>  > >  	eventdev->schedule = NULL;
>  > >  else // centralized case
>  > >  	eventdev->schedule = your_centrized_schedule_function;
>  > >
>  > >  Does that work?
>  >
>  > Hm, I fear the API would give users the impression that they can select the
>  scheduling behavior of a given eventdev, when a software scheduler is more
>  likely to be either distributed or centralized -- not both.
>  
>  Even if it is capability flag then also it is per "device". Right ?
>  capability flag is more of read only too. Am i missing something here?
>  

Correct, the capability flag I'm envisioning is per-device and read-only. 

>  >
>  > What if we use the capability flag, and define rte_event_schedule() as the
>  scheduling function for centralized schedulers and rte_event_dequeue() as the
>  scheduling function for distributed schedulers? That way, the datapath could be
>  the simple dequeue -> process -> enqueue. Applications would check the
>  capability flag at configuration time to decide whether or not to launch an
>  lcore that calls rte_event_schedule().
>  
>  I am all for simple "dequeue -> process -> enqueue".
>  rte_event_schedule() added for SW scheduler only,  now it may not make sense
>  to add one more check on top of "rte_event_schedule()" to see it is really need
>  or not in fastpath?
>  

Yes, the additional check shouldn't be needed. In terms of the 'typical workflow' description, this is what I have in mind:

*
 * An event driven based application has following typical workflow on fastpath:
 * \code{.c}
 *  while (1) {
 *
 *      rte_event_dequeue(...);
 *
 *      (event processing)
 *
 *      rte_event_enqueue(...);
 *  }
 * \endcode
 *
 * The events are injected into the event device through the *enqueue* operation
 * by event producers in the system. The typical event producers are the ethdev
 * subsystem for generating packet events, the core (SW) for generating events
 * based on different stages of application processing, the cryptodev for
 * generating crypto work completion notifications, etc.
 *
 * The *dequeue* operation gets one or more events from the event ports.
 * The application processes the events and sends them to a downstream event
 * queue through rte_event_enqueue() if it is an intermediate stage of event
 * processing; at the final stage, the application may send them to a different
 * subsystem, like ethdev, to put the packet/event on the wire using the ethdev
 * rte_eth_tx_burst() API.
 *
 * The point at which events are scheduled to ports depends on the device. For
 * hardware devices, scheduling occurs asynchronously. Software schedulers can
 * either be distributed (each worker thread schedules events to its own port)
 * or centralized (a dedicated thread schedules to all ports). Distributed
 * software schedulers perform the scheduling in rte_event_dequeue(), whereas
 * centralized scheduler logic is located in rte_event_schedule(). The
 * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates whether a
 * device is centralized and thus needs a dedicated scheduling thread that
 * repeatedly calls rte_event_schedule().
 *
 */
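
A minimal sketch of how an application might act on that capability flag at
setup time (schedule_loop(), done and sched_lcore are placeholders, not part
of the proposal):

/* hypothetical scheduling loop, run only on a dedicated lcore */
static int
schedule_loop(void *arg)
{
	uint8_t dev_id = *(uint8_t *)arg;

	while (!done)
		rte_event_schedule(dev_id);
	return 0;
}

/* at configuration time: launch the scheduling lcore only for a
 * centralized (non-distributed) software scheduler */
struct rte_event_dev_info info;
rte_event_dev_info_get(dev_id, &info);
if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED))
	rte_eal_remote_launch(schedule_loop, &dev_id, sched_lcore);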

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-28 15:53                     ` Eads, Gage
@ 2016-11-29  2:01                       ` Jerin Jacob
  2016-11-29  3:43                       ` Jerin Jacob
  1 sibling, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-11-29  2:01 UTC (permalink / raw)
  To: Eads, Gage; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal

On Mon, Nov 28, 2016 at 03:53:08PM +0000, Eads, Gage wrote:
> (Bruce's adviced heeded :))
> 
> >  > >  >
> >  > >  > How would this check work? Wouldn't it prevent any core from
> >  > > running the  software scheduler in the centralized case?
> >  > >
> >  > >  I guess you may not need RTE_EVENT_DEV_CAP here, instead need flag
> >  > > for  device configure here
> >  > >
> >  > >  #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
> >  > >
> >  > >  struct rte_event_dev_config config;  config.event_dev_cfg =
> >  > > RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
> >  > >  rte_event_dev_configure(.., &config);
> >  > >
> >  > >  on the driver side on configure,
> >  > >  if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
> >  > >  	eventdev->schedule = NULL;
> >  > >  else // centralized case
> >  > >  	eventdev->schedule = your_centrized_schedule_function;
> >  > >
> >  > >  Does that work?
> >  >
> >  > Hm, I fear the API would give users the impression that they can select the
> >  scheduling behavior of a given eventdev, when a software scheduler is more
> >  likely to be either distributed or centralized -- not both.
> >  
> >  Even if it is capability flag then also it is per "device". Right ?
> >  capability flag is more of read only too. Am i missing something here?
> >  
> 
> Correct, the capability flag I'm envisioning is per-device and read-only. 
> 
> >  >
> >  > What if we use the capability flag, and define rte_event_schedule() as the
> >  scheduling function for centralized schedulers and rte_event_dequeue() as the
> >  scheduling function for distributed schedulers? That way, the datapath could be
> >  the simple dequeue -> process -> enqueue. Applications would check the
> >  capability flag at configuration time to decide whether or not to launch an
> >  lcore that calls rte_event_schedule().
> >  
> >  I am all for simple "dequeue -> process -> enqueue".
> >  rte_event_schedule() added for SW scheduler only,  now it may not make sense
> >  to add one more check on top of "rte_event_schedule()" to see it is really need
> >  or not in fastpath?
> >  
> 
> Yes, the additional check shouldn't be needed. In terms of the 'typical workflow' description, this is what I have in mind:
> 
> *
>  * An event driven based application has following typical workflow on fastpath:
>  * \code{.c}
>  *  while (1) {
>  *
>  *      rte_event_dequeue(...);
>  *
>  *      (event processing)
>  *
>  *      rte_event_enqueue(...);
>  *  }
>  * \endcode
>  *
>  * The point at which events are scheduled to ports depends on the device. For
>  * hardware devices, scheduling occurs asynchronously. Software schedulers can
>  * either be distributed (each worker thread schedules events to its own port)
>  * or centralized (a dedicated thread schedules to all ports). Distributed
>  * software schedulers perform the scheduling in rte_event_dequeue(), whereas
>  * centralized scheduler logic is located in rte_event_schedule(). The
>  * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates whether a
>  * device is centralized and thus needs a dedicated scheduling thread that
>  * repeatedly calls rte_event_schedule().

Makes sense. I will change the existing schedule description to the
proposed one and add the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability
flag in v2.

Thanks Gage.
>  *
>  */

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-28 15:53                     ` Eads, Gage
  2016-11-29  2:01                       ` Jerin Jacob
@ 2016-11-29  3:43                       ` Jerin Jacob
  2016-11-29  5:46                         ` Eads, Gage
  1 sibling, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-29  3:43 UTC (permalink / raw)
  To: Eads, Gage; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal

On Mon, Nov 28, 2016 at 03:53:08PM +0000, Eads, Gage wrote:
> (Bruce's adviced heeded :))
> 
> >  -----Original Message-----
> >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  Sent: Tuesday, November 22, 2016 5:44 PM
> >  To: Eads, Gage <gage.eads@intel.com>
> >  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
> >  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
> >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
> >  
> >  On Tue, Nov 22, 2016 at 10:48:32PM +0000, Eads, Gage wrote:
> >  >
> >  >
> >  > >  -----Original Message-----
> >  > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  > >  Sent: Tuesday, November 22, 2016 2:00 PM
> >  > >  To: Eads, Gage <gage.eads@intel.com>
> >  > >  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>;
> >  > > Van  Haaren, Harry <harry.van.haaren@intel.com>;
> >  > > hemant.agrawal@nxp.com
> >  > >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the
> >  > > northbound APIs
> >  > >
> >  > >  On Tue, Nov 22, 2016 at 07:43:03PM +0000, Eads, Gage wrote:
> >  > >  > >  > >  > > One open issue I noticed is the "typical workflow"
> >  > >  > > description starting in  > >  rte_eventdev.h:204 conflicts with
> >  > > the  > > centralized software PMD that Harry  > >  posted last week.
> >  > >  > > Specifically, that PMD expects a single core to call the  > >
> >  > > > > schedule function. We could extend the documentation to account
> >  > > for  > > this  > >  alternative style of scheduler invocation, or
> >  > > discuss  > > ways to make the  software  > >  PMD work with the
> >  > > documented  > > workflow. I prefer the former, but either  way I  >
> >  > > >  think we  > > ought to expose the scheduler's expected usage to
> >  > > the user --  > > perhaps  > >  through an RTE_EVENT_DEV_CAP flag?
> >  > >  > >  > >  >
> >  > >  > >  > >  > I prefer former too, you can propose the documentation
> >  > > > > change required  for  > >  software PMD.
> >  > >  > >  >
> >  > >  > >  > Sure, proposal follows. The "typical workflow" isn't the
> >  > > most  > > optimal by  having a conditional in the fast-path, of
> >  > > course, but it  > > demonstrates the idea  simply.
> >  > >  > >  >
> >  > >  > >  > (line 204)
> >  > >  > >  >  * An event driven based application has following typical
> >  > > > > workflow on  > >  fastpath:
> >  > >  > >  >  * \code{.c}
> >  > >  > >  >  *      while (1) {
> >  > >  > >  >  *
> >  > >  > >  >  *              if (dev_info.event_dev_cap &
> >  > >  > >  >  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
> >  > >  > >  >  *                      rte_event_schedule(dev_id);
> >  > >  > >
> >  > >  > >  Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
> >  > >  > >  It  can be input to application/subsystem to  launch separate
> >  > > > > core(s) for schedule functions.
> >  > >  > >  But, I think, the "dev_info.event_dev_cap &  > >
> >  > > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED"
> >  > >  > >  check can be moved inside the implementation(to make the
> >  > > better  > > decisions  and  avoiding consuming cycles on HW based
> >  schedulers.
> >  > >  >
> >  > >  > How would this check work? Wouldn't it prevent any core from
> >  > > running the  software scheduler in the centralized case?
> >  > >
> >  > >  I guess you may not need RTE_EVENT_DEV_CAP here, instead need flag
> >  > > for  device configure here
> >  > >
> >  > >  #define RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)
> >  > >
> >  > >  struct rte_event_dev_config config;  config.event_dev_cfg =
> >  > > RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
> >  > >  rte_event_dev_configure(.., &config);
> >  > >
> >  > >  on the driver side on configure,
> >  > >  if (config.event_dev_cfg & RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
> >  > >  	eventdev->schedule = NULL;
> >  > >  else // centralized case
> >  > >  	eventdev->schedule = your_centrized_schedule_function;
> >  > >
> >  > >  Does that work?
> >  >
> >  > Hm, I fear the API would give users the impression that they can select the
> >  scheduling behavior of a given eventdev, when a software scheduler is more
> >  likely to be either distributed or centralized -- not both.
> >  
> >  Even if it is capability flag then also it is per "device". Right ?
> >  capability flag is more of read only too. Am i missing something here?
> >  
> 
> Correct, the capability flag I'm envisioning is per-device and read-only. 
> 
> >  >
> >  > What if we use the capability flag, and define rte_event_schedule() as the
> >  scheduling function for centralized schedulers and rte_event_dequeue() as the
> >  scheduling function for distributed schedulers? That way, the datapath could be
> >  the simple dequeue -> process -> enqueue. Applications would check the
> >  capability flag at configuration time to decide whether or not to launch an
> >  lcore that calls rte_event_schedule().
> >  
> >  I am all for simple "dequeue -> process -> enqueue".
> >  rte_event_schedule() added for SW scheduler only,  now it may not make sense
> >  to add one more check on top of "rte_event_schedule()" to see it is really need
> >  or not in fastpath?
> >  
> 
> Yes, the additional check shouldn't be needed. In terms of the 'typical workflow' description, this is what I have in mind:
> 
> *
>  * An event driven based application has following typical workflow on fastpath:
>  * \code{.c}
>  *  while (1) {
>  *
>  *      rte_event_dequeue(...);
>  *
>  *      (event processing)
>  *
>  *      rte_event_enqueue(...);
>  *  }
>  * \endcode
>  *
>  * The events are injected to event device through the *enqueue* operation by
>  * event producers in the system. The typical event producers are ethdev
>  * subsystem for generating packet events, core(SW) for generating events based
>  * on different stages of application processing, cryptodev for generating
>  * crypto work completion notification etc
>  *
>  * The *dequeue* operation gets one or more events from the event ports.
>  * The application process the events and send to downstream event queue through
>  * rte_event_enqueue() if it is an intermediate stage of event processing, on
>  * the final stage, the application may send to different subsystem like ethdev
>  * to send the packet/event on the wire using ethdev rte_eth_tx_burst() API.
>  *
>  * The point at which events are scheduled to ports depends on the device. For
>  * hardware devices, scheduling occurs asynchronously. Software schedulers can
>  * either be distributed (each worker thread schedules events to its own port)
>  * or centralized (a dedicated thread schedules to all ports). Distributed
>  * software schedulers perform the scheduling in rte_event_dequeue(), whereas
>  * centralized scheduler logic is located in rte_event_schedule(). The
>  * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates whether a
>  * device is centralized and thus needs a dedicated scheduling thread that

Since we are starting a dedicated thread in the centralized case, how about
naming the flag RTE_EVENT_DEV_CAP_CENTRALIZED_SCHED instead of
RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED?
No strong opinion here. Just a thought.

>  * repeatedly calls rte_event_schedule().
>  *
>  */

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-28  9:16               ` Bruce Richardson
  2016-11-28 11:30                 ` Thomas Monjalon
@ 2016-11-29  4:01                 ` Jerin Jacob
  2016-11-29 10:00                   ` Bruce Richardson
  1 sibling, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-11-29  4:01 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Thomas Monjalon, dev, harry.van.haaren, hemant.agrawal, gage.eads

On Mon, Nov 28, 2016 at 09:16:10AM +0000, Bruce Richardson wrote:
> On Sat, Nov 26, 2016 at 08:24:55AM +0530, Jerin Jacob wrote:
> > On Fri, Nov 25, 2016 at 11:00:53AM +0000, Bruce Richardson wrote:
> > > On Fri, Nov 25, 2016 at 05:53:34AM +0530, Jerin Jacob wrote:
> > > > On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > > > > 2016-11-24 07:29, Jerin Jacob:
> > > > > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > > > > +Eventdev API - EXPERIMENTAL
> > > > > > > > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > > > > +F: lib/librte_eventdev/
> > > > > > > 
> > > > 
> > > > I don't think there is any portability issue here, I can explain.
> > > > 
> > > > The application level, we have two more use case to deal with non burst
> > > > variant
> > > > 
> > > > - latency critical work
> > > > - on dequeue, if application wants to deal with only one flow(i.e to
> > > >   avoid processing two different application flows to avoid cache trashing)
> > > > 
> > > > Selection of the burst variants will be based on
> > > > rte_event_dev_info_get() and rte_event_dev_configure()(see, max_event_port_dequeue_depth,
> > > > max_event_port_enqueue_depth, nb_event_port_dequeue_depth, nb_event_port_enqueue_depth )
> > > > So I don't think their is portability issue here and I don't want to waste my
> > > > CPU cycles on the for loop if application known to be working with non
> > > > bursts variant like below
> > > > 
> > > 
> > > If the application is known to be working on non-burst varients, then
> > > they always request a burst-size of 1, and skip the loop completely.
> > > There is no extra performance hit in that case in either the app or the
> > > driver (since the non-burst driver always returns 1, irrespective of the
> > > number requested).
> > 
> > Hmm. I am afraid, There is.
> > On the app side, the const "1" can not be optimized by the compiler as
> > on downside it is function pointer based driver interface
> > On the driver side, the implementation would be for loop based instead
> > of plain access.
> > (compiler never can see the const "1" in driver interface)
> > 
> > We are planning to implement burst mode as kind of emulation mode and
> > have a different scheme for burst and nonburst. The similar approach we have
> > taken in introducing rte_event_schedule() and split the responsibility so
> > that SW driver can work without additional performance overhead and neat
> > driver interface.
> > 
> > If you are concerned about the usability part and regression on the SW
> > driver, then it's not the case, application will use nonburst variant only if
> > dequeue_depth == 1 and/or explicit case where latency matters.
> > 
> > On the portability side, we support both case and application if written based
> > on dequeue_depth it will perform well in both implementations.IMO, There is
> > no another shortcut for performance optimized application running on different
> > set of model.I think it is not an issue as, in event model as each cores
> > identical and main loop can be changed based on dequeue_depth
> > if needs performance(anyway mainloop will be function pointer based).
> > 
> 
> Ok, I think I see your point now. Here is an alternative suggestion.
> 
> 1. Keep the single user API.
> 2. Have both single and burst function pointers in the driver
> 3. Call appropriately in the eventdev layer based on parameters. For
> example:
> 
> rte_event_dequeue_burst(..., int num)
> {
> 	if (num == 1 && single_dequeue_fn != NULL)
> 		return single_dequeue_fn(...);
> 	return burst_dequeue_fn(...);
> }
> 
> This way drivers can optionally special-case the single dequeue case -
> the function pointer check will definitely be predictable in HW making
> that a near-zero-cost check - while not forcing all drivers to do so.
> It also reduces the public API surface, and gives us a single enqueue
> and dequeue function.

The alternative suggestion looks good to me. Yes, it makes sense to reduce the
public API surface if possible.

Regarding the implementation, I thought of an approach like the one below to
avoid the cost of the additional AND operation (with a const "1", the compiler
can choose the correct path without any overhead):

rte_event_dequeue_burst(..., int num)
{
	if (num == 1)
		return single_dequeue_fn(...);
	return burst_dequeue_fn(...);
}

"single_dequeue_fn" populated from the driver layer.
In the absence of populating the "single_dequeue_fn" from the driver layer,
The common code can create the single_dequeue_fn using driver
provided "burst_dequeue_fn"

something like
generic_single_dequeue_fn(dev){
{
	dev->burst_dequeue_fn(..,1);
}

Any concerns?

> 
> /Bruce
> 

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 2/4] eventdev: implement the northbound APIs
  2016-11-29  3:43                       ` Jerin Jacob
@ 2016-11-29  5:46                         ` Eads, Gage
  0 siblings, 0 replies; 109+ messages in thread
From: Eads, Gage @ 2016-11-29  5:46 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dev, Richardson, Bruce, Van Haaren, Harry, hemant.agrawal



>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  Sent: Monday, November 28, 2016 9:43 PM
>  To: Eads, Gage <gage.eads@intel.com>
>  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>; Van
>  Haaren, Harry <harry.van.haaren@intel.com>; hemant.agrawal@nxp.com
>  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the northbound APIs
>  
>  On Mon, Nov 28, 2016 at 03:53:08PM +0000, Eads, Gage wrote:
>  > (Bruce's adviced heeded :))
>  >
>  > >  -----Original Message-----
>  > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  > >  Sent: Tuesday, November 22, 2016 5:44 PM
>  > >  To: Eads, Gage <gage.eads@intel.com>
>  > >  Cc: dev@dpdk.org; Richardson, Bruce <bruce.richardson@intel.com>;
>  > > Van  Haaren, Harry <harry.van.haaren@intel.com>;
>  > > hemant.agrawal@nxp.com
>  > >  Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the
>  > > northbound APIs
>  > >
>  > >  On Tue, Nov 22, 2016 at 10:48:32PM +0000, Eads, Gage wrote:
>  > >  >
>  > >  >
>  > >  > >  -----Original Message-----
>  > >  > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  > >  > >  Sent: Tuesday, November 22, 2016 2:00 PM  > >  To: Eads, Gage
>  > > <gage.eads@intel.com>  > >  Cc: dev@dpdk.org; Richardson, Bruce
>  > > <bruce.richardson@intel.com>;  > > Van  Haaren, Harry
>  > > <harry.van.haaren@intel.com>;  > > hemant.agrawal@nxp.com  > >
>  > > Subject: Re: [dpdk-dev] [PATCH 2/4] eventdev: implement the  > >
>  > > northbound APIs  > >  > >  On Tue, Nov 22, 2016 at 07:43:03PM +0000,
>  > > Eads, Gage wrote:
>  > >  > >  > >  > >  > > One open issue I noticed is the "typical workflow"
>  > >  > >  > > description starting in  > >  rte_eventdev.h:204 conflicts
>  > > with  > > the  > > centralized software PMD that Harry  > >  posted last
>  week.
>  > >  > >  > > Specifically, that PMD expects a single core to call the
>  > > > >  > > > > schedule function. We could extend the documentation to
>  > > account  > > for  > > this  > >  alternative style of scheduler
>  > > invocation, or  > > discuss  > > ways to make the  software  > >
>  > > PMD work with the  > > documented  > > workflow. I prefer the
>  > > former, but either  way I  >  > > >  think we  > > ought to expose
>  > > the scheduler's expected usage to  > > the user --  > > perhaps  > >  through
>  an RTE_EVENT_DEV_CAP flag?
>  > >  > >  > >  > >  >
>  > >  > >  > >  > >  > I prefer former too, you can propose the
>  > > documentation  > > > > change required  for  > >  software PMD.
>  > >  > >  > >  >
>  > >  > >  > >  > Sure, proposal follows. The "typical workflow" isn't
>  > > the  > > most  > > optimal by  having a conditional in the
>  > > fast-path, of  > > course, but it  > > demonstrates the idea  simply.
>  > >  > >  > >  >
>  > >  > >  > >  > (line 204)
>  > >  > >  > >  >  * An event driven based application has following
>  > > typical  > > > > workflow on  > >  fastpath:
>  > >  > >  > >  >  * \code{.c}
>  > >  > >  > >  >  *      while (1) {
>  > >  > >  > >  >  *
>  > >  > >  > >  >  *              if (dev_info.event_dev_cap &
>  > >  > >  > >  >  *                      RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)
>  > >  > >  > >  >  *                      rte_event_schedule(dev_id);
>  > >  > >  > >
>  > >  > >  > >  Yes, I like the idea of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
>  > >  > >  > >  It  can be input to application/subsystem to  launch
>  > > separate  > > > > core(s) for schedule functions.
>  > >  > >  > >  But, I think, the "dev_info.event_dev_cap &  > >  > >
>  > > RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED"
>  > >  > >  > >  check can be moved inside the implementation(to make the
>  > > > > better  > > decisions  and  avoiding consuming cycles on HW
>  > > based  schedulers.
>  > >  > >  >
>  > >  > >  > How would this check work? Wouldn't it prevent any core from
>  > > > > running the  software scheduler in the centralized case?
>  > >  > >
>  > >  > >  I guess you may not need RTE_EVENT_DEV_CAP here, instead need
>  > > flag  > > for  device configure here  > >  > >  #define
>  > > RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED (1ULL << 1)  > >  > >  struct
>  > > rte_event_dev_config config;  config.event_dev_cfg =  > >
>  > > RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED;
>  > >  > >  rte_event_dev_configure(.., &config);  > >  > >  on the driver
>  > > side on configure,  > >  if (config.event_dev_cfg &
>  > > RTE_EVENT_DEV_CFG_DISTRIBUTED_SCHED)
>  > >  > >  	eventdev->schedule = NULL;
>  > >  > >  else // centralized case
>  > >  > >  	eventdev->schedule = your_centrized_schedule_function;
>  > >  > >
>  > >  > >  Does that work?
>  > >  >
>  > >  > Hm, I fear the API would give users the impression that they can
>  > > select the  scheduling behavior of a given eventdev, when a software
>  > > scheduler is more  likely to be either distributed or centralized -- not both.
>  > >
>  > >  Even if it is capability flag then also it is per "device". Right ?
>  > >  capability flag is more of read only too. Am i missing something here?
>  > >
>  >
>  > Correct, the capability flag I'm envisioning is per-device and read-only.
>  >
>  > >  >
>  > >  > What if we use the capability flag, and define
>  > > rte_event_schedule() as the  scheduling function for centralized
>  > > schedulers and rte_event_dequeue() as the  scheduling function for
>  > > distributed schedulers? That way, the datapath could be  the simple
>  > > dequeue -> process -> enqueue. Applications would check the
>  > > capability flag at configuration time to decide whether or not to launch an
>  lcore that calls rte_event_schedule().
>  > >
>  > >  I am all for simple "dequeue -> process -> enqueue".
>  > >  rte_event_schedule() added for SW scheduler only,  now it may not
>  > > make sense  to add one more check on top of "rte_event_schedule()"
>  > > to see it is really need  or not in fastpath?
>  > >
>  >
>  > Yes, the additional check shouldn't be needed. In terms of the 'typical
>  workflow' description, this is what I have in mind:
>  >
>  > *
>  >  * An event driven based application has following typical workflow on
>  fastpath:
>  >  * \code{.c}
>  >  *  while (1) {
>  >  *
>  >  *      rte_event_dequeue(...);
>  >  *
>  >  *      (event processing)
>  >  *
>  >  *      rte_event_enqueue(...);
>  >  *  }
>  >  * \endcode
>  >  *
>  >  * The events are injected to event device through the *enqueue*
>  > operation by
>  >  * event producers in the system. The typical event producers are
>  > ethdev
>  >  * subsystem for generating packet events, core(SW) for generating
>  > events based
>  >  * on different stages of application processing, cryptodev for
>  > generating
>  >  * crypto work completion notification etc
>  >  *
>  >  * The *dequeue* operation gets one or more events from the event ports.
>  >  * The application process the events and send to downstream event
>  > queue through
>  >  * rte_event_enqueue() if it is an intermediate stage of event
>  > processing, on
>  >  * the final stage, the application may send to different subsystem
>  > like ethdev
>  >  * to send the packet/event on the wire using ethdev rte_eth_tx_burst() API.
>  >  *
>  >  * The point at which events are scheduled to ports depends on the
>  > device. For
>  >  * hardware devices, scheduling occurs asynchronously. Software
>  > schedulers can
>  >  * either be distributed (each worker thread schedules events to its
>  > own port)
>  >  * or centralized (a dedicated thread schedules to all ports).
>  > Distributed
>  >  * software schedulers perform the scheduling in rte_event_dequeue(),
>  > whereas
>  >  * centralized scheduler logic is located in rte_event_schedule(). The
>  >  * RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag indicates
>  > whether a
>  >  * device is centralized and thus needs a dedicated scheduling thread
>  > that
>  
>  Since we are starting a dedicated thread in centralized case, How about name
>  the flag as RTE_EVENT_DEV_CAP_CENTRALIZED_SCHED?
>  instead of RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED.
>  No strong opinion here. Just a thought.
>  

Fine with me.

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH 1/4] eventdev: introduce event driven programming model
  2016-11-29  4:01                 ` Jerin Jacob
@ 2016-11-29 10:00                   ` Bruce Richardson
  0 siblings, 0 replies; 109+ messages in thread
From: Bruce Richardson @ 2016-11-29 10:00 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Thomas Monjalon, dev, harry.van.haaren, hemant.agrawal, gage.eads

On Tue, Nov 29, 2016 at 09:31:42AM +0530, Jerin Jacob wrote:
> On Mon, Nov 28, 2016 at 09:16:10AM +0000, Bruce Richardson wrote:
> > On Sat, Nov 26, 2016 at 08:24:55AM +0530, Jerin Jacob wrote:
> > > On Fri, Nov 25, 2016 at 11:00:53AM +0000, Bruce Richardson wrote:
> > > > On Fri, Nov 25, 2016 at 05:53:34AM +0530, Jerin Jacob wrote:
> > > > > On Thu, Nov 24, 2016 at 04:35:56PM +0100, Thomas Monjalon wrote:
> > > > > > 2016-11-24 07:29, Jerin Jacob:
> > > > > > > On Wed, Nov 23, 2016 at 07:39:09PM +0100, Thomas Monjalon wrote:
> > > > > > > > 2016-11-18 11:14, Jerin Jacob:
> > > > > > > > > +Eventdev API - EXPERIMENTAL
> > > > > > > > > +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > > > > > +F: lib/librte_eventdev/
> > > > > > > > 
> > > > > 
> > > > > I don't think there is any portability issue here, I can explain.
> > > > > 
> > > > > At the application level, we have two more use cases for the non-burst
> > > > > variant:
> > > > > 
> > > > > - latency critical work
> > > > > - on dequeue, if the application wants to deal with only one flow (i.e. to
> > > > >   avoid processing two different application flows and thus cache thrashing)
> > > > > 
> > > > > Selection of the burst variants will be based on
> > > > > rte_event_dev_info_get() and rte_event_dev_configure() (see max_event_port_dequeue_depth,
> > > > > max_event_port_enqueue_depth, nb_event_port_dequeue_depth, nb_event_port_enqueue_depth).
> > > > > So I don't think there is a portability issue here, and I don't want to waste
> > > > > CPU cycles on the for loop if the application is known to be working with the
> > > > > non-burst variant, like below
> > > > > 
> > > > 
> > > > If the application is known to be working with non-burst variants, then
> > > > it can always request a burst size of 1 and skip the loop completely.
> > > > There is no extra performance hit in that case in either the app or the
> > > > driver (since the non-burst driver always returns 1, irrespective of the
> > > > number requested).
> > > 
> > > Hmm. I am afraid there is.
> > > On the app side, the const "1" cannot be optimized by the compiler because,
> > > underneath, it is a function-pointer-based driver interface.
> > > On the driver side, the implementation would be for-loop based instead
> > > of plain access
> > > (the compiler can never see the const "1" across the driver interface).
> > > 
> > > We are planning to implement burst mode as a kind of emulation mode and
> > > have different schemes for burst and non-burst. We took a similar approach
> > > in introducing rte_event_schedule() and splitting the responsibility so
> > > that the SW driver can work without additional performance overhead and with
> > > a neat driver interface.
> > > 
> > > If you are concerned about the usability part and regression on the SW
> > > driver, that is not the case; the application will use the non-burst variant
> > > only if dequeue_depth == 1 and/or in the explicit case where latency matters.
> > > 
> > > On the portability side, we support both cases, and an application written
> > > based on dequeue_depth will perform well with both implementations. IMO, there
> > > is no other shortcut for a performance-optimized application running on
> > > different sets of models. I think it is not an issue as, in the event model,
> > > each core is identical and the main loop can be changed based on dequeue_depth
> > > if performance requires it (the main loop will be function-pointer based anyway).
> > > 
> > 
> > Ok, I think I see your point now. Here is an alternative suggestion.
> > 
> > 1. Keep the single user API.
> > 2. Have both single and burst function pointers in the driver
> > 3. Call appropriately in the eventdev layer based on parameters. For
> > example:
> > 
> > rte_event_dequeue_burst(..., int num)
> > {
> > 	if (num == 1 && single_dequeue_fn != NULL)
> > 		return single_dequeue_fn(...);
> > 	return burst_dequeue_fn(...);
> > }
> > 
> > This way drivers can optionally special-case the single dequeue case -
> > the function pointer check will definitely be predictable in HW making
> > that a near-zero-cost check - while not forcing all drivers to do so.
> > It also reduces the public API surface, and gives us a single enqueue
> > and dequeue function.
> 
> The alternative suggestion looks good to me. Yes, it makes sense to reduce the
> public API interface if possible.
> 
> Regarding the implementation, I thought of an approach like the one below
> to reduce the cost of the additional AND operation (with const "1", the compiler
> can choose the correct one without any overhead).
> 
> rte_event_dequeue_burst(..., int num)
> {
> 	if (num == 1)
> 		return single_dequeue_fn(...);
> 	return burst_dequeue_fn(...);
> }
> 
> "single_dequeue_fn" populated from the driver layer.
> In the absence of populating the "single_dequeue_fn" from the driver layer,
> The common code can create the single_dequeue_fn using driver
> provided "burst_dequeue_fn"
> 
> something like
> generic_single_dequeue_fn(dev){
> {
> 	dev->burst_dequeue_fn(..,1);
> }
> 
> Any concerns?
> 
No, works ok for me 

/Bruce
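
To make the agreed scheme concrete, a rough sketch of the dispatch in the
common eventdev layer could look like the one below. The rte_eventdevs[] array
and the dequeue/dequeue_burst member names are illustrative assumptions; only
the nb_events == 1 dispatch itself reflects what was agreed above:

static inline uint16_t
rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id,
			struct rte_event ev[], uint16_t nb_events,
			uint64_t timeout_ticks)
{
	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
	void *port = dev->data->ports[port_id];

	if (nb_events == 1)
		/* Populated by the driver, or by the common layer as a
		 * wrapper around dequeue_burst(..., 1).
		 */
		return dev->dequeue(port, ev, timeout_ticks);

	return dev->dequeue_burst(port, ev, nb_events, timeout_ticks);
}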

^ permalink raw reply	[flat|nested] 109+ messages in thread

* [PATCH v2 0/6] libeventdev API and northbound implementation
  2016-11-18  5:44 ` [PATCH 1/4] eventdev: introduce event driven programming model Jerin Jacob
  2016-11-23 18:39   ` Thomas Monjalon
  2016-11-24 16:24   ` Bruce Richardson
@ 2016-12-06  3:52   ` Jerin Jacob
  2016-12-06  3:52     ` [PATCH v2 1/6] eventdev: introduce event driven programming model Jerin Jacob
                       ` (7 more replies)
  2 siblings, 8 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-06  3:52 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

As previously discussed in RFC v1 [1], RFC v2 [2], with changes
described in [3] (also pasted below), here is the first non-draft series
for this new API.

[1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
[2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
[3] http://dpdk.org/ml/archives/dev/2016-October/048196.html

v1..v2:
1) Remove unnecessary header files from rte_eventdev.h(Thomas)
2) Removed PMD driver name(EVENTDEV_NAME_SKELETON_PMD) from rte_eventdev.h(Thomas)
3) Removed different #define for different priority schemes. Changed to
one event device RTE_EVENT_DEV_PRIORITY_* priority (Bruce)
4) add const to rte_event_dev_configure(), rte_event_queue_setup(),
rte_event_port_setup(), rte_event_port_link()(Bruce)
5) Fixed missing dev argument in dev->schedule() function(Bruce)
6) Changed \see to @see in doxygen comments(Thomas)
7) Added additional text in specification to clarify the queue depth(Thomas)
8) Changed wait to timeout across the specification(Thomas)
9) Added longer explanation for RTE_EVENT_OP_NEW and RTE_EVENT_OP_FORWARD(Thomas)
10) Fixed issue with RTE_EVENT_OP_RELEASE doxygen formatting(Thomas)
11) Changed to RTE_EVENT_DEV_CFG_FLAG_ from RTE_EVENT_DEV_CFG_(Thomas)
12) Changed to EVENT_QUEUE_CFG_FLAG_ from EVENT_QUEUE_CFG_(Thomas)
13) s/RTE_EVENT_TYPE_CORE/RTE_EVENT_TYPE_CPU/(Thomas, Gage)
14) Removed non burst API and kept only the burst API in the API specification
(Thomas, Bruce, Harry, Jerin)
-- Driver interface has non burst API, selection of the non burst API is based
on num_objects == 1
15) sizeof(struct rte_event) was not 16 in v1. Fixed it in v2
-- reduced the width of event_type to 4 bits to save space for future changes
-- introduced impl_opaque for implementation specific opaque data(Harry),
something useful for the HW driver too, in the context of removing the need for a
separate release API.
-- squashed other element sizes and provided enough space for impl_opaque(Jerin)
-- added RTE_BUILD_BUG_ON(sizeof(struct rte_event) != 16); check
16) added a union with uint64_t as the second element in struct rte_event to
make sure the structure is 16 bytes on all archs(Thomas)
17) Fixed invalid check of nb_atomic_order_sequences in implementation(Gage)
18) s/EDEV_LOG_ERR/RTE_EDEV_LOG_ERR(Thomas)
19) s/rte_eventdev_pmd_/rte_event_pmd_/(Bruce)
20) added fine details of distributed vs centralized scheduling information
in the specification and introduced RTE_EVENT_DEV_CAP_FLAG_DISTRIBUTED_SCHED
flag(Gage)
21)s/RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_CONSUMER/RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK (Jerin)
to remove the confusion with producer and consumer in the sw eventdev driver
22) Northbound API implementation patch split into more logical patches(Thomas)


Changes since RFC v2:

- Updated the documentation to define the need for this library[Jerin]
- Added RTE_EVENT_QUEUE_CFG_*_ONLY configuration parameters in
  struct rte_event_queue_conf to enable optimized sw implementation [Bruce]
- Introduced RTE_EVENT_OP* ops [Bruce]
- Added nb_event_queue_flows,nb_event_port_dequeue_depth, nb_event_port_enqueue_depth
  in rte_event_dev_configure() like ethdev and crypto library[Jerin]
- Removed rte_event_release() and replaced with RTE_EVENT_OP_RELEASE ops to
  reduce fast path APIs and it is redundant too[Jerin]
- In the view of better application portability, Removed pin_event
  from rte_event_enqueue as it is just hint and Intel/NXP can not support it[Jerin]
- Added rte_event_port_links_get()[Jerin]
- Added rte_event_dev_dump[Harry]

Notes:

- This patch set is check-patch clean with an exception that
02/04 has one WARNING:MACRO_WITH_FLOW_CONTROL
- Looking forward to getting additional maintainers for libeventdev


TODO:
1) Create user guide
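
Until the user guide lands, a minimal bring-up sketch against the API proposed
in patch 1/6 may help readers; all configuration values below are illustrative,
and queue-to-port linking plus rte_event_dev_start() are elided:

#include <string.h>
#include <rte_eventdev.h>

static int
eventdev_minimal_setup(uint8_t dev_id)
{
	struct rte_event_dev_info info;
	struct rte_event_dev_config cfg;
	uint8_t i;

	if (rte_event_dev_info_get(dev_id, &info) < 0)
		return -1;

	memset(&cfg, 0, sizeof(cfg));
	cfg.nb_event_queues = 2;
	cfg.nb_event_ports = 2;
	cfg.nb_events_limit = info.max_num_events; /* -1 on open systems */
	cfg.nb_event_queue_flows = 1024;
	cfg.nb_event_port_dequeue_depth = 1;
	cfg.nb_event_port_enqueue_depth = 1;
	cfg.dequeue_timeout_ns = info.min_dequeue_timeout_ns;
	if (rte_event_dev_configure(dev_id, &cfg) < 0)
		return -1;

	/* NULL config means: use the PMD's default queue/port configuration */
	for (i = 0; i < cfg.nb_event_queues; i++)
		if (rte_event_queue_setup(dev_id, i, NULL) < 0)
			return -1;
	for (i = 0; i < cfg.nb_event_ports; i++)
		if (rte_event_port_setup(dev_id, i, NULL) < 0)
			return -1;

	/* rte_event_port_link() and rte_event_dev_start() would follow */
	return 0;
}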

Jerin Jacob (6):
  eventdev: introduce event driven programming model
  eventdev: define southbound driver interface
  eventdev: implement the northbound APIs
  eventdev: implement PMD registration functions
  event/skeleton: add skeleton eventdev driver
  app/test: unit test case for eventdev APIs

 MAINTAINERS                                        |    5 +
 app/test/Makefile                                  |    2 +
 app/test/test_eventdev.c                           |  775 +++++++++++
 config/common_base                                 |   14 +
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 drivers/Makefile                                   |    1 +
 drivers/event/Makefile                             |   36 +
 drivers/event/skeleton/Makefile                    |   55 +
 .../skeleton/rte_pmd_skeleton_event_version.map    |    4 +
 drivers/event/skeleton/skeleton_eventdev.c         |  540 ++++++++
 drivers/event/skeleton/skeleton_eventdev.h         |   72 +
 lib/Makefile                                       |    1 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eventdev/Makefile                       |   57 +
 lib/librte_eventdev/rte_eventdev.c                 | 1237 +++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h                 | 1408 ++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_pmd.h             |  506 +++++++
 lib/librte_eventdev/rte_eventdev_version.map       |   39 +
 mk/rte.app.mk                                      |    5 +
 20 files changed, 4760 insertions(+)
 create mode 100644 app/test/test_eventdev.c
 create mode 100644 drivers/event/Makefile
 create mode 100644 drivers/event/skeleton/Makefile
 create mode 100644 drivers/event/skeleton/rte_pmd_skeleton_event_version.map
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.c
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.h
 create mode 100644 lib/librte_eventdev/Makefile
 create mode 100644 lib/librte_eventdev/rte_eventdev.c
 create mode 100644 lib/librte_eventdev/rte_eventdev.h
 create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
 create mode 100644 lib/librte_eventdev/rte_eventdev_version.map

-- 
2.5.5

^ permalink raw reply	[flat|nested] 109+ messages in thread

* [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
@ 2016-12-06  3:52     ` Jerin Jacob
  2016-12-06 16:51       ` Bruce Richardson
                         ` (3 more replies)
  2016-12-06  3:52     ` [PATCH v2 2/6] eventdev: define southbound driver interface Jerin Jacob
                       ` (6 subsequent siblings)
  7 siblings, 4 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-06  3:52 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

In a polling model, lcores poll ethdev ports and associated
rx queues directly to look for packets. In an event driven model,
by contrast, lcores call the scheduler, which selects packets for
them based on programmer-specified criteria. The eventdev library
adds support for an event driven programming model, which offers
applications automatic multicore scaling, dynamic load balancing,
pipelining, packet ingress order maintenance and
synchronization services to simplify application packet processing.

By introducing an event driven programming model, DPDK can support
both polling and event driven programming models for packet processing,
and applications are free to choose whichever model
(or combination of the two) best suits their needs.

This patch adds the eventdev specification header file.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 MAINTAINERS                        |    3 +
 doc/api/doxy-api-index.md          |    1 +
 doc/api/doxy-api.conf              |    1 +
 lib/librte_eventdev/rte_eventdev.h | 1274 ++++++++++++++++++++++++++++++++++++
 4 files changed, 1279 insertions(+)
 create mode 100644 lib/librte_eventdev/rte_eventdev.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 26d9590..8e59352 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -249,6 +249,9 @@ F: lib/librte_cryptodev/
 F: app/test/test_cryptodev*
 F: examples/l2fwd-crypto/
 
+Eventdev API - EXPERIMENTAL
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+F: lib/librte_eventdev/
 
 Networking Drivers
 ------------------
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 6675f96..28c1329 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -40,6 +40,7 @@ There are many libraries, so their headers may be grouped by topics:
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
   [cryptodev]          (@ref rte_cryptodev.h),
+  [eventdev]           (@ref rte_eventdev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index 9dc7ae5..9841477 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -41,6 +41,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
+                          lib/librte_eventdev \
                           lib/librte_hash \
                           lib/librte_ip_frag \
                           lib/librte_jobstats \
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
new file mode 100644
index 0000000..72f5b87
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -0,0 +1,1274 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Cavium.
+ *   Copyright 2016 Intel Corporation.
+ *   Copyright 2016 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_EVENTDEV_H_
+#define _RTE_EVENTDEV_H_
+
+/**
+ * @file
+ *
+ * RTE Event Device API
+ *
+ * In a polling model, lcores poll ethdev ports and associated rx queues
+ * directly to look for packets. In an event driven model, by contrast, lcores
+ * call the scheduler, which selects packets for them based on
+ * programmer-specified criteria. The eventdev library adds support for an
+ * event driven programming model, which offers applications automatic
+ * multicore scaling, dynamic load balancing, pipelining, packet ingress order
+ * maintenance and synchronization services to simplify application packet
+ * processing.
+ *
+ * The Event Device API is composed of two parts:
+ *
+ * - The application-oriented Event API that includes functions to setup
+ *   an event device (configure it, setup its queues, ports and start it), to
+ *   establish the link between queues to port and to receive events, and so on.
+ *
+ * - The driver-oriented Event API that exports a function allowing
+ *   an event Poll Mode Driver (PMD) to register itself as
+ *   an event device driver.
+ *
+ * Event device components:
+ *
+ *                     +-----------------+
+ *                     | +-------------+ |
+ *        +-------+    | |    flow 0   | |
+ *        |Packet |    | +-------------+ |
+ *        |event  |    | +-------------+ |
+ *        |       |    | |    flow 1   | |port_link(port0, queue0)
+ *        +-------+    | +-------------+ |     |     +--------+
+ *        +-------+    | +-------------+ o-----v-----o        |dequeue +------+
+ *        |Crypto |    | |    flow n   | |           | event  +------->|Core 0|
+ *        |work   |    | +-------------+ o----+      | port 0 |        |      |
+ *        |done ev|    |  event queue 0  |    |      +--------+        +------+
+ *        +-------+    +-----------------+    |
+ *        +-------+                           |
+ *        |Timer  |    +-----------------+    |      +--------+
+ *        |expiry |    | +-------------+ |    +------o        |dequeue +------+
+ *        |event  |    | |    flow 0   | o-----------o event  +------->|Core 1|
+ *        +-------+    | +-------------+ |      +----o port 1 |        |      |
+ *       Event enqueue | +-------------+ |      |    +--------+        +------+
+ *     o-------------> | |    flow 1   | |      |
+ *        enqueue(     | +-------------+ |      |
+ *        queue_id,    |                 |      |    +--------+        +------+
+ *        flow_id,     | +-------------+ |      |    |        |dequeue |Core 2|
+ *        sched_type,  | |    flow n   | o-----------o event  +------->|      |
+ *        event_type,  | +-------------+ |      |    | port 2 |        +------+
+ *        subev_type,  |  event queue 1  |      |    +--------+
+ *        event)       +-----------------+      |    +--------+
+ *                                              |    |        |dequeue +------+
+ *        +-------+    +-----------------+      |    | event  +------->|Core n|
+ *        |Core   |    | +-------------+ o-----------o port n |        |      |
+ *        |(SW)   |    | |    flow 0   | |      |    +--------+        +--+---+
+ *        |event  |    | +-------------+ |      |                         |
+ *        +-------+    | +-------------+ |      |                         |
+ *            ^        | |    flow 1   | |      |                         |
+ *            |        | +-------------+ o------+                         |
+ *            |        | +-------------+ |                                |
+ *            |        | |    flow n   | |                                |
+ *            |        | +-------------+ |                                |
+ *            |        |  event queue n  |                                |
+ *            |        +-----------------+                                |
+ *            |                                                           |
+ *            +-----------------------------------------------------------+
+ *
+ * Event device: A hardware or software-based event scheduler.
+ *
+ * Event: A unit of scheduling that encapsulates a packet or another data type,
+ * such as an SW generated event from the CPU, a crypto work completion
+ * notification, a timer expiry event notification etc., as well as metadata.
+ * The metadata includes flow ID, scheduling type, event priority, event_type,
+ * sub_event_type etc.
+ *
+ * Event queue: A queue containing events that are scheduled by the event dev.
+ * An event queue contains events of different flows associated with scheduling
+ * types, such as atomic, ordered, or parallel.
+ *
+ * Event port: An application's interface into the event dev for enqueue and
+ * dequeue operations. Each event port can be linked with one or more
+ * event queues for dequeue operations.
+ *
+ * By default, all the functions of the Event Device API exported by a PMD
+ * are lock-free functions that assume they are not invoked in parallel on
+ * different logical cores to work on the same target object. For instance,
+ * the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on the same event port. Of course, this function
+ * can be invoked in parallel by different logical cores on different ports.
+ * It is the responsibility of the upper-level application to enforce this rule.
+ *
+ * In all functions of the Event API, the Event device is
+ * designated by an integer >= 0 named the device identifier *dev_id*
+ *
+ * At the Event driver level, Event devices are represented by a generic
+ * data structure of type *rte_event_dev*.
+ *
+ * Event devices are dynamically registered during the PCI/SoC device probing
+ * phase performed at EAL initialization time.
+ * When an Event device is being probed, a *rte_event_dev* structure and
+ * a new device identifier are allocated for that device. Then, the
+ * event_dev_init() function supplied by the Event driver matching the probed
+ * device is invoked to properly initialize the device.
+ *
+ * The role of the device init function consists of resetting the hardware or
+ * software event driver implementations.
+ *
+ * If the device init operation is successful, the correspondence between
+ * the device identifier assigned to the new device and its associated
+ * *rte_event_dev* structure is effectively registered.
+ * Otherwise, both the *rte_event_dev* structure and the device identifier are
+ * freed.
+ *
+ * The functions exported by the application Event API to setup a device
+ * designated by its device identifier must be invoked in the following order:
+ *     - rte_event_dev_configure()
+ *     - rte_event_queue_setup()
+ *     - rte_event_port_setup()
+ *     - rte_event_port_link()
+ *     - rte_event_dev_start()
+ *
+ * Then, the application can invoke, in any order, the functions
+ * exported by the Event API to schedule events, dequeue events, enqueue events,
+ * establish or change event queue(s) to event port [un]links, and so on.
+ *
+ * The application may use rte_event_[queue/port]_default_conf_get() to get the
+ * default configuration to set up an event queue or event port by
+ * overriding a few default values.
+ *
+ * If the application wants to change the configuration (i.e. call
+ * rte_event_dev_configure(), rte_event_queue_setup(), or
+ * rte_event_port_setup()), it must call rte_event_dev_stop() first to stop the
+ * device and then do the reconfiguration before calling rte_event_dev_start()
+ * again. The schedule, enqueue and dequeue functions should not be invoked
+ * when the device is stopped.
+ *
+ * Finally, an application can close an Event device by invoking the
+ * rte_event_dev_close() function.
+ *
+ * Each function of the application Event API invokes a specific function
+ * of the PMD that controls the target device designated by its device
+ * identifier.
+ *
+ * For this purpose, all device-specific functions of an Event driver are
+ * supplied through a set of pointers contained in a generic structure of type
+ * *event_dev_ops*.
+ * The address of the *event_dev_ops* structure is stored in the *rte_event_dev*
+ * structure by the device init function of the Event driver, which is
+ * invoked during the PCI/SoC device probing phase, as explained earlier.
+ *
+ * In other words, each function of the Event API simply retrieves the
+ * *rte_event_dev* structure associated with the device identifier and
+ * performs an indirect invocation of the corresponding driver function
+ * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
+ *
+ * For performance reasons, the address of the fast-path functions of the
+ * Event driver is not contained in the *event_dev_ops* structure.
+ * Instead, they are directly stored at the beginning of the *rte_event_dev*
+ * structure to avoid an extra indirect memory access during their invocation.
+ *
+ * RTE event device drivers do not use interrupts for enqueue or dequeue
+ * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
+ * functions to applications.
+ *
+ * An event driven application has the following typical workflow on the fastpath:
+ * \code{.c}
+ *	while (1) {
+ *
+ *		rte_event_schedule(dev_id);
+ *
+ *		rte_event_dequeue(...);
+ *
+ *		(event processing)
+ *
+ *		rte_event_enqueue(...);
+ *	}
+ * \endcode
+ *
+ * The events are injected into the event device through the *enqueue* operation
+ * by event producers in the system. The typical event producers are the ethdev
+ * subsystem for generating packet events, the CPU(SW) for generating events
+ * based on different stages of application processing, and the cryptodev for
+ * generating crypto work completion notifications, etc.
+ *
+ * The *dequeue* operation gets one or more events from the event ports.
+ * The application processes the events and sends them to a downstream event
+ * queue through rte_event_enqueue_burst() if it is an intermediate stage of
+ * event processing; at the final stage, the application may send them to a
+ * different subsystem, such as ethdev, to transmit the packet/event on the
+ * wire using the rte_eth_tx_burst() API.
+ *
+ * The point at which events are scheduled to ports depends on the device.
+ * For hardware devices, scheduling occurs asynchronously without any software
+ * intervention. Software schedulers can either be distributed
+ * (each worker thread schedules events to its own port) or centralized
+ * (a dedicated thread schedules to all ports). Distributed software schedulers
+ * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
+ * scheduler logic is located in rte_event_schedule().
+ * If the RTE_EVENT_DEV_CAP_FLAG_DISTRIBUTED_SCHED capability flag is not set,
+ * the device is centralized and thus needs a dedicated scheduling
+ * thread that repeatedly calls rte_event_schedule().
+ *
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+#include <rte_pci.h>
+#include <rte_mbuf.h>
+
+/* Event device capability bitmap flags */
+#define RTE_EVENT_DEV_CAP_FLAG_QUEUE_QOS           (1ULL << 0)
+/**< Event scheduling prioritization is based on the priority associated with
+ *  each event queue.
+ *
+ *  @see rte_event_queue_setup()
+ */
+#define RTE_EVENT_DEV_CAP_FLAG_EVENT_QOS           (1ULL << 1)
+/**< Event scheduling prioritization is based on the priority associated with
+ *  each event. Priority of each event is supplied in *rte_event* structure
+ *  on each enqueue operation.
+ *
+ *  @see rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_DEV_CAP_FLAG_DISTRIBUTED_SCHED   (1ULL << 2)
+/**< Event device operates in distributed scheduling mode.
+ * In distributed scheduling mode, event scheduling happens in HW or
+ * rte_event_dequeue_burst() or the combination of these two.
+ * If the flag is not set then eventdev is centralized and thus needs a
+ * dedicated scheduling thread that repeatedly calls rte_event_schedule().
+ *
+ * @see rte_event_schedule(), rte_event_dequeue_burst()
+ */
+
+/* Event device priority levels */
+#define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
+/**< Highest priority expressed across eventdev subsystem
+ * @see rte_event_queue_setup(), rte_event_enqueue_burst()
+ * @see rte_event_port_link()
+ */
+#define RTE_EVENT_DEV_PRIORITY_NORMAL    128
+/**< Normal priority expressed across eventdev subsystem
+ * @see rte_event_queue_setup(), rte_event_enqueue_burst()
+ * @see rte_event_port_link()
+ */
+#define RTE_EVENT_DEV_PRIORITY_LOWEST    255
+/**< Lowest priority expressed across eventdev subsystem
+ * @see rte_event_queue_setup(), rte_event_enqueue_burst()
+ * @see rte_event_port_link()
+ */
+
+/**
+ * Get the total number of event devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   The total number of usable event devices.
+ */
+uint8_t
+rte_event_dev_count(void);
+
+/**
+ * Get the device identifier for the named event device.
+ *
+ * @param name
+ *   Event device name to select the event device identifier.
+ *
+ * @return
+ *   Returns event device identifier on success.
+ *   - <0: Failure to find named event device.
+ */
+int
+rte_event_dev_get_dev_id(const char *name);
+
+/**
+ * Return the NUMA socket to which a device is connected.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   The NUMA socket id to which the device is connected or
+ *   a default of zero if the socket could not be determined.
+ *   -(-EINVAL)  dev_id value is out of range.
+ */
+int
+rte_event_dev_socket_id(uint8_t dev_id);
+
+/**
+ * Event device information
+ */
+struct rte_event_dev_info {
+	const char *driver_name;	/**< Event driver name */
+	struct rte_pci_device *pci_dev;	/**< PCI information */
+	uint32_t min_dequeue_timeout_ns;
+	/**< Minimum supported global dequeue timeout(ns) by this device */
+	uint32_t max_dequeue_timeout_ns;
+	/**< Maximum supported global dequeue timeout(ns) by this device */
+	uint32_t dequeue_timeout_ns;
+	/**< Configured global dequeue timeout(ns) for this device */
+	uint8_t max_event_queues;
+	/**< Maximum event_queues supported by this device */
+	uint32_t max_event_queue_flows;
+	/**< Maximum supported flows in an event queue by this device*/
+	uint8_t max_event_queue_priority_levels;
+	/**< Maximum number of event queue priority levels by this device.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_FLAG_QUEUE_QOS capability
+	 */
+	uint8_t max_event_priority_levels;
+	/**< Maximum number of event priority levels by this device.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_FLAG_EVENT_QOS capability
+	 */
+	uint8_t max_event_ports;
+	/**< Maximum number of event ports supported by this device */
+	uint8_t max_event_port_dequeue_depth;
+	/**< Maximum number of events that can be dequeued at a time from an
+	 * event port by this device.
+	 * A device that does not support bulk dequeue will set this as 1.
+	 */
+	uint32_t max_event_port_enqueue_depth;
+	/**< Maximum number of events that can be enqueued at a time from an
+	 * event port by this device.
+	 * A device that does not support bulk enqueue will set this as 1.
+	 */
+	int32_t max_num_events;
+	/**< A *closed system* event dev has a limit on the number of events it
+	 * can manage at a time. An *open system* event dev does not have a
+	 * limit and will specify this as -1.
+	 */
+	uint32_t event_dev_cap;
+	/**< Event device capabilities(RTE_EVENT_DEV_CAP_FLAG_)*/
+};
+
+/**
+ * Retrieve the contextual information of an event device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param[out] dev_info
+ *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
+ *   contextual information of the device.
+ *
+ * @return
+ *   - 0: Success, driver updates the contextual information of the event device
+ *   - <0: Error code returned by the driver info get function.
+ *
+ */
+int
+rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
+
+/* Event device configuration bitmap flags */
+#define RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT (1ULL << 0)
+/**< Override the global *dequeue_timeout_ns* and use per dequeue timeout in ns.
+ *  @see rte_event_dequeue_timeout_ticks(), rte_event_dequeue_burst()
+ */
+
+/** Event device configuration structure */
+struct rte_event_dev_config {
+	uint32_t dequeue_timeout_ns;
+	/**< rte_event_dequeue_burst() timeout on this device.
+	 * This value should be in the range of *min_dequeue_timeout_ns* and
+	 * *max_dequeue_timeout_ns* which was previously provided in
+	 * rte_event_dev_info_get()
+	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
+	 */
+	int32_t nb_events_limit;
+	/**< Applies to *closed system* event dev only. This field indicates a
+	 * limit on ethdev-like devices for the number of events injected
+	 * into the system, so as not to overwhelm core-to-core events.
+	 * This value cannot exceed the *max_num_events* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint8_t nb_event_queues;
+	/**< Number of event queues to configure on this device.
+	 * This value cannot exceed the *max_event_queues* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint8_t nb_event_ports;
+	/**< Number of event ports to configure on this device.
+	 * This value cannot exceed the *max_event_ports* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint32_t nb_event_queue_flows;
+	/**< Number of flows for any event queue on this device.
+	 * This value cannot exceed the *max_event_queue_flows* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint8_t nb_event_port_dequeue_depth;
+	/**< Maximum number of events that can be dequeued at a time from an
+	 * event port by this device.
+	 * This value cannot exceed the *max_event_port_dequeue_depth*
+	 * which was previously provided in rte_event_dev_info_get()
+	 * @see rte_event_port_setup()
+	 */
+	uint32_t nb_event_port_enqueue_depth;
+	/**< Maximum number of events that can be enqueued at a time from an
+	 * event port by this device.
+	 * This value cannot exceed the *max_event_port_enqueue_depth*
+	 * which was previously provided in rte_event_dev_info_get()
+	 * @see rte_event_port_setup()
+	 */
+	uint32_t event_dev_cfg;
+	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+};
+
+/**
+ * Configure an event device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * The caller may use rte_event_dev_info_get() to get the capabilities and
+ * resources available for this event device.
+ *
+ * @param dev_id
+ *   The identifier of the device to configure.
+ * @param dev_conf
+ *   The event device configuration structure.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+int
+rte_event_dev_configure(uint8_t dev_id,
+			const struct rte_event_dev_config *dev_conf);
+
+
+/* Event queue specific APIs */
+
+/* Event queue configuration bitmap flags */
+#define RTE_EVENT_QUEUE_CFG_FLAG_DEFAULT            (0)
+/**< Default value of *event_queue_cfg* when rte_event_queue_setup() invoked
+ * with queue_conf == NULL
+ *
+ * @see rte_event_queue_setup()
+ */
+#define RTE_EVENT_QUEUE_CFG_FLAG_TYPE_MASK          (3ULL << 0)
+/**< Mask for event queue schedule type configuration request */
+#define RTE_EVENT_QUEUE_CFG_FLAG_ALL_TYPES          (0ULL << 0)
+/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
+ *
+ * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
+ * @see rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_QUEUE_CFG_FLAG_ATOMIC_ONLY        (1ULL << 0)
+/**< Allow only ATOMIC schedule type enqueue
+ *
+ * The rte_event_enqueue_burst() result is undefined if the queue is configured
+ * as ATOMIC only and sched_type != RTE_SCHED_TYPE_ATOMIC
+ *
+ * @see RTE_SCHED_TYPE_ATOMIC, rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_QUEUE_CFG_FLAG_ORDERED_ONLY       (2ULL << 0)
+/**< Allow only ORDERED schedule type enqueue
+ *
+ * The rte_event_enqueue_burst() result is undefined if the queue is configured
+ * as ORDERED only and sched_type != RTE_SCHED_TYPE_ORDERED
+ *
+ * @see RTE_SCHED_TYPE_ORDERED, rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_QUEUE_CFG_FLAG_PARALLEL_ONLY      (3ULL << 0)
+/**< Allow only PARALLEL schedule type enqueue
+ *
+ * The rte_event_enqueue_burst() result is undefined if the queue is configured
+ * as PARALLEL only and sched_type != RTE_SCHED_TYPE_PARALLEL
+ *
+ * @see RTE_SCHED_TYPE_PARALLEL, rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK        (1ULL << 2)
+/**< This event queue links only to a single event port.
+ *
+ *  @see rte_event_port_setup(), rte_event_port_link()
+ */
+
+/** Event queue configuration structure */
+struct rte_event_queue_conf {
+	uint32_t nb_atomic_flows;
+	/**< The maximum number of active flows this queue can track at any
+	 * given time. The value must be in the range of
+	 * [1 - nb_event_queue_flows] which was previously provided in
+	 * rte_event_dev_info_get().
+	 */
+	uint32_t nb_atomic_order_sequences;
+	/**< The maximum number of outstanding events waiting to be
+	 * reordered by this queue. In other words, the number of entries in
+	 * this queue’s reorder buffer. When the number of events in the
+	 * reorder buffer reaches *nb_atomic_order_sequences*, the
+	 * scheduler cannot schedule the events from this queue and an invalid
+	 * event will be returned from dequeue until one or more entries are
+	 * freed up/released.
+	 * The value must be in the range of [1 - nb_event_queue_flows]
+	 * which was previously supplied to rte_event_dev_configure().
+	 */
+	uint32_t event_queue_cfg; /**< Queue cfg flags(EVENT_QUEUE_CFG_FLAG) */
+	uint8_t priority;
+	/**< Priority for this event queue relative to other event queues.
+	 * The requested priority should be in the range of
+	 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+	 * The implementation shall normalize the requested priority to
+	 * event device supported priority value.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_FLAG_QUEUE_QOS capability
+	 */
+};
+
+/**
+ * Retrieve the default configuration information of an event queue designated
+ * by its *queue_id* from the event driver for an event device.
+ *
+ * This function is intended to be used in conjunction with rte_event_queue_setup()
+ * where the caller needs to set up the queue by overriding a few default values.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the event queue to get the configuration information.
+ *   The value must be in the range [0, nb_event_queues - 1]
+ *   previously supplied to rte_event_dev_configure().
+ * @param[out] queue_conf
+ *   The pointer to the default event queue configuration data.
+ * @return
+ *   - 0: Success, driver updates the default event queue configuration data.
+ *   - <0: Error code returned by the driver info get function.
+ *
+ * @see rte_event_queue_setup()
+ *
+ */
+int
+rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
+				 struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Allocate and set up an event queue for an event device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the event queue to setup. The value must be in the range
+ *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
+ * @param queue_conf
+ *   The pointer to the configuration data to be used for the event queue.
+ *   NULL value is allowed, in which case the default configuration is used.
+ *
+ * @see rte_event_queue_default_conf_get()
+ *
+ * @return
+ *   - 0: Success, event queue correctly set up.
+ *   - <0: event queue configuration failed
+ */
+int
+rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
+		      const struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Get the number of event queues on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @return
+ *   - The number of configured event queues
+ */
+uint8_t
+rte_event_queue_count(uint8_t dev_id);
+
+/**
+ * Get the priority of the event queue on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param queue_id
+ *   Event queue identifier.
+ * @return
+ *   - If the device has RTE_EVENT_DEV_CAP_FLAG_QUEUE_QOS capability then the
+ *    configured priority of the event queue in
+ *    [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST] range
+ *    else the value RTE_EVENT_DEV_PRIORITY_NORMAL
+ */
+uint8_t
+rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id);
+
+/* Event port specific APIs */
+
+/** Event port configuration structure */
+struct rte_event_port_conf {
+	int32_t new_event_threshold;
+	/**< A backpressure threshold for new event enqueues on this port.
+	 * Use for *closed system* event dev where event capacity is limited,
+	 * and cannot exceed the capacity of the event dev.
+	 * Configuring ports with different thresholds can make higher priority
+	 * traffic less likely to  be backpressured.
+	 * For example, a port used to inject NIC Rx packets into the event dev
+	 * can have a lower threshold so as not to overwhelm the device,
+	 * while ports used for worker pools can have a higher threshold.
+	 * This value cannot exceed the *nb_events_limit*
+	 * which was previously supplied to rte_event_dev_configure()
+	 */
+	uint8_t dequeue_depth;
+	/**< Configure number of bulk dequeues for this event port.
+	 * This value cannot exceed the *nb_event_port_dequeue_depth*
+	 * which was previously supplied to rte_event_dev_configure()
+	 */
+	uint8_t enqueue_depth;
+	/**< Configure number of bulk enqueues for this event port.
+	 * This value cannot exceed the *nb_event_port_enqueue_depth*
+	 * which was previously supplied to rte_event_dev_configure()
+	 */
+};
+
+/**
+ * Retrieve the default configuration information of an event port designated
+ * by its *port_id* from the event driver for an event device.
+ *
+ * This function is intended to be used in conjunction with rte_event_port_setup()
+ * where the caller needs to set up the port by overriding a few default values.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The index of the event port to get the configuration information.
+ *   The value must be in the range [0, nb_event_ports - 1]
+ *   previously supplied to rte_event_dev_configure().
+ * @param[out] port_conf
+ *   The pointer to the default event port configuration data
+ * @return
+ *   - 0: Success, driver updates the default event port configuration data.
+ *   - <0: Error code returned by the driver info get function.
+ *
+ * @see rte_event_port_setup()
+ *
+ */
+int
+rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
+				struct rte_event_port_conf *port_conf);
+
+/**
+ * Allocate and set up an event port for an event device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The index of the event port to setup. The value must be in the range
+ *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ * @param port_conf
+ *   The pointer to the configuration data to be used for the queue.
+ *   NULL value is allowed, in which case the default configuration is used.
+ *
+ * @see rte_event_port_default_conf_get()
+ *
+ * @return
+ *   - 0: Success, event port correctly set up.
+ *   - <0: Port configuration failed
+ *   - (-EDQUOT) Quota exceeded (the application tried to link a queue configured
+ *   with RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK to more than one event port)
+ */
+int
+rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
+		     const struct rte_event_port_conf *port_conf);
+
+/**
+ * Get the dequeue depth configured for an event port designated
+ * by its *port_id* on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param port_id
+ *   Event port identifier.
+ * @return
+ *   - The configured dequeue depth
+ *
+ * @see rte_event_dequeue_burst()
+ */
+uint8_t
+rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id);
+
+/**
+ * Get the enqueue depth configured for an event port designated
+ * by its *port_id* on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param port_id
+ *   Event port identifier.
+ * @return
+ *   - The configured enqueue depth
+ *
+ * @see rte_event_enqueue_burst()
+ */
+uint8_t
+rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id);
+
+/**
+ * Get the number of ports on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @return
+ *   - The number of configured ports
+ */
+uint8_t
+rte_event_port_count(uint8_t dev_id);
+
+/**
+ * Start an event device.
+ *
+ * The device start step is the last one and consists of setting the event
+ * queues to start accepting events and scheduling them to event ports.
+ *
+ * On success, all basic functions exported by the API (event enqueue,
+ * event dequeue and so on) can be invoked.
+ *
+ * @param dev_id
+ *   Event device identifier
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+int
+rte_event_dev_start(uint8_t dev_id);
+
+/**
+ * Stop an event device. The device can be restarted with a call to
+ * rte_event_dev_start()
+ *
+ * @param dev_id
+ *   Event device identifier.
+ */
+void
+rte_event_dev_stop(uint8_t dev_id);
+
+/**
+ * Close an event device. The device cannot be restarted!
+ *
+ * @param dev_id
+ *   Event device identifier
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ *  - (-EAGAIN) if device is busy
+ */
+int
+rte_event_dev_close(uint8_t dev_id);
+
+/* Scheduler type definitions */
+#define RTE_SCHED_TYPE_ORDERED          0
+/**< Ordered scheduling
+ *
+ * Events from an ordered flow of an event queue can be scheduled to multiple
+ * ports for concurrent processing while maintaining the original event order.
+ * This scheme enables the user to achieve high single flow throughput by
+ * avoiding SW synchronization for ordering between ports which are bound to cores.
+ *
+ * The source flow ordering from an event queue is maintained when events are
+ * enqueued to their destination queue within the same ordered flow context.
+ * An event port holds the context until the application calls
+ * rte_event_dequeue_burst() from the same port, which implicitly releases
+ * the context.
+ * The user may allow the scheduler to release the context earlier than that
+ * by invoking rte_event_enqueue_burst() with the RTE_EVENT_OP_RELEASE operation.
+ *
+ * Events from the source queue appear in their original order when dequeued
+ * from a destination queue.
+ * Event ordering is based on the received event(s), but also other
+ * (newly allocated or stored) events are ordered when enqueued within the same
+ * ordered context. Events not enqueued (e.g. released or stored) within the
+ * context are  considered missing from reordering and are skipped at this time
+ * (but can be ordered again within another context).
+ *
+ * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
+ */
+
+#define RTE_SCHED_TYPE_ATOMIC           1
+/**< Atomic scheduling
+ *
+ * Events from an atomic flow of an event queue can be scheduled only to a
+ * single port at a time. The port is guaranteed to have exclusive (atomic)
+ * access to the associated flow context, which enables the user to avoid SW
+ * synchronization. Atomic flows also help to maintain event ordering
+ * since only one port at a time can process events from a flow of an
+ * event queue.
+ *
+ * The atomic queue synchronization context is dedicated to the port until
+ * the application calls rte_event_dequeue_burst() from the same port,
+ * which implicitly releases the context. The user may allow the scheduler to
+ * release the context earlier than that by invoking rte_event_enqueue_burst()
+ * with the RTE_EVENT_OP_RELEASE operation.
+ *
+ * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
+ */
+
+#define RTE_SCHED_TYPE_PARALLEL         2
+/**< Parallel scheduling
+ *
+ * The scheduler performs priority scheduling, load balancing, etc. functions
+ * but does not provide additional event synchronization or ordering.
+ * It is free to schedule events from a single parallel flow of an event queue
+ * to multiple event ports for concurrent processing.
+ * The application is responsible for flow context synchronization and
+ * event ordering (SW synchronization).
+ *
+ * @see rte_event_queue_setup(), rte_event_dequeue_burst()
+ */
+
+/* Event types to classify the event source */
+#define RTE_EVENT_TYPE_ETHDEV           0x0
+/**< The event generated from ethdev subsystem */
+#define RTE_EVENT_TYPE_CRYPTODEV        0x1
+/**< The event generated from cryptodev subsystem */
+#define RTE_EVENT_TYPE_TIMERDEV         0x2
+/**< The event generated from timerdev subsystem */
+#define RTE_EVENT_TYPE_CPU              0x3
+/**< The event generated from cpu for pipelining.
+ * Application may use *sub_event_type* to further classify the event
+ */
+#define RTE_EVENT_TYPE_MAX              0x10
+/**< Maximum number of event types */
+
+/* Event enqueue operations */
+#define RTE_EVENT_OP_NEW                0
+/**< The event producers use this operation to inject a new event to the
+ * event device.
+ */
+#define RTE_EVENT_OP_FORWARD            1
+/**< The CPU uses this operation to forward the event to a different event queue
+ * or to change it to a new application-specific flow or schedule type to enable
+ * pipelining
+ */
+#define RTE_EVENT_OP_RELEASE            2
+/**< Release the flow context associated with the schedule type.
+ *
+ * If the current flow's scheduler type is *RTE_SCHED_TYPE_ATOMIC*
+ * then this operation hints the scheduler that the user has completed critical
+ * section processing in the current atomic context.
+ * The scheduler is now allowed to schedule events from the same flow from
+ * an event queue to another port. However, the context may still be held
+ * until the next rte_event_dequeue_burst() call; this operation allows but does
+ * not force the scheduler to release the context early.
+ *
+ * Early atomic context release may increase parallelism and thus system
+ * performance, but the user needs to design carefully the split into critical
+ * vs non-critical sections.
+ *
+ * If the current flow's scheduler type is *RTE_SCHED_TYPE_ORDERED*
+ * then this operation hints the scheduler that the user has done all that is
+ * needed to maintain event order in the current ordered context.
+ * The scheduler is allowed to release the ordered context of this port and
+ * avoid reordering any following enqueues.
+ *
+ * Early ordered context release may increase parallelism and thus system
+ * performance.
+ *
+ * If the current flow's scheduler type is *RTE_SCHED_TYPE_PARALLEL*
+ * or no scheduling context is held, then this operation may be a NOOP,
+ * depending on the implementation.
+ *
+ */
+
+/**
+ * The generic *rte_event* structure to hold the event attributes
+ * for dequeue and enqueue operation
+ */
+struct rte_event {
+	/** WORD0 */
+	RTE_STD_C11
+	union {
+		uint64_t event;
+		/** Event attributes for dequeue or enqueue operation */
+		struct {
+			uint64_t flow_id:20;
+			/**< Targeted flow identifier for the enqueue and
+			 * dequeue operation.
+			 * The value must be in the range of
+			 * [0, nb_event_queue_flows - 1] which
+			 * previously supplied to rte_event_dev_configure().
+			 */
+			uint64_t sub_event_type:8;
+			/**< Sub-event types based on the event source.
+			 * @see RTE_EVENT_TYPE_CPU
+			 */
+			uint64_t event_type:4;
+			/**< Event type to classify the event source.
+			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
+			 */
+			uint64_t sched_type:2;
+			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
+			 * associated with flow id on a given event queue
+			 * for the enqueue and dequeue operation.
+			 */
+			uint64_t queue_id:8;
+			/**< Targeted event queue identifier for the enqueue or
+			 * dequeue operation.
+			 * The value must be in the range of
+			 * [0, nb_event_queues - 1] which previously supplied to
+			 * rte_event_dev_configure().
+			 */
+			uint64_t priority:8;
+			/**< Event priority relative to other events in the
+			 * event queue. The requested priority should be in the
+			 * range of [RTE_EVENT_DEV_PRIORITY_HIGHEST,
+			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
+			 * The implementation shall normalize the requested
+			 * priority to supported priority value.
+			 * Valid when the device has
+			 * RTE_EVENT_DEV_CAP_FLAG_EVENT_QOS capability.
+			 */
+			uint64_t op:2;
+			/**< The type of event enqueue operation - new/forward/
+			 * etc. This field is not preserved across an instance
+			 * and is undefined on dequeue.
+			 *  @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
+			 */
+			uint64_t impl_opaque:12;
+			/**< Implementation specific opaque value.
+			 * An implementation may use this field to hold
+			 * implementation specific value to share between
+			 * dequeue and enqueue operation.
+			 * The application should not modify this field.
+			 */
+		};
+	};
+	/** WORD1 */
+	RTE_STD_C11
+	union {
+		uint64_t u64;
+		/**< Opaque 64-bit value */
+		uintptr_t event_ptr;
+		/**< Opaque event pointer */
+		struct rte_mbuf *mbuf;
+		/**< mbuf pointer if dequeued event is associated with mbuf */
+	};
+};
+
+/**
+ * Schedule one or more events in the event dev.
+ *
+ * An event dev implementation may define this as a NOOP, for instance if
+ * the event dev performs its scheduling in hardware.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ */
+void
+rte_event_schedule(uint8_t dev_id);
+
+/**
+ * Enqueue a burst of event objects or an event object supplied in the *rte_event*
+ * structure on an event device designated by its *dev_id* through the event
+ * port specified by *port_id*. Each event object specifies the event queue on
+ * which it will be enqueued.
+ *
+ * The *nb_events* parameter is the number of event objects to enqueue which are
+ * supplied in the *ev* array of *rte_event* structure.
+ *
+ * The rte_event_enqueue_burst() function returns the number of
+ * event objects it actually enqueued. A return value equal to *nb_events*
+ * means that all event objects have been enqueued.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   which contain the event object enqueue operations to be processed.
+ * @param nb_events
+ *   The number of event objects to enqueue, typically number of
+ *   rte_event_port_enqueue_depth() available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The
+ *   return value can be less than the value of the *nb_events* parameter when
+ *   the event device's queue is full or if invalid parameters are specified in a
+ *   *rte_event*. If the return value is less than *nb_events*, the remaining
+ *   events at the end of ev[] are not consumed, and the caller has to take care
+ *   of them.
+ *
+ * @see rte_event_port_enqueue_depth()
+ */
+uint16_t
+rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
+			uint16_t nb_events);
+
+/**
+ * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
+ *
+ * If the device is configured with the RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
+ * flag, then the application can use this function to convert a timeout value
+ * in nanoseconds to an implementation-specific timeout value supplied in
+ * rte_event_dequeue_burst()
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param ns
+ *   Wait time in nanosecond
+ * @param[out] timeout_ticks
+ *   Value for the *timeout_ticks* parameter in rte_event_dequeue_burst()
+ *
+ * @return
+ *  - 0 on success.
+ *  - <0 on failure.
+ *
+ * @see rte_event_dequeue_burst(), RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
+ * @see rte_event_dev_configure()
+ *
+ */
+int
+rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
+					uint64_t *timeout_ticks);
+
+/**
+ * Dequeue a burst of event objects or an event object from the event port
+ * designated by its *event_port_id*, on an event device designated
+ * by its *dev_id*.
+ *
+ * rte_event_dequeue_burst() does not dictate the specifics of the scheduling
+ * algorithm, as each eventdev driver may have different criteria for scheduling
+ * an event. However, in general, from an application perspective the scheduler
+ * may use the following scheme to dispatch an event to the port.
+ *
+ * 1) Selection of event queue based on
+ *   a) The list of event queues are linked to the event port.
+ *   b) If the device has RTE_EVENT_DEV_CAP_FLAG_QUEUE_QOS capability then event
+ *   queue selection from list is based on event queue priority relative to
+ *   other event queue supplied as *priority* in rte_event_queue_setup()
+ *   c) If the device has RTE_EVENT_DEV_CAP_FLAG_EVENT_QOS capability then event
+ *   queue selection from the list is based on event priority supplied as
+ *   *priority* in rte_event_enqueue_burst()
+ * 2) Selection of event
+ *   a) The number of flows available in selected event queue.
+ *   b) Schedule type method associated with the event
+ *
+ * The *nb_events* parameter is the maximum number of event objects to dequeue,
+ * which are returned in the *ev* array of *rte_event* structures.
+ *
+ * The rte_event_dequeue_burst() function returns the number of event objects
+ * it actually dequeued. A return value equal to *nb_events* means that all
+ * event objects have been dequeued.
+ *
+ * The number of events dequeued is the number of scheduler contexts held by
+ * this port. These contexts are automatically released in the next
+ * rte_event_dequeue_burst() invocation, or they can be released early by
+ * invoking rte_event_enqueue_burst() with the RTE_EVENT_OP_RELEASE
+ * operation.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param[out] ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   for output to be populated with the dequeued event objects.
+ * @param nb_events
+ *   The maximum number of event objects to dequeue, typically up to the
+ *   dequeue depth returned by rte_event_port_dequeue_depth() for this port.
+ *
+ * @param timeout_ticks
+ *   - 0 no-wait, returns immediately if there is no event.
+ *   - >0 wait for the event. If the device is configured with
+ *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT, then this function will wait until
+ *   an event is available or *timeout_ticks* time has elapsed.
+ *   If the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT,
+ *   then this function will wait until an event is available or the
+ *   *dequeue_timeout_ns* time, which was previously supplied to
+ *   rte_event_dev_configure(), has elapsed.
+ *
+ * @return
+ * The number of event objects actually dequeued from the port. The return
+ * value can be less than the value of the *nb_events* parameter when fewer
+ * than *nb_events* events are available on the event port.
+ *
+ * @see rte_event_port_dequeue_depth()
+ */
+uint16_t
+rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
+			uint16_t nb_events, uint64_t timeout_ticks);
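
A typical worker loop built around this function might look as follows
(a sketch only; MAX_BURST, force_quit and process_event() are hypothetical
application definitions):

	struct rte_event ev[MAX_BURST];
	uint64_t timeout_ticks = 0;	/* or a value obtained from
					 * rte_event_dequeue_timeout_ticks()
					 */
	uint16_t i, nb_rx;

	while (!force_quit) {
		nb_rx = rte_event_dequeue_burst(dev_id, port_id, ev,
						MAX_BURST, timeout_ticks);
		for (i = 0; i < nb_rx; i++)
			process_event(&ev[i]);
		/* the held scheduler contexts are released on the next
		 * dequeue, or earlier via RTE_EVENT_OP_RELEASE as above
		 */
	}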
+
+/** Structure to hold the queue to port link establishment attributes */
+struct rte_event_queue_link {
+	uint8_t queue_id;
+	/**< Event queue identifier to select the source queue to link */
+	uint8_t priority;
+	/**< The priority of the event queue for this event port.
+	 * The priority defines the event port's servicing priority for the
+	 * event queue, which may be ignored by an implementation.
+	 * The requested priority should be in the range of
+	 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+	 * The implementation shall normalize the requested priority to an
+	 * implementation-supported priority value.
+	 */
+};
+
+/**
+ * Link multiple source event queues supplied in *rte_event_queue_link*
+ * structure as *queue_id* to the destination event port designated by its
+ * *port_id* on the event device designated by its *dev_id*.
+ *
+ * The link establishment shall enable the event port *port_id* to receive
+ * events from the specified event queue(s) *queue_id*.
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an event
+ * port is implementation defined.
+ *
+ * Event queue to event port links can be changed at runtime, without
+ * re-configuring the device, to support scaling and to reduce the latency
+ * of critical work by establishing links with more event ports at runtime.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to link.
+ *
+ * @param link
+ *   Points to an array of *nb_links* objects of type *rte_event_queue_link*
+ *   structure which contain the event queue to event port link establishment
+ *   attributes.
+ *   A NULL value is allowed, in which case this function links all the
+ *   configured event queues *nb_event_queues*, previously supplied to
+ *   rte_event_dev_configure(), to the event port *port_id* with normal
+ *   servicing priority (RTE_EVENT_DEV_PRIORITY_NORMAL).
+ *
+ * @param nb_links
+ *   The number of links to establish
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has a
+ * limitation on specific queue to port link establishment or if invalid
+ * parameters are specified in a *rte_event_queue_link*.
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of link[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, then the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (-EDQUOT) Quota exceeded (the application tried to link a queue configured
+ *  with RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK to more than one event port)
+ * (-EINVAL) Invalid parameter
+ *
+ */
+int
+rte_event_port_link(uint8_t dev_id, uint8_t port_id,
+		    const struct rte_event_queue_link link[],
+		    uint16_t nb_links);
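
For example, a port could be linked to two queues with different servicing
priorities as follows (a sketch only, assuming a configured device and port):

	struct rte_event_queue_link lk[2] = {
		{ .queue_id = 0, .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST },
		{ .queue_id = 1, .priority = RTE_EVENT_DEV_PRIORITY_NORMAL },
	};

	/* both links must be established for this port to be usable here */
	if (rte_event_port_link(dev_id, port_id, lk, 2) != 2)
		rte_exit(EXIT_FAILURE, "queue to port link failed\n");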
+
+/**
+ * Unlink multiple source event queues supplied in *queues* from the destination
+ * event port designated by its *port_id* on the event device designated
+ * by its *dev_id*.
+ *
+ * The unlink call shall disable the event port *port_id* from receiving
+ * events from the specified event queue(s) *queue_id*.
+ *
+ * Event queue to event port unlinking can be done at runtime, without
+ * re-configuring the device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ *   Points to an array of *nb_unlinks* event queues to be unlinked
+ *   from the event port.
+ *   NULL value is allowed, in which case this function unlinks all the
+ *   event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ *   The number of unlinks to establish
+ *
+ * @return
+ * The number of unlinks actually performed. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has a
+ * limitation on specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, then the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (-EINVAL) Invalid parameter
+ *
+ */
+int
+rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
+		      uint8_t queues[], uint16_t nb_unlinks);
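
Passing NULL in *queues* unlinks the port from every queue it is currently
linked to, which can be useful before re-linking it differently; a sketch:

	/* detach the port from all queues, then link it to queue 2 only */
	struct rte_event_queue_link lk = {
		.queue_id = 2,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
	};

	if (rte_event_port_unlink(dev_id, port_id, NULL, 0) < 0 ||
	    rte_event_port_link(dev_id, port_id, &lk, 1) != 1)
		rte_exit(EXIT_FAILURE, "port re-link failed\n");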
+
+/**
+ * Retrieve the list of source event queues and their associated attributes
+ * linked to the destination event port designated by its *port_id*
+ * on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier.
+ *
+ * @param[out] link
+ *   Points to an array of *rte_event_queue_link* structure for output.
+ *   The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* objects of type
+ *   *rte_event_queue_link* to store the event queue to event port
+ *   link establishment attributes.
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ *  *port_id*.
+ * - <0 on failure.
+ *
+ */
+int
+rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
+			struct rte_event_queue_link link[]);
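
The output array must provide room for RTE_EVENT_MAX_QUEUES_PER_DEV entries,
for example (a sketch only):

	struct rte_event_queue_link lk[RTE_EVENT_MAX_QUEUES_PER_DEV];
	int i, nb_links;

	nb_links = rte_event_port_links_get(dev_id, port_id, lk);
	for (i = 0; i < nb_links; i++)
		printf("port %d <- queue %d (priority %d)\n",
		       port_id, lk[i].queue_id, lk[i].priority);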
+
+/**
+ * Dump internal information about *dev_id* to the FILE* provided in *f*.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param f
+ *   A pointer to a file for output
+ *
+ * @return
+ *   - 0: on success
+ *   - <0: on failure.
+ */
+int
+rte_event_dev_dump(uint8_t dev_id, FILE *f);
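
For example, the following prints the implementation defined internal state
to the console:

	rte_event_dev_dump(dev_id, stdout);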
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_EVENTDEV_H_ */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v2 2/6] eventdev: define southbound driver interface
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
  2016-12-06  3:52     ` [PATCH v2 1/6] eventdev: introduce event driven programming model Jerin Jacob
@ 2016-12-06  3:52     ` Jerin Jacob
  2016-12-06  3:52     ` [PATCH v2 3/6] eventdev: implement the northbound APIs Jerin Jacob
                       ` (5 subsequent siblings)
  7 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-06  3:52 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_eventdev/rte_eventdev.h     |  38 +++++
 lib/librte_eventdev/rte_eventdev_pmd.h | 286 +++++++++++++++++++++++++++++++++
 2 files changed, 324 insertions(+)
 create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h

diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 72f5b87..451bb5d 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -970,6 +970,44 @@ struct rte_event {
 	};
 };
 
+struct rte_eventdev_ops;
+struct rte_eventdev;
+
+typedef void (*event_schedule_t)(struct rte_eventdev *dev);
+/**< @internal Schedule one or more events in the event dev. */
+
+typedef uint16_t (*event_enqueue_t)(void *port, struct rte_event *ev);
+/**< @internal Enqueue event on port of a device */
+
+typedef uint16_t (*event_enqueue_burst_t)(void *port, struct rte_event ev[],
+		uint16_t nb_events);
+/**< @internal Enqueue burst of events on port of a device */
+
+typedef uint16_t (*event_dequeue_t)(void *port, struct rte_event *ev,
+		uint64_t timeout_ticks);
+/**< @internal Dequeue event from port of a device */
+
+typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
+		uint16_t nb_events, uint64_t timeout_ticks);
+/**< @internal Dequeue burst of events from port of a device */
+
+
+/** @internal The data structure associated with each event device. */
+struct rte_eventdev {
+	event_schedule_t schedule;
+	/**< Pointer to PMD schedule function. */
+	event_enqueue_t enqueue;
+	/**< Pointer to PMD enqueue function. */
+	event_enqueue_burst_t enqueue_burst;
+	/**< Pointer to PMD enqueue burst function. */
+	event_dequeue_t dequeue;
+	/**< Pointer to PMD dequeue function. */
+	event_dequeue_burst_t dequeue_burst;
+	/**< Pointer to PMD dequeue burst function. */
+
+} __rte_cache_aligned;
+
+
 /**
  * Schedule one or more events in the event dev.
  *
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
new file mode 100644
index 0000000..0b04ab7
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -0,0 +1,286 @@
+/*
+ *
+ *   Copyright(c) 2016 Cavium networks. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_EVENTDEV_PMD_H_
+#define _RTE_EVENTDEV_PMD_H_
+
+/** @file
+ * RTE Event PMD APIs
+ *
+ * @note
+ * These APIs are for use by event PMDs only and user applications should not
+ * call them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "rte_eventdev.h"
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *event_dev_ops* supplied in the
+ * *rte_eventdev* structure associated with a device.
+ */
+
+/**
+ * Get device information of a device.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param dev_info
+ *   Event device information structure
+ *
+ */
+typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
+		struct rte_event_dev_info *dev_info);
+
+/**
+ * Configure a device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   Returns 0 on success
+ */
+typedef int (*eventdev_configure_t)(const struct rte_eventdev *dev);
+
+/**
+ * Start a configured device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   Returns 0 on success
+ */
+typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
+
+/**
+ * Stop a configured device.
+ *
+ * @param dev
+ *   Event device pointer
+ */
+typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
+
+/**
+ * Close a configured device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ * - 0 on success
+ * - (-EAGAIN) if can't close as device is busy
+ */
+typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
+
+/**
+ * Retrieve the default event queue configuration.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param[out] queue_conf
+ *   Event queue configuration structure
+ *
+ */
+typedef void (*eventdev_queue_default_conf_get_t)(struct rte_eventdev *dev,
+		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Setup an event queue.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param queue_conf
+ *   Event queue configuration structure
+ *
+ * @return
+ *   Returns 0 on success.
+ */
+typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
+		uint8_t queue_id,
+		const struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Release memory resources allocated by given event queue.
+ *
+ * @param queue
+ *   Event queue pointer
+ *
+ */
+typedef void (*eventdev_queue_release_t)(void *queue);
+
+/**
+ * Retrieve the default event port configuration.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param port_id
+ *   Event port index
+ * @param[out] port_conf
+ *   Event port configuration structure
+ *
+ */
+typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev *dev,
+		uint8_t port_id, struct rte_event_port_conf *port_conf);
+
+/**
+ * Setup an event port.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param port_id
+ *   Event port index
+ * @param port_conf
+ *   Event port configuration structure
+ *
+ * @return
+ *   Returns 0 on success.
+ */
+typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
+		uint8_t port_id,
+		const struct rte_event_port_conf *port_conf);
+
+/**
+ * Release memory resources allocated by given event port.
+ *
+ * @param port
+ *   Event port pointer
+ *
+ */
+typedef void (*eventdev_port_release_t)(void *port);
+
+/**
+ * Link multiple source event queues to destination event port.
+ *
+ * @param port
+ *   Event port pointer
+ * @param link
+ *   An array of *nb_links* pointers to *rte_event_queue_link* structure
+ * @param nb_links
+ *   The number of links to establish
+ *
+ * @return
+ *   Returns 0 on success.
+ *
+ */
+typedef int (*eventdev_port_link_t)(void *port,
+		const struct rte_event_queue_link link[], uint16_t nb_links);
+
+/**
+ * Unlink multiple source event queues from destination event port.
+ *
+ * @param port
+ *   Event port pointer
+ * @param queues
+ *   An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ *   The number of unlinks to establish
+ *
+ * @return
+ *   Returns 0 on success.
+ *
+ */
+typedef int (*eventdev_port_unlink_t)(void *port,
+		uint8_t queues[], uint16_t nb_unlinks);
+
+/**
+ * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue()
+ *
+ * @param dev
+ *   Event device pointer
+ * @param ns
+ *   Wait time in nanosecond
+ * @param[out] timeout_ticks
+ *   Value for the *timeout_ticks* parameter in rte_event_dequeue() function
+ *
+ */
+typedef void (*eventdev_dequeue_timeout_ticks_t)(struct rte_eventdev *dev,
+		uint64_t ns, uint64_t *timeout_ticks);
+
+/**
+ * Dump internal information
+ *
+ * @param dev
+ *   Event device pointer
+ * @param f
+ *   A pointer to a file for output
+ *
+ */
+typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
+
+/** Event device operations function pointer table */
+struct rte_eventdev_ops {
+	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
+	eventdev_configure_t dev_configure;	/**< Configure device. */
+	eventdev_start_t dev_start;		/**< Start device. */
+	eventdev_stop_t dev_stop;		/**< Stop device. */
+	eventdev_close_t dev_close;		/**< Close device. */
+
+	eventdev_queue_default_conf_get_t queue_def_conf;
+	/**< Get default queue configuration. */
+	eventdev_queue_setup_t queue_setup;
+	/**< Set up an event queue. */
+	eventdev_queue_release_t queue_release;
+	/**< Release an event queue. */
+
+	eventdev_port_default_conf_get_t port_def_conf;
+	/**< Get default port configuration. */
+	eventdev_port_setup_t port_setup;
+	/**< Set up an event port. */
+	eventdev_port_release_t port_release;
+	/**< Release an event port. */
+
+	eventdev_port_link_t port_link;
+	/**< Link event queues to an event port. */
+	eventdev_port_unlink_t port_unlink;
+	/**< Unlink event queues from an event port. */
+	eventdev_dequeue_timeout_ticks_t timeout_ticks;
+	/**< Converts ns to *timeout_ticks* value for rte_event_dequeue() */
+	eventdev_dump_t dump;
+	/**< Dump internal information */
+};
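
A PMD is expected to provide a statically initialized table of this type for
the library to dispatch into; a skeleton-style sketch (every my_* callback
name below is a hypothetical placeholder, not part of this patch):

	static const struct rte_eventdev_ops my_eventdev_ops = {
		.dev_infos_get  = my_info_get,
		.dev_configure  = my_configure,
		.dev_start      = my_start,
		.dev_stop       = my_stop,
		.dev_close      = my_close,
		.queue_def_conf = my_queue_def_conf,
		.queue_setup    = my_queue_setup,
		.queue_release  = my_queue_release,
		.port_def_conf  = my_port_def_conf,
		.port_setup     = my_port_setup,
		.port_release   = my_port_release,
		.port_link      = my_port_link,
		.port_unlink    = my_port_unlink,
		.timeout_ticks  = my_timeout_ticks,
		.dump           = my_dump,
	};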
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_EVENTDEV_PMD_H_ */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v2 3/6] eventdev: implement the northbound APIs
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
  2016-12-06  3:52     ` [PATCH v2 1/6] eventdev: introduce event driven programming model Jerin Jacob
  2016-12-06  3:52     ` [PATCH v2 2/6] eventdev: define southbound driver interface Jerin Jacob
@ 2016-12-06  3:52     ` Jerin Jacob
  2016-12-06 17:17       ` Bruce Richardson
  2016-12-06  3:52     ` [PATCH v2 4/6] eventdev: implement PMD registration functions Jerin Jacob
                       ` (4 subsequent siblings)
  7 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-12-06  3:52 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

This patch implements the northbound eventdev API interface using
the southbound driver interface.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 config/common_base                           |    6 +
 lib/Makefile                                 |    1 +
 lib/librte_eal/common/include/rte_log.h      |    1 +
 lib/librte_eventdev/Makefile                 |   57 ++
 lib/librte_eventdev/rte_eventdev.c           | 1001 ++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h           |  108 ++-
 lib/librte_eventdev/rte_eventdev_pmd.h       |  109 +++
 lib/librte_eventdev/rte_eventdev_version.map |   33 +
 mk/rte.app.mk                                |    1 +
 9 files changed, 1311 insertions(+), 6 deletions(-)
 create mode 100644 lib/librte_eventdev/Makefile
 create mode 100644 lib/librte_eventdev/rte_eventdev.c
 create mode 100644 lib/librte_eventdev/rte_eventdev_version.map

diff --git a/config/common_base b/config/common_base
index 4bff83a..7a8814e 100644
--- a/config/common_base
+++ b/config/common_base
@@ -411,6 +411,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 
 #
+# Compile generic event device library
+#
+CONFIG_RTE_LIBRTE_EVENTDEV=y
+CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
+CONFIG_RTE_EVENT_MAX_DEVS=16
+CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/lib/Makefile b/lib/Makefile
index 990f23a..1a067bf 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
+DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index 29f7d19..9a07d92 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
 #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
+#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to eventdev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
new file mode 100644
index 0000000..dac0663
--- /dev/null
+++ b/lib/librte_eventdev/Makefile
@@ -0,0 +1,57 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium networks. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_eventdev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_eventdev.c
+
+# export include files
+SYMLINK-y-include += rte_eventdev.h
+SYMLINK-y-include += rte_eventdev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_eventdev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
new file mode 100644
index 0000000..0a1d2d6
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -0,0 +1,1001 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Cavium networks. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_errno.h>
+
+#include "rte_eventdev.h"
+#include "rte_eventdev_pmd.h"
+
+struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
+
+struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
+
+static struct rte_eventdev_global eventdev_globals = {
+	.nb_devs		= 0
+};
+
+struct rte_eventdev_global *rte_eventdev_globals = &eventdev_globals;
+
+/* Event dev north bound API implementation */
+
+uint8_t
+rte_event_dev_count(void)
+{
+	return rte_eventdev_globals->nb_devs;
+}
+
+int
+rte_event_dev_get_dev_id(const char *name)
+{
+	int i;
+
+	if (!name)
+		return -EINVAL;
+
+	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
+		if ((strcmp(rte_event_devices[i].data->name, name)
+				== 0) &&
+				(rte_event_devices[i].attached ==
+						RTE_EVENTDEV_ATTACHED))
+			return i;
+	return -ENODEV;
+}
+
+int
+rte_event_dev_socket_id(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	return dev->data->socket_id;
+}
+
+int
+rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (dev_info == NULL)
+		return -EINVAL;
+
+	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->pci_dev = dev->pci_dev;
+	return 0;
+}
+
+static inline int
+rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
+{
+	uint8_t old_nb_queues = dev->data->nb_queues;
+	void **queues;
+	uint8_t *queues_prio;
+	unsigned int i;
+
+	RTE_EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
+			 dev->data->dev_id);
+
+	/* First time configuration */
+	if (dev->data->queues == NULL && nb_queues != 0) {
+		dev->data->queues = rte_zmalloc_socket("eventdev->data->queues",
+				sizeof(dev->data->queues[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->queues == NULL) {
+			dev->data->nb_queues = 0;
+			RTE_EDEV_LOG_ERR("failed to get memory for queue meta,"
+					"nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+		/* Allocate memory to store queue priority */
+		dev->data->queues_prio = rte_zmalloc_socket(
+				"eventdev->data->queues_prio",
+				sizeof(dev->data->queues_prio[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->queues_prio == NULL) {
+			dev->data->nb_queues = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for queue priority,"
+					"nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+
+	} else if (dev->data->queues != NULL && nb_queues != 0) {/* re-config */
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
+
+		queues = dev->data->queues;
+		for (i = nb_queues; i < old_nb_queues; i++)
+			(*dev->dev_ops->queue_release)(queues[i]);
+
+		queues = rte_realloc(queues, sizeof(queues[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE);
+		if (queues == NULL) {
+			RTE_EDEV_LOG_ERR("failed to realloc queue meta data,"
+						" nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+		dev->data->queues = queues;
+
+		/* Re allocate memory to store queue priority */
+		queues_prio = dev->data->queues_prio;
+		queues_prio = rte_realloc(queues_prio,
+				sizeof(queues_prio[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE);
+		if (queues_prio == NULL) {
+			RTE_EDEV_LOG_ERR("failed to realloc queue priority,"
+						" nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+		dev->data->queues_prio = queues_prio;
+
+		if (nb_queues > old_nb_queues) {
+			uint8_t new_qs = nb_queues - old_nb_queues;
+
+			memset(queues + old_nb_queues, 0,
+				sizeof(queues[0]) * new_qs);
+			memset(queues_prio + old_nb_queues, 0,
+				sizeof(queues_prio[0]) * new_qs);
+		}
+	} else if (dev->data->queues != NULL && nb_queues == 0) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
+
+		queues = dev->data->queues;
+		for (i = nb_queues; i < old_nb_queues; i++)
+			(*dev->dev_ops->queue_release)(queues[i]);
+	}
+
+	dev->data->nb_queues = nb_queues;
+	return 0;
+}
+
+static inline int
+rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
+{
+	uint8_t old_nb_ports = dev->data->nb_ports;
+	void **ports;
+	uint16_t *links_map;
+	uint8_t *ports_dequeue_depth;
+	uint8_t *ports_enqueue_depth;
+	unsigned int i;
+
+	RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
+			 dev->data->dev_id);
+
+	/* First time configuration */
+	if (dev->data->ports == NULL && nb_ports != 0) {
+		dev->data->ports = rte_zmalloc_socket("eventdev->data->ports",
+				sizeof(dev->data->ports[0]) * nb_ports,
+				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->ports == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for port meta data,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Allocate memory to store ports dequeue depth */
+		dev->data->ports_dequeue_depth =
+			rte_zmalloc_socket("eventdev->ports_dequeue_depth",
+			sizeof(dev->data->ports_dequeue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->ports_dequeue_depth == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for port deq meta,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Allocate memory to store ports enqueue depth */
+		dev->data->ports_enqueue_depth =
+			rte_zmalloc_socket("eventdev->ports_enqueue_depth",
+			sizeof(dev->data->ports_enqueue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->ports_enqueue_depth == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for port enq meta,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Allocate memory to store queue to port link connection */
+		dev->data->links_map =
+			rte_zmalloc_socket("eventdev->links_map",
+			sizeof(dev->data->links_map[0]) * nb_ports *
+			RTE_EVENT_MAX_QUEUES_PER_DEV,
+			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->links_map == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for port_map area,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
+
+		ports = dev->data->ports;
+		ports_dequeue_depth = dev->data->ports_dequeue_depth;
+		ports_enqueue_depth = dev->data->ports_enqueue_depth;
+		links_map = dev->data->links_map;
+
+		for (i = nb_ports; i < old_nb_ports; i++)
+			(*dev->dev_ops->port_release)(ports[i]);
+
+		/* Realloc memory for ports */
+		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
+				RTE_CACHE_LINE_SIZE);
+		if (ports == NULL) {
+			RTE_EDEV_LOG_ERR("failed to realloc port meta data,"
+						" nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Realloc memory for ports_dequeue_depth */
+		ports_dequeue_depth = rte_realloc(ports_dequeue_depth,
+			sizeof(ports_dequeue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE);
+		if (ports_dequeue_depth == NULL) {
+			RTE_EDEV_LOG_ERR("failed to realloc port dequeue meta,"
+						" nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Realloc memory for ports_enqueue_depth */
+		ports_enqueue_depth = rte_realloc(ports_enqueue_depth,
+			sizeof(ports_enqueue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE);
+		if (ports_enqueue_depth == NULL) {
+			RTE_EDEV_LOG_ERR("failed to realloc port enqueue meta,"
+						" nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Realloc memory to store queue to port link connection */
+		links_map = rte_realloc(links_map,
+			sizeof(dev->data->links_map[0]) * nb_ports *
+			RTE_EVENT_MAX_QUEUES_PER_DEV,
+			RTE_CACHE_LINE_SIZE);
+		if (links_map == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to realloc mem for port_map,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		if (nb_ports > old_nb_ports) {
+			uint8_t new_ps = nb_ports - old_nb_ports;
+
+			memset(ports + old_nb_ports, 0,
+				sizeof(ports[0]) * new_ps);
+			memset(ports_dequeue_depth + old_nb_ports, 0,
+				sizeof(ports_dequeue_depth[0]) * new_ps);
+			memset(ports_enqueue_depth + old_nb_ports, 0,
+				sizeof(ports_enqueue_depth[0]) * new_ps);
+			memset(links_map +
+				(old_nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV),
+				0, sizeof(ports_enqueue_depth[0]) * new_ps);
+		}
+
+		dev->data->ports = ports;
+		dev->data->ports_dequeue_depth = ports_dequeue_depth;
+		dev->data->ports_enqueue_depth = ports_enqueue_depth;
+		dev->data->links_map = links_map;
+	} else if (dev->data->ports != NULL && nb_ports == 0) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
+
+		ports = dev->data->ports;
+		for (i = nb_ports; i < old_nb_ports; i++)
+			(*dev->dev_ops->port_release)(ports[i]);
+	}
+
+	dev->data->nb_ports = nb_ports;
+	return 0;
+}
+
+int
+rte_event_dev_configure(uint8_t dev_id,
+			const struct rte_event_dev_config *dev_conf)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_dev_info info;
+	int diag;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+
+	if (dev->data->dev_started) {
+		RTE_EDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	if (dev_conf == NULL)
+		return -EINVAL;
+
+	(*dev->dev_ops->dev_infos_get)(dev, &info);
+
+	/* Check dequeue_timeout_ns value is in limit */
+	if (!(dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)) {
+		if (dev_conf->dequeue_timeout_ns < info.min_dequeue_timeout_ns
+			|| dev_conf->dequeue_timeout_ns >
+				 info.max_dequeue_timeout_ns) {
+			RTE_EDEV_LOG_ERR("dev%d invalid dequeue_timeout_ns=%d"
+			" min_dequeue_timeout_ns=%d max_dequeue_timeout_ns=%d",
+			dev_id, dev_conf->dequeue_timeout_ns,
+			info.min_dequeue_timeout_ns,
+			info.max_dequeue_timeout_ns);
+			return -EINVAL;
+		}
+	}
+
+	/* Check nb_events_limit is in limit */
+	if (dev_conf->nb_events_limit > info.max_num_events) {
+		RTE_EDEV_LOG_ERR("dev%d nb_events_limit=%d > max_num_events=%d",
+		dev_id, dev_conf->nb_events_limit, info.max_num_events);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_queues is in limit */
+	if (!dev_conf->nb_event_queues) {
+		RTE_EDEV_LOG_ERR("dev%d nb_event_queues cannot be zero",
+					dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queues > info.max_event_queues) {
+		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
+		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_ports is in limit */
+	if (!dev_conf->nb_event_ports) {
+		RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_ports > info.max_event_ports) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
+		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_queue_flows is in limit */
+	if (!dev_conf->nb_event_queue_flows) {
+		RTE_EDEV_LOG_ERR("dev%d nb_flows cannot be zero", dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queue_flows > info.max_event_queue_flows) {
+		RTE_EDEV_LOG_ERR("dev%d nb_flows=%x > max_flows=%x",
+		dev_id, dev_conf->nb_event_queue_flows,
+		info.max_event_queue_flows);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_port_dequeue_depth is in limit */
+	if (!dev_conf->nb_event_port_dequeue_depth) {
+		RTE_EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be zero",
+					dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_port_dequeue_depth >
+			 info.max_event_port_dequeue_depth) {
+		RTE_EDEV_LOG_ERR("dev%d nb_dq_depth=%d > max_dq_depth=%d",
+		dev_id, dev_conf->nb_event_port_dequeue_depth,
+		info.max_event_port_dequeue_depth);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_port_enqueue_depth is in limit */
+	if (!dev_conf->nb_event_port_enqueue_depth) {
+		RTE_EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be zero",
+					dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_port_enqueue_depth >
+			 info.max_event_port_enqueue_depth) {
+		RTE_EDEV_LOG_ERR("dev%d nb_enq_depth=%d > max_enq_depth=%d",
+		dev_id, dev_conf->nb_event_port_enqueue_depth,
+		info.max_event_port_enqueue_depth);
+		return -EINVAL;
+	}
+
+	/* Copy the dev_conf parameter into the dev structure */
+	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+
+	/* Setup new number of queues and reconfigure device. */
+	diag = rte_event_dev_queue_config(dev, dev_conf->nb_event_queues);
+	if (diag != 0) {
+		RTE_EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup new number of ports and reconfigure device. */
+	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
+	if (diag != 0) {
+		rte_event_dev_queue_config(dev, 0);
+		RTE_EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Configure the device */
+	diag = (*dev->dev_ops->dev_configure)(dev);
+	if (diag != 0) {
+		RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+		rte_event_dev_queue_config(dev, 0);
+		rte_event_dev_port_config(dev, 0);
+	}
+
+	dev->data->event_dev_cap = info.event_dev_cap;
+	return diag;
+}
+
+static inline int
+is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
+{
+	if (queue_id < dev->data->nb_queues && queue_id <
+				RTE_EVENT_MAX_QUEUES_PER_DEV)
+		return 1;
+	else
+		return 0;
+}
+
+int
+rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
+				 struct rte_event_queue_conf *queue_conf)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (queue_conf == NULL)
+		return -EINVAL;
+
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf, -ENOTSUP);
+	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
+	(*dev->dev_ops->queue_def_conf)(dev, queue_id, queue_conf);
+	return 0;
+}
+
+static inline int
+is_valid_atomic_queue_conf(const struct rte_event_queue_conf *queue_conf)
+{
+	if (queue_conf && (
+		((queue_conf->event_queue_cfg &
+			RTE_EVENT_QUEUE_CFG_FLAG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_FLAG_ALL_TYPES) ||
+		((queue_conf->event_queue_cfg &
+			RTE_EVENT_QUEUE_CFG_FLAG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_FLAG_ATOMIC_ONLY)
+		))
+		return 1;
+	else
+		return 0;
+}
+
+static inline int
+is_valid_ordered_queue_conf(const struct rte_event_queue_conf *queue_conf)
+{
+	if (queue_conf && (
+		((queue_conf->event_queue_cfg &
+			RTE_EVENT_QUEUE_CFG_FLAG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_FLAG_ALL_TYPES) ||
+		((queue_conf->event_queue_cfg &
+			RTE_EVENT_QUEUE_CFG_FLAG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_FLAG_ORDERED_ONLY)
+		))
+		return 1;
+	else
+		return 0;
+}
+
+
+int
+rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
+		      const struct rte_event_queue_conf *queue_conf)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_queue_conf def_conf;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	/* Check nb_atomic_flows limit */
+	if (is_valid_atomic_queue_conf(queue_conf)) {
+		if (queue_conf->nb_atomic_flows == 0 ||
+		    queue_conf->nb_atomic_flows >
+			dev->data->dev_conf.nb_event_queue_flows) {
+			RTE_EDEV_LOG_ERR(
+		"dev%d queue%d Invalid nb_atomic_flows=%d max_flows=%d",
+			dev_id, queue_id, queue_conf->nb_atomic_flows,
+			dev->data->dev_conf.nb_event_queue_flows);
+			return -EINVAL;
+		}
+	}
+
+	/* Check nb_atomic_order_sequences limit */
+	if (is_valid_ordered_queue_conf(queue_conf)) {
+		if (queue_conf->nb_atomic_order_sequences == 0 ||
+		    queue_conf->nb_atomic_order_sequences >
+			dev->data->dev_conf.nb_event_queue_flows) {
+			RTE_EDEV_LOG_ERR(
+		"dev%d queue%d Invalid nb_atomic_order_seq=%d max_flows=%d",
+			dev_id, queue_id, queue_conf->nb_atomic_order_sequences,
+			dev->data->dev_conf.nb_event_queue_flows);
+			return -EINVAL;
+		}
+	}
+
+	if (dev->data->dev_started) {
+		RTE_EDEV_LOG_ERR(
+		    "device %d must be stopped to allow queue setup", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -ENOTSUP);
+
+	if (queue_conf == NULL) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf,
+					-ENOTSUP);
+		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
+		def_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_FLAG_DEFAULT;
+		queue_conf = &def_conf;
+	}
+
+	dev->data->queues_prio[queue_id] = queue_conf->priority;
+	return (*dev->dev_ops->queue_setup)(dev, queue_id, queue_conf);
+}
+
+uint8_t
+rte_event_queue_count(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->nb_queues;
+}
+
+uint8_t
+rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_FLAG_QUEUE_QOS)
+		return dev->data->queues_prio[queue_id];
+	else
+		return RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+static inline int
+is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
+{
+	if (port_id < dev->data->nb_ports)
+		return 1;
+	else
+		return 0;
+}
+
+int
+rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
+				 struct rte_event_port_conf *port_conf)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (port_conf == NULL)
+		return -EINVAL;
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf, -ENOTSUP);
+	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
+	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
+	return 0;
+}
+
+int
+rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
+		     const struct rte_event_port_conf *port_conf)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_port_conf def_conf;
+	int diag;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	/* Check new_event_threshold limit */
+	if ((port_conf && !port_conf->new_event_threshold) ||
+			(port_conf && port_conf->new_event_threshold >
+				 dev->data->dev_conf.nb_events_limit)) {
+		RTE_EDEV_LOG_ERR(
+		   "dev%d port%d Invalid event_threshold=%d nb_events_limit=%d",
+			dev_id, port_id, port_conf->new_event_threshold,
+			dev->data->dev_conf.nb_events_limit);
+		return -EINVAL;
+	}
+
+	/* Check dequeue_depth limit */
+	if ((port_conf && !port_conf->dequeue_depth) ||
+			(port_conf && port_conf->dequeue_depth >
+		dev->data->dev_conf.nb_event_port_dequeue_depth)) {
+		RTE_EDEV_LOG_ERR(
+		   "dev%d port%d Invalid dequeue depth=%d max_dequeue_depth=%d",
+			dev_id, port_id, port_conf->dequeue_depth,
+			dev->data->dev_conf.nb_event_port_dequeue_depth);
+		return -EINVAL;
+	}
+
+	/* Check enqueue_depth limit */
+	if ((port_conf && !port_conf->enqueue_depth) ||
+			(port_conf && port_conf->enqueue_depth >
+		dev->data->dev_conf.nb_event_port_enqueue_depth)) {
+		RTE_EDEV_LOG_ERR(
+		   "dev%d port%d Invalid enqueue depth=%d max_enqueue_depth=%d",
+			dev_id, port_id, port_conf->enqueue_depth,
+			dev->data->dev_conf.nb_event_port_enqueue_depth);
+		return -EINVAL;
+	}
+
+	if (dev->data->dev_started) {
+		RTE_EDEV_LOG_ERR(
+		    "device %d must be stopped to allow port setup", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -ENOTSUP);
+
+	if (port_conf == NULL) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf,
+					-ENOTSUP);
+		(*dev->dev_ops->port_def_conf)(dev, port_id, &def_conf);
+		port_conf = &def_conf;
+	}
+
+	dev->data->ports_dequeue_depth[port_id] =
+			port_conf->dequeue_depth;
+	dev->data->ports_enqueue_depth[port_id] =
+			port_conf->enqueue_depth;
+
+	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
+
+	/* Unlink all the queues from this port (default state after setup) */
+	if (!diag)
+		diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
+
+	if (diag < 0)
+		return diag;
+
+	return 0;
+}
+
+uint8_t
+rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->ports_dequeue_depth[port_id];
+}
+
+uint8_t
+rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->ports_enqueue_depth[port_id];
+}
+
+uint8_t
+rte_event_port_count(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->nb_ports;
+}
+
+int
+rte_event_port_link(uint8_t dev_id, uint8_t port_id,
+		    const struct rte_event_queue_link link[],
+		    uint16_t nb_links)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_queue_link all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	uint16_t *links_map;
+	int i, diag;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -ENOTSUP);
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	if (link == NULL) {
+		for (i = 0; i < dev->data->nb_queues; i++) {
+			all_queues[i].queue_id = i;
+			all_queues[i].priority =
+				RTE_EVENT_DEV_PRIORITY_NORMAL;
+		}
+		link = all_queues;
+		nb_links = dev->data->nb_queues;
+	}
+
+	for (i = 0; i < nb_links; i++)
+		if (link[i].queue_id >= RTE_EVENT_MAX_QUEUES_PER_DEV)
+			return -EINVAL;
+
+	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id], link,
+						 nb_links);
+	if (diag < 0)
+		return diag;
+
+	links_map = dev->data->links_map;
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < diag; i++)
+		links_map[link[i].queue_id] = (uint8_t)link[i].priority;
+
+	return diag;
+}
+
+#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
+
+int
+rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
+		      uint8_t queues[], uint16_t nb_unlinks)
+{
+	struct rte_eventdev *dev;
+	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	int i, diag;
+	uint16_t *links_map;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -ENOTSUP);
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	if (queues == NULL) {
+		for (i = 0; i < dev->data->nb_queues; i++)
+			all_queues[i] = i;
+		queues = all_queues;
+		nb_unlinks = dev->data->nb_queues;
+	}
+
+	for (i = 0; i < nb_unlinks; i++)
+		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
+			return -EINVAL;
+
+	diag = (*dev->dev_ops->port_unlink)(dev->data->ports[port_id], queues,
+					nb_unlinks);
+
+	if (diag < 0)
+		return diag;
+
+	links_map = dev->data->links_map;
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < diag; i++)
+		links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+
+	return diag;
+}
+
+int
+rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
+			struct rte_event_queue_link link[])
+{
+	struct rte_eventdev *dev;
+	uint16_t *links_map;
+	int i, count = 0;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	links_map = dev->data->links_map;
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
+		if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+			link[count].queue_id = i;
+			link[count].priority = (uint8_t)links_map[i];
+			++count;
+		}
+	}
+	return count;
+}
+
+int
+rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
+				 uint64_t *timeout_ticks)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timeout_ticks, -ENOTSUP);
+
+	if (timeout_ticks == NULL)
+		return -EINVAL;
+
+	(*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
+	return 0;
+}
+
+int
+rte_event_dev_dump(uint8_t dev_id, FILE *f)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
+
+	(*dev->dev_ops->dump)(dev, f);
+	return 0;
+
+}
+
+int
+rte_event_dev_start(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+	int diag;
+
+	RTE_EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		RTE_EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_event_dev_stop(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
+
+	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		RTE_EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_event_dev_close(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		RTE_EDEV_LOG_ERR("Device %u must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	return (*dev->dev_ops->dev_close)(dev);
+}
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 451bb5d..cefca98 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -970,6 +970,8 @@ struct rte_event {
 	};
 };
 
+
+struct rte_eventdev_driver;
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
@@ -991,6 +993,51 @@ typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
 		uint16_t nb_events, uint64_t timeout_ticks);
 /**< @internal Dequeue burst of events from port of a device */
 
+#define RTE_EVENTDEV_NAME_MAX_LEN	(64)
+/**< @internal Max length of name of event PMD */
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_eventdev_data {
+	int socket_id;
+	/**< Socket ID where memory is allocated */
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t nb_queues;
+	/**< Number of event queues. */
+	uint8_t nb_ports;
+	/**< Number of event ports. */
+	void **ports;
+	/**< Array of pointers to ports. */
+	uint8_t *ports_dequeue_depth;
+	/**< Array of port dequeue depth. */
+	uint8_t *ports_enqueue_depth;
+	/**< Array of port enqueue depth. */
+	void **queues;
+	/**< Array of pointers to queues. */
+	uint8_t *queues_prio;
+	/**< Array of queue priority. */
+	uint16_t *links_map;
+	/**< Memory to store queues to port connections. */
+	void *dev_private;
+	/**< PMD-specific private data */
+	uint32_t event_dev_cap;
+	/**< Event device capabilities (RTE_EVENT_DEV_CAP_FLAG) */
+	struct rte_event_dev_config dev_conf;
+	/**< Configuration applied to device. */
+
+	RTE_STD_C11
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	char name[RTE_EVENTDEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+} __rte_cache_aligned;
 
 /** @internal The data structure associated with each event device. */
 struct rte_eventdev {
@@ -1005,8 +1052,23 @@ struct rte_eventdev {
 	event_dequeue_burst_t dequeue_burst;
 	/**< Pointer to PMD dequeue burst function. */
 
+	struct rte_eventdev_data *data;
+	/**< Pointer to device data */
+	const struct rte_eventdev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;
+	/**< PCI info. supplied by probing */
+	const struct rte_eventdev_driver *driver;
+	/**< Driver for this device */
+
+	RTE_STD_C11
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
 } __rte_cache_aligned;
 
+extern struct rte_eventdev *rte_eventdevs;
+/** @internal The pool of rte_eventdev structures. */
+
 
 /**
  * Schedule one or more events in the event dev.
@@ -1017,8 +1079,13 @@ struct rte_eventdev {
  * @param dev_id
  *   The identifier of the device.
  */
-void
-rte_event_schedule(uint8_t dev_id);
+static inline void
+rte_event_schedule(uint8_t dev_id)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+	if (*dev->schedule)
+		(*dev->schedule)(dev);
+}
 
 /**
  * Enqueue a burst of events objects or an event object supplied in *rte_event*
@@ -1053,9 +1120,23 @@ rte_event_schedule(uint8_t dev_id);
  *
  * @see rte_event_port_enqueue_depth()
  */
-uint16_t
+static inline uint16_t
 rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
-			uint16_t nb_events);
+			uint16_t nb_events)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	/*
+	 * Allow zero-cost invocation of the non-burst routine when the
+	 * application passes nb_events as the compile-time constant 1.
+	 */
+	if (nb_events == 1)
+		return (*dev->enqueue)(
+			dev->data->ports[port_id], ev);
+	else
+		return (*dev->enqueue_burst)(
+			dev->data->ports[port_id], ev, nb_events);
+}
 
 /**
  * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
@@ -1147,9 +1228,24 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
  *
  * @see rte_event_port_dequeue_depth()
  */
-uint16_t
+static inline uint16_t
 rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
-			uint16_t nb_events, uint64_t timeout_ticks);
+			uint16_t nb_events, uint64_t timeout_ticks)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	/*
+	 * Allow zero-cost invocation of the non-burst routine when the
+	 * application passes nb_events as the compile-time constant 1.
+	 */
+	if (nb_events == 1)
+		return (*dev->dequeue)(
+			dev->data->ports[port_id], ev, timeout_ticks);
+	else
+		return (*dev->dequeue_burst)(
+			dev->data->ports[port_id], ev, nb_events,
+				timeout_ticks);
+}
 
 /** Structure to hold the queue to port link establishment attributes */
 struct rte_event_queue_link {
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 0b04ab7..7d94031 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -44,8 +44,117 @@
 extern "C" {
 #endif
 
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_common.h>
+
 #include "rte_eventdev.h"
 
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(...)
+#endif
+
+/* Logging Macros */
+#define RTE_EDEV_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+#define RTE_EDEV_LOG_DEBUG(fmt, args...) \
+	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
+			__func__, __LINE__, ## args)
+#else
+#define RTE_EDEV_LOG_DEBUG(fmt, args...) (void)0
+#endif
+
+/* Macros to check for valid device */
+#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
+	if (!rte_event_pmd_is_valid_dev((dev_id))) { \
+		RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
+	if (!rte_event_pmd_is_valid_dev((dev_id))) { \
+		RTE_EDEV_LOG_ERR("Invalid dev_id=%d", dev_id); \
+		return; \
+	} \
+} while (0)
+
+#define RTE_EVENTDEV_DETACHED  (0)
+#define RTE_EVENTDEV_ATTACHED  (1)
+
+/** Global structure used for maintaining state of allocated event devices */
+struct rte_eventdev_global {
+	uint8_t nb_devs;	/**< Number of devices found */
+	uint8_t max_devs;	/**< Max number of devices */
+};
+
+extern struct rte_eventdev_global *rte_eventdev_globals;
+/** Pointer to global event devices data structure. */
+extern struct rte_eventdev *rte_eventdevs;
+/** The pool of rte_eventdev structures. */
+
+/**
+ * Get the rte_eventdev structure device pointer for the named device.
+ *
+ * @param name
+ *   device name to select the device structure.
+ *
+ * @return
+ *   - The rte_eventdev structure pointer for the named device, or NULL
+ *     if no attached device matches the name.
+ */
+static inline struct rte_eventdev *
+rte_event_pmd_get_named_dev(const char *name)
+{
+	struct rte_eventdev *dev;
+	unsigned int i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0; i < rte_eventdev_globals->max_devs; i++) {
+		dev = &rte_eventdevs[i];
+		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
+				(strcmp(dev->data->name, name) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate whether the event device index refers to a valid, attached event device.
+ *
+ * @param dev_id
+ *   Event device index.
+ *
+ * @return
+ *   - 1 if the device index is valid and attached, 0 otherwise.
+ */
+static inline unsigned
+rte_event_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	if (dev_id >= rte_eventdev_globals->nb_devs)
+		return 0;
+
+	dev = &rte_eventdevs[dev_id];
+	if (dev->attached != RTE_EVENTDEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
 /**
  * Definitions of all functions exported by a driver through the
  * the generic structure of type *event_dev_ops* supplied in the
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
new file mode 100644
index 0000000..3cae03d
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -0,0 +1,33 @@
+DPDK_17.02 {
+	global:
+
+	rte_eventdevs;
+
+	rte_event_dev_count;
+	rte_event_dev_get_dev_id;
+	rte_event_dev_socket_id;
+	rte_event_dev_info_get;
+	rte_event_dev_configure;
+	rte_event_dev_start;
+	rte_event_dev_stop;
+	rte_event_dev_close;
+	rte_event_dev_dump;
+
+	rte_event_port_default_conf_get;
+	rte_event_port_setup;
+	rte_event_port_dequeue_depth;
+	rte_event_port_enqueue_depth;
+	rte_event_port_count;
+	rte_event_port_link;
+	rte_event_port_unlink;
+	rte_event_port_links_get;
+
+	rte_event_queue_default_conf_get;
+	rte_event_queue_setup;
+	rte_event_queue_count;
+	rte_event_queue_priority;
+
+	rte_event_dequeue_timeout_ticks;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f75f0e2..716725a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v2 4/6] eventdev: implement PMD registration functions
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
                       ` (2 preceding siblings ...)
  2016-12-06  3:52     ` [PATCH v2 3/6] eventdev: implement the northbound APIs Jerin Jacob
@ 2016-12-06  3:52     ` Jerin Jacob
  2016-12-06  3:52     ` [PATCH v2 5/6] event/skeleton: add skeleton eventdev driver Jerin Jacob
                       ` (3 subsequent siblings)
  7 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-06  3:52 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

This patch adds infrastructure for registering vdev-based or
PCI-based event devices.
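
As a rough illustration of how a PMD is expected to consume this
infrastructure, here is a minimal sketch of a vdev-based driver built
around rte_event_pmd_vdev_init(); the "event_dummy" name, the
dummy_eventdev private structure and the probe body are hypothetical
and are shown only as an example, not as part of this patch.

/* Hypothetical vdev event PMD sketch (illustration only) */
#include <errno.h>
#include <rte_dev.h>
#include <rte_vdev.h>
#include <rte_lcore.h>
#include <rte_eventdev_pmd.h>

struct dummy_eventdev {			/* hypothetical private data */
	uint32_t flags;
};

static int
dummy_eventdev_probe(const char *name, __rte_unused const char *args)
{
	struct rte_eventdev *dev;

	/* Allocate a device slot plus private data on the local socket */
	dev = rte_event_pmd_vdev_init(name,
			sizeof(struct dummy_eventdev), rte_socket_id());
	if (dev == NULL)
		return -ENOMEM;

	/* A real driver would set dev->dev_ops and the fast path
	 * function pointers (enqueue, dequeue, ...) here.
	 */
	return 0;
}

static struct rte_vdev_driver dummy_eventdev_drv = {
	.probe = dummy_eventdev_probe,
};

RTE_PMD_REGISTER_VDEV(event_dummy, dummy_eventdev_drv);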

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 lib/librte_eventdev/rte_eventdev.c           | 236 +++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_pmd.h       | 111 +++++++++++++
 lib/librte_eventdev/rte_eventdev_version.map |   6 +
 3 files changed, 353 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index 0a1d2d6..084c21c 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -124,6 +124,8 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
 	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
 
 	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.driver.name;
 	return 0;
 }
 
@@ -999,3 +1001,237 @@ rte_event_dev_close(uint8_t dev_id)
 
 	return (*dev->dev_ops->dev_close)(dev);
 }
+
+static inline int
+rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* Generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_eventdev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_eventdev_data));
+
+	return 0;
+}
+
+static inline uint8_t
+rte_eventdev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
+		if (rte_eventdevs[dev_id].attached ==
+				RTE_EVENTDEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_EVENT_MAX_DEVS;
+}
+
+struct rte_eventdev *
+rte_event_pmd_allocate(const char *name, int socket_id)
+{
+	struct rte_eventdev *eventdev;
+	uint8_t dev_id;
+
+	if (rte_event_pmd_get_named_dev(name) != NULL) {
+		RTE_EDEV_LOG_ERR("Event device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_eventdev_find_free_device_index();
+	if (dev_id == RTE_EVENT_MAX_DEVS) {
+		RTE_EDEV_LOG_ERR("Reached maximum number of event devices");
+		return NULL;
+	}
+
+	eventdev = &rte_eventdevs[dev_id];
+
+	if (eventdev->data == NULL) {
+		struct rte_eventdev_data *eventdev_data = NULL;
+
+		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
+				socket_id);
+
+		if (retval < 0 || eventdev_data == NULL)
+			return NULL;
+
+		eventdev->data = eventdev_data;
+
+		snprintf(eventdev->data->name, RTE_EVENTDEV_NAME_MAX_LEN,
+				"%s", name);
+
+		eventdev->data->dev_id = dev_id;
+		eventdev->data->socket_id = socket_id;
+		eventdev->data->dev_started = 0;
+
+		eventdev->attached = RTE_EVENTDEV_ATTACHED;
+
+		eventdev_globals.nb_devs++;
+	}
+
+	return eventdev;
+}
+
+int
+rte_event_pmd_release(struct rte_eventdev *eventdev)
+{
+	int ret;
+
+	if (eventdev == NULL)
+		return -EINVAL;
+
+	ret = rte_event_dev_close(eventdev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	eventdev->attached = RTE_EVENTDEV_DETACHED;
+	eventdev_globals.nb_devs--;
+	eventdev->data = NULL;
+
+	return 0;
+}
+
+struct rte_eventdev *
+rte_event_pmd_vdev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_eventdev *eventdev;
+
+	/* Allocate device structure */
+	eventdev = rte_event_pmd_allocate(name, socket_id);
+	if (eventdev == NULL)
+		return NULL;
+
+	/* Allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		eventdev->data->dev_private =
+				rte_zmalloc_socket("eventdev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (eventdev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	return eventdev;
+}
+
+int
+rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
+			struct rte_pci_device *pci_dev)
+{
+	struct rte_eventdev_driver *eventdrv;
+	struct rte_eventdev *eventdev;
+
+	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
+
+	int retval;
+
+	eventdrv = (struct rte_eventdev_driver *)pci_drv;
+	if (eventdrv == NULL)
+		return -ENODEV;
+
+	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
+			sizeof(eventdev_name));
+
+	eventdev = rte_event_pmd_allocate(eventdev_name,
+			 pci_dev->device.numa_node);
+	if (eventdev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		eventdev->data->dev_private =
+				rte_zmalloc_socket(
+						"eventdev private structure",
+						eventdrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						rte_socket_id());
+
+		if (eventdev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	eventdev->pci_dev = pci_dev;
+	eventdev->driver = eventdrv;
+
+	/* Invoke PMD device initialization function */
+	retval = (*eventdrv->eventdev_init)(eventdev);
+	if (retval == 0)
+		return 0;
+
+	RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->driver.name,
+			(unsigned int) pci_dev->id.vendor_id,
+			(unsigned int) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eventdev->data->dev_private);
+
+	eventdev->attached = RTE_EVENTDEV_DETACHED;
+	eventdev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+int
+rte_event_pmd_pci_remove(struct rte_pci_device *pci_dev)
+{
+	const struct rte_eventdev_driver *eventdrv;
+	struct rte_eventdev *eventdev;
+	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
+			sizeof(eventdev_name));
+
+	eventdev = rte_event_pmd_get_named_dev(eventdev_name);
+	if (eventdev == NULL)
+		return -ENODEV;
+
+	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
+	if (eventdrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device un-init function */
+	if (*eventdrv->eventdev_uninit) {
+		ret = (*eventdrv->eventdev_uninit)(eventdev);
+		if (ret)
+			return ret;
+	}
+
+	/* rte_event_pmd_release() clears eventdev->data, so keep a
+	 * reference to the private data and free it afterwards.
+	 */
+	void *dev_private = eventdev->data->dev_private;
+
+	/* Free event device */
+	rte_event_pmd_release(eventdev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(dev_private);
+
+	eventdev->pci_dev = NULL;
+	eventdev->driver = NULL;
+
+	return 0;
+}
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 7d94031..29959ae 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -92,6 +92,60 @@ extern "C" {
 #define RTE_EVENTDEV_DETACHED  (0)
 #define RTE_EVENTDEV_ATTACHED  (1)
 
+/**
+ * Initialisation function of an event driver invoked for each matching
+ * event PCI device detected during the PCI probing phase.
+ *
+ * @param dev
+ *   The dev pointer is the address of the *rte_eventdev* structure associated
+ *   with the matching device and which has been [automatically] allocated in
+ *   the *rte_event_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param dev
+ *   The dev pointer is the address of the *rte_eventdev* structure associated
+ *   with the matching device and which has been [automatically] allocated in
+ *   the *rte_event_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *event_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *eventdev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_eventdev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned int dev_private_size;	/**< Size of device private data. */
+
+	eventdev_init_t eventdev_init;	/**< Device init function. */
+	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
+};
+
 /** Global structure used for maintaining state of allocated event devices */
 struct rte_eventdev_global {
 	uint8_t nb_devs;	/**< Number of devices found */
@@ -388,6 +442,63 @@ struct rte_eventdev_ops {
 	/* Dump internal information */
 };
 
+/**
+ * Allocates a new eventdev slot for an event device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param name
+ *   Unique identifier name for each device
+ * @param socket_id
+ *   Socket to allocate resources on.
+ * @return
+ *   - Slot in the rte_eventdevs array for a new device;
+ */
+struct rte_eventdev *
+rte_event_pmd_allocate(const char *name, int socket_id);
+
+/**
+ * Release the specified eventdev device.
+ *
+ * @param eventdev
+ * The *eventdev* pointer is the address of the *rte_eventdev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+int
+rte_event_pmd_release(struct rte_eventdev *eventdev);
+
+/**
+ * Creates a new virtual event device and returns the pointer to that device.
+ *
+ * @param name
+ *   PMD type name
+ * @param dev_private_size
+ *   Size of event PMDs private data
+ * @param socket_id
+ *   Socket to allocate resources on.
+ *
+ * @return
+ *   - Eventdev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_eventdev *
+rte_event_pmd_vdev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Wrapper for use by PCI drivers as a .probe function to attach to an
+ * event interface.
+ */
+int rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
+			    struct rte_pci_device *pci_dev);
+
+/**
+ * Wrapper for use by PCI drivers as a .remove function to detach an
+ * event interface.
+ */
+int rte_event_pmd_pci_remove(struct rte_pci_device *pci_dev);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 3cae03d..68b8c81 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -29,5 +29,11 @@ DPDK_17.02 {
 
 	rte_event_dequeue_timeout_ticks;
 
+	rte_event_pmd_allocate;
+	rte_event_pmd_release;
+	rte_event_pmd_vdev_init;
+	rte_event_pmd_pci_probe;
+	rte_event_pmd_pci_remove;
+
 	local: *;
 };
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v2 5/6] event/skeleton: add skeleton eventdev driver
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
                       ` (3 preceding siblings ...)
  2016-12-06  3:52     ` [PATCH v2 4/6] eventdev: implement PMD registration functions Jerin Jacob
@ 2016-12-06  3:52     ` Jerin Jacob
  2016-12-06  3:52     ` [PATCH v2 6/6] app/test: unit test case for eventdev APIs Jerin Jacob
                       ` (2 subsequent siblings)
  7 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-06  3:52 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

The skeleton driver facilitates bootstrapping of new
eventdev drivers and provides a platform to verify
the northbound eventdev common code.

The driver supports both VDEV and PCI based eventdev
devices.
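
For reference, below is a minimal sketch of how an application could
instantiate the skeleton vdev at run time; the rte_eal_vdev_init() call
mirrors the unit test in patch 6/6, while the surrounding helper and its
error handling are only illustrative.

/* Illustration only: create the skeleton vdev if no event device exists */
#include <errno.h>
#include <rte_dev.h>
#include <rte_eventdev.h>

static int
setup_skeleton_eventdev(void)
{
	/* Equivalent to passing --vdev=event_skeleton on the EAL command line */
	if (rte_event_dev_count() == 0 &&
			rte_eal_vdev_init("event_skeleton", NULL) < 0)
		return -ENODEV;

	/* Device ids are assigned in probe order; with a single event
	 * device in the system, id 0 can be used, as in the unit test.
	 */
	return 0;
}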

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 MAINTAINERS                                        |   1 +
 config/common_base                                 |   8 +
 drivers/Makefile                                   |   1 +
 drivers/event/Makefile                             |  36 ++
 drivers/event/skeleton/Makefile                    |  55 +++
 .../skeleton/rte_pmd_skeleton_event_version.map    |   4 +
 drivers/event/skeleton/skeleton_eventdev.c         | 540 +++++++++++++++++++++
 drivers/event/skeleton/skeleton_eventdev.h         |  72 +++
 mk/rte.app.mk                                      |   4 +
 9 files changed, 721 insertions(+)
 create mode 100644 drivers/event/Makefile
 create mode 100644 drivers/event/skeleton/Makefile
 create mode 100644 drivers/event/skeleton/rte_pmd_skeleton_event_version.map
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.c
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 8e59352..a10899f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -252,6 +252,7 @@ F: examples/l2fwd-crypto/
 Eventdev API - EXPERIMENTAL
 M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
 F: lib/librte_eventdev/
+F: drivers/event/skeleton/
 
 Networking Drivers
 ------------------
diff --git a/config/common_base b/config/common_base
index 7a8814e..35aef0a 100644
--- a/config/common_base
+++ b/config/common_base
@@ -417,6 +417,14 @@ CONFIG_RTE_LIBRTE_EVENTDEV=y
 CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
 CONFIG_RTE_EVENT_MAX_DEVS=16
 CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
+
+#
+# Compile PMD for skeleton event device
+#
+CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV=y
+CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV_DEBUG=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/drivers/Makefile b/drivers/Makefile
index 81c03a8..40b8347 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -33,5 +33,6 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
+DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/event/Makefile b/drivers/event/Makefile
new file mode 100644
index 0000000..678279f
--- /dev/null
+++ b/drivers/event/Makefile
@@ -0,0 +1,36 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += skeleton
+
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/event/skeleton/Makefile b/drivers/event/skeleton/Makefile
new file mode 100644
index 0000000..e557f6d
--- /dev/null
+++ b/drivers/event/skeleton/Makefile
@@ -0,0 +1,55 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium Networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium Networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_skeleton_event.a
+
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_skeleton_event_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += skeleton_eventdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += lib/librte_eventdev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
new file mode 100644
index 0000000..31eca32
--- /dev/null
+++ b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
@@ -0,0 +1,4 @@
+DPDK_17.02 {
+
+	local: *;
+};
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
new file mode 100644
index 0000000..ec3be4f
--- /dev/null
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -0,0 +1,540 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_pci.h>
+#include <rte_lcore.h>
+#include <rte_vdev.h>
+
+#include "skeleton_eventdev.h"
+
+#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
+/**< Skeleton event device PMD name */
+
+static uint16_t
+skeleton_eventdev_enqueue(void *port, struct rte_event *ev)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(port);
+
+	return 0;
+}
+
+static uint16_t
+skeleton_eventdev_enqueue_burst(void *port, struct rte_event ev[],
+			uint16_t nb_events)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(port);
+	RTE_SET_USED(nb_events);
+
+	return 0;
+}
+
+static uint16_t
+skeleton_eventdev_dequeue(void *port, struct rte_event *ev,
+				uint64_t timeout_ticks)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(timeout_ticks);
+
+	return 0;
+}
+
+static uint16_t
+skeleton_eventdev_dequeue_burst(void *port, struct rte_event ev[],
+		uint16_t nb_events, uint64_t timeout_ticks)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(nb_events);
+	RTE_SET_USED(timeout_ticks);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_info_get(struct rte_eventdev *dev,
+		struct rte_event_dev_info *dev_info)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+
+	dev_info->min_dequeue_timeout_ns = 1;
+	dev_info->max_dequeue_timeout_ns = 10000;
+	dev_info->dequeue_timeout_ns = 25;
+	dev_info->max_event_queues = 64;
+	dev_info->max_event_queue_flows = (1ULL << 20);
+	dev_info->max_event_queue_priority_levels = 8;
+	dev_info->max_event_priority_levels = 8;
+	dev_info->max_event_ports = 32;
+	dev_info->max_event_port_dequeue_depth = 16;
+	dev_info->max_event_port_enqueue_depth = 16;
+	dev_info->max_num_events = (1ULL << 20);
+	dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_FLAG_QUEUE_QOS |
+					RTE_EVENT_DEV_CAP_FLAG_EVENT_QOS;
+}
+
+static int
+skeleton_eventdev_configure(const struct rte_eventdev *dev)
+{
+	struct rte_eventdev_data *data = dev->data;
+	struct rte_event_dev_config *conf = &data->dev_conf;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(conf);
+	RTE_SET_USED(skel);
+
+	PMD_DRV_LOG(DEBUG, "Configured eventdev devid=%d", dev->data->dev_id);
+	return 0;
+}
+
+static int
+skeleton_eventdev_start(struct rte_eventdev *dev)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_stop(struct rte_eventdev *dev)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+}
+
+static int
+skeleton_eventdev_close(struct rte_eventdev *dev)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
+				 struct rte_event_queue_conf *queue_conf)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(queue_id);
+
+	queue_conf->nb_atomic_flows = (1ULL << 20);
+	queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_FLAG_DEFAULT;
+	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+static void
+skeleton_eventdev_queue_release(void *queue)
+{
+	struct skeleton_queue *sq = queue;
+	PMD_DRV_FUNC_TRACE();
+
+	rte_free(sq);
+}
+
+static int
+skeleton_eventdev_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
+			      const struct rte_event_queue_conf *queue_conf)
+{
+	struct skeleton_queue *sq;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(queue_conf);
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->queues[queue_id] != NULL) {
+		PMD_DRV_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				queue_id);
+		skeleton_eventdev_queue_release(dev->data->queues[queue_id]);
+		dev->data->queues[queue_id] = NULL;
+	}
+
+	/* Allocate event queue memory */
+	sq = rte_zmalloc_socket("eventdev queue",
+			sizeof(struct skeleton_queue), RTE_CACHE_LINE_SIZE,
+			dev->data->socket_id);
+	if (sq == NULL) {
+		PMD_DRV_ERR("Failed to allocate sq queue_id=%d", queue_id);
+		return -ENOMEM;
+	}
+
+	sq->queue_id = queue_id;
+
+	PMD_DRV_LOG(DEBUG, "[%d] sq=%p", queue_id, sq);
+
+	dev->data->queues[queue_id] = sq;
+	return 0;
+}
+
+static void
+skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
+				 struct rte_event_port_conf *port_conf)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(port_id);
+
+	port_conf->new_event_threshold = 32 * 1024;
+	port_conf->dequeue_depth = 16;
+	port_conf->enqueue_depth = 16;
+}
+
+static void
+skeleton_eventdev_port_release(void *port)
+{
+	struct skeleton_port *sp = port;
+	PMD_DRV_FUNC_TRACE();
+
+	rte_free(sp);
+}
+
+static int
+skeleton_eventdev_port_setup(struct rte_eventdev *dev, uint8_t port_id,
+				const struct rte_event_port_conf *port_conf)
+{
+	struct skeleton_port *sp;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(port_conf);
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->ports[port_id] != NULL) {
+		PMD_DRV_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				port_id);
+		skeleton_eventdev_port_release(dev->data->ports[port_id]);
+		dev->data->ports[port_id] = NULL;
+	}
+
+	/* Allocate event port memory */
+	sp = rte_zmalloc_socket("eventdev port",
+			sizeof(struct skeleton_port), RTE_CACHE_LINE_SIZE,
+			dev->data->socket_id);
+	if (sp == NULL) {
+		PMD_DRV_ERR("Failed to allocate sp port_id=%d", port_id);
+		return -ENOMEM;
+	}
+
+	sp->port_id = port_id;
+
+	PMD_DRV_LOG(DEBUG, "[%d] sp=%p", port_id, sp);
+
+	dev->data->ports[port_id] = sp;
+	return 0;
+}
+
+static int
+skeleton_eventdev_port_link(void *port,
+				const struct rte_event_queue_link link[],
+				uint16_t nb_links)
+{
+	struct skeleton_port *sp = port;
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(link);
+
+	/* Linked all the queues */
+	return (int)nb_links;
+}
+
+static int
+skeleton_eventdev_port_unlink(void *port, uint8_t queues[],
+				 uint16_t nb_unlinks)
+{
+	struct skeleton_port *sp = port;
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(queues);
+
+	/* Unlinked all the queues */
+	return (int)nb_unlinks;
+
+}
+
+static void
+skeleton_eventdev_timeout_ticks(struct rte_eventdev *dev, uint64_t ns,
+				 uint64_t *timeout_ticks)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+	uint32_t scale = 1;
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	*timeout_ticks = ns * scale;
+}
+
+static void
+skeleton_eventdev_dump(struct rte_eventdev *dev, FILE *f)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(f);
+}
+
+
+/* Initialize and register event driver with DPDK Application */
+static const struct rte_eventdev_ops skeleton_eventdev_ops = {
+	.dev_infos_get    = skeleton_eventdev_info_get,
+	.dev_configure    = skeleton_eventdev_configure,
+	.dev_start        = skeleton_eventdev_start,
+	.dev_stop         = skeleton_eventdev_stop,
+	.dev_close        = skeleton_eventdev_close,
+	.queue_def_conf   = skeleton_eventdev_queue_def_conf,
+	.queue_setup      = skeleton_eventdev_queue_setup,
+	.queue_release    = skeleton_eventdev_queue_release,
+	.port_def_conf    = skeleton_eventdev_port_def_conf,
+	.port_setup       = skeleton_eventdev_port_setup,
+	.port_release     = skeleton_eventdev_port_release,
+	.port_link        = skeleton_eventdev_port_link,
+	.port_unlink      = skeleton_eventdev_port_unlink,
+	.timeout_ticks    = skeleton_eventdev_timeout_ticks,
+	.dump             = skeleton_eventdev_dump
+};
+
+static int
+skeleton_eventdev_init(struct rte_eventdev *eventdev)
+{
+	struct rte_pci_device *pci_dev;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(eventdev);
+	int ret = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	eventdev->dev_ops       = &skeleton_eventdev_ops;
+	eventdev->schedule      = NULL;
+	eventdev->enqueue       = skeleton_eventdev_enqueue;
+	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
+	eventdev->dequeue       = skeleton_eventdev_dequeue;
+	eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst;
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	pci_dev = eventdev->pci_dev;
+
+	skel->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
+	if (!skel->reg_base) {
+		PMD_DRV_ERR("Failed to map BAR0");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	skel->device_id = pci_dev->id.device_id;
+	skel->vendor_id = pci_dev->id.vendor_id;
+	skel->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	skel->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	PMD_DRV_LOG(DEBUG, "pci device (%x:%x) %u:%u:%u:%u",
+			pci_dev->id.vendor_id, pci_dev->id.device_id,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+
+	PMD_DRV_LOG(INFO, "dev_id=%d socket_id=%d (%x:%x)",
+		eventdev->data->dev_id, eventdev->data->socket_id,
+		skel->vendor_id, skel->device_id);
+
+fail:
+	return ret;
+}
+
+/* PCI based event device */
+
+#define EVENTDEV_SKEL_VENDOR_ID         0x177d
+#define EVENTDEV_SKEL_PRODUCT_ID        0x0001
+
+static const struct rte_pci_id pci_id_skeleton_map[] = {
+	{
+		RTE_PCI_DEVICE(EVENTDEV_SKEL_VENDOR_ID,
+			       EVENTDEV_SKEL_PRODUCT_ID)
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_eventdev_driver pci_eventdev_skeleton_pmd = {
+	.pci_drv = {
+		.id_table = pci_id_skeleton_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+		.probe = rte_event_pmd_pci_probe,
+		.remove = rte_event_pmd_pci_remove,
+	},
+	.eventdev_init = skeleton_eventdev_init,
+	.dev_private_size = sizeof(struct skeleton_eventdev),
+};
+
+RTE_PMD_REGISTER_PCI(event_skeleton_pci, pci_eventdev_skeleton_pmd.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(event_skeleton_pci, pci_id_skeleton_map);
+
+/* VDEV based event device */
+
+/**
+ * Global static parameter used to create a unique name for each skeleton
+ * event device.
+ */
+static unsigned int skeleton_unique_id;
+
+static inline int
+skeleton_create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", RTE_STR(EVENTDEV_NAME_SKELETON_PMD),
+			skeleton_unique_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+static int
+skeleton_eventdev_create(int socket_id)
+{
+	struct rte_eventdev *eventdev;
+	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
+
+	/* Create a unique device name */
+	if (skeleton_create_unique_device_name(eventdev_name,
+			RTE_EVENTDEV_NAME_MAX_LEN) != 0) {
+		PMD_DRV_ERR("Failed to create unique eventdev name");
+		return -EINVAL;
+	}
+
+	eventdev = rte_event_pmd_vdev_init(eventdev_name,
+			sizeof(struct skeleton_eventdev), socket_id);
+	if (eventdev == NULL) {
+		PMD_DRV_ERR("Failed to create eventdev vdev");
+		goto fail;
+	}
+
+	eventdev->dev_ops       = &skeleton_eventdev_ops;
+	eventdev->schedule      = NULL;
+	eventdev->enqueue       = skeleton_eventdev_enqueue;
+	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
+	eventdev->dequeue       = skeleton_eventdev_dequeue;
+	eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst;
+
+	return 0;
+fail:
+	return -EFAULT;
+}
+
+static int
+skeleton_eventdev_probe(const char *name, __rte_unused const char *input_args)
+{
+	RTE_LOG(INFO, PMD, "Initializing %s on NUMA node %d\n", name,
+			rte_socket_id());
+	return skeleton_eventdev_create(rte_socket_id());
+}
+
+static int
+skeleton_eventdev_remove(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	PMD_DRV_LOG(INFO, "Closing %s on NUMA node %d", name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_vdev_driver vdev_eventdev_skeleton_pmd = {
+	.probe = skeleton_eventdev_probe,
+	.remove = skeleton_eventdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(EVENTDEV_NAME_SKELETON_PMD, vdev_eventdev_skeleton_pmd);
diff --git a/drivers/event/skeleton/skeleton_eventdev.h b/drivers/event/skeleton/skeleton_eventdev.h
new file mode 100644
index 0000000..016cdcd
--- /dev/null
+++ b/drivers/event/skeleton/skeleton_eventdev.h
@@ -0,0 +1,72 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __SKELETON_EVENTDEV_H__
+#define __SKELETON_EVENTDEV_H__
+
+#include <rte_eventdev_pmd.h>
+
+#ifdef RTE_LIBRTE_PMD_SKELETON_EVENTDEV_DEBUG
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#define PMD_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#define PMD_DRV_ERR(fmt, args...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ## args)
+
+struct skeleton_eventdev {
+	uintptr_t reg_base;
+	uint16_t device_id;
+	uint16_t vendor_id;
+	uint16_t subsystem_device_id;
+	uint16_t subsystem_vendor_id;
+} __rte_cache_aligned;
+
+struct skeleton_queue {
+	uint8_t queue_id;
+} __rte_cache_aligned;
+
+struct skeleton_port {
+	uint8_t port_id;
+} __rte_cache_aligned;
+
+static inline struct skeleton_eventdev *
+skeleton_pmd_priv(const struct rte_eventdev *eventdev)
+{
+	return eventdev->data->dev_private;
+}
+
+#endif /* __SKELETON_EVENTDEV_H__ */
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 716725a..8341c13 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -148,6 +148,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
+ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += -lrte_pmd_skeleton_event
+endif # CONFIG_RTE_LIBRTE_EVENTDEV
+
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
 
 _LDLIBS-y += --no-whole-archive
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v2 6/6] app/test: unit test case for eventdev APIs
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
                       ` (4 preceding siblings ...)
  2016-12-06  3:52     ` [PATCH v2 5/6] event/skeleton: add skeleton eventdev driver Jerin Jacob
@ 2016-12-06  3:52     ` Jerin Jacob
  2016-12-06 16:46     ` [PATCH v2 0/6] libeventdev API and northbound implementation Bruce Richardson
  2016-12-21  9:25     ` [PATCH v4 " Jerin Jacob
  7 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-06  3:52 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

This commit adds basic unit tests for the eventdev API.

Commands to run the test app:
./build/app/test -c 2
RTE>>eventdev_common_autotest
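
When no physical event device is found, testsuite_setup() creates the
skeleton vdev itself. The device can presumably also be supplied on the
EAL command line instead, for example:
./build/app/test -c 2 --vdev=event_skeleton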

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 MAINTAINERS              |   1 +
 app/test/Makefile        |   2 +
 app/test/test_eventdev.c | 775 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 778 insertions(+)
 create mode 100644 app/test/test_eventdev.c

diff --git a/MAINTAINERS b/MAINTAINERS
index a10899f..21ff4db 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -252,6 +252,7 @@ F: examples/l2fwd-crypto/
 Eventdev API - EXPERIMENTAL
 M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
 F: lib/librte_eventdev/
+F: app/test/test_eventdev*
 F: drivers/event/skeleton/
 
 Networking Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index 5be023a..e28c079 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -197,6 +197,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += test_eventdev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 CFLAGS += -O3
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
new file mode 100644
index 0000000..b5a4a83
--- /dev/null
+++ b/app/test/test_eventdev.c
@@ -0,0 +1,775 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Cavium networks. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Cavium networks nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_eventdev.h>
+#include <rte_cryptodev.h>
+
+#include "test.h"
+
+#define TEST_DEV_ID   0
+
+static int
+testsuite_setup(void)
+{
+	RTE_BUILD_BUG_ON(sizeof(struct rte_event) != 16);
+	uint8_t count;
+	count = rte_event_dev_count();
+	if (!count) {
+		printf("Failed to find a valid event device,"
+			" testing with event_skeleton device\n");
+		return rte_eal_vdev_init("event_skeleton", NULL);
+	}
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+}
+
+static int
+test_eventdev_count(void)
+{
+	uint8_t count;
+	count = rte_event_dev_count();
+	TEST_ASSERT(count > 0, "Invalid eventdev count %" PRIu8, count);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_get_dev_id(void)
+{
+	int ret;
+	ret = rte_event_dev_get_dev_id("not_a_valid_eventdev_driver");
+	TEST_ASSERT_FAIL(ret, "Expected <0 for invalid dev name ret=%d", ret);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_socket_id(void)
+{
+	int socket_id;
+	socket_id = rte_event_dev_socket_id(TEST_DEV_ID);
+	TEST_ASSERT(socket_id != -EINVAL, "Failed to get socket_id %d",
+				socket_id);
+	socket_id = rte_event_dev_socket_id(RTE_EVENT_MAX_DEVS);
+	TEST_ASSERT(socket_id == -EINVAL, "Expected -EINVAL %d", socket_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_info_get(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+	ret = rte_event_dev_info_get(TEST_DEV_ID, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	TEST_ASSERT(info.max_event_ports > 0,
+			"Not enough event ports %d", info.max_event_ports);
+	TEST_ASSERT(info.max_event_queues > 0,
+			"Not enough event queues %d", info.max_event_queues);
+	return TEST_SUCCESS;
+}
+
+static inline void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+			struct rte_event_dev_info *info)
+{
+	memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+	dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
+	dev_conf->nb_event_ports = info->max_event_ports;
+	dev_conf->nb_event_queues = info->max_event_queues;
+	dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+	dev_conf->nb_event_port_dequeue_depth =
+			info->max_event_port_dequeue_depth;
+	dev_conf->nb_event_port_enqueue_depth =
+			info->max_event_port_enqueue_depth;
+	dev_conf->nb_events_limit =
+			info->max_num_events;
+}
+
+static int
+test_ethdev_config_run(struct rte_event_dev_config *dev_conf,
+		struct rte_event_dev_info *info,
+		void (*fn)(struct rte_event_dev_config *dev_conf,
+			struct rte_event_dev_info *info))
+{
+	devconf_set_default_sane_values(dev_conf, info);
+	fn(dev_conf, info);
+	return rte_event_dev_configure(TEST_DEV_ID, dev_conf);
+}
+
+static void
+min_dequeue_limit(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns - 1;
+}
+
+static void
+max_dequeue_limit(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->dequeue_timeout_ns = info->max_dequeue_timeout_ns + 1;
+}
+
+static void
+max_events_limit(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_events_limit  = info->max_num_events + 1;
+}
+
+static void
+max_event_ports(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_ports = info->max_event_ports + 1;
+}
+
+static void
+max_event_queues(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_queues = info->max_event_queues + 1;
+}
+
+static void
+max_event_queue_flows(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_queue_flows = info->max_event_queue_flows + 1;
+}
+
+static void
+max_event_port_dequeue_depth(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_port_dequeue_depth =
+		info->max_event_port_dequeue_depth + 1;
+}
+
+static void
+max_event_port_enqueue_depth(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_port_enqueue_depth =
+		info->max_event_port_enqueue_depth + 1;
+}
+
+
+static int
+test_eventdev_configure(void)
+{
+	int ret;
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_dev_info info;
+	ret = rte_event_dev_configure(TEST_DEV_ID, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	/* Check limits */
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, min_dequeue_limit),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_dequeue_limit),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_events_limit),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_event_ports),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_event_queues),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_event_queue_flows),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info,
+			max_event_port_dequeue_depth),
+			 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info,
+		max_event_port_enqueue_depth),
+		 "Config negative test failed");
+
+	/* Positive case */
+	devconf_set_default_sane_values(&dev_conf, &info);
+	ret = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	/* re-configure */
+	devconf_set_default_sane_values(&dev_conf, &info);
+	dev_conf.nb_event_ports = info.max_event_ports/2;
+	dev_conf.nb_event_queues = info.max_event_queues/2;
+	ret = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to re configure eventdev");
+
+	/* re-configure back to max_event_queues and max_event_ports */
+	devconf_set_default_sane_values(&dev_conf, &info);
+	ret = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to re-configure eventdev");
+
+	return TEST_SUCCESS;
+
+}
+
+static int
+eventdev_configure_setup(void)
+{
+	int ret;
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_dev_info info;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	devconf_set_default_sane_values(&dev_conf, &info);
+	ret = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_default_conf_get(void)
+{
+	int i, ret;
+	struct rte_event_queue_conf qconf;
+
+	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i,
+						 &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d info", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_setup(void)
+{
+	int i, ret;
+	struct rte_event_dev_info info;
+	struct rte_event_queue_conf qconf;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	/* Negative cases */
+	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 info");
+	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_FLAG_ALL_TYPES &
+		 RTE_EVENT_QUEUE_CFG_FLAG_TYPE_MASK);
+	qconf.nb_atomic_flows = info.max_event_queue_flows + 1;
+	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	qconf.nb_atomic_flows = info.max_event_queue_flows;
+	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_FLAG_ORDERED_ONLY &
+		 RTE_EVENT_QUEUE_CFG_FLAG_TYPE_MASK);
+	qconf.nb_atomic_order_sequences = info.max_event_queue_flows + 1;
+	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_queue_setup(TEST_DEV_ID, info.max_event_queues,
+					&qconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	/* Positive case */
+	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 info");
+	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup queue0");
+
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_count(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	TEST_ASSERT_EQUAL(rte_event_queue_count(TEST_DEV_ID),
+		 info.max_event_queues, "Wrong queue count");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_priority(void)
+{
+	int i, ret;
+	struct rte_event_dev_info info;
+	struct rte_event_queue_conf qconf;
+	uint8_t priority;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i,
+					&qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		qconf.priority = i %  RTE_EVENT_DEV_PRIORITY_LOWEST;
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		priority =  rte_event_queue_priority(TEST_DEV_ID, i);
+		if (info.event_dev_cap & RTE_EVENT_DEV_CAP_FLAG_QUEUE_QOS)
+			TEST_ASSERT_EQUAL(priority,
+			 i %  RTE_EVENT_DEV_PRIORITY_LOWEST,
+			 "Wrong priority value for queue%d", i);
+		else
+			TEST_ASSERT_EQUAL(priority,
+			 RTE_EVENT_DEV_PRIORITY_NORMAL,
+			 "Wrong priority value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_port_default_conf_get(void)
+{
+	int i, ret;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID,
+			rte_event_port_count(TEST_DEV_ID) + 1, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	for (i = 0; i < rte_event_port_count(TEST_DEV_ID); i++) {
+		ret = rte_event_port_default_conf_get(TEST_DEV_ID, i,
+							&pconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get port%d info", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_port_setup(void)
+{
+	int i, ret;
+	struct rte_event_dev_info info;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	/* Negative cases */
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	pconf.new_event_threshold = info.max_num_events + 1;
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	pconf.new_event_threshold = info.max_num_events;
+	pconf.dequeue_depth = info.max_event_port_dequeue_depth + 1;
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	pconf.dequeue_depth = info.max_event_port_dequeue_depth;
+	pconf.enqueue_depth = info.max_event_port_enqueue_depth + 1;
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
+					&pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	/* Positive case */
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+
+
+	for (i = 0; i < rte_event_port_count(TEST_DEV_ID); i++) {
+		ret = rte_event_port_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup port%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_dequeue_depth(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+
+	TEST_ASSERT_EQUAL(rte_event_port_dequeue_depth(TEST_DEV_ID, 0),
+		 pconf.dequeue_depth, "Wrong port dequeue depth");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_enqueue_depth(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+
+	TEST_ASSERT_EQUAL(rte_event_port_enqueue_depth(TEST_DEV_ID, 0),
+		 pconf.enqueue_depth, "Wrong port enqueue depth");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_port_count(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	TEST_ASSERT_EQUAL(rte_event_port_count(TEST_DEV_ID),
+		 info.max_event_ports, "Wrong port count");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_timeout_ticks(void)
+{
+	int ret;
+	uint64_t timeout_ticks;
+
+	ret = rte_event_dequeue_timeout_ticks(TEST_DEV_ID, 100, &timeout_ticks);
+	TEST_ASSERT_SUCCESS(ret, "Fail to get timeout_ticks");
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_eventdev_start_stop(void)
+{
+	int i, ret;
+
+	ret = eventdev_configure_setup();
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < rte_event_port_count(TEST_DEV_ID); i++) {
+		ret = rte_event_port_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup port%d", i);
+	}
+
+	ret = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", TEST_DEV_ID);
+
+	rte_event_dev_stop(TEST_DEV_ID);
+	return TEST_SUCCESS;
+}
+
+
+static int
+eventdev_setup_device(void)
+{
+	int i, ret;
+
+	ret = eventdev_configure_setup();
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < rte_event_port_count(TEST_DEV_ID); i++) {
+		ret = rte_event_port_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup port%d", i);
+	}
+
+	ret = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", TEST_DEV_ID);
+
+	return TEST_SUCCESS;
+}
+
+static void
+eventdev_stop_device(void)
+{
+	rte_event_dev_stop(TEST_DEV_ID);
+}
+
+static int
+test_eventdev_link(void)
+{
+	int ret, nb_queues, i;
+	struct rte_event_queue_link links[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+	ret = rte_event_port_link(TEST_DEV_ID, 0, NULL, 0);
+	TEST_ASSERT(ret >= 0, "Failed to link with NULL device%d",
+				 TEST_DEV_ID);
+
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	for (i = 0; i < nb_queues; i++) {
+		links[i].queue_id = i;
+		links[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+	}
+
+	ret = rte_event_port_link(TEST_DEV_ID, 0, links, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to link(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_unlink(void)
+{
+	int ret, nb_queues, i;
+	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, NULL, 0);
+	TEST_ASSERT(ret >= 0, "Failed to unlink with NULL device%d",
+				 TEST_DEV_ID);
+
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	for (i = 0; i < nb_queues; i++)
+		queues[i] = i;
+
+
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, queues, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_link_get(void)
+{
+	int ret, nb_queues, i;
+	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	struct rte_event_queue_link links[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+	/* link all queues */
+	ret = rte_event_port_link(TEST_DEV_ID, 0, NULL, 0);
+	TEST_ASSERT(ret >= 0, "Failed to link with NULL device%d",
+				 TEST_DEV_ID);
+
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	for (i = 0; i < nb_queues; i++)
+		queues[i] = i;
+
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, queues, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+
+	ret = rte_event_port_links_get(TEST_DEV_ID, 0, links);
+	TEST_ASSERT(ret == 0, "(%d)Wrong link get=%d", TEST_DEV_ID, ret);
+
+	/* link all queues and get the links */
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	for (i = 0; i < nb_queues; i++) {
+		links[i].queue_id = i;
+		links[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+	}
+	ret = rte_event_port_link(TEST_DEV_ID, 0, links, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to link(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	ret = rte_event_port_links_get(TEST_DEV_ID, 0, links);
+	TEST_ASSERT(ret == nb_queues, "(%d)Wrong link get ret=%d expected=%d",
+				 TEST_DEV_ID, ret, nb_queues);
+	/* unlink all*/
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, NULL, 0);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	/* link just one queue */
+	links[0].queue_id = 0;
+	links[0].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	ret = rte_event_port_link(TEST_DEV_ID, 0, links, 1);
+	TEST_ASSERT(ret == 1, "Failed to link(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	ret = rte_event_port_links_get(TEST_DEV_ID, 0, links);
+	TEST_ASSERT(ret == 1, "(%d)Wrong link get ret=%d expected=%d",
+					TEST_DEV_ID, ret, 1);
+	/* unlink all*/
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, NULL, 0);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	/* 4 links and 2 unlinks */
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	if (nb_queues >= 4) {
+		for (i = 0; i < 4; i++) {
+			links[i].queue_id = i;
+			links[i].priority = 0x40;
+		}
+		ret = rte_event_port_link(TEST_DEV_ID, 0, links, 4);
+		TEST_ASSERT(ret == 4, "Failed to link(device%d) ret=%d",
+					 TEST_DEV_ID, ret);
+
+		for (i = 0; i < 2; i++)
+			queues[i] = i;
+
+		ret = rte_event_port_unlink(TEST_DEV_ID, 0, queues, 2);
+		TEST_ASSERT(ret == 2, "Failed to unlink(device%d) ret=%d",
+					 TEST_DEV_ID, ret);
+		ret = rte_event_port_links_get(TEST_DEV_ID, 0, links);
+		TEST_ASSERT(ret == 2, "(%d)Wrong link get ret=%d expected=%d",
+						TEST_DEV_ID, ret, 2);
+		TEST_ASSERT(links[0].queue_id == 2, "queue_id=%d expected=%d",
+					links[0].queue_id, 2);
+		TEST_ASSERT(links[0].priority == 0x40, "priority=%d expected=%d",
+					links[0].priority, 0x40);
+		TEST_ASSERT(links[1].queue_id == 3, "queue_id=%d expected=%d",
+					links[1].queue_id, 3);
+		TEST_ASSERT(links[1].priority == 0x40, "priority=%d expected=%d",
+					links[1].priority, 0x40);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_close(void)
+{
+	rte_event_dev_stop(TEST_DEV_ID);
+	return rte_event_dev_close(TEST_DEV_ID);
+}
+
+static struct unit_test_suite eventdev_common_testsuite  = {
+	.suite_name = "eventdev common code unit test suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_count),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_get_dev_id),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_socket_id),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_info_get),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_configure),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_default_conf_get),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_setup),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_count),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_priority),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_port_default_conf_get),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_port_setup),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_dequeue_depth),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_enqueue_depth),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_port_count),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_timeout_ticks),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_start_stop),
+		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
+			test_eventdev_link),
+		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
+			test_eventdev_unlink),
+		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
+			test_eventdev_link_get),
+		TEST_CASE_ST(eventdev_setup_device, NULL,
+			test_eventdev_close),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_eventdev_common(void)
+{
+	return unit_test_suite_runner(&eventdev_common_testsuite);
+}
+
+REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 0/6] libeventdev API and northbound implementation
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
                       ` (5 preceding siblings ...)
  2016-12-06  3:52     ` [PATCH v2 6/6] app/test: unit test case for eventdev APIs Jerin Jacob
@ 2016-12-06 16:46     ` Bruce Richardson
  2016-12-21  9:25     ` [PATCH v4 " Jerin Jacob
  7 siblings, 0 replies; 109+ messages in thread
From: Bruce Richardson @ 2016-12-06 16:46 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Tue, Dec 06, 2016 at 09:22:14AM +0530, Jerin Jacob wrote:
> As previously discussed in RFC v1 [1], RFC v2 [2], with changes
> described in [3] (also pasted below), here is the first non-draft series
> for this new API.
> 
> [1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
> [2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
> [3] http://dpdk.org/ml/archives/dev/2016-October/048196.html
> 
> v1..v2:
> 1) Remove unnecessary header files from rte_eventdev.h(Thomas)
> 2) Removed PMD driver name(EVENTDEV_NAME_SKELETON_PMD) from rte_eventdev.h(Thomas)
> 3) Removed different #define for different priority schemes. Changed to
> one event device RTE_EVENT_DEV_PRIORITY_* priority (Bruce)
> 4) add const to rte_event_dev_configure(), rte_event_queue_setup(),
> rte_event_port_setup(), rte_event_port_link()(Bruce)
> 5) Fixed missing dev argument in dev->schedule() function(Bruce)
> 6) Changed \see to @see in doxygen comments (Thomas)
> 7) Added additional text in specification to clarify the queue depth(Thomas)
> 8) Changed wait to timeout across the specification(Thomas)
> 9) Added longer explanation for RTE_EVENT_OP_NEW and RTE_EVENT_OP_FORWARD(Thomas)
> 10) Fixed issue with RTE_EVENT_OP_RELEASE doxygen formatting (Thomas)
> 11) Changed to RTE_EVENT_DEV_CFG_FLAG_ from RTE_EVENT_DEV_CFG_(Thomas)
> 12) Changed to EVENT_QUEUE_CFG_FLAG_ from EVENT_QUEUE_CFG_(Thomas)
> 13) s/RTE_EVENT_TYPE_CORE/RTE_EVENT_TYPE_CPU/(Thomas, Gage)
> 14) Removed non burst API and kept only the burst API in the API specification
> (Thomas, Bruce, Harry, Jerin)
> -- Driver interface has non burst API, selection of the non burst API is based
> on num_objects == 1
> 15) sizeof(struct rte_event) was not 16 in v1. Fixed it in v2
> -- reduced the width of event_type to 4bit to save space for future change
> -- introduced impl_opaque for implementation specific opaque data (Harry),
> something useful for a HW driver too, in the context of removing the need for a
> separate release API.
> -- squashed other element size and provided enough space to impl_opaque(Jerin)
> -- added RTE_BUILD_BUG_ON(sizeof(struct rte_event) != 16); check
> 16) add union of uint64_t in the second element in struct rte_event to
> make sure the structure has 16byte address all arch(Thomas)
> 17) Fixed invalid check of nb_atomic_order_sequences in implementation(Gage)
> 18) s/EDEV_LOG_ERR/RTE_EDEV_LOG_ERR(Thomas)
> 19) s/rte_eventdev_pmd_/rte_event_pmd_/(Bruce)
> 20) added fine details of distributed vs centralized scheduling information
> in the specification and introduced RTE_EVENT_DEV_CAP_FLAG_DISTRIBUTED_SCHED
> flag(Gage)
> 21) s/RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_CONSUMER/RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK (Jerin)
> to remove the confusion between another producer and consumer in the sw eventdev driver
> 22) Northbound API implementation patch split into more logical patches (Thomas)
> 
> 
Thanks for this Jerin, great job tracking the changes between the
versions.
I have a couple of comments to make on the patches thus far, but I think
we are near having a first version we can commit to a next-event tree.

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-06  3:52     ` [PATCH v2 1/6] eventdev: introduce event driven programming model Jerin Jacob
@ 2016-12-06 16:51       ` Bruce Richardson
  2016-12-07 18:53         ` Jerin Jacob
  2016-12-07 10:57       ` Van Haaren, Harry
                         ` (2 subsequent siblings)
  3 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-12-06 16:51 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> In a polling model, lcores poll ethdev ports and associated
> rx queues directly to look for packets. In an event driven model,
> by contrast, lcores call the scheduler that selects packets for
> them based on programmer-specified criteria. The eventdev library
> adds support for an event driven programming model, which offers
> applications automatic multicore scaling, dynamic load balancing,
> pipelining, packet ingress order maintenance and
> synchronization services to simplify application packet processing.
> 
> By introducing an event driven programming model, DPDK can support
> both polling and event driven programming models for packet processing,
> and applications are free to choose whichever model
> (or combination of the two) best suits their needs.
> 
> This patch adds the eventdev specification header file.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
<snip>
> +
> +/**
> + * The generic *rte_event* structure to hold the event attributes
> + * for dequeue and enqueue operation
> + */
> +struct rte_event {
> +	/** WORD0 */
> +	RTE_STD_C11
> +	union {
> +		uint64_t event;
> +		/** Event attributes for dequeue or enqueue operation */
> +		struct {
> +			uint64_t flow_id:20;
> +			/**< Targeted flow identifier for the enqueue and
> +			 * dequeue operation.
> +			 * The value must be in the range of
> +			 * [0, nb_event_queue_flows - 1] which
> +			 * previously supplied to rte_event_dev_configure().
> +			 */
> +			uint64_t sub_event_type:8;
> +			/**< Sub-event types based on the event source.
> +			 * @see RTE_EVENT_TYPE_CPU
> +			 */
> +			uint64_t event_type:4;
> +			/**< Event type to classify the event source.
> +			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> +			 */
> +			uint64_t sched_type:2;
> +			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
> +			 * associated with flow id on a given event queue
> +			 * for the enqueue and dequeue operation.
> +			 */
> +			uint64_t queue_id:8;
> +			/**< Targeted event queue identifier for the enqueue or
> +			 * dequeue operation.
> +			 * The value must be in the range of
> +			 * [0, nb_event_queues - 1] which previously supplied to
> +			 * rte_event_dev_configure().
> +			 */
> +			uint64_t priority:8;
> +			/**< Event priority relative to other events in the
> +			 * event queue. The requested priority should in the
> +			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
> +			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
> +			 * The implementation shall normalize the requested
> +			 * priority to supported priority value.
> +			 * Valid when the device has
> +			 * RTE_EVENT_DEV_CAP_FLAG_EVENT_QOS capability.
> +			 */
> +			uint64_t op:2;
> +			/**< The type of event enqueue operation - new/forward/
> +			 * etc.This field is not preserved across an instance
> +			 * and is undefined on dequeue.
> +			 *  @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> +			 */
> +			uint64_t impl_opaque:12;
> +			/**< Implementation specific opaque value.
> +			 * An implementation may use this field to hold
> +			 * implementation specific value to share between
> +			 * dequeue and enqueue operation.
> +			 * The application should not modify this field.
> +			 */
> +		};
> +	};
> +	/** WORD1 */
> +	RTE_STD_C11
> +	union {
> +		uint64_t u64;
> +		/**< Opaque 64-bit value */
> +		uintptr_t event_ptr;
> +		/**< Opaque event pointer */

Since we have a uint64_t member of the union, might this be better as a
void * rather than uintptr_t?
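Something like this, as a rough sketch of just the WORD1 union (not a final
layout); the void * member avoids casts when applications store their own
object pointers:

	/** WORD1 */
	RTE_STD_C11
	union {
		uint64_t u64;
		/**< Opaque 64-bit value */
		void *event_ptr;
		/**< Opaque event pointer, assignable from any object
		 * pointer without a cast
		 */
		struct rte_mbuf *mbuf;
		/**< mbuf pointer if dequeued event is associated with mbuf */
	};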

> +		struct rte_mbuf *mbuf;
> +		/**< mbuf pointer if dequeued event is associated with mbuf */
> +	};
> +};
> +
<snip>
> +/**
> + * Link multiple source event queues supplied in *rte_event_queue_link*
> + * structure as *queue_id* to the destination event port designated by its
> + * *port_id* on the event device designated by its *dev_id*.
> + *
> + * The link establishment shall enable the event port *port_id* from
> + * receiving events from the specified event queue *queue_id*
> + *
> + * An event queue may link to one or more event ports.
> + * The number of links can be established from an event queue to event port is
> + * implementation defined.
> + *
> + * Event queue(s) to event port link establishment can be changed at runtime
> + * without re-configuring the device to support scaling and to reduce the
> + * latency of critical work by establishing the link with more event ports
> + * at runtime.

I think this might need to be clarified. The device doesn't need to be
reconfigured, but does it need to be stopped? In SW implementation, this
affects how much we have to make things thread-safe. At minimum I think
we should limit this to having only one thread call the function at a
time, but we may allow enqueue dequeue ops from the data plane to run
in parallel.
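As a rough sketch of the minimal locking that would imply (the wrapper and
lock names below are illustrative only, not part of the proposed API): the
control-path (un)link calls get serialised while data-plane enqueue/dequeue
stay lock-free.

	/* Hypothetical application-side wrapper: serialise runtime link
	 * changes with a spinlock (rte_spinlock.h); enqueue/dequeue paths
	 * are untouched.
	 */
	static rte_spinlock_t app_link_lock = RTE_SPINLOCK_INITIALIZER;

	static int
	app_port_link_locked(uint8_t dev_id, uint8_t port_id,
			     const struct rte_event_queue_link link[],
			     uint16_t nb_links)
	{
		int ret;

		rte_spinlock_lock(&app_link_lock);
		ret = rte_event_port_link(dev_id, port_id, link, nb_links);
		rte_spinlock_unlock(&app_link_lock);
		return ret;
	}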

> + *
> + * @param dev_id
> + *   The identifier of the device.
> + *
> + * @param port_id
> + *   Event port identifier to select the destination port to link.
> + *
> + * @param link
> + *   Points to an array of *nb_links* objects of type *rte_event_queue_link*
> + *   structure which contain the event queue to event port link establishment
> + *   attributes.
> + *   NULL value is allowed, in which case this function links all the configured
> + *   event queues *nb_event_queues* which previously supplied to
> + *   rte_event_dev_configure() to the event port *port_id* with normal servicing
> + *   priority(RTE_EVENT_DEV_PRIORITY_NORMAL).
> + *
> + * @param nb_links
> + *   The number of links to establish
> + *
> + * @return
> + * The number of links actually established. The return value can be less than
> + * the value of the *nb_links* parameter when the implementation has the
> + * limitation on specific queue to port link establishment or if invalid
> + * parameters are specified in a *rte_event_queue_link*.
> + * If the return value is less than *nb_links*, the remaining links at the end
> + * of link[] are not established, and the caller has to take care of them.
> + * If return value is less than *nb_links* then implementation shall update the
> + * rte_errno accordingly, Possible rte_errno values are
> + * (-EDQUOT) Quota exceeded(Application tried to link the queue configured with
> + *  RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK to more than one event ports)
> + * (-EINVAL) Invalid parameter
> + *
> + */
> +int
> +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> +		    const struct rte_event_queue_link link[],
> +		    uint16_t nb_links);
> +
> +/**
> + * Unlink multiple source event queues supplied in *queues* from the destination
> + * event port designated by its *port_id* on the event device designated
> + * by its *dev_id*.
> + *
> + * The unlink establishment shall disable the event port *port_id* from
> + * receiving events from the specified event queue *queue_id*
> + *
> + * Event queue(s) to event port unlink establishment can be changed at runtime
> + * without re-configuring the device.

Clarify, as above with link call.

> + *
> + * @param dev_id
> + *   The identifier of the device.
> + *
> + * @param port_id
> + *   Event port identifier to select the destination port to unlink.
> + *
> + * @param queues
> + *   Points to an array of *nb_unlinks* event queues to be unlinked
> + *   from the event port.
> + *   NULL value is allowed, in which case this function unlinks all the
> + *   event queue(s) from the event port *port_id*.
> + *
> + * @param nb_unlinks
> + *   The number of unlinks to establish
> + *
> + * @return
> + * The number of unlinks actually established. The return value can be less
> + * than the value of the *nb_unlinks* parameter when the implementation has the
> + * limitation on specific queue to port unlink establishment or
> + * if invalid parameters are specified.
> + * If the return value is less than *nb_unlinks*, the remaining queues at the
> + * end of queues[] are not established, and the caller has to take care of them.
> + * If return value is less than *nb_unlinks* then implementation shall update
> + * the rte_errno accordingly, Possible rte_errno values are
> + * (-EINVAL) Invalid parameter
> + *
> + */
> +int
> +rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
> +		      uint8_t queues[], uint16_t nb_unlinks);
> +
<snip>

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 3/6] eventdev: implement the northbound APIs
  2016-12-06  3:52     ` [PATCH v2 3/6] eventdev: implement the northbound APIs Jerin Jacob
@ 2016-12-06 17:17       ` Bruce Richardson
  2016-12-07 17:02         ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-12-06 17:17 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Tue, Dec 06, 2016 at 09:22:17AM +0530, Jerin Jacob wrote:
> This patch implements the northbound eventdev API interface using
> the southbound driver interface
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  config/common_base                           |    6 +
>  lib/Makefile                                 |    1 +
>  lib/librte_eal/common/include/rte_log.h      |    1 +
>  lib/librte_eventdev/Makefile                 |   57 ++
>  lib/librte_eventdev/rte_eventdev.c           | 1001 ++++++++++++++++++++++++++
>  lib/librte_eventdev/rte_eventdev.h           |  108 ++-
>  lib/librte_eventdev/rte_eventdev_pmd.h       |  109 +++
>  lib/librte_eventdev/rte_eventdev_version.map |   33 +
>  mk/rte.app.mk                                |    1 +
>  9 files changed, 1311 insertions(+), 6 deletions(-)
>  create mode 100644 lib/librte_eventdev/Makefile
>  create mode 100644 lib/librte_eventdev/rte_eventdev.c
>  create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
> 
<snip>
> +
> +static inline int
> +rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
> +{
> +	uint8_t old_nb_queues = dev->data->nb_queues;
> +	void **queues;
> +	uint8_t *queues_prio;
> +	unsigned int i;
> +
> +	RTE_EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
> +			 dev->data->dev_id);
> +
> +	/* First time configuration */
> +	if (dev->data->queues == NULL && nb_queues != 0) {
> +		dev->data->queues = rte_zmalloc_socket("eventdev->data->queues",
> +				sizeof(dev->data->queues[0]) * nb_queues,
> +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> +		if (dev->data->queues == NULL) {
> +			dev->data->nb_queues = 0;
> +			RTE_EDEV_LOG_ERR("failed to get memory for queue meta,"
> +					"nb_queues %u", nb_queues);
> +			return -(ENOMEM);
> +		}
> +		/* Allocate memory to store queue priority */
> +		dev->data->queues_prio = rte_zmalloc_socket(
> +				"eventdev->data->queues_prio",
> +				sizeof(dev->data->queues_prio[0]) * nb_queues,
> +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> +		if (dev->data->queues_prio == NULL) {
> +			dev->data->nb_queues = 0;
> +			RTE_EDEV_LOG_ERR("failed to get mem for queue priority,"
> +					"nb_queues %u", nb_queues);
> +			return -(ENOMEM);
> +		}
> +
> +	} else if (dev->data->queues != NULL && nb_queues != 0) {/* re-config */
> +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
> +
> +		queues = dev->data->queues;
> +		for (i = nb_queues; i < old_nb_queues; i++)
> +			(*dev->dev_ops->queue_release)(queues[i]);
> +
> +		queues = rte_realloc(queues, sizeof(queues[0]) * nb_queues,
> +				RTE_CACHE_LINE_SIZE);
> +		if (queues == NULL) {
> +			RTE_EDEV_LOG_ERR("failed to realloc queue meta data,"
> +						" nb_queues %u", nb_queues);
> +			return -(ENOMEM);
> +		}
> +		dev->data->queues = queues;
> +
> +		/* Re allocate memory to store queue priority */
> +		queues_prio = dev->data->queues_prio;
> +		queues_prio = rte_realloc(queues_prio,
> +				sizeof(queues_prio[0]) * nb_queues,
> +				RTE_CACHE_LINE_SIZE);
> +		if (queues_prio == NULL) {
> +			RTE_EDEV_LOG_ERR("failed to realloc queue priority,"
> +						" nb_queues %u", nb_queues);
> +			return -(ENOMEM);
> +		}
> +		dev->data->queues_prio = queues_prio;
> +
> +		if (nb_queues > old_nb_queues) {
> +			uint8_t new_qs = nb_queues - old_nb_queues;
> +
> +			memset(queues + old_nb_queues, 0,
> +				sizeof(queues[0]) * new_qs);
> +			memset(queues_prio + old_nb_queues, 0,
> +				sizeof(queues_prio[0]) * new_qs);
> +		}
> +	} else if (dev->data->queues != NULL && nb_queues == 0) {
> +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
> +
> +		queues = dev->data->queues;
> +		for (i = nb_queues; i < old_nb_queues; i++)
> +			(*dev->dev_ops->queue_release)(queues[i]);
> +	}
> +
> +	dev->data->nb_queues = nb_queues;
> +	return 0;
> +}
> +
While the ports array makes sense to have available at the top level of
the API and allocated from rte_eventdev.c, I'm not seeing what the value
of having the queues allocated at that level is. The only time the queue
array is indexed by eventdev layer is when releasing a queue. Therefore,
I suggest just saving the number of queues for sanity checking and let
the queue array allocation and freeing be handled entirely in the
drivers themselves.
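A rough sketch of what I mean (the sw_evdev/sw_qid names are hypothetical,
just to illustrate the split of responsibility):

	/* Common layer: keep only the count for sanity checking.
	 * dev->data->nb_queues already exists; the queues/queues_prio
	 * arrays at this level would go away.
	 */

	/* Driver side (names hypothetical): the PMD owns the storage. */
	struct sw_evdev {
		struct sw_qid *qids;	/* allocated in dev_configure() */
		uint8_t nb_qids;	/* mirrors dev->data->nb_queues */
	};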

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-06  3:52     ` [PATCH v2 1/6] eventdev: introduce event driven programming model Jerin Jacob
  2016-12-06 16:51       ` Bruce Richardson
@ 2016-12-07 10:57       ` Van Haaren, Harry
  2016-12-08  1:24         ` Jerin Jacob
  2016-12-07 11:12       ` Bruce Richardson
  2016-12-14 15:19       ` Bruce Richardson
  3 siblings, 1 reply; 109+ messages in thread
From: Van Haaren, Harry @ 2016-12-07 10:57 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, Richardson, Bruce, hemant.agrawal, Eads, Gage

> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]

Hi Jerin,

Re v2 rte_event struct, there seem to be some changes in the struct layout and field sizes. I've investigated them, and would like to propose some changes to balance the byte-alignment and access of the fields.

These changes target only the first 64 bits of the rte_event struct. I've left the current v2 code for reference, please find my proposed changes below.

> +struct rte_event {
> +	/** WORD0 */
> +	RTE_STD_C11
> +	union {
> +		uint64_t event;
> +		/** Event attributes for dequeue or enqueue operation */
> +		struct {
> +			uint64_t flow_id:20;
> +			/**< Targeted flow identifier for the enqueue and
> +			 * dequeue operation.
> +			 * The value must be in the range of
> +			 * [0, nb_event_queue_flows - 1] which
> +			 * previously supplied to rte_event_dev_configure().
> +			 */
> +			uint64_t sub_event_type:8;
> +			/**< Sub-event types based on the event source.
> +			 * @see RTE_EVENT_TYPE_CPU
> +			 */
> +			uint64_t event_type:4;
> +			/**< Event type to classify the event source.
> +			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> +			 */
> +			uint64_t sched_type:2;
> +			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
> +			 * associated with flow id on a given event queue
> +			 * for the enqueue and dequeue operation.
> +			 */
> +			uint64_t queue_id:8;
> +			/**< Targeted event queue identifier for the enqueue or
> +			 * dequeue operation.
> +			 * The value must be in the range of
> +			 * [0, nb_event_queues - 1] which previously supplied to
> +			 * rte_event_dev_configure().
> +			 */
> +			uint64_t priority:8;
> +			/**< Event priority relative to other events in the
> +			 * event queue. The requested priority should in the
> +			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
> +			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
> +			 * The implementation shall normalize the requested
> +			 * priority to supported priority value.
> +			 * Valid when the device has
> +			 * RTE_EVENT_DEV_CAP_FLAG_EVENT_QOS capability.
> +			 */
> +			uint64_t op:2;
> +			/**< The type of event enqueue operation - new/forward/
> +			 * etc.This field is not preserved across an instance
> +			 * and is undefined on dequeue.
> +			 *  @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> +			 */
> +			uint64_t impl_opaque:12;
> +			/**< Implementation specific opaque value.
> +			 * An implementation may use this field to hold
> +			 * implementation specific value to share between
> +			 * dequeue and enqueue operation.
> +			 * The application should not modify this field.
> +			 */
> +		};
> +	};

struct rte_event {
	/** WORD0 */
	RTE_STD_C11
	union {
		uint64_t event;
		struct {
			uint32_t flow_id: 24;
			uint32_t impl_opaque : 8; /* not defined on deq */

			uint8_t queue_id;
			uint8_t priority;

			uint8_t operation  : 4; /* new fwd drop */
			uint8_t sched_type : 4;

			uint8_t event_type : 4;
			uint8_t sub_event_type : 4;
		};
	};
	/** word 1 */
<snip>


The changes made are as follows:
* Restore flow_id to 24 bits of a 32 bit int (previous size was 20 bits)
* Add impl_opaque to the remaining 8 bits of those 32 bits (previous size was 12 bits)

* QueueID and Priority remain 8 bit integers - but now accessible as 8 bit ints.

* Operation and sched_type *increased* to 4 bits each (from previous value of 2) to allow future expansion without ABI changes

* Event type remains constant at 4 bits
* sub-event-type reduced to 4 bits (previous value was 8 bits). Can we think of situations where 16 values for application specified identifiers of each event-type is genuinely not enough?

In my opinion this structure layout is more balanced, and will perform better due to fewer loads that need masking to access the required value.
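For illustration (a sketch against the layout above, not benchmarked): with
byte-aligned fields the common hot-path reads become plain byte loads, e.g.

	uint8_t qid  = ev.queue_id;	/* single byte load */
	uint8_t prio = ev.priority;	/* single byte load */

whereas the v2 bit-field layout forces the compiler to load the 64-bit word
and shift/mask to extract the same values.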


Feedback and improvements welcomed, -Harry

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-06  3:52     ` [PATCH v2 1/6] eventdev: introduce event driven programming model Jerin Jacob
  2016-12-06 16:51       ` Bruce Richardson
  2016-12-07 10:57       ` Van Haaren, Harry
@ 2016-12-07 11:12       ` Bruce Richardson
  2016-12-08  1:48         ` Jerin Jacob
  2016-12-14 15:19       ` Bruce Richardson
  3 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-12-07 11:12 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> In a polling model, lcores poll ethdev ports and associated
> rx queues directly to look for packets. In an event driven model,
> by contrast, lcores call the scheduler that selects packets for
> them based on programmer-specified criteria. The eventdev library
> adds support for an event driven programming model, which offers
> applications automatic multicore scaling, dynamic load balancing,
> pipelining, packet ingress order maintenance and
> synchronization services to simplify application packet processing.
> 
> By introducing an event driven programming model, DPDK can support
> both polling and event driven programming models for packet processing,
> and applications are free to choose whichever model
> (or combination of the two) best suits their needs.
> 
> This patch adds the eventdev specification header file.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
>  MAINTAINERS                        |    3 +
>  doc/api/doxy-api-index.md          |    1 +
>  doc/api/doxy-api.conf              |    1 +
>  lib/librte_eventdev/rte_eventdev.h | 1274 ++++++++++++++++++++++++++++++++++++
>  4 files changed, 1279 insertions(+)
<snip>
> +
> +/** Structure to hold the queue to port link establishment attributes */
> +struct rte_event_queue_link {
> +	uint8_t queue_id;
> +	/**< Event queue identifier to select the source queue to link */
> +	uint8_t priority;
> +	/**< The priority of the event queue for this event port.
> +	 * The priority defines the event port's servicing priority for
> +	 * event queue, which may be ignored by an implementation.
> +	 * The requested priority should in the range of
> +	 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
> +	 * The implementation shall normalize the requested priority to
> +	 * implementation supported priority value.
> +	 */
> +};
> +
> +/**
> + * Link multiple source event queues supplied in *rte_event_queue_link*
> + * structure as *queue_id* to the destination event port designated by its
> + * *port_id* on the event device designated by its *dev_id*.
> + *
> + * The link establishment shall enable the event port *port_id* from
> + * receiving events from the specified event queue *queue_id*
> + *
> + * An event queue may link to one or more event ports.
> + * The number of links can be established from an event queue to event port is
> + * implementation defined.
> + *
> + * Event queue(s) to event port link establishment can be changed at runtime
> + * without re-configuring the device to support scaling and to reduce the
> + * latency of critical work by establishing the link with more event ports
> + * at runtime.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + *
> + * @param port_id
> + *   Event port identifier to select the destination port to link.
> + *
> + * @param link
> + *   Points to an array of *nb_links* objects of type *rte_event_queue_link*
> + *   structure which contain the event queue to event port link establishment
> + *   attributes.
> + *   NULL value is allowed, in which case this function links all the configured
> + *   event queues *nb_event_queues* which previously supplied to
> + *   rte_event_dev_configure() to the event port *port_id* with normal servicing
> + *   priority(RTE_EVENT_DEV_PRIORITY_NORMAL).
> + *
> + * @param nb_links
> + *   The number of links to establish
> + *
> + * @return
> + * The number of links actually established. The return value can be less than
> + * the value of the *nb_links* parameter when the implementation has the
> + * limitation on specific queue to port link establishment or if invalid
> + * parameters are specified in a *rte_event_queue_link*.
> + * If the return value is less than *nb_links*, the remaining links at the end
> + * of link[] are not established, and the caller has to take care of them.
> + * If return value is less than *nb_links* then implementation shall update the
> + * rte_errno accordingly, Possible rte_errno values are
> + * (-EDQUOT) Quota exceeded(Application tried to link the queue configured with
> + *  RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK to more than one event ports)
> + * (-EINVAL) Invalid parameter
> + *
> + */
> +int
> +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> +		    const struct rte_event_queue_link link[],
> +		    uint16_t nb_links);
> +

Hi again Jerin,

another small suggestion here. I'm not a big fan of using small
structures to pass parameters into functions, especially when not all
fields are always going to be used. Rather than use the event queue link
structure, can we just pass in two array parameters here - the list of
QIDs, and the list of priorities. In cases where the eventdev
implementation does not support link prioritization, or where the app
does not want different priority mappings , then the second
array can be null [implying NORMAL priority for the don't care case].

	int
	rte_event_port_link(uint8_t dev_id, uint8_t port_id,
		const uint8_t queues[], const uint8_t priorities[],
		uint16_t nb_queues);

This just makes mapping an array of queues easier, as we can just pass
an array of ints directly in, and it especially makes it easier to
create a single link via:

  rte_event_port_link(dev_id, port_id, &queue_id, NULL, 1);
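A bulk mapping with per-queue priorities would stay just as compact
(a sketch, assuming the same prototype and that dev_id/port_id are already
set up):

	uint8_t queues[] = {0, 1, 2, 3};
	uint8_t prios[]  = {RTE_EVENT_DEV_PRIORITY_HIGHEST,
			    RTE_EVENT_DEV_PRIORITY_NORMAL,
			    RTE_EVENT_DEV_PRIORITY_NORMAL,
			    RTE_EVENT_DEV_PRIORITY_LOWEST};

	rte_event_port_link(dev_id, port_id, queues, prios, RTE_DIM(queues));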

Regards,
/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 3/6] eventdev: implement the northbound APIs
  2016-12-06 17:17       ` Bruce Richardson
@ 2016-12-07 17:02         ` Jerin Jacob
  2016-12-08  9:59           ` Bruce Richardson
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-12-07 17:02 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Tue, Dec 06, 2016 at 05:17:12PM +0000, Bruce Richardson wrote:
> On Tue, Dec 06, 2016 at 09:22:17AM +0530, Jerin Jacob wrote:
> > This patch implements the northbound eventdev API interface using
> > the southbound driver interface
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > ---
> > +		/* Re allocate memory to store queue priority */
> > +		queues_prio = dev->data->queues_prio;
> > +		queues_prio = rte_realloc(queues_prio,
> > +				sizeof(queues_prio[0]) * nb_queues,
> > +				RTE_CACHE_LINE_SIZE);
> > +		if (queues_prio == NULL) {
> > +			RTE_EDEV_LOG_ERR("failed to realloc queue priority,"
> > +						" nb_queues %u", nb_queues);
> > +			return -(ENOMEM);
> > +		}
> > +		dev->data->queues_prio = queues_prio;
> > +
> > +		if (nb_queues > old_nb_queues) {
> > +			uint8_t new_qs = nb_queues - old_nb_queues;
> > +
> > +			memset(queues + old_nb_queues, 0,
> > +				sizeof(queues[0]) * new_qs);
> > +			memset(queues_prio + old_nb_queues, 0,
> > +				sizeof(queues_prio[0]) * new_qs);
> > +		}
> > +	} else if (dev->data->queues != NULL && nb_queues == 0) {
> > +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
> > +
> > +		queues = dev->data->queues;
> > +		for (i = nb_queues; i < old_nb_queues; i++)
> > +			(*dev->dev_ops->queue_release)(queues[i]);
> > +	}
> > +
> > +	dev->data->nb_queues = nb_queues;
> > +	return 0;
> > +}
> > +
> While the ports array makes sense to have available at the top level of
> the API and allocated from rte_eventdev.c, I'm not seeing what the value
> of having the queues allocated at that level is. The only time the queue
> array is indexed by eventdev layer is when releasing a queue. Therefore,
> I suggest just saving the number of queues for sanity checking and let
> the queue array allocation and freeing be handled entirely in the
> drivers themselves.

I thought it would be useful for other drivers. I agree, If something is not
common across all the driver lets remove it from common code.
I will remove it in v3

> 
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-06 16:51       ` Bruce Richardson
@ 2016-12-07 18:53         ` Jerin Jacob
  2016-12-08  9:30           ` Bruce Richardson
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-12-07 18:53 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Tue, Dec 06, 2016 at 04:51:19PM +0000, Bruce Richardson wrote:
> On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > In a polling model, lcores poll ethdev ports and associated
> > rx queues directly to look for packet. In an event driven model,
> > by contrast, lcores call the scheduler that selects packets for
> > them based on programmer-specified criteria. Eventdev library
> > adds support for event driven programming model, which offer
> > applications automatic multicore scaling, dynamic load balancing,
> > pipelining, packet ingress order maintenance and
> > synchronization services to simplify application packet processing.
> > 
> > By introducing event driven programming model, DPDK can support
> > both polling and event driven programming models for packet processing,
> > and applications are free to choose whatever model
> > (or combination of the two) that best suits their needs.
> > 
> > This patch adds the eventdev specification header file.
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > ---
> > +	/** WORD1 */
> > +	RTE_STD_C11
> > +	union {
> > +		uint64_t u64;
> > +		/**< Opaque 64-bit value */
> > +		uintptr_t event_ptr;
> > +		/**< Opaque event pointer */
> 
> Since we have a uint64_t member of the union, might this be better as a
> void * rather than uintptr_t?

No strong opinion here. For me, uintptr_t looks clean,
but it is OK to change it to void * as per your input.

> 
> > +		struct rte_mbuf *mbuf;
> > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> > +	};
> > +};
> > +
> <snip>
> > +/**
> > + * Link multiple source event queues supplied in *rte_event_queue_link*
> > + * structure as *queue_id* to the destination event port designated by its
> > + * *port_id* on the event device designated by its *dev_id*.
> > + *
> > + * The link establishment shall enable the event port *port_id* from
> > + * receiving events from the specified event queue *queue_id*
> > + *
> > + * An event queue may link to one or more event ports.
> > + * The number of links can be established from an event queue to event port is
> > + * implementation defined.
> > + *
> > + * Event queue(s) to event port link establishment can be changed at runtime
> > + * without re-configuring the device to support scaling and to reduce the
> > + * latency of critical work by establishing the link with more event ports
> > + * at runtime.
> 
> I think this might need to be clarified. The device doesn't need to be
> reconfigured, but does it need to be stopped? In SW implementation, this
> affects how much we have to make things thread-safe. At minimum I think
> we should limit this to having only one thread call the function at a
> time, but we may allow enqueue dequeue ops from the data plane to run
> in parallel.

The Cavium implementation can change it at runtime without re-configuring or
stopping the device, to support runtime load balancing from the application
perspective.

AFAIK, link establishment is _NOT_ a fast path API, but the application
can invoke it from a worker thread whenever there is a need to re-wire
the queue to port connections for better explicit load balancing. IMO, a
software implementation with a lock is fine here as we don't use this in
the fast path.

Thoughts?
>
> > + *
> > + * @param dev_id
> > + *   The identifier of the device.
> > + *
> > + *
> > + */
> > +int
> > +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> > +		    const struct rte_event_queue_link link[],
> > +		    uint16_t nb_links);
> > +
> > +/**
> > + * Unlink multiple source event queues supplied in *queues* from the destination
> > + * event port designated by its *port_id* on the event device designated
> > + * by its *dev_id*.
> > + *
> > + * The unlink establishment shall disable the event port *port_id* from
> > + * receiving events from the specified event queue *queue_id*
> > + *
> > + * Event queue(s) to event port unlink establishment can be changed at runtime
> > + * without re-configuring the device.
> 
> Clarify, as above with link call.

Same as above.

> 
> > + *

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-07 10:57       ` Van Haaren, Harry
@ 2016-12-08  1:24         ` Jerin Jacob
  2016-12-08 11:02           ` Van Haaren, Harry
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-12-08  1:24 UTC (permalink / raw)
  To: Van Haaren, Harry
  Cc: dev, thomas.monjalon, Richardson, Bruce, hemant.agrawal, Eads, Gage

On Wed, Dec 07, 2016 at 10:57:13AM +0000, Van Haaren, Harry wrote:
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> 
> Hi Jerin,

Hi Harry,

> 
> Re v2 rte_event struct, there seem to be some changes in the struct layout and field sizes. I've investigated them, and would like to propose some changes to balance the byte-alignment and access of the fields.

OK. Looks like balanced byte-alignment makes more sense on IA.We will go with that then.
Few comments below,


> 
> These changes target only the first 64 bits of the rte_event struct. I've left the current v2 code for reference, please find my proposed changes below.
> 
> > +struct rte_event {
> > +	/** WORD0 */
> > +	RTE_STD_C11
> > +	union {
> > +		uint64_t event;
> > +		/** Event attributes for dequeue or enqueue operation */
> > +		struct {
> > +			uint64_t flow_id:20;
> > +			/**< Targeted flow identifier for the enqueue and
> > +			 * dequeue operation.
> > +			 * The value must be in the range of
> > +			 * [0, nb_event_queue_flows - 1] which
> > +			 * previously supplied to rte_event_dev_configure().
> > +			 */
> > +			uint64_t sub_event_type:8;
> > +			/**< Sub-event types based on the event source.
> > +			 * @see RTE_EVENT_TYPE_CPU
> > +			 */
> > +			uint64_t event_type:4;
> > +			/**< Event type to classify the event source.
> > +			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
> > +			 */
> > +			uint64_t sched_type:2;
> > +			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
> > +			 * associated with flow id on a given event queue
> > +			 * for the enqueue and dequeue operation.
> > +			 */
> > +			uint64_t queue_id:8;
> > +			/**< Targeted event queue identifier for the enqueue or
> > +			 * dequeue operation.
> > +			 * The value must be in the range of
> > +			 * [0, nb_event_queues - 1] which previously supplied to
> > +			 * rte_event_dev_configure().
> > +			 */
> > +			uint64_t priority:8;
> > +			/**< Event priority relative to other events in the
> > +			 * event queue. The requested priority should in the
> > +			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
> > +			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
> > +			 * The implementation shall normalize the requested
> > +			 * priority to supported priority value.
> > +			 * Valid when the device has
> > +			 * RTE_EVENT_DEV_CAP_FLAG_EVENT_QOS capability.
> > +			 */
> > +			uint64_t op:2;
> > +			/**< The type of event enqueue operation - new/forward/
> > +			 * etc.This field is not preserved across an instance
> > +			 * and is undefined on dequeue.
> > +			 *  @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
> > +			 */
> > +			uint64_t impl_opaque:12;
> > +			/**< Implementation specific opaque value.
> > +			 * An implementation may use this field to hold
> > +			 * implementation specific value to share between
> > +			 * dequeue and enqueue operation.
> > +			 * The application should not modify this field.
> > +			 */
> > +		};
> > +	};
> 
> struct rte_event {
> 	/** WORD0 */
> 	RTE_STD_C11
> 	union {
> 		uint64_t event;
> 		struct {
> 			uint32_t flow_id: 24;
> 			uint32_t impl_opaque : 8; /* not defined on deq */
> 
> 			uint8_t queue_id;
> 			uint8_t priority;
> 
> 			uint8_t operation  : 4; /* new fwd drop */
> 			uint8_t sched_type : 4;
> 
> 			uint8_t event_type : 4;
> 			uint8_t sub_event_type : 4;
> 		};
> 	};
> 	/** word 1 */
> <snip>
> 
> 
> The changes made are as follows:
> * Add impl_opaque to the remaining 8 bits of those 32 bits (previous size was 12 bits)
OK
> 
> * QueueID and Priority remain 8 bit integers - but now accessible as 8 bit ints.
OK
> 
> * Operation and sched_type *increased* to 4 bits each (from previous value of 2) to allow future expansion without ABI changes

Anyway, it will break the ABI if we add a new operation. I would propose to keep
4 bits reserved and add them when required.

> 
> * Event type remains constant at 4 bits

OK

> * Restore flow_id to 24 bits of a 32 bit int (previous size was 20 bits)
> * sub-event-type reduced to 4 bits (previous value was 8 bits). Can we think of situations where 16 values for application specified identifiers of each event-type is genuinely not enough?
One packet will not go beyond 16 stages, but an application may have more stages
and each packet may go through mutually exclusive stages. For example,

packet 0: stagex_0 -> stagex_1
packet 1: stagey_0 -> stagey_1

In that sense, IMO, more than 16 is required. (AFAIK, VPP has a much larger limit on the number of stages.)
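As a rough illustration (the stage names below are made up): two mutually
exclusive pipelines share the sub_event_type space of a single event_type,
so the total number of stage identifiers an application needs can exceed
what any one packet ever passes through:

	/* Hypothetical per-event_type stage identifiers carried in
	 * sub_event_type; a packet takes either the X or the Y pipeline,
	 * but the identifier space is shared between them, so
	 * STAGE_X_0..STAGE_X_9 plus STAGE_Y_0..STAGE_Y_9 already need
	 * 20 values even though each packet only sees 10.
	 */
	enum app_stage {
		STAGE_X_0, STAGE_X_1, /* ... */ STAGE_X_9,
		STAGE_Y_0, STAGE_Y_1, /* ... */ STAGE_Y_9,
	};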

> 
> In my opinion this structure layout is more balanced, and will perform better due to fewer loads that need masking to access the required value.
OK. Considering the more balanced layout and the above points, I propose the following scheme (based on your input):

	union {
		uint64_t event;
		struct {
			uint32_t flow_id: 20;
			uint32_t sub_event_type : 8;
			uint32_t event_type : 4;

			uint8_t rsvd: 4; /* for future additions */
			uint8_t operation  : 2; /* new fwd drop */
			uint8_t sched_type : 2;

			uint8_t queue_id;
			uint8_t priority;
			uint8_t impl_opaque;
		};
	};
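A compile-time guard in the same spirit as the existing 16-byte check could
keep this layout honest across future tweaks (a sketch only; the helper name
and where it lives are illustrative):

	static inline void
	rte_event_layout_sanity(void)
	{
		/* fails the build if the WORD0 rearrangement grows the event */
		RTE_BUILD_BUG_ON(sizeof(struct rte_event) != 16);
	}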

Feedback and improvements welcomed,

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-07 11:12       ` Bruce Richardson
@ 2016-12-08  1:48         ` Jerin Jacob
  2016-12-08  9:57           ` Bruce Richardson
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-12-08  1:48 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Wed, Dec 07, 2016 at 11:12:51AM +0000, Bruce Richardson wrote:
> On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > In a polling model, lcores poll ethdev ports and associated
> > rx queues directly to look for packets. In an event driven model,
> > by contrast, lcores call the scheduler that selects packets for
> > them based on programmer-specified criteria. The eventdev library
> > adds support for an event driven programming model, which offers
> > applications automatic multicore scaling, dynamic load balancing,
> > pipelining, packet ingress order maintenance and
> > synchronization services to simplify application packet processing.
> > 
> > By introducing an event driven programming model, DPDK can support
> > both polling and event driven programming models for packet processing,
> > and applications are free to choose whichever model
> > (or combination of the two) best suits their needs.
> > 
> > This patch adds the eventdev specification header file.
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > ---
> > +
> > +/**
> > + * Link multiple source event queues supplied in *rte_event_queue_link*
> > + * structure as *queue_id* to the destination event port designated by its
> > + * *port_id* on the event device designated by its *dev_id*.
> > + *
> > + * The link establishment shall enable the event port *port_id* from
> > + * receiving events from the specified event queue *queue_id*
> > + *
> > + * An event queue may link to one or more event ports.
> > + * The number of links can be established from an event queue to event port is
> > + * implementation defined.
> > + *
> > + * Event queue(s) to event port link establishment can be changed at runtime
> > + * without re-configuring the device to support scaling and to reduce the
> > + * latency of critical work by establishing the link with more event ports
> > + * at runtime.
> > + *
> > + * @param dev_id
> > + *   The identifier of the device.
> > + *
> > + * @param port_id
> > + *   Event port identifier to select the destination port to link.
> > + *
> > + * @param link
> > + *   Points to an array of *nb_links* objects of type *rte_event_queue_link*
> > + *   structure which contain the event queue to event port link establishment
> > + *   attributes.
> > + *   NULL value is allowed, in which case this function links all the configured
> > + *   event queues *nb_event_queues* which previously supplied to
> > + *   rte_event_dev_configure() to the event port *port_id* with normal servicing
> > + *   priority(RTE_EVENT_DEV_PRIORITY_NORMAL).
> > + *
> > + * @param nb_links
> > + *   The number of links to establish
> > + *
> > + * @return
> > + * The number of links actually established. The return value can be less than
> > + * the value of the *nb_links* parameter when the implementation has the
> > + * limitation on specific queue to port link establishment or if invalid
> > + * parameters are specified in a *rte_event_queue_link*.
> > + * If the return value is less than *nb_links*, the remaining links at the end
> > + * of link[] are not established, and the caller has to take care of them.
> > + * If return value is less than *nb_links* then implementation shall update the
> > + * rte_errno accordingly, Possible rte_errno values are
> > + * (-EDQUOT) Quota exceeded(Application tried to link the queue configured with
> > + *  RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK to more than one event ports)
> > + * (-EINVAL) Invalid parameter
> > + *
> > + */
> > +int
> > +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> > +		    const struct rte_event_queue_link link[],
> > +		    uint16_t nb_links);
> > +
> 
> Hi again Jerin,
> 
> another small suggestion here. I'm not a big fan of using small
> structures to pass parameters into functions, especially when not all
> fields are always going to be used. Rather than use the event queue link
> structure, can we just pass in two array parameters here - the list of
> QIDs, and the list of priorities. In cases where the eventdev
> implementation does not support link prioritization, or where the app
> does not want different priority mappings , then the second
> array can be null [implying NORMAL priority for the don't care case].
> 
> 	int
> 	rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> 		const uint8_t queues[], const uint8_t priorities[],
> 		uint16_t nb_queues);
> 
> This just makes mapping an array of queues easier, as we can just pass
> an array of ints directly in, and it especially makes it easier to
> create a single link via:
> 
>   rte_event_port_link(dev_id, port_id, &queue_id, NULL, 1);

The reasons why I thought of creating "struct rte_event_queue_link":
- It's easy to add new parameters to the link attributes if required
- It's _easy_ to implement PAUSE and RESUME in the application

PAUSE:
nr_links = rte_event_port_links_get(,,link)
rte_event_port_unlink_all

RESUME:
rte_event_port_link(,,link, nr_links);

No strong opinion here. I will go with your proposal then

> 
> Regards,
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-07 18:53         ` Jerin Jacob
@ 2016-12-08  9:30           ` Bruce Richardson
  2016-12-08 20:41             ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-12-08  9:30 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Thu, Dec 08, 2016 at 12:23:03AM +0530, Jerin Jacob wrote:
> On Tue, Dec 06, 2016 at 04:51:19PM +0000, Bruce Richardson wrote:
> > On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > > In a polling model, lcores poll ethdev ports and associated
> > > rx queues directly to look for packet. In an event driven model,
> > > by contrast, lcores call the scheduler that selects packets for
> > > them based on programmer-specified criteria. Eventdev library
> > > adds support for event driven programming model, which offer
> > > applications automatic multicore scaling, dynamic load balancing,
> > > pipelining, packet ingress order maintenance and
> > > synchronization services to simplify application packet processing.
> > > 
> > > By introducing event driven programming model, DPDK can support
> > > both polling and event driven programming models for packet processing,
> > > and applications are free to choose whatever model
> > > (or combination of the two) that best suits their needs.
> > > 
> > > This patch adds the eventdev specification header file.
> > > 
> > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > ---
> > > +	/** WORD1 */
> > > +	RTE_STD_C11
> > > +	union {
> > > +		uint64_t u64;
> > > +		/**< Opaque 64-bit value */
> > > +		uintptr_t event_ptr;
> > > +		/**< Opaque event pointer */
> > 
> > Since we have a uint64_t member of the union, might this be better as a
> > void * rather than uintptr_t?
> 
> No strong opinion here. For me, uintptr_t looks clean.
> But, It is OK to change to void* as per your input.
> 
> > 
> > > +		struct rte_mbuf *mbuf;
> > > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> > > +	};
> > > +};
> > > +
> > <snip>
> > > +/**
> > > + * Link multiple source event queues supplied in *rte_event_queue_link*
> > > + * structure as *queue_id* to the destination event port designated by its
> > > + * *port_id* on the event device designated by its *dev_id*.
> > > + *
> > > + * The link establishment shall enable the event port *port_id* from
> > > + * receiving events from the specified event queue *queue_id*
> > > + *
> > > + * An event queue may link to one or more event ports.
> > > + * The number of links can be established from an event queue to event port is
> > > + * implementation defined.
> > > + *
> > > + * Event queue(s) to event port link establishment can be changed at runtime
> > > + * without re-configuring the device to support scaling and to reduce the
> > > + * latency of critical work by establishing the link with more event ports
> > > + * at runtime.
> > 
> > I think this might need to be clarified. The device doesn't need to be
> > reconfigured, but does it need to be stopped? In SW implementation, this
> > affects how much we have to make things thread-safe. At minimum I think
> > we should limit this to having only one thread call the function at a
> > time, but we may allow enqueue dequeue ops from the data plane to run
> > in parallel.
> 
> Cavium implementation can change it at runtime without re-configuring or stopping
> the device to support runtime load balancing from the application perspective.
> 
> AFAIK, link establishment is _NOT_ fast path API. But the application
> can invoke it from worker thread whenever there is a need for re-wiring
> the queue to port connection for better explicit load balancing. IMO, A
> software implementation with lock is fine here as we don't use this in
> fastpath.
> 
> Thoughts?
> >

I agree that it's obviously not fast-path. Therefore I suggest that we
document that this API should be safe to call while the data path is in
operation, but that it should not be called by multiple cores
simultaneously i.e. single-writer, multi-reader safe, but not
multi-writer safe. Does that seem reasonable to you?

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-08  1:48         ` Jerin Jacob
@ 2016-12-08  9:57           ` Bruce Richardson
  2016-12-14  6:40             ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-12-08  9:57 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Thu, Dec 08, 2016 at 07:18:01AM +0530, Jerin Jacob wrote:
> On Wed, Dec 07, 2016 at 11:12:51AM +0000, Bruce Richardson wrote:
> > On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > > In a polling model, lcores poll ethdev ports and associated
> > > rx queues directly to look for packet. In an event driven model,
> > > by contrast, lcores call the scheduler that selects packets for
> > > them based on programmer-specified criteria. Eventdev library
> > > adds support for event driven programming model, which offer
> > > applications automatic multicore scaling, dynamic load balancing,
> > > pipelining, packet ingress order maintenance and
> > > synchronization services to simplify application packet processing.
> > > 
> > > By introducing event driven programming model, DPDK can support
> > > both polling and event driven programming models for packet processing,
> > > and applications are free to choose whatever model
> > > (or combination of the two) that best suits their needs.
> > > 
> > > This patch adds the eventdev specification header file.
> > > 
> > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > ---
> > > +
> > > +/**
> > > + * Link multiple source event queues supplied in *rte_event_queue_link*
> > > + * structure as *queue_id* to the destination event port designated by its
> > > + * *port_id* on the event device designated by its *dev_id*.
> > > + *
> > > + * The link establishment shall enable the event port *port_id* from
> > > + * receiving events from the specified event queue *queue_id*
> > > + *
> > > + * An event queue may link to one or more event ports.
> > > + * The number of links can be established from an event queue to event port is
> > > + * implementation defined.
> > > + *
> > > + * Event queue(s) to event port link establishment can be changed at runtime
> > > + * without re-configuring the device to support scaling and to reduce the
> > > + * latency of critical work by establishing the link with more event ports
> > > + * at runtime.
> > > + *
> > > + * @param dev_id
> > > + *   The identifier of the device.
> > > + *
> > > + * @param port_id
> > > + *   Event port identifier to select the destination port to link.
> > > + *
> > > + * @param link
> > > + *   Points to an array of *nb_links* objects of type *rte_event_queue_link*
> > > + *   structure which contain the event queue to event port link establishment
> > > + *   attributes.
> > > + *   NULL value is allowed, in which case this function links all the configured
> > > + *   event queues *nb_event_queues* which previously supplied to
> > > + *   rte_event_dev_configure() to the event port *port_id* with normal servicing
> > > + *   priority(RTE_EVENT_DEV_PRIORITY_NORMAL).
> > > + *
> > > + * @param nb_links
> > > + *   The number of links to establish
> > > + *
> > > + * @return
> > > + * The number of links actually established. The return value can be less than
> > > + * the value of the *nb_links* parameter when the implementation has the
> > > + * limitation on specific queue to port link establishment or if invalid
> > > + * parameters are specified in a *rte_event_queue_link*.
> > > + * If the return value is less than *nb_links*, the remaining links at the end
> > > + * of link[] are not established, and the caller has to take care of them.
> > > + * If return value is less than *nb_links* then implementation shall update the
> > > + * rte_errno accordingly, Possible rte_errno values are
> > > + * (-EDQUOT) Quota exceeded(Application tried to link the queue configured with
> > > + *  RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK to more than one event ports)
> > > + * (-EINVAL) Invalid parameter
> > > + *
> > > + */
> > > +int
> > > +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> > > +		    const struct rte_event_queue_link link[],
> > > +		    uint16_t nb_links);
> > > +
> > 
> > Hi again Jerin,
> > 
> > another small suggestion here. I'm not a big fan of using small
> > structures to pass parameters into functions, especially when not all
> > fields are always going to be used. Rather than use the event queue link
> > structure, can we just pass in two array parameters here - the list of
> > QIDs, and the list of priorities. In cases where the eventdev
> > implementation does not support link prioritization, or where the app
> > does not want different priority mappings , then the second
> > array can be null [implying NORMAL priority for the don't care case].
> > 
> > 	int
> > 	rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> > 		const uint8_t queues[], const uint8_t priorities[],
> > 		uint16_t nb_queues);
> > 
> > This just makes mapping an array of queues easier, as we can just pass
> > an array of ints directly in, and it especially makes it easier to
> > create a single link via:
> > 
> >   rte_event_port_link(dev_id, port_id, &queue_id, NULL, 1);
> 
> The reason why I thought of creating "struct rte_event_queue_link",
> - Its easy to add new parameter in link attributes if required

Make the priority value be in a struct, perhaps. That would allow for
future expansion, while still making it easier for the case where people
just want the mappings without any prioritization.

> - Its _easy_ to implement PAUSE and RESUME in application
> 
> PAUSE:
> nr_links = rte_event_port_links_get(,,link)
> rte_event_port_unlink_all
> 
> RESUME:
> rte_event_port_link(,,link, nr_links);

Ok, I had missed that implication. Since that is probably an important
operation we might want to do, perhaps the links_get API should be updated
too so the parameters match.

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 3/6] eventdev: implement the northbound APIs
  2016-12-07 17:02         ` Jerin Jacob
@ 2016-12-08  9:59           ` Bruce Richardson
  2016-12-14  6:28             ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-12-08  9:59 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Wed, Dec 07, 2016 at 10:32:56PM +0530, Jerin Jacob wrote:
> On Tue, Dec 06, 2016 at 05:17:12PM +0000, Bruce Richardson wrote:
> > On Tue, Dec 06, 2016 at 09:22:17AM +0530, Jerin Jacob wrote:
> > > This patch implements northbound eventdev API interface using
> > > southbond driver interface
> > > 
> > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > ---
> > > +		/* Re allocate memory to store queue priority */
> > > +		queues_prio = dev->data->queues_prio;
> > > +		queues_prio = rte_realloc(queues_prio,
> > > +				sizeof(queues_prio[0]) * nb_queues,
> > > +				RTE_CACHE_LINE_SIZE);
> > > +		if (queues_prio == NULL) {
> > > +			RTE_EDEV_LOG_ERR("failed to realloc queue priority,"
> > > +						" nb_queues %u", nb_queues);
> > > +			return -(ENOMEM);
> > > +		}
> > > +		dev->data->queues_prio = queues_prio;
> > > +
> > > +		if (nb_queues > old_nb_queues) {
> > > +			uint8_t new_qs = nb_queues - old_nb_queues;
> > > +
> > > +			memset(queues + old_nb_queues, 0,
> > > +				sizeof(queues[0]) * new_qs);
> > > +			memset(queues_prio + old_nb_queues, 0,
> > > +				sizeof(queues_prio[0]) * new_qs);
> > > +		}
> > > +	} else if (dev->data->queues != NULL && nb_queues == 0) {
> > > +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
> > > +
> > > +		queues = dev->data->queues;
> > > +		for (i = nb_queues; i < old_nb_queues; i++)
> > > +			(*dev->dev_ops->queue_release)(queues[i]);
> > > +	}
> > > +
> > > +	dev->data->nb_queues = nb_queues;
> > > +	return 0;
> > > +}
> > > +
> > While the ports array makes sense to have available at the top level of
> > the API and allocated from rte_eventdev.c, I'm not seeing what the value
> > of having the queues allocated at that level is. The only time the queue
> > array is indexed by eventdev layer is when releasing a queue. Therefore,
> > I suggest just saving the number of queues for sanity checking and let
> > the queue array allocation and freeing be handled entirely in the
> > drivers themselves.
> 
I thought it would be useful for other drivers. I agree: if something is not
common across all the drivers, let's remove it from common code.
I will remove it in v3.
> 
It's not a big deal for us - just an extra assignment we need to do in
our code path, so if it provides benefit for your driver, leave it in. I
just found it strange that that array was never really used by the
eventdev APIs, which is why I thought it might be better as internal
only.

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-08  1:24         ` Jerin Jacob
@ 2016-12-08 11:02           ` Van Haaren, Harry
  2016-12-14 13:13             ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Van Haaren, Harry @ 2016-12-08 11:02 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, Richardson, Bruce, hemant.agrawal, Eads, Gage

> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, December 8, 2016 1:24 AM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>

<snip>

> > * Operation and sched_type *increased* to 4 bits each (from previous value of 2) to
> allow future expansion without ABI changes
> 
> Anyway it will break ABI if we add new operation. I would propose to keep 4bit
> reserved and add it when required.

Ok, sounds good. I suggest moving it to the middle, between operation and sched_type, which would allow expanding operation without ABI breaks. On expansion, the field would remain in the same place with the same bits available there (no ABI break), but new bits can be added into the currently reserved space.


> > * Restore flow_id to 24 bits of a 32 bit int (previous size was 20 bits)
> > * sub-event-type reduced to 4 bits (previous value was 8 bits). Can we think of
> situations where 16 values for application specified identifiers of each event-type is
> genuinely not enough?
> One packet will not go beyond 16 stages but an application may have more stages and
> each packet may go mutually exclusive stages. For example,
> 
> packet 0: stagex_0 ->stagex_1
> packet 1: stagey_0 ->stagey_1
> 
> In that sense, IMO, more than 16 is required.(AFIAK, VPP has any much larger limit on
> number of stages)

My understanding was that stages are linked to event queues, so the application can determine the stage the packet comes from by reading queue_id?

I'm not opposed to having an 8 bit sub_event_type, but it seems unnecessarily large from my point of view. If you have a use for it, I'm ok with 8 bits.


> > In my opinion this structure layout is more balanced, and will perform better due to
> less loads that will need masking to access the required value.
> OK. Considering more balanced layout and above points. I propose following scheme(based on
> your input)
> 
> 	union {
> 		uint64_t event;
> 		struct {
> 			uint32_t flow_id: 20;
> 			uint32_t sub_event_type : 8;
> 			uint32_t event_type : 4;
> 
> 			uint8_t rsvd: 4; /* for future additions */
> 			uint8_t operation  : 2; /* new fwd drop */
> 			uint8_t sched_type : 2;
> 
> 			uint8_t queue_id;
> 			uint8_t priority;
> 			uint8_t impl_opaque;
> 		};
> 	};
> 
> Feedback and improvements welcomed,


So incorporating my latest suggestions on moving fields around, excluding sub_event_type *size* changes:

union {
	uint64_t event;
	struct {
		uint32_t flow_id: 20;
		uint32_t event_type : 4;
		uint32_t sub_event_type : 8; /* 8 bits now naturally aligned */

		uint8_t operation  : 2; /* new fwd drop */
		uint8_t rsvd: 4; /* for future additions, can be expanded into without ABI break */
		uint8_t sched_type : 2;

		uint8_t queue_id;
		uint8_t priority;
		uint8_t impl_opaque;
	};
};


-Harry

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-08  9:30           ` Bruce Richardson
@ 2016-12-08 20:41             ` Jerin Jacob
  2016-12-09 15:11               ` Bruce Richardson
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-12-08 20:41 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Thu, Dec 08, 2016 at 09:30:49AM +0000, Bruce Richardson wrote:
> On Thu, Dec 08, 2016 at 12:23:03AM +0530, Jerin Jacob wrote:
> > On Tue, Dec 06, 2016 at 04:51:19PM +0000, Bruce Richardson wrote:
> > > On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > > > In a polling model, lcores poll ethdev ports and associated
> > > > rx queues directly to look for packet. In an event driven model,
> > > > by contrast, lcores call the scheduler that selects packets for
> > > > them based on programmer-specified criteria. Eventdev library
> > > > adds support for event driven programming model, which offer
> > > > applications automatic multicore scaling, dynamic load balancing,
> > > > pipelining, packet ingress order maintenance and
> > > > synchronization services to simplify application packet processing.
> > > > 
> > > > By introducing event driven programming model, DPDK can support
> > > > both polling and event driven programming models for packet processing,
> > > > and applications are free to choose whatever model
> > > > (or combination of the two) that best suits their needs.
> > > > 
> > > > This patch adds the eventdev specification header file.
> > > > 
> > > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > ---
> > > > +	/** WORD1 */
> > > > +	RTE_STD_C11
> > > > +	union {
> > > > +		uint64_t u64;
> > > > +		/**< Opaque 64-bit value */
> > > > +		uintptr_t event_ptr;
> > > > +		/**< Opaque event pointer */
> > > 
> > > Since we have a uint64_t member of the union, might this be better as a
> > > void * rather than uintptr_t?
> > 
> > No strong opinion here. For me, uintptr_t looks clean.
> > But, It is OK to change to void* as per your input.
> > 
> > > 
> > > > +		struct rte_mbuf *mbuf;
> > > > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> > > > +	};
> > > > +};
> > > > +
> > > <snip>
> > > > +/**
> > > > + * Link multiple source event queues supplied in *rte_event_queue_link*
> > > > + * structure as *queue_id* to the destination event port designated by its
> > > > + * *port_id* on the event device designated by its *dev_id*.
> > > > + *
> > > > + * The link establishment shall enable the event port *port_id* from
> > > > + * receiving events from the specified event queue *queue_id*
> > > > + *
> > > > + * An event queue may link to one or more event ports.
> > > > + * The number of links can be established from an event queue to event port is
> > > > + * implementation defined.
> > > > + *
> > > > + * Event queue(s) to event port link establishment can be changed at runtime
> > > > + * without re-configuring the device to support scaling and to reduce the
> > > > + * latency of critical work by establishing the link with more event ports
> > > > + * at runtime.
> > > 
> > > I think this might need to be clarified. The device doesn't need to be
> > > reconfigured, but does it need to be stopped? In SW implementation, this
> > > affects how much we have to make things thread-safe. At minimum I think
> > > we should limit this to having only one thread call the function at a
> > > time, but we may allow enqueue dequeue ops from the data plane to run
> > > in parallel.
> > 
> > Cavium implementation can change it at runtime without re-configuring or stopping
> > the device to support runtime load balancing from the application perspective.
> > 
> > AFAIK, link establishment is _NOT_ fast path API. But the application
> > can invoke it from worker thread whenever there is a need for re-wiring
> > the queue to port connection for better explicit load balancing. IMO, A
> > software implementation with lock is fine here as we don't use this in
> > fastpath.
> > 
> > Thoughts?
> > >
> 
> I agree that it's obviously not fast-path. Therefore I suggest that we
> document that this API should be safe to call while the data path is in
> operation, but that it should not be called by multiple cores
> simultaneously i.e. single-writer, multi-reader safe, but not
> multi-writer safe. Does that seem reasonable to you?

If I understand it correctly, per "event port" there will be ONLY ONE
writer at a time.

i.e., in the valid case, the following two can be invoked in parallel:
rte_event_port_link(dev_id, 0 /*port_id*/,..)
rte_event_port_link(dev_id, 1 /*port_id*/,..)

But rte_event_port_link() is not invoked on the _same_ event port in parallel.

Are we on the same page?

Jerin 

> 
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-08 20:41             ` Jerin Jacob
@ 2016-12-09 15:11               ` Bruce Richardson
  2016-12-14  6:55                 ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-12-09 15:11 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Fri, Dec 09, 2016 at 02:11:15AM +0530, Jerin Jacob wrote:
> On Thu, Dec 08, 2016 at 09:30:49AM +0000, Bruce Richardson wrote:
> > On Thu, Dec 08, 2016 at 12:23:03AM +0530, Jerin Jacob wrote:
> > > On Tue, Dec 06, 2016 at 04:51:19PM +0000, Bruce Richardson wrote:
> > > > On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > > > > In a polling model, lcores poll ethdev ports and associated
> > > > > rx queues directly to look for packet. In an event driven model,
> > > > > by contrast, lcores call the scheduler that selects packets for
> > > > > them based on programmer-specified criteria. Eventdev library
> > > > > adds support for event driven programming model, which offer
> > > > > applications automatic multicore scaling, dynamic load balancing,
> > > > > pipelining, packet ingress order maintenance and
> > > > > synchronization services to simplify application packet processing.
> > > > > 
> > > > > By introducing event driven programming model, DPDK can support
> > > > > both polling and event driven programming models for packet processing,
> > > > > and applications are free to choose whatever model
> > > > > (or combination of the two) that best suits their needs.
> > > > > 
> > > > > This patch adds the eventdev specification header file.
> > > > > 
> > > > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > ---
> > > > > +	/** WORD1 */
> > > > > +	RTE_STD_C11
> > > > > +	union {
> > > > > +		uint64_t u64;
> > > > > +		/**< Opaque 64-bit value */
> > > > > +		uintptr_t event_ptr;
> > > > > +		/**< Opaque event pointer */
> > > > 
> > > > Since we have a uint64_t member of the union, might this be better as a
> > > > void * rather than uintptr_t?
> > > 
> > > No strong opinion here. For me, uintptr_t looks clean.
> > > But, It is OK to change to void* as per your input.
> > > 
> > > > 
> > > > > +		struct rte_mbuf *mbuf;
> > > > > +		/**< mbuf pointer if dequeued event is associated with mbuf */
> > > > > +	};
> > > > > +};
> > > > > +
> > > > <snip>
> > > > > +/**
> > > > > + * Link multiple source event queues supplied in *rte_event_queue_link*
> > > > > + * structure as *queue_id* to the destination event port designated by its
> > > > > + * *port_id* on the event device designated by its *dev_id*.
> > > > > + *
> > > > > + * The link establishment shall enable the event port *port_id* from
> > > > > + * receiving events from the specified event queue *queue_id*
> > > > > + *
> > > > > + * An event queue may link to one or more event ports.
> > > > > + * The number of links can be established from an event queue to event port is
> > > > > + * implementation defined.
> > > > > + *
> > > > > + * Event queue(s) to event port link establishment can be changed at runtime
> > > > > + * without re-configuring the device to support scaling and to reduce the
> > > > > + * latency of critical work by establishing the link with more event ports
> > > > > + * at runtime.
> > > > 
> > > > I think this might need to be clarified. The device doesn't need to be
> > > > reconfigured, but does it need to be stopped? In SW implementation, this
> > > > affects how much we have to make things thread-safe. At minimum I think
> > > > we should limit this to having only one thread call the function at a
> > > > time, but we may allow enqueue dequeue ops from the data plane to run
> > > > in parallel.
> > > 
> > > Cavium implementation can change it at runtime without re-configuring or stopping
> > > the device to support runtime load balancing from the application perspective.
> > > 
> > > AFAIK, link establishment is _NOT_ fast path API. But the application
> > > can invoke it from worker thread whenever there is a need for re-wiring
> > > the queue to port connection for better explicit load balancing. IMO, A
> > > software implementation with lock is fine here as we don't use this in
> > > fastpath.
> > > 
> > > Thoughts?
> > > >
> > 
> > I agree that it's obviously not fast-path. Therefore I suggest that we
> > document that this API should be safe to call while the data path is in
> > operation, but that it should not be called by multiple cores
> > simultaneously i.e. single-writer, multi-reader safe, but not
> > multi-writer safe. Does that seem reasonable to you?
> 
> If I understand it correctly, per "event port" their will be ONLY ONE
> writer at time.
> 
> i.e, In the valid case, Following two can be invoked in parallel
> rte_event_port_link(dev_id, 0 /*port_id*/,..)
> rte_event_port_link(dev_id, 1 /*port_id*/,..)
> 
> But, not invoking rte_event_port_link() on the _same_ event port in parallel
> 
> Are we on same page?
> 
> Jerin 
> 
Not entirely. Since our current software implementation pushes the events
from the internal queues to the ports, rather than having the ports pull
the events, the links are tracked at the qid level rather than at the
port one. So having two link operations on two separate ports at the
same time could actually conflict for us, because they attempt to modify
the mappings for the same queue. That's why for us the number of
simultaneous link calls is important.
However, given that this is not fast-path, we can probably work around
this with locking internally. The main ask is that we explicitly
document what are the expected safe and unsafe conditions under which
this call can be made.

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 3/6] eventdev: implement the northbound APIs
  2016-12-08  9:59           ` Bruce Richardson
@ 2016-12-14  6:28             ` Jerin Jacob
  0 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-14  6:28 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Thu, Dec 08, 2016 at 09:59:37AM +0000, Bruce Richardson wrote:
> On Wed, Dec 07, 2016 at 10:32:56PM +0530, Jerin Jacob wrote:
> > On Tue, Dec 06, 2016 at 05:17:12PM +0000, Bruce Richardson wrote:
> > > On Tue, Dec 06, 2016 at 09:22:17AM +0530, Jerin Jacob wrote:
> > > > This patch implements northbound eventdev API interface using
> > > > southbond driver interface
> > > > 
> > > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > ---
> > > > +		/* Re allocate memory to store queue priority */
> > > > +		queues_prio = dev->data->queues_prio;
> > > > +		queues_prio = rte_realloc(queues_prio,
> > > > +				sizeof(queues_prio[0]) * nb_queues,
> > > > +				RTE_CACHE_LINE_SIZE);
> > > > +		if (queues_prio == NULL) {
> > > > +			RTE_EDEV_LOG_ERR("failed to realloc queue priority,"
> > > > +						" nb_queues %u", nb_queues);
> > > > +			return -(ENOMEM);
> > > > +		}
> > > > +		dev->data->queues_prio = queues_prio;
> > > > +
> > > > +		if (nb_queues > old_nb_queues) {
> > > > +			uint8_t new_qs = nb_queues - old_nb_queues;
> > > > +
> > > > +			memset(queues + old_nb_queues, 0,
> > > > +				sizeof(queues[0]) * new_qs);
> > > > +			memset(queues_prio + old_nb_queues, 0,
> > > > +				sizeof(queues_prio[0]) * new_qs);
> > > > +		}
> > > > +	} else if (dev->data->queues != NULL && nb_queues == 0) {
> > > > +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
> > > > +
> > > > +		queues = dev->data->queues;
> > > > +		for (i = nb_queues; i < old_nb_queues; i++)
> > > > +			(*dev->dev_ops->queue_release)(queues[i]);
> > > > +	}
> > > > +
> > > > +	dev->data->nb_queues = nb_queues;
> > > > +	return 0;
> > > > +}
> > > > +
> > > While the ports array makes sense to have available at the top level of
> > > the API and allocated from rte_eventdev.c, I'm not seeing what the value
> > > of having the queues allocated at that level is. The only time the queue
> > > array is indexed by eventdev layer is when releasing a queue. Therefore,
> > > I suggest just saving the number of queues for sanity checking and let
> > > the queue array allocation and freeing be handled entirely in the
> > > drivers themselves.
> > 
> > I thought it would be useful for other drivers. I agree, If something is not
> > common across all the driver lets remove it from common code.
> > I will remove it in v3
> > 
> It's not a big deal for us - just an extra assignment we need to do in
> our code path, so if it provides benefit for your driver, leave it in. I

We don't use it either. I will remove it in v3

> just found it strange that that array was never really used by the
> eventdev APIs, which is why I thought it might be better as internal
> only.
> 
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-08  9:57           ` Bruce Richardson
@ 2016-12-14  6:40             ` Jerin Jacob
  0 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-14  6:40 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Thu, Dec 08, 2016 at 09:57:52AM +0000, Bruce Richardson wrote:
> On Thu, Dec 08, 2016 at 07:18:01AM +0530, Jerin Jacob wrote:
> > On Wed, Dec 07, 2016 at 11:12:51AM +0000, Bruce Richardson wrote:
> > > On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > > > + */
> > > > +int
> > > > +rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> > > > +		    const struct rte_event_queue_link link[],
> > > > +		    uint16_t nb_links);
> > > > +
> > > 
> > > Hi again Jerin,
> > > 
> > > another small suggestion here. I'm not a big fan of using small
> > > structures to pass parameters into functions, especially when not all
> > > fields are always going to be used. Rather than use the event queue link
> > > structure, can we just pass in two array parameters here - the list of
> > > QIDs, and the list of priorities. In cases where the eventdev
> > > implementation does not support link prioritization, or where the app
> > > does not want different priority mappings , then the second
> > > array can be null [implying NORMAL priority for the don't care case].
> > > 
> > > 	int
> > > 	rte_event_port_link(uint8_t dev_id, uint8_t port_id,
> > > 		const uint8_t queues[], const uint8_t priorities[],
> > > 		uint16_t nb_queues);
> > > 
> > > This just makes mapping an array of queues easier, as we can just pass
> > > an array of ints directly in, and it especially makes it easier to
> > > create a single link via:
> > > 
> > >   rte_event_port_link(dev_id, port_id, &queue_id, NULL, 1);
> > 
> > The reason why I thought of creating "struct rte_event_queue_link",
> > - Its easy to add new parameter in link attributes if required
> 
> Make the priority value be in a struct, perhaps. That would allow for
> future expansion, while still making it easier for the case where people
> just want the mappings without any prioritization.

OK. I will change the API prototype to align with your proposal in v3

int
rte_event_port_link(uint8_t dev_id, uint8_t port_id,
	const uint8_t queues[], const uint8_t priorities[],
	uint16_t nb_links);

int
rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
	uint8_t queues[], uint8_t priorities[]);
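
For reference, a minimal sketch of the PAUSE/RESUME idea from the earlier mail
on top of these prototypes. The MAX_QUEUES bound and the rte_event_port_unlink()
call taking the saved queue list are assumptions here, not part of this proposal:

	static void
	port_pause_resume_example(uint8_t dev_id, uint8_t port_id)
	{
		uint8_t queues[MAX_QUEUES];     /* MAX_QUEUES: assumed upper bound */
		uint8_t priorities[MAX_QUEUES];
		int nr_links;

		/* PAUSE: remember the current links, then remove them */
		nr_links = rte_event_port_links_get(dev_id, port_id, queues, priorities);
		rte_event_port_unlink(dev_id, port_id, queues, nr_links);

		/* ... port is paused here ... */

		/* RESUME: re-establish the saved links with the same priorities */
		rte_event_port_link(dev_id, port_id, queues, priorities, nr_links);
	}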

> 
> > - Its _easy_ to implement PAUSE and RESUME in application
> > 
> > PAUSE:
> > nr_links = rte_event_port_links_get(,,link)
> > rte_event_port_unlink_all
> > 
> > RESUME:
> > rte_event_port_link(,,link, nr_links);
> 
> Ok, I had missed that implication. Since that is probably an important
> operation we might want to do, perhaps links_get API should be updated
> too to keep parameter matching.
> 
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-09 15:11               ` Bruce Richardson
@ 2016-12-14  6:55                 ` Jerin Jacob
  0 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-14  6:55 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Fri, Dec 09, 2016 at 03:11:42PM +0000, Bruce Richardson wrote:
> On Fri, Dec 09, 2016 at 02:11:15AM +0530, Jerin Jacob wrote:
> > On Thu, Dec 08, 2016 at 09:30:49AM +0000, Bruce Richardson wrote:
> > > On Thu, Dec 08, 2016 at 12:23:03AM +0530, Jerin Jacob wrote:
> > > > On Tue, Dec 06, 2016 at 04:51:19PM +0000, Bruce Richardson wrote:
> > > > > On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > > > > I think this might need to be clarified. The device doesn't need to be
> > > > > reconfigured, but does it need to be stopped? In SW implementation, this
> > > > > affects how much we have to make things thread-safe. At minimum I think
> > > > > we should limit this to having only one thread call the function at a
> > > > > time, but we may allow enqueue dequeue ops from the data plane to run
> > > > > in parallel.
> > > > 
> > > > Cavium implementation can change it at runtime without re-configuring or stopping
> > > > the device to support runtime load balancing from the application perspective.
> > > > 
> > > > AFAIK, link establishment is _NOT_ fast path API. But the application
> > > > can invoke it from worker thread whenever there is a need for re-wiring
> > > > the queue to port connection for better explicit load balancing. IMO, A
> > > > software implementation with lock is fine here as we don't use this in
> > > > fastpath.
> > > > 
> > > > Thoughts?
> > > > >
> > > 
> > > I agree that it's obviously not fast-path. Therefore I suggest that we
> > > document that this API should be safe to call while the data path is in
> > > operation, but that it should not be called by multiple cores
> > > simultaneously i.e. single-writer, multi-reader safe, but not
> > > multi-writer safe. Does that seem reasonable to you?
> > 
> > If I understand it correctly, per "event port" their will be ONLY ONE
> > writer at time.
> > 
> > i.e, In the valid case, Following two can be invoked in parallel
> > rte_event_port_link(dev_id, 0 /*port_id*/,..)
> > rte_event_port_link(dev_id, 1 /*port_id*/,..)
> > 
> > But, not invoking rte_event_port_link() on the _same_ event port in parallel
> > 
> > Are we on same page?
> > 
> > Jerin 
> > 
> Not entirely. Since our current software implementation pushes the events
> from the internal queues to the ports, rather than having the ports pull
> the events, the links are tracked at the qid level rather than at the
> port one. So having two link operations on two separate ports at the
> same time could actually conflict for us, because they attempt to modify
> the mappings for the same queue. That's why for us the number of
> simultaneous link calls is important.
> However, given that this is not fast-path, we can probably work around
> this with locking internally. The main ask is that we explicitly

Yes, it is in the slow path. IMO, there is no harm in adding the lock internally,
and it helps the application too.
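
A minimal sketch of what that internal locking could look like in a software
PMD; the sw_evdev structure, the qid_links bitmap and sw_port_link() are
hypothetical names used only to illustrate the idea:

	#include <stdint.h>
	#include <rte_spinlock.h>

	struct sw_evdev {
		rte_spinlock_t link_lock;   /* serializes link/unlink on this device */
		uint64_t qid_links[64];     /* per-queue bitmap of linked ports */
	};

	static int
	sw_port_link(struct sw_evdev *sw, uint8_t port_id,
		     const uint8_t queues[], uint16_t nb_links)
	{
		uint16_t i;

		rte_spinlock_lock(&sw->link_lock);
		for (i = 0; i < nb_links; i++)
			sw->qid_links[queues[i]] |= UINT64_C(1) << port_id;
		rte_spinlock_unlock(&sw->link_lock);

		return nb_links;
	}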

> document what are the expected safe and unsafe conditions under which
> this call can be made.

As we agreed, and as is the norm in DPDK, operations on the same queue id
(in our case the same port id) _cannot_ be invoked in parallel.
Apart from the above constraint, let us know what other constraints you
want to add (if any).

> 
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-08 11:02           ` Van Haaren, Harry
@ 2016-12-14 13:13             ` Jerin Jacob
  2016-12-14 15:15               ` Bruce Richardson
  2016-12-15 16:54               ` Van Haaren, Harry
  0 siblings, 2 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-14 13:13 UTC (permalink / raw)
  To: Van Haaren, Harry
  Cc: dev, thomas.monjalon, Richardson, Bruce, hemant.agrawal, Eads, Gage

On Thu, Dec 08, 2016 at 11:02:16AM +0000, Van Haaren, Harry wrote:
> > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > Sent: Thursday, December 8, 2016 1:24 AM
> > To: Van Haaren, Harry <harry.van.haaren@intel.com>
> 
> <snip>
> 
> > > * Operation and sched_type *increased* to 4 bits each (from previous value of 2) to
> > allow future expansion without ABI changes
> > 
> > Anyway it will break ABI if we add new operation. I would propose to keep 4bit
> > reserved and add it when required.
> 
> Ok sounds good. I'll suggest to move it to the middle between operation or sched type, which would allow expanding operation without ABI breaks. On expanding the field would remain in the same place with the same bits available in that place (no ABI break), but new bits can be added into the currently reserved space.

OK. We will move the rsvd field as you suggested.

> 
> 
> > > * Restore flow_id to 24 bits of a 32 bit int (previous size was 20 bits)
> > > * sub-event-type reduced to 4 bits (previous value was 8 bits). Can we think of
> > situations where 16 values for application specified identifiers of each event-type is
> > genuinely not enough?
> > One packet will not go beyond 16 stages but an application may have more stages and
> > each packet may go mutually exclusive stages. For example,
> > 
> > packet 0: stagex_0 ->stagex_1
> > packet 1: stagey_0 ->stagey_1
> > 
> > In that sense, IMO, more than 16 is required.(AFIAK, VPP has any much larger limit on
> > number of stages)
> 
> My understanding was that stages are linked to event queues, so the application can determine the stage the packet comes from by reading queue_id?

That is one way of doing it, but it is limited by the number of queues and
therefore has scalability issues. Another approach is the
sub_event_type scheme, which does not depend on the number of queues.
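
As a purely illustrative sketch of that scheme, a worker could select the
processing stage from sub_event_type rather than from queue_id, so the number
of stages is not bounded by the number of event queues. The STAGE_* ids and
the process_stage_*() helpers below are hypothetical:

	#include <rte_eventdev.h>

	static void
	worker_loop(uint8_t dev_id, uint8_t port_id)
	{
		struct rte_event ev;

		while (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0)) {
			switch (ev.sub_event_type) {
			case STAGE_X_0:
				process_stage_x0(ev.mbuf);
				ev.sub_event_type = STAGE_X_1; /* next stage for this packet */
				break;
			case STAGE_Y_0:
				process_stage_y0(ev.mbuf);
				ev.sub_event_type = STAGE_Y_1;
				break;
			/* ... */
			}
			ev.operation = RTE_EVENT_OP_FORWARD;
			rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
		}
	}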

> 
> I'm not opposed to having an 8 bit sub_event_type, but it seems unnecessarily large from my point of view. If you have a use for it, I'm ok with 8 bits.

OK

> 
> 
> > > In my opinion this structure layout is more balanced, and will perform better due to
> > less loads that will need masking to access the required value.
> > OK. Considering more balanced layout and above points. I propose following scheme(based on
> > your input)
> > 
> > 	union {
> > 		uint64_t event;
> > 		struct {
> > 			uint32_t flow_id: 20;
> > 			uint32_t sub_event_type : 8;
> > 			uint32_t event_type : 4;
> > 
> > 			uint8_t rsvd: 4; /* for future additions */
> > 			uint8_t operation  : 2; /* new fwd drop */
> > 			uint8_t sched_type : 2;
> > 
> > 			uint8_t queue_id;
> > 			uint8_t priority;
> > 			uint8_t impl_opaque;
> > 		};
> > 	};
> > 
> > Feedback and improvements welcomed,
> 
> 
> So incorporating my latest suggestions on moving fields around, excluding sub_event_type *size* changes:
> 
> union {
> 	uint64_t event;
> 	struct {
> 		uint32_t flow_id: 20;
> 		uint32_t event_type : 4;
> 		uint32_t sub_event_type : 8; /* 8 bits now naturally aligned */

Just one suggestion here. I am not sure about the correct split of bits between
the flow_id and sub_event_type fields, and they are
connected in our implementation, so I propose to move sub_event_type up so
that a future flow_id/sub_event_type bit-field size change request won't
impact our implementation. Since it is represented as 32 bits as a whole, I
don't think there is an alignment issue.

So incorporating my latest suggestions on moving sub_event_type field around:

union {
	uint64_t event;
	struct {
		uint32_t flow_id: 20;
		uint32_t sub_event_type : 8;
		uint32_t event_type : 4;

		uint8_t operation  : 2; /* new fwd drop */
		uint8_t rsvd: 4; /* for future additions, can be expanded into without ABI break */
		uint8_t sched_type : 2;

		uint8_t queue_id;
		uint8_t priority;
		uint8_t impl_opaque;
	};
};

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-14 13:13             ` Jerin Jacob
@ 2016-12-14 15:15               ` Bruce Richardson
  2016-12-15 16:54               ` Van Haaren, Harry
  1 sibling, 0 replies; 109+ messages in thread
From: Bruce Richardson @ 2016-12-14 15:15 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Van Haaren, Harry, dev, thomas.monjalon, hemant.agrawal, Eads, Gage

On Wed, Dec 14, 2016 at 06:43:58PM +0530, Jerin Jacob wrote:
> On Thu, Dec 08, 2016 at 11:02:16AM +0000, Van Haaren, Harry wrote:
> > > From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> > > Sent: Thursday, December 8, 2016 1:24 AM
> > > To: Van Haaren, Harry <harry.van.haaren@intel.com>
> > 
> > <snip>
> > 
> > > > * Operation and sched_type *increased* to 4 bits each (from previous value of 2) to
> > > allow future expansion without ABI changes
> > > 
> > > Anyway it will break ABI if we add new operation. I would propose to keep 4bit
> > > reserved and add it when required.
> > 
> > Ok sounds good. I'll suggest to move it to the middle between operation or sched type, which would allow expanding operation without ABI breaks. On expanding the field would remain in the same place with the same bits available in that place (no ABI break), but new bits can be added into the currently reserved space.
> 
> OK. We will move the rsvd field as you suggested.
> 
> > 
> > 
> > > > * Restore flow_id to 24 bits of a 32 bit int (previous size was 20 bits)
> > > > * sub-event-type reduced to 4 bits (previous value was 8 bits). Can we think of
> > > situations where 16 values for application specified identifiers of each event-type is
> > > genuinely not enough?
> > > One packet will not go beyond 16 stages but an application may have more stages and
> > > each packet may go mutually exclusive stages. For example,
> > > 
> > > packet 0: stagex_0 ->stagex_1
> > > packet 1: stagey_0 ->stagey_1
> > > 
> > > In that sense, IMO, more than 16 is required.(AFIAK, VPP has any much larger limit on
> > > number of stages)
> > 
> > My understanding was that stages are linked to event queues, so the application can determine the stage the packet comes from by reading queue_id?
> 
> That is one way of doing it. But it is limited to number of queues
> therefore scalability issues.Another approach is through
> sub_event_type scheme without depended on the number of queues.
> 
> > 
> > I'm not opposed to having an 8 bit sub_event_type, but it seems unnecessarily large from my point of view. If you have a use for it, I'm ok with 8 bits.
> 
> OK
> 
> > 
> > 
> > > > In my opinion this structure layout is more balanced, and will perform better due to
> > > less loads that will need masking to access the required value.
> > > OK. Considering more balanced layout and above points. I propose following scheme(based on
> > > your input)
> > > 
> > > 	union {
> > > 		uint64_t event;
> > > 		struct {
> > > 			uint32_t flow_id: 20;
> > > 			uint32_t sub_event_type : 8;
> > > 			uint32_t event_type : 4;
> > > 
> > > 			uint8_t rsvd: 4; /* for future additions */
> > > 			uint8_t operation  : 2; /* new fwd drop */
> > > 			uint8_t sched_type : 2;
> > > 
> > > 			uint8_t queue_id;
> > > 			uint8_t priority;
> > > 			uint8_t impl_opaque;
> > > 		};
> > > 	};
> > > 
> > > Feedback and improvements welcomed,
> > 
> > 
> > So incorporating my latest suggestions on moving fields around, excluding sub_event_type *size* changes:
> > 
> > union {
> > 	uint64_t event;
> > 	struct {
> > 		uint32_t flow_id: 20;
> > 		uint32_t event_type : 4;
> > 		uint32_t sub_event_type : 8; /* 8 bits now naturally aligned */
> 
> Just one suggestion here. I am not sure about the correct split between
> number of bits to represent flow_id and sub_event_type fields. And its
> connected in our implementation, so I propose to move sub_event_type up so
> that future flow_id/sub_event_type bit field size change request wont
> impact our implementation. Since it is represented as 32bit as whole, I
> don't think there is an alignment issue.
> 
> So incorporating my latest suggestions on moving sub_event_type field around:
> 
> union {
> 	uint64_t event;
> 	struct {
> 		uint32_t flow_id: 20;
> 		uint32_t sub_event_type : 8;
> 		uint32_t event_type : 4;
> 

The issue with the above layout is that you have an 8-bit value which
can never be accessed as a byte. With the layout Harry proposed above,
the sub_event_type can be accessed without any bit manipulation
operations just by doing a byte read. With the layout you propose, all
fields require masking and/or shifting to access. It won't affect the
scheduler performance for us, but it means potentially more cycles in
the app to access those fields.
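
To illustrate the point (assuming the usual little-endian/GCC bit-field
allocation, where earlier-declared fields take the lower bits): with Harry's
ordering, sub_event_type occupies bits 24-31 of WORD0 and can be fetched as a
plain byte, whereas with sub_event_type at bits 20-27 every access needs a
shift plus a mask. The helper below is only an illustration of the equivalent
extraction the compiler has to perform:

	static inline void
	show_extraction(const struct rte_event *ev)
	{
		/* Harry's layout: flow_id:20, event_type:4, sub_event_type:8 */
		uint8_t sub_a = (uint8_t)(ev->event >> 24);          /* byte-aligned extract */

		/* alternative: flow_id:20, sub_event_type:8, event_type:4 */
		uint8_t sub_b = (uint8_t)((ev->event >> 20) & 0xff); /* shift + mask needed */

		(void)sub_a;
		(void)sub_b;
	}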

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-06  3:52     ` [PATCH v2 1/6] eventdev: introduce event driven programming model Jerin Jacob
                         ` (2 preceding siblings ...)
  2016-12-07 11:12       ` Bruce Richardson
@ 2016-12-14 15:19       ` Bruce Richardson
  2016-12-15 13:39         ` Jerin Jacob
  3 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2016-12-14 15:19 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> In a polling model, lcores poll ethdev ports and associated
> rx queues directly to look for packet. In an event driven model,
> by contrast, lcores call the scheduler that selects packets for
> them based on programmer-specified criteria. Eventdev library
> adds support for event driven programming model, which offer
> applications automatic multicore scaling, dynamic load balancing,
> pipelining, packet ingress order maintenance and
> synchronization services to simplify application packet processing.
> 
> By introducing event driven programming model, DPDK can support
> both polling and event driven programming models for packet processing,
> and applications are free to choose whatever model
> (or combination of the two) that best suits their needs.
> 
> This patch adds the eventdev specification header file.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
<snip>
> + *
> + * The *nb_events* parameter is the number of event objects to enqueue which are
> + * supplied in the *ev* array of *rte_event* structure.
> + *
> + * The rte_event_enqueue_burst() function returns the number of
> + * events objects it actually enqueued. A return value equal to *nb_events*
> + * means that all event objects have been enqueued.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param port_id
> + *   The identifier of the event port.
> + * @param ev
> + *   Points to an array of *nb_events* objects of type *rte_event* structure
> + *   which contain the event object enqueue operations to be processed.
> + * @param nb_events
> + *   The number of event objects to enqueue, typically number of
> + *   rte_event_port_enqueue_depth() available for this port.
> + *
> + * @return
> + *   The number of event objects actually enqueued on the event device. The
> + *   return value can be less than the value of the *nb_events* parameter when
> + *   the event devices queue is full or if invalid parameters are specified in a
> + *   *rte_event*. If return value is less than *nb_events*, the remaining events
> + *   at the end of ev[] are not consumed,and the caller has to take care of them
> + *
> + * @see rte_event_port_enqueue_depth()
> + */
> +uint16_t
> +rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
> +			uint16_t nb_events);
> +
One suggestion - do we want to make the ev[] array const, to disallow
drivers from modifying the events passed in? Since the event structure
is only 16B big, it should be small enough to be copied around in
scheduler instances, allowing the original events to remain unmodified.
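
For illustration only, the const-qualified prototype would then read:

	uint16_t
	rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
				const struct rte_event ev[], uint16_t nb_events);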

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-14 15:19       ` Bruce Richardson
@ 2016-12-15 13:39         ` Jerin Jacob
  0 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-15 13:39 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, thomas.monjalon, hemant.agrawal, gage.eads, harry.van.haaren

On Wed, Dec 14, 2016 at 03:19:22PM +0000, Bruce Richardson wrote:
> On Tue, Dec 06, 2016 at 09:22:15AM +0530, Jerin Jacob wrote:
> > In a polling model, lcores poll ethdev ports and associated
> > rx queues directly to look for packet. In an event driven model,
> > by contrast, lcores call the scheduler that selects packets for
> > them based on programmer-specified criteria. Eventdev library
> > adds support for event driven programming model, which offer
> > applications automatic multicore scaling, dynamic load balancing,
> > pipelining, packet ingress order maintenance and
> > synchronization services to simplify application packet processing.
> > 
> > By introducing event driven programming model, DPDK can support
> > both polling and event driven programming models for packet processing,
> > and applications are free to choose whatever model
> > (or combination of the two) that best suits their needs.
> > 
> > This patch adds the eventdev specification header file.
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > ---
> <snip>
> > + *
> > + * The *nb_events* parameter is the number of event objects to enqueue which are
> > + * supplied in the *ev* array of *rte_event* structure.
> > + *
> > + * The rte_event_enqueue_burst() function returns the number of
> > + * events objects it actually enqueued. A return value equal to *nb_events*
> > + * means that all event objects have been enqueued.
> > + *
> > + * @param dev_id
> > + *   The identifier of the device.
> > + * @param port_id
> > + *   The identifier of the event port.
> > + * @param ev
> > + *   Points to an array of *nb_events* objects of type *rte_event* structure
> > + *   which contain the event object enqueue operations to be processed.
> > + * @param nb_events
> > + *   The number of event objects to enqueue, typically number of
> > + *   rte_event_port_enqueue_depth() available for this port.
> > + *
> > + * @return
> > + *   The number of event objects actually enqueued on the event device. The
> > + *   return value can be less than the value of the *nb_events* parameter when
> > + *   the event devices queue is full or if invalid parameters are specified in a
> > + *   *rte_event*. If return value is less than *nb_events*, the remaining events
> > + *   at the end of ev[] are not consumed,and the caller has to take care of them
> > + *
> > + * @see rte_event_port_enqueue_depth()
> > + */
> > +uint16_t
> > +rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
> > +			uint16_t nb_events);
> > +
> One suggestion - do we want to make the ev[] array const, to disallow
> drivers from modifying the events passed in? Since the event structure
> is only 16B big, it should be small enough to be copied around in
> scheduler instances, allow the original events to remain unmodified.

Seems like a good idea to me. I will add it in v3.

> 
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v2 1/6] eventdev: introduce event driven programming model
  2016-12-14 13:13             ` Jerin Jacob
  2016-12-14 15:15               ` Bruce Richardson
@ 2016-12-15 16:54               ` Van Haaren, Harry
  1 sibling, 0 replies; 109+ messages in thread
From: Van Haaren, Harry @ 2016-12-15 16:54 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, Richardson, Bruce, hemant.agrawal, Eads, Gage


> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Wednesday, December 14, 2016 1:14 PM
> To: Van Haaren, Harry <harry.van.haaren@intel.com>
<snip>

> So incorporating my latest suggestions on moving sub_event_type field around:
> 
> union {
> 	uint64_t event;
> 	struct {
> 		uint32_t flow_id: 20;
> 		uint32_t sub_event_type : 8;
> 		uint32_t event_type : 4;
> 
> 		uint8_t operation  : 2; /* new fwd drop */
> 		uint8_t rsvd: 4; /* for future additions */
> 		uint8_t sched_type : 2;
> 
> 		uint8_t queue_id;
> 		uint8_t priority;
> 		uint8_t impl_opaque;
> 	};
> };

Thanks, looks good to me!
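
For a standalone view, below is a minimal sketch of that layout combined with the
opaque second 64-bit word from the patch, plus the 16-byte size check mentioned in
the v2 changelog. Field names follow this proposal; the header in the patch uses
slightly different names (e.g. 'op' instead of 'operation'), and the struct name
here is only illustrative:

	#include <stdint.h>

	struct event16 {
		union {
			uint64_t event;
			struct {
				uint32_t flow_id : 20;
				uint32_t sub_event_type : 8;
				uint32_t event_type : 4;
				uint8_t operation : 2;
				uint8_t rsvd : 4;
				uint8_t sched_type : 2;
				uint8_t queue_id;
				uint8_t priority;
				uint8_t impl_opaque;
			};
		};
		union {
			uint64_t u64;
			void *event_ptr;
		};
	};
	_Static_assert(sizeof(struct event16) == 16, "must stay 16 bytes");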

^ permalink raw reply	[flat|nested] 109+ messages in thread

* [PATCH v4 0/6] libeventdev API and northbound implementation
  2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
                       ` (6 preceding siblings ...)
  2016-12-06 16:46     ` [PATCH v2 0/6] libeventdev API and northbound implementation Bruce Richardson
@ 2016-12-21  9:25     ` Jerin Jacob
  2016-12-21  9:25       ` [PATCH v4 1/6] eventdev: introduce event driven programming model Jerin Jacob
                         ` (5 more replies)
  7 siblings, 6 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-21  9:25 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

As previously discussed in RFC v1 [1], RFC v2 [2], with changes
described in [3] (also pasted below), here is the first non-draft series
for this new API.

[1] http://dpdk.org/ml/archives/dev/2016-August/045181.html
[2] http://dpdk.org/ml/archives/dev/2016-October/048592.html
[3] http://dpdk.org/ml/archives/dev/2016-October/048196.html

v3..v4:

1) Fixed the shared lib build issue(Bruce)
2) Added missing "struct rte_eventdev *dev" in eventdev_queue_release_t(Bruce)
3) In order to shorten the macro name, removed _FLAG_ in it(Bruce)
4) Fixed the wrong 'in-reply-to' while sending the v3 (Shreyansh)

v2..v3:

1) Changed struct rte_event layout to be more alignment balanced (Harry, Jerin)
2) Changed event_ptr type to void* from uintptr_t(Bruce)
3) Changed ev[] to const in rte_event_enqueue_burst to disallow
drivers from modifying the events passed in (Bruce)
4) Removed queue memory allocation from common code as some drivers may not need
it(Bruce)
5) Removed "struct rte_event_queue_link" and replaced with queues and priorities
in the link and link_get API to avoid one redirection to use the API(Bruce)

v1..v2:
1) Remove unnecessary header files from rte_eventdev.h(Thomas)
2) Removed PMD driver name(EVENTDEV_NAME_SKELETON_PMD) from rte_eventdev.h(Thomas)
3) Removed different #define for different priority schemes. Changed to
one event device RTE_EVENT_DEV_PRIORITY_* priority (Bruce)
4) add const to rte_event_dev_configure(), rte_event_queue_setup(),
rte_event_port_setup(), rte_event_port_link()(Bruce)
5) Fixed missing dev argument in dev->schedule() function(Bruce)
6) Changed \see to @see in doxygen comments (Thomas)
7) Added additional text in specification to clarify the queue depth(Thomas)
8) Changed wait to timeout across the specification(Thomas)
9) Added longer explanation for RTE_EVENT_OP_NEW and RTE_EVENT_OP_FORWARD(Thomas)
10) Fixed issue with RTE_EVENT_OP_RELEASE doxygen formatting (Thomas)
11) Changed to RTE_EVENT_DEV_CFG_FLAG_ from RTE_EVENT_DEV_CFG_(Thomas)
12) Changed to EVENT_QUEUE_CFG_FLAG_ from EVENT_QUEUE_CFG_(Thomas)
13) s/RTE_EVENT_TYPE_CORE/RTE_EVENT_TYPE_CPU/(Thomas, Gage)
14) Removed non burst API and kept only the burst API in the API specification
(Thomas, Bruce, Harry, Jerin)
-- Driver interface has non burst API, selection of the non burst API is based
on num_objects == 1
15) sizeof(struct rte_event) was not 16 in v1. Fixed it in v2
-- reduced the width of event_type to 4 bits to save space for future changes
-- introduced impl_opaque for implementation specific opaque data(Harry),
also useful for a HW driver in the context of removing the need for a separate
release API.
-- squashed other element size and provided enough space to impl_opaque(Jerin)
-- added RTE_BUILD_BUG_ON(sizeof(struct rte_event) != 16); check
16) added a union with uint64_t as the second element in struct rte_event to
make sure the structure has a 16-byte size/alignment on all archs (Thomas)
17) Fixed invalid check of nb_atomic_order_sequences in implementation(Gage)
18) s/EDEV_LOG_ERR/RTE_EDEV_LOG_ERR(Thomas)
19) s/rte_eventdev_pmd_/rte_event_pmd_/(Bruce)
20) added fine details of distributed vs centralized scheduling information
in the specification and introduced RTE_EVENT_DEV_CAP_FLAG_DISTRIBUTED_SCHED
flag(Gage)
21) s/RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_CONSUMER/RTE_EVENT_QUEUE_CFG_FLAG_SINGLE_LINK (Jerin)
to remove the confusion between producer and consumer in the sw eventdev driver
22) Northbound API implementation patch split into more logical patches (Thomas)

Changes since RFC v2:

- Updated the documentation to define the need for this library[Jerin]
- Added RTE_EVENT_QUEUE_CFG_*_ONLY configuration parameters in
  struct rte_event_queue_conf to enable optimized sw implementation [Bruce]
- Introduced RTE_EVENT_OP* ops [Bruce]
- Added nb_event_queue_flows,nb_event_port_dequeue_depth, nb_event_port_enqueue_depth
  in rte_event_dev_configure() like ethdev and crypto library[Jerin]
- Removed rte_event_release() and replaced with RTE_EVENT_OP_RELEASE ops to
  reduce fast path APIs and it is redundant too[Jerin]
- In the view of better application portability, Removed pin_event
  from rte_event_enqueue as it is just hint and Intel/NXP can not support it[Jerin]
- Added rte_event_port_links_get()[Jerin]
- Added rte_event_dev_dump[Harry]

Notes:

- This patch set is check-patch clean with an exception that
03/06 has one WARNING:MACRO_WITH_FLOW_CONTROL
- Looking forward to getting additional maintainers for libeventdev

TODO:
1) Create user guide

Jerin Jacob (6):
  eventdev: introduce event driven programming model
  eventdev: define southbound driver interface
  eventdev: implement the northbound APIs
  eventdev: implement PMD registration functions
  event/skeleton: add skeleton eventdev driver
  app/test: unit test case for eventdev APIs

 MAINTAINERS                                        |    5 +
 app/test/Makefile                                  |    2 +
 app/test/test_eventdev.c                           |  778 +++++++++++
 config/common_base                                 |   14 +
 doc/api/doxy-api-index.md                          |    1 +
 doc/api/doxy-api.conf                              |    1 +
 drivers/Makefile                                   |    1 +
 drivers/event/Makefile                             |   36 +
 drivers/event/skeleton/Makefile                    |   55 +
 .../skeleton/rte_pmd_skeleton_event_version.map    |    4 +
 drivers/event/skeleton/skeleton_eventdev.c         |  519 ++++++++
 drivers/event/skeleton/skeleton_eventdev.h         |   68 +
 lib/Makefile                                       |    1 +
 lib/librte_eal/common/include/rte_log.h            |    1 +
 lib/librte_eventdev/Makefile                       |   57 +
 lib/librte_eventdev/rte_eventdev.c                 | 1222 +++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h                 | 1407 ++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_pmd.h             |  514 +++++++
 lib/librte_eventdev/rte_eventdev_version.map       |   39 +
 mk/rte.app.mk                                      |    5 +
 20 files changed, 4730 insertions(+)
 create mode 100644 app/test/test_eventdev.c
 create mode 100644 drivers/event/Makefile
 create mode 100644 drivers/event/skeleton/Makefile
 create mode 100644 drivers/event/skeleton/rte_pmd_skeleton_event_version.map
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.c
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.h
 create mode 100644 lib/librte_eventdev/Makefile
 create mode 100644 lib/librte_eventdev/rte_eventdev.c
 create mode 100644 lib/librte_eventdev/rte_eventdev.h
 create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
 create mode 100644 lib/librte_eventdev/rte_eventdev_version.map

-- 
2.5.5

^ permalink raw reply	[flat|nested] 109+ messages in thread

* [PATCH v4 1/6] eventdev: introduce event driven programming model
  2016-12-21  9:25     ` [PATCH v4 " Jerin Jacob
@ 2016-12-21  9:25       ` Jerin Jacob
  2017-01-25 16:32         ` Eads, Gage
  2017-02-02 11:18         ` Nipun Gupta
  2016-12-21  9:25       ` [PATCH v4 2/6] eventdev: define southbound driver interface Jerin Jacob
                         ` (4 subsequent siblings)
  5 siblings, 2 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-21  9:25 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

In a polling model, lcores poll ethdev ports and associated
rx queues directly to look for packets. In an event driven model,
by contrast, lcores call the scheduler that selects packets for
them based on programmer-specified criteria. The eventdev library
adds support for an event driven programming model, which offers
applications automatic multicore scaling, dynamic load balancing,
pipelining, packet ingress order maintenance and
synchronization services to simplify application packet processing.

By introducing an event driven programming model, DPDK can support
both polling and event driven programming models for packet processing,
and applications are free to choose whichever model
(or combination of the two) best suits their needs.

This patch adds the eventdev specification header file.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 MAINTAINERS                        |    3 +
 doc/api/doxy-api-index.md          |    1 +
 doc/api/doxy-api.conf              |    1 +
 lib/librte_eventdev/rte_eventdev.h | 1275 ++++++++++++++++++++++++++++++++++++
 4 files changed, 1280 insertions(+)
 create mode 100644 lib/librte_eventdev/rte_eventdev.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 26d9590..8e59352 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -249,6 +249,9 @@ F: lib/librte_cryptodev/
 F: app/test/test_cryptodev*
 F: examples/l2fwd-crypto/
 
+Eventdev API - EXPERIMENTAL
+M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
+F: lib/librte_eventdev/
 
 Networking Drivers
 ------------------
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 33c04ed..0ad3367 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -40,6 +40,7 @@ There are many libraries, so their headers may be grouped by topics:
   [ethdev]             (@ref rte_ethdev.h),
   [ethctrl]            (@ref rte_eth_ctrl.h),
   [cryptodev]          (@ref rte_cryptodev.h),
+  [eventdev]           (@ref rte_eventdev.h),
   [devargs]            (@ref rte_devargs.h),
   [bond]               (@ref rte_eth_bond.h),
   [vhost]              (@ref rte_virtio_net.h),
diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
index b340fcf..e030c21 100644
--- a/doc/api/doxy-api.conf
+++ b/doc/api/doxy-api.conf
@@ -41,6 +41,7 @@ INPUT                   = doc/api/doxy-api-index.md \
                           lib/librte_cryptodev \
                           lib/librte_distributor \
                           lib/librte_ether \
+                          lib/librte_eventdev \
                           lib/librte_hash \
                           lib/librte_ip_frag \
                           lib/librte_jobstats \
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
new file mode 100644
index 0000000..b2bc471
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -0,0 +1,1275 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright 2016 Cavium.
+ *   Copyright 2016 Intel Corporation.
+ *   Copyright 2016 NXP.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_EVENTDEV_H_
+#define _RTE_EVENTDEV_H_
+
+/**
+ * @file
+ *
+ * RTE Event Device API
+ *
+ * In a polling model, lcores poll ethdev ports and associated rx queues
+ * directly to look for packets. In an event driven model, by contrast, lcores
+ * call the scheduler that selects packets for them based on programmer
+ * specified criteria. The eventdev library adds support for an event driven
+ * programming model, which offers applications automatic multicore scaling,
+ * dynamic load balancing, pipelining, packet ingress order maintenance and
+ * synchronization services to simplify application packet processing.
+ *
+ * The Event Device API is composed of two parts:
+ *
+ * - The application-oriented Event API that includes functions to setup
+ *   an event device (configure it, setup its queues, ports and start it), to
+ *   establish the links between queues and ports, to receive events, and so on.
+ *
+ * - The driver-oriented Event API that exports a function allowing
+ *   an event Poll Mode Driver (PMD) to register itself as
+ *   an event device driver.
+ *
+ * Event device components:
+ *
+ *                     +-----------------+
+ *                     | +-------------+ |
+ *        +-------+    | |    flow 0   | |
+ *        |Packet |    | +-------------+ |
+ *        |event  |    | +-------------+ |
+ *        |       |    | |    flow 1   | |port_link(port0, queue0)
+ *        +-------+    | +-------------+ |     |     +--------+
+ *        +-------+    | +-------------+ o-----v-----o        |dequeue +------+
+ *        |Crypto |    | |    flow n   | |           | event  +------->|Core 0|
+ *        |work   |    | +-------------+ o----+      | port 0 |        |      |
+ *        |done ev|    |  event queue 0  |    |      +--------+        +------+
+ *        +-------+    +-----------------+    |
+ *        +-------+                           |
+ *        |Timer  |    +-----------------+    |      +--------+
+ *        |expiry |    | +-------------+ |    +------o        |dequeue +------+
+ *        |event  |    | |    flow 0   | o-----------o event  +------->|Core 1|
+ *        +-------+    | +-------------+ |      +----o port 1 |        |      |
+ *       Event enqueue | +-------------+ |      |    +--------+        +------+
+ *     o-------------> | |    flow 1   | |      |
+ *        enqueue(     | +-------------+ |      |
+ *        queue_id,    |                 |      |    +--------+        +------+
+ *        flow_id,     | +-------------+ |      |    |        |dequeue |Core 2|
+ *        sched_type,  | |    flow n   | o-----------o event  +------->|      |
+ *        event_type,  | +-------------+ |      |    | port 2 |        +------+
+ *        subev_type,  |  event queue 1  |      |    +--------+
+ *        event)       +-----------------+      |    +--------+
+ *                                              |    |        |dequeue +------+
+ *        +-------+    +-----------------+      |    | event  +------->|Core n|
+ *        |Core   |    | +-------------+ o-----------o port n |        |      |
+ *        |(SW)   |    | |    flow 0   | |      |    +--------+        +--+---+
+ *        |event  |    | +-------------+ |      |                         |
+ *        +-------+    | +-------------+ |      |                         |
+ *            ^        | |    flow 1   | |      |                         |
+ *            |        | +-------------+ o------+                         |
+ *            |        | +-------------+ |                                |
+ *            |        | |    flow n   | |                                |
+ *            |        | +-------------+ |                                |
+ *            |        |  event queue n  |                                |
+ *            |        +-----------------+                                |
+ *            |                                                           |
+ *            +-----------------------------------------------------------+
+ *
+ * Event device: A hardware or software-based event scheduler.
+ *
+ * Event: A unit of scheduling that encapsulates a packet or other datatype,
+ * such as a SW generated event from the CPU, a crypto work completion
+ * notification, or a timer expiry notification, as well as metadata.
+ * The metadata includes flow ID, scheduling type, event priority, event_type,
+ * sub_event_type etc.
+ *
+ * Event queue: A queue containing events that are scheduled by the event dev.
+ * An event queue contains events of different flows associated with scheduling
+ * types, such as atomic, ordered, or parallel.
+ *
+ * Event port: An application's interface into the event dev for enqueue and
+ * dequeue operations. Each event port can be linked with one or more
+ * event queues for dequeue operations.
+ *
+ * By default, all the functions of the Event Device API exported by a PMD
+ * are lock-free functions which are assumed not to be invoked in parallel on
+ * different logical cores to work on the same target object. For instance,
+ * the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on the same event port. Of course, this function
+ * can be invoked in parallel by different logical cores on different ports.
+ * It is the responsibility of the upper level application to enforce this rule.
+ *
+ * In all functions of the Event API, the Event device is
+ * designated by an integer >= 0 named the device identifier *dev_id*
+ *
+ * At the Event driver level, Event devices are represented by a generic
+ * data structure of type *rte_event_dev*.
+ *
+ * Event devices are dynamically registered during the PCI/SoC device probing
+ * phase performed at EAL initialization time.
+ * When an Event device is being probed, a *rte_event_dev* structure and
+ * a new device identifier are allocated for that device. Then, the
+ * event_dev_init() function supplied by the Event driver matching the probed
+ * device is invoked to properly initialize the device.
+ *
+ * The role of the device init function consists of resetting the hardware or
+ * software event driver implementations.
+ *
+ * If the device init operation is successful, the correspondence between
+ * the device identifier assigned to the new device and its associated
+ * *rte_event_dev* structure is effectively registered.
+ * Otherwise, both the *rte_event_dev* structure and the device identifier are
+ * freed.
+ *
+ * The functions exported by the application Event API to setup a device
+ * designated by its device identifier must be invoked in the following order:
+ *     - rte_event_dev_configure()
+ *     - rte_event_queue_setup()
+ *     - rte_event_port_setup()
+ *     - rte_event_port_link()
+ *     - rte_event_dev_start()
+ *
+ * Then, the application can invoke, in any order, the functions
+ * exported by the Event API to schedule events, dequeue events, enqueue events,
+ * establish or change event queue to event port links, and so on.
+ *
+ * An application may use rte_event_[queue/port]_default_conf_get() to get the
+ * default configuration and then set up an event queue or event port by
+ * overriding a few default values.
+ *
+ * If the application wants to change the configuration (i.e. call
+ * rte_event_dev_configure(), rte_event_queue_setup(), or
+ * rte_event_port_setup()), it must call rte_event_dev_stop() first to stop the
+ * device and then do the reconfiguration before calling rte_event_dev_start()
+ * again. The schedule, enqueue and dequeue functions should not be invoked
+ * when the device is stopped.
+ *
+ * Finally, an application can close an Event device by invoking the
+ * rte_event_dev_close() function.
+ *
+ * Each function of the application Event API invokes a specific function
+ * of the PMD that controls the target device designated by its device
+ * identifier.
+ *
+ * For this purpose, all device-specific functions of an Event driver are
+ * supplied through a set of pointers contained in a generic structure of type
+ * *event_dev_ops*.
+ * The address of the *event_dev_ops* structure is stored in the *rte_event_dev*
+ * structure by the device init function of the Event driver, which is
+ * invoked during the PCI/SoC device probing phase, as explained earlier.
+ *
+ * In other words, each function of the Event API simply retrieves the
+ * *rte_event_dev* structure associated with the device identifier and
+ * performs an indirect invocation of the corresponding driver function
+ * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
+ *
+ * For performance reasons, the address of the fast-path functions of the
+ * Event driver is not contained in the *event_dev_ops* structure.
+ * Instead, they are directly stored at the beginning of the *rte_event_dev*
+ * structure to avoid an extra indirect memory access during their invocation.
+ *
+ * RTE event device drivers do not use interrupts for enqueue or dequeue
+ * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
+ * functions to applications.
+ *
+ * An event driven application has the following typical workflow on the fast path:
+ * \code{.c}
+ *	while (1) {
+ *
+ *		rte_event_schedule(dev_id);
+ *
+ *		rte_event_dequeue(...);
+ *
+ *		(event processing)
+ *
+ *		rte_event_enqueue(...);
+ *	}
+ * \endcode
+ *
+ * Events are injected into the event device through the *enqueue* operation by
+ * event producers in the system. The typical event producers are the ethdev
+ * subsystem for generating packet events, the CPU (SW) for generating events
+ * based on different stages of application processing, and cryptodev for
+ * generating crypto work completion notifications.
+ *
+ * The *dequeue* operation gets one or more events from the event ports.
+ * The application processes the events and sends them to a downstream event
+ * queue through rte_event_enqueue_burst() if it is an intermediate stage of
+ * event processing; at the final stage, the application may hand the
+ * packet/event to a different subsystem, for example ethdev, to send it on the
+ * wire using the rte_eth_tx_burst() API.
+ *
+ * The point at which events are scheduled to ports depends on the device.
+ * For hardware devices, scheduling occurs asynchronously without any software
+ * intervention. Software schedulers can either be distributed
+ * (each worker thread schedules events to its own port) or centralized
+ * (a dedicated thread schedules to all ports). Distributed software schedulers
+ * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
+ * scheduler logic is located in rte_event_schedule().
+ * If the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set, the
+ * device is centralized and thus needs a dedicated scheduling
+ * thread that repeatedly calls rte_event_schedule().
+ *
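+ * For a centralized eventdev, a dedicated service lcore could run a loop such
+ * as the sketch below, where *done* is an application-defined termination flag
+ * (illustrative only):
+ * \code{.c}
+ *	while (!done)
+ *		rte_event_schedule(dev_id);
+ * \endcode
+ *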
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_common.h>
+#include <rte_pci.h>
+#include <rte_mbuf.h>
+
+/* Event device capability bitmap flags */
+#define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
+/**< Event scheduling prioritization is based on the priority associated with
+ *  each event queue.
+ *
+ *  @see rte_event_queue_setup()
+ */
+#define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
+/**< Event scheduling prioritization is based on the priority associated with
+ *  each event. Priority of each event is supplied in *rte_event* structure
+ *  on each enqueue operation.
+ *
+ *  @see rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
+/**< Event device operates in distributed scheduling mode.
+ * In distributed scheduling mode, event scheduling happens in HW, in
+ * rte_event_dequeue_burst(), or in a combination of the two.
+ * If the flag is not set then eventdev is centralized and thus needs a
+ * dedicated scheduling thread that repeatedly calls rte_event_schedule().
+ *
+ * @see rte_event_schedule(), rte_event_dequeue_burst()
+ */
+
+/* Event device priority levels */
+#define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
+/**< Highest priority expressed across eventdev subsystem
+ * @see rte_event_queue_setup(), rte_event_enqueue_burst()
+ * @see rte_event_port_link()
+ */
+#define RTE_EVENT_DEV_PRIORITY_NORMAL    128
+/**< Normal priority expressed across eventdev subsystem
+ * @see rte_event_queue_setup(), rte_event_enqueue_burst()
+ * @see rte_event_port_link()
+ */
+#define RTE_EVENT_DEV_PRIORITY_LOWEST    255
+/**< Lowest priority expressed across eventdev subsystem
+ * @see rte_event_queue_setup(), rte_event_enqueue_burst()
+ * @see rte_event_port_link()
+ */
+
+/**
+ * Get the total number of event devices that have been successfully
+ * initialised.
+ *
+ * @return
+ *   The total number of usable event devices.
+ */
+uint8_t
+rte_event_dev_count(void);
+
+/**
+ * Get the device identifier for the named event device.
+ *
+ * @param name
+ *   Event device name to select the event device identifier.
+ *
+ * @return
+ *   Returns event device identifier on success.
+ *   - <0: Failure to find named event device.
+ */
+int
+rte_event_dev_get_dev_id(const char *name);
+
+/**
+ * Return the NUMA socket to which a device is connected.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @return
+ *   The NUMA socket id to which the device is connected or
+ *   a default of zero if the socket could not be determined.
+ *   -(-EINVAL)  dev_id value is out of range.
+ */
+int
+rte_event_dev_socket_id(uint8_t dev_id);
+
+/**
+ * Event device information
+ */
+struct rte_event_dev_info {
+	const char *driver_name;	/**< Event driver name */
+	struct rte_pci_device *pci_dev;	/**< PCI information */
+	uint32_t min_dequeue_timeout_ns;
+	/**< Minimum supported global dequeue timeout(ns) by this device */
+	uint32_t max_dequeue_timeout_ns;
+	/**< Maximum supported global dequeue timeout(ns) by this device */
+	uint32_t dequeue_timeout_ns;
+	/**< Configured global dequeue timeout(ns) for this device */
+	uint8_t max_event_queues;
+	/**< Maximum event_queues supported by this device */
+	uint32_t max_event_queue_flows;
+	/**< Maximum supported flows in an event queue by this device*/
+	uint8_t max_event_queue_priority_levels;
+	/**< Maximum number of event queue priority levels by this device.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	 */
+	uint8_t max_event_priority_levels;
+	/**< Maximum number of event priority levels by this device.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
+	 */
+	uint8_t max_event_ports;
+	/**< Maximum number of event ports supported by this device */
+	uint8_t max_event_port_dequeue_depth;
+	/**< Maximum number of events that can be dequeued at a time from an
+	 * event port by this device.
+	 * A device that does not support bulk dequeue will set this as 1.
+	 */
+	uint32_t max_event_port_enqueue_depth;
+	/**< Maximum number of events that can be enqueued at a time to an
+	 * event port by this device.
+	 * A device that does not support bulk enqueue will set this as 1.
+	 */
+	int32_t max_num_events;
+	/**< A *closed system* event dev has a limit on the number of events it
+	 * can manage at a time. An *open system* event dev does not have a
+	 * limit and will specify this as -1.
+	 */
+	uint32_t event_dev_cap;
+	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+};
+
+/**
+ * Retrieve the contextual information of an event device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param[out] dev_info
+ *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
+ *   contextual information of the device.
+ *
+ * @return
+ *   - 0: Success, driver updates the contextual information of the event device
+ *   - <0: Error code returned by the driver info get function.
+ *
+ */
+int
+rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
+
+/* Event device configuration bitmap flags */
+#define RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT (1ULL << 0)
+/**< Override the global *dequeue_timeout_ns* and use per dequeue timeout in ns.
+ *  @see rte_event_dequeue_timeout_ticks(), rte_event_dequeue_burst()
+ */
+
+/** Event device configuration structure */
+struct rte_event_dev_config {
+	uint32_t dequeue_timeout_ns;
+	/**< rte_event_dequeue_burst() timeout on this device.
+	 * This value should be in the range of *min_dequeue_timeout_ns* and
+	 * *max_dequeue_timeout_ns* which were previously provided in
+	 * rte_event_dev_info_get()
+	 * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
+	 */
+	int32_t nb_events_limit;
+	/**< Applies to *closed system* event dev only. This field indicates a
+	 * limit to ethdev-like devices to limit the number of events injected
+	 * into the system to not overwhelm core-to-core events.
+	 * This value cannot exceed the *max_num_events* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint8_t nb_event_queues;
+	/**< Number of event queues to configure on this device.
+	 * This value cannot exceed the *max_event_queues* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint8_t nb_event_ports;
+	/**< Number of event ports to configure on this device.
+	 * This value cannot exceed the *max_event_ports* which was previously
+	 * provided in rte_event_dev_info_get()
+	 */
+	uint32_t nb_event_queue_flows;
+	/**< Number of flows for any event queue on this device.
+	 * This value cannot exceed the *max_event_queue_flows* which was
+	 * previously provided in rte_event_dev_info_get()
+	 */
+	uint8_t nb_event_port_dequeue_depth;
+	/**< Maximum number of events that can be dequeued at a time from an
+	 * event port by this device.
+	 * This value cannot exceed the *max_event_port_dequeue_depth*
+	 * which was previously provided in rte_event_dev_info_get()
+	 * @see rte_event_port_setup()
+	 */
+	uint32_t nb_event_port_enqueue_depth;
+	/**< Maximum number of events that can be enqueued at a time to an
+	 * event port by this device.
+	 * This value cannot exceed the *max_event_port_enqueue_depth*
+	 * which was previously provided in rte_event_dev_info_get()
+	 * @see rte_event_port_setup()
+	 */
+	uint32_t event_dev_cfg;
+	/**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
+};
+
+/**
+ * Configure an event device.
+ *
+ * This function must be invoked first before any other function in the
+ * API. This function can also be re-invoked when a device is in the
+ * stopped state.
+ *
+ * The caller may use rte_event_dev_info_get() to get the capability of each
+ * resources available for this event device.
+ *
+ * @param dev_id
+ *   The identifier of the device to configure.
+ * @param dev_conf
+ *   The event device configuration structure.
+ *
+ * @return
+ *   - 0: Success, device configured.
+ *   - <0: Error code returned by the driver configuration function.
+ */
+int
+rte_event_dev_configure(uint8_t dev_id,
+			const struct rte_event_dev_config *dev_conf);
+
+
+/* Event queue specific APIs */
+
+/* Event queue configuration bitmap flags */
+#define RTE_EVENT_QUEUE_CFG_DEFAULT            (0)
+/**< Default value of *event_queue_cfg* when rte_event_queue_setup() invoked
+ * with queue_conf == NULL
+ *
+ * @see rte_event_queue_setup()
+ */
+#define RTE_EVENT_QUEUE_CFG_TYPE_MASK          (3ULL << 0)
+/**< Mask for event queue schedule type configuration request */
+#define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (0ULL << 0)
+/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
+ *
+ * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
+ * @see rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY        (1ULL << 0)
+/**< Allow only ATOMIC schedule type enqueue
+ *
+ * The rte_event_enqueue_burst() result is undefined if the queue is configured
+ * with ATOMIC only and sched_type != RTE_SCHED_TYPE_ATOMIC
+ *
+ * @see RTE_SCHED_TYPE_ATOMIC, rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_QUEUE_CFG_ORDERED_ONLY       (2ULL << 0)
+/**< Allow only ORDERED schedule type enqueue
+ *
+ * The rte_event_enqueue_burst() result is undefined if the queue is configured
+ * with ORDERED only and sched_type != RTE_SCHED_TYPE_ORDERED
+ *
+ * @see RTE_SCHED_TYPE_ORDERED, rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY      (3ULL << 0)
+/**< Allow only PARALLEL schedule type enqueue
+ *
+ * The rte_event_enqueue_burst() result is undefined if the queue is configured
+ * with PARALLEL only and sched_type != RTE_SCHED_TYPE_PARALLEL
+ *
+ * @see RTE_SCHED_TYPE_PARALLEL, rte_event_enqueue_burst()
+ */
+#define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 2)
+/**< This event queue links only to a single event port.
+ *
+ *  @see rte_event_port_setup(), rte_event_port_link()
+ */
+
+/** Event queue configuration structure */
+struct rte_event_queue_conf {
+	uint32_t nb_atomic_flows;
+	/**< The maximum number of active flows this queue can track at any
+	 * given time. The value must be in the range of
+	 * [1, nb_event_queue_flows] which was previously provided in
+	 * rte_event_dev_info_get().
+	 */
+	uint32_t nb_atomic_order_sequences;
+	/**< The maximum number of outstanding events waiting to be
+	 * reordered by this queue. In other words, the number of entries in
+	 * this queue's reorder buffer. When the number of events in the
+	 * reorder buffer reaches *nb_atomic_order_sequences* then the
+	 * scheduler cannot schedule the events from this queue and an invalid
+	 * event will be returned from dequeue until one or more entries are
+	 * freed up/released.
+	 * The value must be in the range of [1, nb_event_queue_flows]
+	 * which was previously supplied to rte_event_dev_configure().
+	 */
+	uint32_t event_queue_cfg; /**< Queue cfg flags(EVENT_QUEUE_CFG_) */
+	uint8_t priority;
+	/**< Priority for this event queue relative to other event queues.
+	 * The requested priority should be in the range of
+	 * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+	 * The implementation shall normalize the requested priority to
+	 * event device supported priority value.
+	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
+	 */
+};
+
+/**
+ * Retrieve the default configuration information of an event queue designated
+ * by its *queue_id* from the event driver for an event device.
+ *
+ * This function is intended to be used in conjunction with rte_event_queue_setup()
+ * where the caller needs to set up the queue by overriding a few default values.
+ *
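+ * For example, a minimal sketch that overrides only the priority and leaves
+ * the other values at the driver defaults (assuming the device has the
+ * RTE_EVENT_DEV_CAP_QUEUE_QOS capability):
+ * \code{.c}
+ *	struct rte_event_queue_conf conf;
+ *
+ *	rte_event_queue_default_conf_get(dev_id, queue_id, &conf);
+ *	conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST;
+ *	rte_event_queue_setup(dev_id, queue_id, &conf);
+ * \endcode
+ *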
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the event queue to get the configuration information.
+ *   The value must be in the range [0, nb_event_queues - 1]
+ *   previously supplied to rte_event_dev_configure().
+ * @param[out] queue_conf
+ *   The pointer to the default event queue configuration data.
+ * @return
+ *   - 0: Success, driver updates the default event queue configuration data.
+ *   - <0: Error code returned by the driver info get function.
+ *
+ * @see rte_event_queue_setup()
+ *
+ */
+int
+rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
+				 struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Allocate and set up an event queue for an event device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param queue_id
+ *   The index of the event queue to setup. The value must be in the range
+ *   [0, nb_event_queues - 1] previously supplied to rte_event_dev_configure().
+ * @param queue_conf
+ *   The pointer to the configuration data to be used for the event queue.
+ *   A NULL value is allowed, in which case the default configuration is used.
+ *
+ * @see rte_event_queue_default_conf_get()
+ *
+ * @return
+ *   - 0: Success, event queue correctly set up.
+ *   - <0: event queue configuration failed
+ */
+int
+rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
+		      const struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Get the number of event queues on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @return
+ *   - The number of configured event queues
+ */
+uint8_t
+rte_event_queue_count(uint8_t dev_id);
+
+/**
+ * Get the priority of the event queue on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param queue_id
+ *   Event queue identifier.
+ * @return
+ *   - If the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability then the
+ *    configured priority of the event queue in
+ *    [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST] range
+ *    else the value RTE_EVENT_DEV_PRIORITY_NORMAL
+ */
+uint8_t
+rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id);
+
+/* Event port specific APIs */
+
+/** Event port configuration structure */
+struct rte_event_port_conf {
+	int32_t new_event_threshold;
+	/**< A backpressure threshold for new event enqueues on this port.
+	 * Use for *closed system* event dev where event capacity is limited,
+	 * and cannot exceed the capacity of the event dev.
+	 * Configuring ports with different thresholds can make higher priority
+	 * traffic less likely to  be backpressured.
+	 * For example, a port used to inject NIC Rx packets into the event dev
+	 * can have a lower threshold so as not to overwhelm the device,
+	 * while ports used for worker pools can have a higher threshold.
+	 * This value cannot exceed the *nb_events_limit*
+	 * which previously supplied to rte_event_dev_configure()
+	 */
+	uint8_t dequeue_depth;
+	/**< Configure number of bulk dequeues for this event port.
+	 * This value cannot exceed the *nb_event_port_dequeue_depth*
+	 * which previously supplied to rte_event_dev_configure()
+	 */
+	uint8_t enqueue_depth;
+	/**< Configure number of bulk enqueues for this event port.
+	 * This value cannot exceed the *nb_event_port_enqueue_depth*
+	 * which previously supplied to rte_event_dev_configure()
+	 */
+};
+
+/**
+ * Retrieve the default configuration information of an event port designated
+ * by its *port_id* from the event driver for an event device.
+ *
+ * This function is intended to be used in conjunction with rte_event_port_setup()
+ * where the caller needs to set up the port by overriding a few default values.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The index of the event port to get the configuration information.
+ *   The value must be in the range [0, nb_event_ports - 1]
+ *   previously supplied to rte_event_dev_configure().
+ * @param[out] port_conf
+ *   The pointer to the default event port configuration data
+ * @return
+ *   - 0: Success, driver updates the default event port configuration data.
+ *   - <0: Error code returned by the driver info get function.
+ *
+ * @see rte_event_port_setup()
+ *
+ */
+int
+rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
+				struct rte_event_port_conf *port_conf);
+
+/**
+ * Allocate and set up an event port for an event device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The index of the event port to setup. The value must be in the range
+ *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
+ * @param port_conf
+ *   The pointer to the configuration data to be used for the event port.
+ *   A NULL value is allowed, in which case the default configuration is used.
+ *
+ * @see rte_event_port_default_conf_get()
+ *
+ * @return
+ *   - 0: Success, event port correctly set up.
+ *   - <0: Port configuration failed
+ *   - (-EDQUOT) Quota exceeded (the application tried to link a queue configured
+ *   with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
+ */
+int
+rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
+		     const struct rte_event_port_conf *port_conf);
+
+/**
+ * Get the dequeue depth configured for an event port designated
+ * by its *port_id* on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param port_id
+ *   Event port identifier.
+ * @return
+ *   - The configured dequeue depth
+ *
+ * @see rte_event_dequeue_burst()
+ */
+uint8_t
+rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id);
+
+/**
+ * Get the enqueue depth configured for an event port designated
+ * by its *port_id* on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @param port_id
+ *   Event port identifier.
+ * @return
+ *   - The configured enqueue depth
+ *
+ * @see rte_event_enqueue_burst()
+ */
+uint8_t
+rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id);
+
+/**
+ * Get the number of ports on a specific event device
+ *
+ * @param dev_id
+ *   Event device identifier.
+ * @return
+ *   - The number of configured ports
+ */
+uint8_t
+rte_event_port_count(uint8_t dev_id);
+
+/**
+ * Start an event device.
+ *
+ * The device start step is the last one and consists of setting the event
+ * queues to start accepting events and scheduling them to event ports.
+ *
+ * On success, all basic functions exported by the API (event enqueue,
+ * event dequeue and so on) can be invoked.
+ *
+ * @param dev_id
+ *   Event device identifier
+ * @return
+ *   - 0: Success, device started.
+ *   - <0: Error code of the driver device start function.
+ */
+int
+rte_event_dev_start(uint8_t dev_id);
+
+/**
+ * Stop an event device. The device can be restarted with a call to
+ * rte_event_dev_start()
+ *
+ * @param dev_id
+ *   Event device identifier.
+ */
+void
+rte_event_dev_stop(uint8_t dev_id);
+
+/**
+ * Close an event device. The device cannot be restarted!
+ *
+ * @param dev_id
+ *   Event device identifier
+ *
+ * @return
+ *  - 0 on successfully closing device
+ *  - <0 on failure to close device
+ *  - (-EAGAIN) if device is busy
+ */
+int
+rte_event_dev_close(uint8_t dev_id);
+
+/* Scheduler type definitions */
+#define RTE_SCHED_TYPE_ORDERED          0
+/**< Ordered scheduling
+ *
+ * Events from an ordered flow of an event queue can be scheduled to multiple
+ * ports for concurrent processing while maintaining the original event order.
+ * This scheme enables the user to achieve high single flow throughput by
+ * avoiding SW synchronization for ordering between ports which are bound to cores.
+ *
+ * The source flow ordering from an event queue is maintained when events are
+ * enqueued to their destination queue within the same ordered flow context.
+ * An event port holds the context until the application calls
+ * rte_event_dequeue_burst() from the same port, which implicitly releases
+ * the context.
+ * The user may allow the scheduler to release the context earlier than that
+ * by invoking rte_event_enqueue_burst() with the RTE_EVENT_OP_RELEASE operation.
+ *
+ * Events from the source queue appear in their original order when dequeued
+ * from a destination queue.
+ * Event ordering is based on the received event(s), but also other
+ * (newly allocated or stored) events are ordered when enqueued within the same
+ * ordered context. Events not enqueued (e.g. released or stored) within the
+ * context are considered missing from reordering and are skipped at this time
+ * (but can be ordered again within another context).
+ *
+ * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
+ */
+
+#define RTE_SCHED_TYPE_ATOMIC           1
+/**< Atomic scheduling
+ *
+ * Events from an atomic flow of an event queue can be scheduled only to a
+ * single port at a time. The port is guaranteed to have exclusive (atomic)
+ * access to the associated flow context, which enables the user to avoid SW
+ * synchronization. Atomic flows also help to maintain event ordering
+ * since only one port at a time can process events from a flow of an
+ * event queue.
+ *
+ * The atomic queue synchronization context is dedicated to the port until
+ * the application calls rte_event_dequeue_burst() from the same port,
+ * which implicitly releases the context. The user may allow the scheduler to
+ * release the context earlier than that by invoking rte_event_enqueue_burst()
+ * with the RTE_EVENT_OP_RELEASE operation.
+ *
+ * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
+ */
+
+#define RTE_SCHED_TYPE_PARALLEL         2
+/**< Parallel scheduling
+ *
+ * The scheduler performs priority scheduling, load balancing, etc. functions
+ * but does not provide additional event synchronization or ordering.
+ * It is free to schedule events from a single parallel flow of an event queue
+ * to multiple event ports for concurrent processing.
+ * The application is responsible for flow context synchronization and
+ * event ordering (SW synchronization).
+ *
+ * @see rte_event_queue_setup(), rte_event_dequeue_burst()
+ */
+
+/* Event types to classify the event source */
+#define RTE_EVENT_TYPE_ETHDEV           0x0
+/**< The event generated from ethdev subsystem */
+#define RTE_EVENT_TYPE_CRYPTODEV        0x1
+/**< The event generated from cryptodev subsystem */
+#define RTE_EVENT_TYPE_TIMERDEV         0x2
+/**< The event generated from timerdev subsystem */
+#define RTE_EVENT_TYPE_CPU              0x3
+/**< The event generated from cpu for pipelining.
+ * Application may use *sub_event_type* to further classify the event
+ */
+#define RTE_EVENT_TYPE_MAX              0x10
+/**< Maximum number of event types */
+
+/* Event enqueue operations */
+#define RTE_EVENT_OP_NEW                0
+/**< The event producers use this operation to inject a new event to the
+ * event device.
+ */
+#define RTE_EVENT_OP_FORWARD            1
+/**< The CPU uses this operation to forward the event to a different event queue,
+ * or to change it to a new application specific flow or schedule type, to enable
+ * pipelining
+ */
+#define RTE_EVENT_OP_RELEASE            2
+/**< Release the flow context associated with the schedule type.
+ *
+ * If the current flow's schedule type is *RTE_SCHED_TYPE_ATOMIC*
+ * then this operation hints the scheduler that the user has completed critical
+ * section processing in the current atomic context.
+ * The scheduler is now allowed to schedule events from the same flow from
+ * an event queue to another port. However, the context may still be held
+ * until the next rte_event_dequeue_burst() call; this operation allows, but
+ * does not force, the scheduler to release the context early.
+ *
+ * Early atomic context release may increase parallelism and thus system
+ * performance, but the user needs to design carefully the split into critical
+ * vs non-critical sections.
+ *
+ * If the current flow's schedule type is *RTE_SCHED_TYPE_ORDERED*
+ * then this operation hints the scheduler that the user has done all that is
+ * needed to maintain event order in the current ordered context.
+ * The scheduler is allowed to release the ordered context of this port and
+ * avoid reordering any following enqueues.
+ *
+ * Early ordered context release may increase parallelism and thus system
+ * performance.
+ *
+ * If the current flow's schedule type is *RTE_SCHED_TYPE_PARALLEL*,
+ * or no scheduling context is held, then this operation may be a NOOP,
+ * depending on the implementation.
+ *
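+ * A minimal sketch of an early release from an event port (illustrative only;
+ * the remaining *rte_event* fields are left zero, which may or may not be
+ * sufficient for a given implementation):
+ * \code{.c}
+ *	struct rte_event rel = { .op = RTE_EVENT_OP_RELEASE };
+ *
+ *	rte_event_enqueue_burst(dev_id, port_id, &rel, 1);
+ * \endcode
+ *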
+ */
+
+/**
+ * The generic *rte_event* structure to hold the event attributes
+ * for dequeue and enqueue operation
+ */
+struct rte_event {
+	/** WORD0 */
+	RTE_STD_C11
+	union {
+		uint64_t event;
+		/** Event attributes for dequeue or enqueue operation */
+		struct {
+			uint32_t flow_id:20;
+			/**< Targeted flow identifier for the enqueue and
+			 * dequeue operation.
+			 * The value must be in the range of
+			 * [0, nb_event_queue_flows - 1] which
+			 * previously supplied to rte_event_dev_configure().
+			 */
+			uint32_t sub_event_type:8;
+			/**< Sub-event types based on the event source.
+			 * @see RTE_EVENT_TYPE_CPU
+			 */
+			uint32_t event_type:4;
+			/**< Event type to classify the event source.
+			 * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
+			 */
+			uint8_t op:2;
+			/**< The type of event enqueue operation - new/forward/
+			 * etc. This field is not preserved across an instance
+			 * and is undefined on dequeue.
+			 * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
+			 */
+			uint8_t rsvd:4;
+			/**< Reserved for future use */
+			uint8_t sched_type:2;
+			/**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
+			 * associated with flow id on a given event queue
+			 * for the enqueue and dequeue operation.
+			 */
+			uint8_t queue_id;
+			/**< Targeted event queue identifier for the enqueue or
+			 * dequeue operation.
+			 * The value must be in the range of
+			 * [0, nb_event_queues - 1] which previously supplied to
+			 * rte_event_dev_configure().
+			 */
+			uint8_t priority;
+			/**< Event priority relative to other events in the
+			 * event queue. The requested priority should be in the
+			 * range of  [RTE_EVENT_DEV_PRIORITY_HIGHEST,
+			 * RTE_EVENT_DEV_PRIORITY_LOWEST].
+			 * The implementation shall normalize the requested
+			 * priority to supported priority value.
+			 * Valid when the device has
+			 * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
+			 */
+			uint8_t impl_opaque;
+			/**< Implementation specific opaque value.
+			 * An implementation may use this field to hold
+			 * implementation specific value to share between
+			 * dequeue and enqueue operation.
+			 * The application should not modify this field.
+			 */
+		};
+	};
+	/** WORD1 */
+	RTE_STD_C11
+	union {
+		uint64_t u64;
+		/**< Opaque 64-bit value */
+		void *event_ptr;
+		/**< Opaque event pointer */
+		struct rte_mbuf *mbuf;
+		/**< mbuf pointer if dequeued event is associated with mbuf */
+	};
+};
+
+/**
+ * Schedule one or more events in the event dev.
+ *
+ * An event dev implementation may define this as a NOOP, for instance if
+ * the event dev performs its scheduling in hardware.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ */
+void
+rte_event_schedule(uint8_t dev_id);
+
+/**
+ * Enqueue a burst of event objects or an event object supplied in *rte_event*
+ * structure on an  event device designated by its *dev_id* through the event
+ * port specified by *port_id*. Each event object specifies the event queue on
+ * which it will be enqueued.
+ *
+ * The *nb_events* parameter is the number of event objects to enqueue which are
+ * supplied in the *ev* array of *rte_event* structure.
+ *
+ * The rte_event_enqueue_burst() function returns the number of
+ * event objects it actually enqueued. A return value equal to *nb_events*
+ * means that all event objects have been enqueued.
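+ *
+ * For illustration, a minimal sketch that retries until all events are
+ * enqueued (a real application may prefer to drop or back off instead of
+ * spinning):
+ * \code{.c}
+ *	uint16_t sent = 0;
+ *
+ *	while (sent < nb_events)
+ *		sent += rte_event_enqueue_burst(dev_id, port_id,
+ *						&ev[sent], nb_events - sent);
+ * \endcode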
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   which contain the event object enqueue operations to be processed.
+ * @param nb_events
+ *   The number of event objects to enqueue, typically number of
+ *   rte_event_port_enqueue_depth() available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The
+ *   return value can be less than the value of the *nb_events* parameter when
+ *   the event device's queue is full or if invalid parameters are specified in a
+ *   *rte_event*. If the return value is less than *nb_events*, the remaining
+ *   events at the end of ev[] are not consumed, and the caller has to take care
+ *   of them.
+ *
+ * @see rte_event_port_enqueue_depth()
+ */
+uint16_t
+rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
+			const struct rte_event ev[], uint16_t nb_events);
+
+/**
+ * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
+ *
+ * If the device is configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT flag
+ * then an application can use this function to convert a timeout value in
+ * nanoseconds to an implementation specific timeout value to be supplied to
+ * rte_event_dequeue_burst()
+ *
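+ * For example, a sketch converting a 100 microsecond timeout (an arbitrary
+ * value) into ticks for rte_event_dequeue_burst() (ev[] and nb_events are
+ * assumed to be provided by the caller):
+ * \code{.c}
+ *	uint64_t timeout_ticks;
+ *	uint16_t nb_rx = 0;
+ *
+ *	if (rte_event_dequeue_timeout_ticks(dev_id, 100 * 1000, &timeout_ticks) == 0)
+ *		nb_rx = rte_event_dequeue_burst(dev_id, port_id, ev, nb_events,
+ *						timeout_ticks);
+ * \endcode
+ *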
+ * @param dev_id
+ *   The identifier of the device.
+ * @param ns
+ *   Wait time in nanosecond
+ * @param[out] timeout_ticks
+ *   Value for the *timeout_ticks* parameter in rte_event_dequeue_burst()
+ *
+ * @return
+ *  - 0 on success.
+ *  - <0 on failure.
+ *
+ * @see rte_event_dequeue_burst(), RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
+ * @see rte_event_dev_configure()
+ *
+ */
+int
+rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
+					uint64_t *timeout_ticks);
+
+/**
+ * Dequeue a burst of event objects or an event object from the event port
+ * designated by its *event_port_id*, on an event device designated
+ * by its *dev_id*.
+ *
+ * rte_event_dequeue_burst() does not dictate the specifics of scheduling
+ * algorithm as each eventdev driver may have different criteria to schedule
+ * an event. However, in general, from an application perspective scheduler may
+ * use the following scheme to dispatch an event to the port.
+ *
+ * 1) Selection of event queue based on
+ *   a) The list of event queues linked to the event port.
+ *   b) If the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability then event
+ *   queue selection from the list is based on event queue priority relative to
+ *   other event queues supplied as *priority* in rte_event_queue_setup()
+ *   c) If the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability then event
+ *   queue selection from the list is based on event priority supplied as
+ *   *priority* in rte_event_enqueue_burst()
+ * 2) Selection of event
+ *   a) The number of flows available in selected event queue.
+ *   b) Schedule type method associated with the event
+ *
+ * The *nb_events* parameter is the maximum number of event objects to dequeue
+ * which are returned in the *ev* array of *rte_event* structure.
+ *
+ * The rte_event_dequeue_burst() function returns the number of event objects
+ * it actually dequeued. A return value equal to *nb_events* means that all
+ * event objects have been dequeued.
+ *
+ * The number of events dequeued is the number of scheduler contexts held by
+ * this port. These contexts are automatically released in the next
+ * rte_event_dequeue_burst() invocation, or invoking rte_event_enqueue_burst()
+ * with RTE_EVENT_OP_RELEASE operation can be used to release the
+ * contexts early.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param[out] ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   for output to be populated with the dequeued event objects.
+ * @param nb_events
+ *   The maximum number of event objects to dequeue, typically number of
+ *   rte_event_port_dequeue_depth() available for this port.
+ *
+ * @param timeout_ticks
+ *   - 0 no-wait, returns immediately if there is no event.
+ *   - >0 wait for the event. If the device is configured with
+ *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function will wait until
+ *   an event is available or until *timeout_ticks* time has elapsed.
+ *   If the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
+ *   then this function will wait until an event is available or until the
+ *   *dequeue_timeout_ns* ns which was previously supplied to
+ *   rte_event_dev_configure()
+ *
+ * @return
+ * The number of event objects actually dequeued from the port. The return
+ * value can be less than the value of the *nb_events* parameter when the
+ * event port's queue is not full.
+ *
+ * @see rte_event_port_dequeue_depth()
+ */
+uint16_t
+rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
+			uint16_t nb_events, uint64_t timeout_ticks);
+
+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id* with associated service priority
+ * supplied in *priorities* on the event device designated by its *dev_id*.
+ *
+ * The link establishment shall enable the event port *port_id* to receive
+ * events from the specified event queue(s) supplied in *queues*
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an event
+ * port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ *   Points to an array of *nb_links* event queues to be linked
+ *   to the event port.
+ *   NULL value is allowed, in which case this function links all the
+ *   *nb_event_queues* event queues previously supplied to
+ *   rte_event_dev_configure() to the event port *port_id*
+ *
+ * @param priorities
+ *   Points to an array of *nb_links* service priorities associated with each
+ *   event queue link to event port.
+ *   The priority defines the event port's servicing priority for the
+ *   event queue, which may be ignored by an implementation.
+ *   The requested priority should be in the range of
+ *   [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ *   The implementation shall normalize the requested priority to an
+ *   implementation-supported priority value.
+ *   NULL value is allowed, in which case this function links the event queues
+ *   with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority
+ *
+ * @param nb_links
+ *   The number of links to establish
+ *
+ * @return
+ * The number of links actually established. The return value can be less than
+ * the value of the *nb_links* parameter when the implementation has a
+ * limitation on a specific queue to port link establishment or if invalid
+ * parameters are specified in *queues*.
+ * If the return value is less than *nb_links*, the remaining links at the end
+ * of queues[] are not established, and the caller has to take care of them.
+ * If the return value is less than *nb_links*, the implementation shall update
+ * rte_errno accordingly. Possible rte_errno values are:
+ * (-EDQUOT) Quota exceeded (the application tried to link a queue configured
+ *  with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
+ * (-EINVAL) Invalid parameter
+ *
+ */
+int
+rte_event_port_link(uint8_t dev_id, uint8_t port_id,
+		    const uint8_t queues[], const uint8_t priorities[],
+		    uint16_t nb_links);
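
A minimal usage sketch of the above, assuming dev_id refers to an already
configured device and that queue/port identifiers 0 and 1 are illustrative:

	/* Illustrative only: link queues 0 and 1 to port 0 with different
	 * servicing priorities.
	 */
	static int
	setup_links(uint8_t dev_id)
	{
		const uint8_t queues[] = { 0, 1 };
		const uint8_t prios[] = { RTE_EVENT_DEV_PRIORITY_HIGHEST,
					  RTE_EVENT_DEV_PRIORITY_NORMAL };
		int nb_links = rte_event_port_link(dev_id, 0, queues, prios, 2);

		if (nb_links < 2)
			return -1; /* rte_errno holds the reason (EDQUOT/EINVAL) */
		return 0;
	}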
+
+/**
+ * Unlink multiple source event queues supplied in *queues* from the destination
+ * event port designated by its *port_id* on the event device designated
+ * by its *dev_id*.
+ *
+ * The unlink call shall disable the event port *port_id* from receiving
+ * events from the specified event queue(s) supplied in *queues*
+ *
+ * Event queue(s) can be unlinked from an event port at runtime without
+ * re-configuring the device.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ *   Points to an array of *nb_unlinks* event queues to be unlinked
+ *   from the event port.
+ *   NULL value is allowed, in which case this function unlinks all the
+ *   event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ *   The number of unlinks to establish
+ *
+ * @return
+ * The number of unlinks actually performed. The return value can be less
+ * than the value of the *nb_unlinks* parameter when the implementation has a
+ * limitation on a specific queue to port unlink establishment or
+ * if invalid parameters are specified.
+ * If the return value is less than *nb_unlinks*, the remaining queues at the
+ * end of queues[] are not unlinked, and the caller has to take care of them.
+ * If the return value is less than *nb_unlinks*, the implementation shall
+ * update rte_errno accordingly. Possible rte_errno values are:
+ * (-EINVAL) Invalid parameter
+ *
+ */
+int
+rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
+		      uint8_t queues[], uint16_t nb_unlinks);
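
For instance, passing a NULL queue list is a convenient way to drop every
existing link before re-building them; dev_id and port_id below are
illustrative placeholders:

	/* Illustrative only: a NULL queue list unlinks every queue currently
	 * linked to the port.
	 */
	int nb_unlinked = rte_event_port_unlink(dev_id, port_id, NULL, 0);

	if (nb_unlinked < 0) {
		/* error: rte_errno gives the reason */
	}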
+
+/**
+ * Retrieve the list of source event queues and their associated service
+ * priorities linked to the destination event port designated by its *port_id*
+ * on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier.
+ *
+ * @param[out] queues
+ *   Points to an array of *queues* for output.
+ *   The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ *   store the event queue(s) linked with event port *port_id*
+ *
+ * @param[out] priorities
+ *   Points to an array of *priorities* for output.
+ *   The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ *   store the service priority associated with each event queue linked
+ *
+ * @return
+ * The number of links established on the event port designated by its
+ *  *port_id*.
+ * - <0 on failure.
+ *
+ */
+int
+rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
+			 uint8_t queues[], uint8_t priorities[]);
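
A sketch of inspecting the current links through this API; dev_id and port_id
are assumed to refer to a configured device and port:

	/* Illustrative only: print the queues currently linked to a port. */
	static void
	show_links(uint8_t dev_id, uint8_t port_id)
	{
		uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
		uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
		int i, nb_links;

		nb_links = rte_event_port_links_get(dev_id, port_id,
						    queues, priorities);
		for (i = 0; i < nb_links; i++)
			printf("queue %u -> port %u, priority %u\n",
			       queues[i], port_id, priorities[i]);
	}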
+
+/**
+ * Dump internal information about *dev_id* to the FILE* provided in *f*.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param f
+ *   A pointer to a file for output
+ *
+ * @return
+ *   - 0: on success
+ *   - <0: on failure.
+ */
+int
+rte_event_dev_dump(uint8_t dev_id, FILE *f);
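
For example, dumping the device state to the console might look like the
line below (dev_id is illustrative); a negative value is returned if the PMD
does not implement the dump operation:

	rte_event_dev_dump(dev_id, stdout);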
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_EVENTDEV_H_ */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v4 2/6] eventdev: define southbound driver interface
  2016-12-21  9:25     ` [PATCH v4 " Jerin Jacob
  2016-12-21  9:25       ` [PATCH v4 1/6] eventdev: introduce event driven programming model Jerin Jacob
@ 2016-12-21  9:25       ` Jerin Jacob
  2017-02-02 11:19         ` Nipun Gupta
  2016-12-21  9:25       ` [PATCH v4 3/6] eventdev: implement the northbound APIs Jerin Jacob
                         ` (3 subsequent siblings)
  5 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-12-21  9:25 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_eventdev/rte_eventdev.h     |  38 +++++
 lib/librte_eventdev/rte_eventdev_pmd.h | 294 +++++++++++++++++++++++++++++++++
 2 files changed, 332 insertions(+)
 create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h

diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index b2bc471..014e1ec 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -972,6 +972,44 @@ struct rte_event {
 	};
 };
 
+struct rte_eventdev_ops;
+struct rte_eventdev;
+
+typedef void (*event_schedule_t)(struct rte_eventdev *dev);
+/**< @internal Schedule one or more events in the event dev. */
+
+typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
+/**< @internal Enqueue event on port of a device */
+
+typedef uint16_t (*event_enqueue_burst_t)(void *port,
+			const struct rte_event ev[], uint16_t nb_events);
+/**< @internal Enqueue burst of events on port of a device */
+
+typedef uint16_t (*event_dequeue_t)(void *port, struct rte_event *ev,
+		uint64_t timeout_ticks);
+/**< @internal Dequeue event from port of a device */
+
+typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
+		uint16_t nb_events, uint64_t timeout_ticks);
+/**< @internal Dequeue burst of events from port of a device */
+
+
+/** @internal The data structure associated with each event device. */
+struct rte_eventdev {
+	event_schedule_t schedule;
+	/**< Pointer to PMD schedule function. */
+	event_enqueue_t enqueue;
+	/**< Pointer to PMD enqueue function. */
+	event_enqueue_burst_t enqueue_burst;
+	/**< Pointer to PMD enqueue burst function. */
+	event_dequeue_t dequeue;
+	/**< Pointer to PMD dequeue function. */
+	event_dequeue_burst_t dequeue_burst;
+	/**< Pointer to PMD dequeue burst function. */
+
+} __rte_cache_aligned;
+
+
 /**
  * Schedule one or more events in the event dev.
  *
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
new file mode 100644
index 0000000..40552aa
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -0,0 +1,294 @@
+/*
+ *
+ *   Copyright(c) 2016 Cavium networks. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_EVENTDEV_PMD_H_
+#define _RTE_EVENTDEV_PMD_H_
+
+/** @file
+ * RTE Event PMD APIs
+ *
+ * @note
+ * These APIs are for event PMDs only; user applications should not call
+ * them directly.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "rte_eventdev.h"
+
+/**
+ * Definitions of all functions exported by a driver through the
+ * generic structure of type *event_dev_ops* supplied in the
+ * *rte_eventdev* structure associated with a device.
+ */
+
+/**
+ * Get device information of a device.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param dev_info
+ *   Event device information structure
+ */
+typedef void (*eventdev_info_get_t)(struct rte_eventdev *dev,
+		struct rte_event_dev_info *dev_info);
+
+/**
+ * Configure a device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   Returns 0 on success
+ */
+typedef int (*eventdev_configure_t)(const struct rte_eventdev *dev);
+
+/**
+ * Start a configured device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ *   Returns 0 on success
+ */
+typedef int (*eventdev_start_t)(struct rte_eventdev *dev);
+
+/**
+ * Stop a configured device.
+ *
+ * @param dev
+ *   Event device pointer
+ */
+typedef void (*eventdev_stop_t)(struct rte_eventdev *dev);
+
+/**
+ * Close a configured device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @return
+ * - 0 on success
+ * - (-EAGAIN) if can't close as device is busy
+ */
+typedef int (*eventdev_close_t)(struct rte_eventdev *dev);
+
+/**
+ * Retrieve the default event queue configuration.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param[out] queue_conf
+ *   Event queue configuration structure
+ *
+ */
+typedef void (*eventdev_queue_default_conf_get_t)(struct rte_eventdev *dev,
+		uint8_t queue_id, struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Setup an event queue.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ * @param queue_conf
+ *   Event queue configuration structure
+ *
+ * @return
+ *   Returns 0 on success.
+ */
+typedef int (*eventdev_queue_setup_t)(struct rte_eventdev *dev,
+		uint8_t queue_id,
+		const struct rte_event_queue_conf *queue_conf);
+
+/**
+ * Release resources allocated by given event queue.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param queue_id
+ *   Event queue index
+ *
+ */
+typedef void (*eventdev_queue_release_t)(struct rte_eventdev *dev,
+		uint8_t queue_id);
+
+/**
+ * Retrieve the default event port configuration.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param port_id
+ *   Event port index
+ * @param[out] port_conf
+ *   Event port configuration structure
+ *
+ */
+typedef void (*eventdev_port_default_conf_get_t)(struct rte_eventdev *dev,
+		uint8_t port_id, struct rte_event_port_conf *port_conf);
+
+/**
+ * Setup an event port.
+ *
+ * @param dev
+ *   Event device pointer
+ * @param port_id
+ *   Event port index
+ * @param port_conf
+ *   Event port configuration structure
+ *
+ * @return
+ *   Returns 0 on success.
+ */
+typedef int (*eventdev_port_setup_t)(struct rte_eventdev *dev,
+		uint8_t port_id,
+		const struct rte_event_port_conf *port_conf);
+
+/**
+ * Release memory resources allocated by given event port.
+ *
+ * @param port
+ *   Event port pointer
+ *
+ */
+typedef void (*eventdev_port_release_t)(void *port);
+
+/**
+ * Link multiple source event queues to destination event port.
+ *
+ * @param port
+ *   Event port pointer
+ * @param link
+ *   Points to an array of *nb_links* event queues to be linked
+ *   to the event port.
+ * @param priorities
+ *   Points to an array of *nb_links* service priorities associated with each
+ *   event queue link to event port.
+ * @param nb_links
+ *   The number of links to establish
+ *
+ * @return
+ *   Returns 0 on success.
+ *
+ */
+typedef int (*eventdev_port_link_t)(void *port,
+		const uint8_t queues[], const uint8_t priorities[],
+		uint16_t nb_links);
+
+/**
+ * Unlink multiple source event queues from destination event port.
+ *
+ * @param port
+ *   Event port pointer
+ * @param queues
+ *   An array of *nb_unlinks* event queues to be unlinked from the event port.
+ * @param nb_unlinks
+ *   The number of unlinks to establish
+ *
+ * @return
+ *   Returns 0 on success.
+ *
+ */
+typedef int (*eventdev_port_unlink_t)(void *port,
+		uint8_t queues[], uint16_t nb_unlinks);
+
+/**
+ * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
+ *
+ * @param dev
+ *   Event device pointer
+ * @param ns
+ *   Wait time in nanosecond
+ * @param[out] timeout_ticks
+ *   Value for the *timeout_ticks* parameter in rte_event_dequeue_burst()
+ *
+ */
+typedef void (*eventdev_dequeue_timeout_ticks_t)(struct rte_eventdev *dev,
+		uint64_t ns, uint64_t *timeout_ticks);
+
+/**
+ * Dump internal information
+ *
+ * @param dev
+ *   Event device pointer
+ * @param f
+ *   A pointer to a file for output
+ *
+ */
+typedef void (*eventdev_dump_t)(struct rte_eventdev *dev, FILE *f);
+
+/** Event device operations function pointer table */
+struct rte_eventdev_ops {
+	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
+	eventdev_configure_t dev_configure;	/**< Configure device. */
+	eventdev_start_t dev_start;		/**< Start device. */
+	eventdev_stop_t dev_stop;		/**< Stop device. */
+	eventdev_close_t dev_close;		/**< Close device. */
+
+	eventdev_queue_default_conf_get_t queue_def_conf;
+	/**< Get default queue configuration. */
+	eventdev_queue_setup_t queue_setup;
+	/**< Set up an event queue. */
+	eventdev_queue_release_t queue_release;
+	/**< Release an event queue. */
+
+	eventdev_port_default_conf_get_t port_def_conf;
+	/**< Get default port configuration. */
+	eventdev_port_setup_t port_setup;
+	/**< Set up an event port. */
+	eventdev_port_release_t port_release;
+	/**< Release an event port. */
+
+	eventdev_port_link_t port_link;
+	/**< Link event queues to an event port. */
+	eventdev_port_unlink_t port_unlink;
+	/**< Unlink event queues from an event port. */
+	eventdev_dequeue_timeout_ticks_t timeout_ticks;
+	/**< Converts ns to *timeout_ticks* value for rte_event_dequeue_burst() */
+	eventdev_dump_t dump;
+	/**< Dump internal information */
+};
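
A driver fills this table with its own callbacks and points the device's
dev_ops at it; a hypothetical sketch is shown below, where all the my_*
functions are placeholders implemented by the PMD and are not part of this
patch:

	/* Hypothetical PMD ops table; every my_* callback is a placeholder. */
	static const struct rte_eventdev_ops my_eventdev_ops = {
		.dev_infos_get  = my_info_get,
		.dev_configure  = my_configure,
		.dev_start      = my_start,
		.dev_stop       = my_stop,
		.dev_close      = my_close,
		.queue_def_conf = my_queue_def_conf,
		.queue_setup    = my_queue_setup,
		.queue_release  = my_queue_release,
		.port_def_conf  = my_port_def_conf,
		.port_setup     = my_port_setup,
		.port_release   = my_port_release,
		.port_link      = my_port_link,
		.port_unlink    = my_port_unlink,
		.timeout_ticks  = my_timeout_ticks,
		.dump           = my_dump,
	};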
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_EVENTDEV_PMD_H_ */
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v4 3/6] eventdev: implement the northbound APIs
  2016-12-21  9:25     ` [PATCH v4 " Jerin Jacob
  2016-12-21  9:25       ` [PATCH v4 1/6] eventdev: introduce event driven programming model Jerin Jacob
  2016-12-21  9:25       ` [PATCH v4 2/6] eventdev: define southbound driver interface Jerin Jacob
@ 2016-12-21  9:25       ` Jerin Jacob
  2017-02-02 11:19         ` Nipun Gupta
  2016-12-21  9:25       ` [PATCH v4 4/6] eventdev: implement PMD registration functions Jerin Jacob
                         ` (2 subsequent siblings)
  5 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-12-21  9:25 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

This patch implements the northbound eventdev API interface using
the southbound driver interface.

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 config/common_base                           |   6 +
 lib/Makefile                                 |   1 +
 lib/librte_eal/common/include/rte_log.h      |   1 +
 lib/librte_eventdev/Makefile                 |  57 ++
 lib/librte_eventdev/rte_eventdev.c           | 986 +++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev.h           | 106 ++-
 lib/librte_eventdev/rte_eventdev_pmd.h       | 109 +++
 lib/librte_eventdev/rte_eventdev_version.map |  33 +
 mk/rte.app.mk                                |   1 +
 9 files changed, 1294 insertions(+), 6 deletions(-)
 create mode 100644 lib/librte_eventdev/Makefile
 create mode 100644 lib/librte_eventdev/rte_eventdev.c
 create mode 100644 lib/librte_eventdev/rte_eventdev_version.map

diff --git a/config/common_base b/config/common_base
index 47a2dc0..3a17dfb 100644
--- a/config/common_base
+++ b/config/common_base
@@ -412,6 +412,12 @@ CONFIG_RTE_LIBRTE_PMD_ZUC_DEBUG=n
 CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO=y
 
 #
+# Compile generic event device library
+#
+CONFIG_RTE_LIBRTE_EVENTDEV=y
+CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
+CONFIG_RTE_EVENT_MAX_DEVS=16
+CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/lib/Makefile b/lib/Makefile
index 990f23a..1a067bf 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -41,6 +41,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile
 DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline
 DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
+DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index 671e274..a6dd7c8 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -79,6 +79,7 @@ extern struct rte_logs rte_logs;
 #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */
 #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */
 #define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */
+#define RTE_LOGTYPE_EVENTDEV 0x00040000 /**< Log related to eventdev. */
 
 /* these log types can be used in an application */
 #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
new file mode 100644
index 0000000..dac0663
--- /dev/null
+++ b/lib/librte_eventdev/Makefile
@@ -0,0 +1,57 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium networks. All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+# library name
+LIB = librte_eventdev.a
+
+# library version
+LIBABIVER := 1
+
+# build flags
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# library source files
+SRCS-y += rte_eventdev.c
+
+# export include files
+SYMLINK-y-include += rte_eventdev.h
+SYMLINK-y-include += rte_eventdev_pmd.h
+
+# versioning export map
+EXPORT_MAP := rte_eventdev_version.map
+
+# library dependencies
+DEPDIRS-y += lib/librte_eal
+DEPDIRS-y += lib/librte_mbuf
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
new file mode 100644
index 0000000..b13eb00
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -0,0 +1,986 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Cavium networks. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stdarg.h>
+#include <errno.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <sys/types.h>
+#include <sys/queue.h>
+
+#include <rte_byteorder.h>
+#include <rte_log.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_eal.h>
+#include <rte_per_lcore.h>
+#include <rte_lcore.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_common.h>
+#include <rte_malloc.h>
+#include <rte_errno.h>
+
+#include "rte_eventdev.h"
+#include "rte_eventdev_pmd.h"
+
+struct rte_eventdev rte_event_devices[RTE_EVENT_MAX_DEVS];
+
+struct rte_eventdev *rte_eventdevs = &rte_event_devices[0];
+
+static struct rte_eventdev_global eventdev_globals = {
+	.nb_devs		= 0
+};
+
+struct rte_eventdev_global *rte_eventdev_globals = &eventdev_globals;
+
+/* Event dev north bound API implementation */
+
+uint8_t
+rte_event_dev_count(void)
+{
+	return rte_eventdev_globals->nb_devs;
+}
+
+int
+rte_event_dev_get_dev_id(const char *name)
+{
+	int i;
+
+	if (!name)
+		return -EINVAL;
+
+	for (i = 0; i < rte_eventdev_globals->nb_devs; i++)
+		if ((strcmp(rte_event_devices[i].data->name, name)
+				== 0) &&
+				(rte_event_devices[i].attached ==
+						RTE_EVENTDEV_ATTACHED))
+			return i;
+	return -ENODEV;
+}
+
+int
+rte_event_dev_socket_id(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	return dev->data->socket_id;
+}
+
+int
+rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (dev_info == NULL)
+		return -EINVAL;
+
+	memset(dev_info, 0, sizeof(struct rte_event_dev_info));
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	(*dev->dev_ops->dev_infos_get)(dev, dev_info);
+
+	dev_info->dequeue_timeout_ns = dev->data->dev_conf.dequeue_timeout_ns;
+
+	dev_info->pci_dev = dev->pci_dev;
+	return 0;
+}
+
+static inline int
+rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
+{
+	uint8_t old_nb_queues = dev->data->nb_queues;
+	uint8_t *queues_prio;
+	unsigned int i;
+
+	RTE_EDEV_LOG_DEBUG("Setup %d queues on device %u", nb_queues,
+			 dev->data->dev_id);
+
+	/* First time configuration */
+	if (dev->data->queues_prio == NULL && nb_queues != 0) {
+		/* Allocate memory to store queue priority */
+		dev->data->queues_prio = rte_zmalloc_socket(
+				"eventdev->data->queues_prio",
+				sizeof(dev->data->queues_prio[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->queues_prio == NULL) {
+			dev->data->nb_queues = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for queue priority,"
+					"nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+	/* Re-configure */
+	} else if (dev->data->queues_prio != NULL && nb_queues != 0) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
+
+		for (i = nb_queues; i < old_nb_queues; i++)
+			(*dev->dev_ops->queue_release)(dev, i);
+
+		/* Re allocate memory to store queue priority */
+		queues_prio = dev->data->queues_prio;
+		queues_prio = rte_realloc(queues_prio,
+				sizeof(queues_prio[0]) * nb_queues,
+				RTE_CACHE_LINE_SIZE);
+		if (queues_prio == NULL) {
+			RTE_EDEV_LOG_ERR("failed to realloc queue priority,"
+						" nb_queues %u", nb_queues);
+			return -(ENOMEM);
+		}
+		dev->data->queues_prio = queues_prio;
+
+		if (nb_queues > old_nb_queues) {
+			uint8_t new_qs = nb_queues - old_nb_queues;
+
+			memset(queues_prio + old_nb_queues, 0,
+				sizeof(queues_prio[0]) * new_qs);
+		}
+	} else if (dev->data->queues_prio != NULL && nb_queues == 0) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_release, -ENOTSUP);
+
+		for (i = nb_queues; i < old_nb_queues; i++)
+			(*dev->dev_ops->queue_release)(dev, i);
+	}
+
+	dev->data->nb_queues = nb_queues;
+	return 0;
+}
+
+static inline int
+rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports)
+{
+	uint8_t old_nb_ports = dev->data->nb_ports;
+	void **ports;
+	uint16_t *links_map;
+	uint8_t *ports_dequeue_depth;
+	uint8_t *ports_enqueue_depth;
+	unsigned int i;
+
+	RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
+			 dev->data->dev_id);
+
+	/* First time configuration */
+	if (dev->data->ports == NULL && nb_ports != 0) {
+		dev->data->ports = rte_zmalloc_socket("eventdev->data->ports",
+				sizeof(dev->data->ports[0]) * nb_ports,
+				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->ports == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for port meta data,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Allocate memory to store ports dequeue depth */
+		dev->data->ports_dequeue_depth =
+			rte_zmalloc_socket("eventdev->ports_dequeue_depth",
+			sizeof(dev->data->ports_dequeue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->ports_dequeue_depth == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for port deq meta,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Allocate memory to store ports enqueue depth */
+		dev->data->ports_enqueue_depth =
+			rte_zmalloc_socket("eventdev->ports_enqueue_depth",
+			sizeof(dev->data->ports_enqueue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->ports_enqueue_depth == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for port enq meta,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Allocate memory to store queue to port link connection */
+		dev->data->links_map =
+			rte_zmalloc_socket("eventdev->links_map",
+			sizeof(dev->data->links_map[0]) * nb_ports *
+			RTE_EVENT_MAX_QUEUES_PER_DEV,
+			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
+		if (dev->data->links_map == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to get mem for port_map area,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
+
+		ports = dev->data->ports;
+		ports_dequeue_depth = dev->data->ports_dequeue_depth;
+		ports_enqueue_depth = dev->data->ports_enqueue_depth;
+		links_map = dev->data->links_map;
+
+		for (i = nb_ports; i < old_nb_ports; i++)
+			(*dev->dev_ops->port_release)(ports[i]);
+
+		/* Realloc memory for ports */
+		ports = rte_realloc(ports, sizeof(ports[0]) * nb_ports,
+				RTE_CACHE_LINE_SIZE);
+		if (ports == NULL) {
+			RTE_EDEV_LOG_ERR("failed to realloc port meta data,"
+						" nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Realloc memory for ports_dequeue_depth */
+		ports_dequeue_depth = rte_realloc(ports_dequeue_depth,
+			sizeof(ports_dequeue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE);
+		if (ports_dequeue_depth == NULL) {
+			RTE_EDEV_LOG_ERR("failed to realloc port dequeue meta,"
+						" nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Realloc memory for ports_enqueue_depth */
+		ports_enqueue_depth = rte_realloc(ports_enqueue_depth,
+			sizeof(ports_enqueue_depth[0]) * nb_ports,
+			RTE_CACHE_LINE_SIZE);
+		if (ports_enqueue_depth == NULL) {
+			RTE_EDEV_LOG_ERR("failed to realloc port enqueue meta,"
+						" nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		/* Realloc memory to store queue to port link connection */
+		links_map = rte_realloc(links_map,
+			sizeof(dev->data->links_map[0]) * nb_ports *
+			RTE_EVENT_MAX_QUEUES_PER_DEV,
+			RTE_CACHE_LINE_SIZE);
+		if (links_map == NULL) {
+			dev->data->nb_ports = 0;
+			RTE_EDEV_LOG_ERR("failed to realloc mem for port_map,"
+					"nb_ports %u", nb_ports);
+			return -(ENOMEM);
+		}
+
+		if (nb_ports > old_nb_ports) {
+			uint8_t new_ps = nb_ports - old_nb_ports;
+
+			memset(ports + old_nb_ports, 0,
+				sizeof(ports[0]) * new_ps);
+			memset(ports_dequeue_depth + old_nb_ports, 0,
+				sizeof(ports_dequeue_depth[0]) * new_ps);
+			memset(ports_enqueue_depth + old_nb_ports, 0,
+				sizeof(ports_enqueue_depth[0]) * new_ps);
+			memset(links_map +
+				(old_nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV),
+				0, sizeof(links_map[0]) * new_ps *
+				RTE_EVENT_MAX_QUEUES_PER_DEV);
+		}
+
+		dev->data->ports = ports;
+		dev->data->ports_dequeue_depth = ports_dequeue_depth;
+		dev->data->ports_enqueue_depth = ports_enqueue_depth;
+		dev->data->links_map = links_map;
+	} else if (dev->data->ports != NULL && nb_ports == 0) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -ENOTSUP);
+
+		ports = dev->data->ports;
+		for (i = nb_ports; i < old_nb_ports; i++)
+			(*dev->dev_ops->port_release)(ports[i]);
+	}
+
+	dev->data->nb_ports = nb_ports;
+	return 0;
+}
+
+int
+rte_event_dev_configure(uint8_t dev_id,
+			const struct rte_event_dev_config *dev_conf)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_dev_info info;
+	int diag;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_configure, -ENOTSUP);
+
+	if (dev->data->dev_started) {
+		RTE_EDEV_LOG_ERR(
+		    "device %d must be stopped to allow configuration", dev_id);
+		return -EBUSY;
+	}
+
+	if (dev_conf == NULL)
+		return -EINVAL;
+
+	(*dev->dev_ops->dev_infos_get)(dev, &info);
+
+	/* Check dequeue_timeout_ns value is in limit */
+	if (!(dev_conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)) {
+		if (dev_conf->dequeue_timeout_ns < info.min_dequeue_timeout_ns
+			|| dev_conf->dequeue_timeout_ns >
+				 info.max_dequeue_timeout_ns) {
+			RTE_EDEV_LOG_ERR("dev%d invalid dequeue_timeout_ns=%d"
+			" min_dequeue_timeout_ns=%d max_dequeue_timeout_ns=%d",
+			dev_id, dev_conf->dequeue_timeout_ns,
+			info.min_dequeue_timeout_ns,
+			info.max_dequeue_timeout_ns);
+			return -EINVAL;
+		}
+	}
+
+	/* Check nb_events_limit is in limit */
+	if (dev_conf->nb_events_limit > info.max_num_events) {
+		RTE_EDEV_LOG_ERR("dev%d nb_events_limit=%d > max_num_events=%d",
+		dev_id, dev_conf->nb_events_limit, info.max_num_events);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_queues is in limit */
+	if (!dev_conf->nb_event_queues) {
+		RTE_EDEV_LOG_ERR("dev%d nb_event_queues cannot be zero",
+					dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queues > info.max_event_queues) {
+		RTE_EDEV_LOG_ERR("%d nb_event_queues=%d > max_event_queues=%d",
+		dev_id, dev_conf->nb_event_queues, info.max_event_queues);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_ports is in limit */
+	if (!dev_conf->nb_event_ports) {
+		RTE_EDEV_LOG_ERR("dev%d nb_event_ports cannot be zero", dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_ports > info.max_event_ports) {
+		RTE_EDEV_LOG_ERR("id%d nb_event_ports=%d > max_event_ports= %d",
+		dev_id, dev_conf->nb_event_ports, info.max_event_ports);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_queue_flows is in limit */
+	if (!dev_conf->nb_event_queue_flows) {
+		RTE_EDEV_LOG_ERR("dev%d nb_flows cannot be zero", dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_queue_flows > info.max_event_queue_flows) {
+		RTE_EDEV_LOG_ERR("dev%d nb_flows=%x > max_flows=%x",
+		dev_id, dev_conf->nb_event_queue_flows,
+		info.max_event_queue_flows);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_port_dequeue_depth is in limit */
+	if (!dev_conf->nb_event_port_dequeue_depth) {
+		RTE_EDEV_LOG_ERR("dev%d nb_dequeue_depth cannot be zero",
+					dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_port_dequeue_depth >
+			 info.max_event_port_dequeue_depth) {
+		RTE_EDEV_LOG_ERR("dev%d nb_dq_depth=%d > max_dq_depth=%d",
+		dev_id, dev_conf->nb_event_port_dequeue_depth,
+		info.max_event_port_dequeue_depth);
+		return -EINVAL;
+	}
+
+	/* Check nb_event_port_enqueue_depth is in limit */
+	if (!dev_conf->nb_event_port_enqueue_depth) {
+		RTE_EDEV_LOG_ERR("dev%d nb_enqueue_depth cannot be zero",
+					dev_id);
+		return -EINVAL;
+	}
+	if (dev_conf->nb_event_port_enqueue_depth >
+			 info.max_event_port_enqueue_depth) {
+		RTE_EDEV_LOG_ERR("dev%d nb_enq_depth=%d > max_enq_depth=%d",
+		dev_id, dev_conf->nb_event_port_enqueue_depth,
+		info.max_event_port_enqueue_depth);
+		return -EINVAL;
+	}
+
+	/* Copy the dev_conf parameter into the dev structure */
+	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+
+	/* Setup new number of queues and reconfigure device. */
+	diag = rte_event_dev_queue_config(dev, dev_conf->nb_event_queues);
+	if (diag != 0) {
+		RTE_EDEV_LOG_ERR("dev%d rte_event_dev_queue_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Setup new number of ports and reconfigure device. */
+	diag = rte_event_dev_port_config(dev, dev_conf->nb_event_ports);
+	if (diag != 0) {
+		rte_event_dev_queue_config(dev, 0);
+		RTE_EDEV_LOG_ERR("dev%d rte_event_dev_port_config = %d",
+				dev_id, diag);
+		return diag;
+	}
+
+	/* Configure the device */
+	diag = (*dev->dev_ops->dev_configure)(dev);
+	if (diag != 0) {
+		RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+		rte_event_dev_queue_config(dev, 0);
+		rte_event_dev_port_config(dev, 0);
+	}
+
+	dev->data->event_dev_cap = info.event_dev_cap;
+	return diag;
+}
+
+static inline int
+is_valid_queue(struct rte_eventdev *dev, uint8_t queue_id)
+{
+	if (queue_id < dev->data->nb_queues && queue_id <
+				RTE_EVENT_MAX_QUEUES_PER_DEV)
+		return 1;
+	else
+		return 0;
+}
+
+int
+rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
+				 struct rte_event_queue_conf *queue_conf)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (queue_conf == NULL)
+		return -EINVAL;
+
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf, -ENOTSUP);
+	memset(queue_conf, 0, sizeof(struct rte_event_queue_conf));
+	(*dev->dev_ops->queue_def_conf)(dev, queue_id, queue_conf);
+	return 0;
+}
+
+static inline int
+is_valid_atomic_queue_conf(const struct rte_event_queue_conf *queue_conf)
+{
+	if (queue_conf && (
+		((queue_conf->event_queue_cfg &
+			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
+		((queue_conf->event_queue_cfg &
+			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY)
+		))
+		return 1;
+	else
+		return 0;
+}
+
+static inline int
+is_valid_ordered_queue_conf(const struct rte_event_queue_conf *queue_conf)
+{
+	if (queue_conf && (
+		((queue_conf->event_queue_cfg &
+			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_ALL_TYPES) ||
+		((queue_conf->event_queue_cfg &
+			RTE_EVENT_QUEUE_CFG_TYPE_MASK)
+			== RTE_EVENT_QUEUE_CFG_ORDERED_ONLY)
+		))
+		return 1;
+	else
+		return 0;
+}
+
+
+int
+rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
+		      const struct rte_event_queue_conf *queue_conf)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_queue_conf def_conf;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (!is_valid_queue(dev, queue_id)) {
+		RTE_EDEV_LOG_ERR("Invalid queue_id=%" PRIu8, queue_id);
+		return -EINVAL;
+	}
+
+	/* Check nb_atomic_flows limit */
+	if (is_valid_atomic_queue_conf(queue_conf)) {
+		if (queue_conf->nb_atomic_flows == 0 ||
+		    queue_conf->nb_atomic_flows >
+			dev->data->dev_conf.nb_event_queue_flows) {
+			RTE_EDEV_LOG_ERR(
+		"dev%d queue%d Invalid nb_atomic_flows=%d max_flows=%d",
+			dev_id, queue_id, queue_conf->nb_atomic_flows,
+			dev->data->dev_conf.nb_event_queue_flows);
+			return -EINVAL;
+		}
+	}
+
+	/* Check nb_atomic_order_sequences limit */
+	if (is_valid_ordered_queue_conf(queue_conf)) {
+		if (queue_conf->nb_atomic_order_sequences == 0 ||
+		    queue_conf->nb_atomic_order_sequences >
+			dev->data->dev_conf.nb_event_queue_flows) {
+			RTE_EDEV_LOG_ERR(
+		"dev%d queue%d Invalid nb_atomic_order_seq=%d max_flows=%d",
+			dev_id, queue_id, queue_conf->nb_atomic_order_sequences,
+			dev->data->dev_conf.nb_event_queue_flows);
+			return -EINVAL;
+		}
+	}
+
+	if (dev->data->dev_started) {
+		RTE_EDEV_LOG_ERR(
+		    "device %d must be stopped to allow queue setup", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_setup, -ENOTSUP);
+
+	if (queue_conf == NULL) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_def_conf,
+					-ENOTSUP);
+		(*dev->dev_ops->queue_def_conf)(dev, queue_id, &def_conf);
+		def_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
+		queue_conf = &def_conf;
+	}
+
+	dev->data->queues_prio[queue_id] = queue_conf->priority;
+	return (*dev->dev_ops->queue_setup)(dev, queue_id, queue_conf);
+}
+
+uint8_t
+rte_event_queue_count(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->nb_queues;
+}
+
+uint8_t
+rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	if (dev->data->event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
+		return dev->data->queues_prio[queue_id];
+	else
+		return RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+static inline int
+is_valid_port(struct rte_eventdev *dev, uint8_t port_id)
+{
+	if (port_id < dev->data->nb_ports)
+		return 1;
+	else
+		return 0;
+}
+
+int
+rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
+				 struct rte_event_port_conf *port_conf)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (port_conf == NULL)
+		return -EINVAL;
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf, -ENOTSUP);
+	memset(port_conf, 0, sizeof(struct rte_event_port_conf));
+	(*dev->dev_ops->port_def_conf)(dev, port_id, port_conf);
+	return 0;
+}
+
+int
+rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
+		     const struct rte_event_port_conf *port_conf)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_port_conf def_conf;
+	int diag;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	/* Check new_event_threshold limit */
+	if ((port_conf && !port_conf->new_event_threshold) ||
+			(port_conf && port_conf->new_event_threshold >
+				 dev->data->dev_conf.nb_events_limit)) {
+		RTE_EDEV_LOG_ERR(
+		   "dev%d port%d Invalid event_threshold=%d nb_events_limit=%d",
+			dev_id, port_id, port_conf->new_event_threshold,
+			dev->data->dev_conf.nb_events_limit);
+		return -EINVAL;
+	}
+
+	/* Check dequeue_depth limit */
+	if ((port_conf && !port_conf->dequeue_depth) ||
+			(port_conf && port_conf->dequeue_depth >
+		dev->data->dev_conf.nb_event_port_dequeue_depth)) {
+		RTE_EDEV_LOG_ERR(
+		   "dev%d port%d Invalid dequeue depth=%d max_dequeue_depth=%d",
+			dev_id, port_id, port_conf->dequeue_depth,
+			dev->data->dev_conf.nb_event_port_dequeue_depth);
+		return -EINVAL;
+	}
+
+	/* Check enqueue_depth limit */
+	if ((port_conf && !port_conf->enqueue_depth) ||
+			(port_conf && port_conf->enqueue_depth >
+		dev->data->dev_conf.nb_event_port_enqueue_depth)) {
+		RTE_EDEV_LOG_ERR(
+		   "dev%d port%d Invalid enqueue depth=%d max_enqueue_depth=%d",
+			dev_id, port_id, port_conf->enqueue_depth,
+			dev->data->dev_conf.nb_event_port_enqueue_depth);
+		return -EINVAL;
+	}
+
+	if (dev->data->dev_started) {
+		RTE_EDEV_LOG_ERR(
+		    "device %d must be stopped to allow port setup", dev_id);
+		return -EBUSY;
+	}
+
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_setup, -ENOTSUP);
+
+	if (port_conf == NULL) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_def_conf,
+					-ENOTSUP);
+		(*dev->dev_ops->port_def_conf)(dev, port_id, &def_conf);
+		port_conf = &def_conf;
+	}
+
+	dev->data->ports_dequeue_depth[port_id] =
+			port_conf->dequeue_depth;
+	dev->data->ports_enqueue_depth[port_id] =
+			port_conf->enqueue_depth;
+
+	diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
+
+	/* Unlink all the queues from this port(default state after setup) */
+	if (!diag)
+		diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
+
+	if (diag < 0)
+		return diag;
+
+	return 0;
+}
+
+uint8_t
+rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->ports_dequeue_depth[port_id];
+}
+
+uint8_t
+rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->ports_enqueue_depth[port_id];
+}
+
+uint8_t
+rte_event_port_count(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	dev = &rte_eventdevs[dev_id];
+	return dev->data->nb_ports;
+}
+
+int
+rte_event_port_link(uint8_t dev_id, uint8_t port_id,
+		    const uint8_t queues[], const uint8_t priorities[],
+		    uint16_t nb_links)
+{
+	struct rte_eventdev *dev;
+	uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	uint16_t *links_map;
+	int i, diag;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_link, -ENOTSUP);
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	if (queues == NULL) {
+		for (i = 0; i < dev->data->nb_queues; i++)
+			queues_list[i] = i;
+
+		queues = queues_list;
+		nb_links = dev->data->nb_queues;
+	}
+
+	if (priorities == NULL) {
+		for (i = 0; i < nb_links; i++)
+			priorities_list[i] = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+		priorities = priorities_list;
+	}
+
+	for (i = 0; i < nb_links; i++)
+		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
+			return -EINVAL;
+
+	diag = (*dev->dev_ops->port_link)(dev->data->ports[port_id], queues,
+						priorities, nb_links);
+	if (diag < 0)
+		return diag;
+
+	links_map = dev->data->links_map;
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < diag; i++)
+		links_map[queues[i]] = (uint8_t)priorities[i];
+
+	return diag;
+}
+
+#define EVENT_QUEUE_SERVICE_PRIORITY_INVALID (0xdead)
+
+int
+rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
+		      uint8_t queues[], uint16_t nb_unlinks)
+{
+	struct rte_eventdev *dev;
+	uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	int i, diag;
+	uint16_t *links_map;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_unlink, -ENOTSUP);
+
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	if (queues == NULL) {
+		for (i = 0; i < dev->data->nb_queues; i++)
+			all_queues[i] = i;
+		queues = all_queues;
+		nb_unlinks = dev->data->nb_queues;
+	}
+
+	for (i = 0; i < nb_unlinks; i++)
+		if (queues[i] >= RTE_EVENT_MAX_QUEUES_PER_DEV)
+			return -EINVAL;
+
+	diag = (*dev->dev_ops->port_unlink)(dev->data->ports[port_id], queues,
+					nb_unlinks);
+
+	if (diag < 0)
+		return diag;
+
+	links_map = dev->data->links_map;
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < diag; i++)
+		links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;
+
+	return diag;
+}
+
+int
+rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
+			 uint8_t queues[], uint8_t priorities[])
+{
+	struct rte_eventdev *dev;
+	uint16_t *links_map;
+	int i, count = 0;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return -EINVAL;
+	}
+
+	links_map = dev->data->links_map;
+	/* Point links_map to this port specific area */
+	links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV);
+	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
+		if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
+			queues[count] = i;
+			priorities[count] = (uint8_t)links_map[i];
+			++count;
+		}
+	}
+	return count;
+}
+
+int
+rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
+				 uint64_t *timeout_ticks)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->timeout_ticks, -ENOTSUP);
+
+	if (timeout_ticks == NULL)
+		return -EINVAL;
+
+	(*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
+	return 0;
+}
+
+int
+rte_event_dev_dump(uint8_t dev_id, FILE *f)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dump, -ENOTSUP);
+
+	(*dev->dev_ops->dump)(dev, f);
+	return 0;
+
+}
+
+int
+rte_event_dev_start(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+	int diag;
+
+	RTE_EDEV_LOG_DEBUG("Start dev_id=%" PRIu8, dev_id);
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);
+
+	if (dev->data->dev_started != 0) {
+		RTE_EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already started",
+			dev_id);
+		return 0;
+	}
+
+	diag = (*dev->dev_ops->dev_start)(dev);
+	if (diag == 0)
+		dev->data->dev_started = 1;
+	else
+		return diag;
+
+	return 0;
+}
+
+void
+rte_event_dev_stop(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EDEV_LOG_DEBUG("Stop dev_id=%" PRIu8, dev_id);
+
+	RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);
+
+	if (dev->data->dev_started == 0) {
+		RTE_EDEV_LOG_ERR("Device with dev_id=%" PRIu8 " already stopped",
+			dev_id);
+		return;
+	}
+
+	dev->data->dev_started = 0;
+	(*dev->dev_ops->dev_stop)(dev);
+}
+
+int
+rte_event_dev_close(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+
+	/* Device must be stopped before it can be closed */
+	if (dev->data->dev_started == 1) {
+		RTE_EDEV_LOG_ERR("Device %u must be stopped before closing",
+				dev_id);
+		return -EBUSY;
+	}
+
+	return (*dev->dev_ops->dev_close)(dev);
+}
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 014e1ec..e1bd05f 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -972,6 +972,8 @@ struct rte_event {
 	};
 };
 
+
+struct rte_eventdev_driver;
 struct rte_eventdev_ops;
 struct rte_eventdev;
 
@@ -993,6 +995,49 @@ typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
 		uint16_t nb_events, uint64_t timeout_ticks);
 /**< @internal Dequeue burst of events from port of a device */
 
+#define RTE_EVENTDEV_NAME_MAX_LEN	(64)
+/**< @internal Max length of name of event PMD */
+
+/**
+ * @internal
+ * The data part, with no function pointers, associated with each device.
+ *
+ * This structure is safe to place in shared memory to be common among
+ * different processes in a multi-process configuration.
+ */
+struct rte_eventdev_data {
+	int socket_id;
+	/**< Socket ID where memory is allocated */
+	uint8_t dev_id;
+	/**< Device ID for this instance */
+	uint8_t nb_queues;
+	/**< Number of event queues. */
+	uint8_t nb_ports;
+	/**< Number of event ports. */
+	void **ports;
+	/**< Array of pointers to ports. */
+	uint8_t *ports_dequeue_depth;
+	/**< Array of port dequeue depth. */
+	uint8_t *ports_enqueue_depth;
+	/**< Array of port enqueue depth. */
+	uint8_t *queues_prio;
+	/**< Array of queue priority. */
+	uint16_t *links_map;
+	/**< Memory to store queues to port connections. */
+	void *dev_private;
+	/**< PMD-specific private data */
+	uint32_t event_dev_cap;
+	/**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
+	struct rte_event_dev_config dev_conf;
+	/**< Configuration applied to device. */
+
+	RTE_STD_C11
+	uint8_t dev_started : 1;
+	/**< Device state: STARTED(1)/STOPPED(0) */
+
+	char name[RTE_EVENTDEV_NAME_MAX_LEN];
+	/**< Unique identifier name */
+} __rte_cache_aligned;
 
 /** @internal The data structure associated with each event device. */
 struct rte_eventdev {
@@ -1007,8 +1052,23 @@ struct rte_eventdev {
 	event_dequeue_burst_t dequeue_burst;
 	/**< Pointer to PMD dequeue burst function. */
 
+	struct rte_eventdev_data *data;
+	/**< Pointer to device data */
+	const struct rte_eventdev_ops *dev_ops;
+	/**< Functions exported by PMD */
+	struct rte_pci_device *pci_dev;
+	/**< PCI info. supplied by probing */
+	const struct rte_eventdev_driver *driver;
+	/**< Driver for this device */
+
+	RTE_STD_C11
+	uint8_t attached : 1;
+	/**< Flag indicating the device is attached */
 } __rte_cache_aligned;
 
+extern struct rte_eventdev *rte_eventdevs;
+/** @internal The pool of rte_eventdev structures. */
+
 
 /**
  * Schedule one or more events in the event dev.
@@ -1019,8 +1079,13 @@ struct rte_eventdev {
  * @param dev_id
  *   The identifier of the device.
  */
-void
-rte_event_schedule(uint8_t dev_id);
+static inline void
+rte_event_schedule(uint8_t dev_id)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+	if (*dev->schedule)
+		(*dev->schedule)(dev);
+}
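
For PMDs that provide a schedule callback (typically software implementations),
the expectation is that an application core drives it in a loop; the sketch
below assumes a dedicated lcore and an application-defined done flag, and is
not part of this patch:

	/* Illustrative service loop for a PMD that needs explicit scheduling. */
	static volatile int done; /* set by the application to stop the loop */

	static int
	schedule_loop(void *arg)
	{
		uint8_t dev_id = *(uint8_t *)arg;

		while (!done)
			rte_event_schedule(dev_id);
		return 0;
	}

Such a loop would typically be launched on its own lcore, for example with
rte_eal_remote_launch().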
 
 /**
  * Enqueue a burst of events objects or an event object supplied in *rte_event*
@@ -1055,9 +1120,23 @@ rte_event_schedule(uint8_t dev_id);
  *
  * @see rte_event_port_enqueue_depth()
  */
-uint16_t
+static inline uint16_t
 rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
-			const struct rte_event ev[], uint16_t nb_events);
+			const struct rte_event ev[], uint16_t nb_events)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	/*
+	 * Allow zero-cost non-burst mode routine invocation if the
+	 * application requests nb_events as a compile-time constant of one
+	 */
+	if (nb_events == 1)
+		return (*dev->enqueue)(
+			dev->data->ports[port_id], ev);
+	else
+		return (*dev->enqueue_burst)(
+			dev->data->ports[port_id], ev, nb_events);
+}
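
A sketch of injecting one new event through this fast path; the flow value,
the mbuf m and the dev_id/port_id identifiers are illustrative placeholders:

	/* Illustrative only: enqueue a single NEW event carrying an mbuf. */
	struct rte_event ev = {
		.op = RTE_EVENT_OP_NEW,
		.queue_id = 0,
		.sched_type = RTE_SCHED_TYPE_ATOMIC,
		.event_type = RTE_EVENT_TYPE_ETHDEV,
		.priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
		.flow_id = flow,
		.mbuf = m,
	};

	while (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1)
		; /* back-pressure: retry (or drop after a bound) */

Passing the literal constant 1 as nb_events lets the compiler pick the
non-burst path shown in the inline function above.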
 
 /**
  * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
@@ -1149,9 +1228,24 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
  *
  * @see rte_event_port_dequeue_depth()
  */
-uint16_t
+static inline uint16_t
 rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
-			uint16_t nb_events, uint64_t timeout_ticks);
+			uint16_t nb_events, uint64_t timeout_ticks)
+{
+	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	/*
+	 * Allow zero-cost non-burst mode routine invocation if the
+	 * application requests nb_events as a compile-time constant of one
+	 */
+	if (nb_events == 1)
+		return (*dev->dequeue)(
+			dev->data->ports[port_id], ev, timeout_ticks);
+	else
+		return (*dev->dequeue_burst)(
+			dev->data->ports[port_id], ev, nb_events,
+				timeout_ticks);
+}
 
 /**
  * Link multiple source event queues supplied in *queues* to the destination
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 40552aa..e60eca9 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -44,8 +44,117 @@
 extern "C" {
 #endif
 
+#include <string.h>
+
+#include <rte_dev.h>
+#include <rte_pci.h>
+#include <rte_malloc.h>
+#include <rte_log.h>
+#include <rte_common.h>
+
 #include "rte_eventdev.h"
 
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+#define RTE_PMD_DEBUG_TRACE(...) \
+	rte_pmd_debug_trace(__func__, __VA_ARGS__)
+#else
+#define RTE_PMD_DEBUG_TRACE(...)
+#endif
+
+/* Logging Macros */
+#define RTE_EDEV_LOG_ERR(fmt, args...) \
+	RTE_LOG(ERR, EVENTDEV, "%s() line %u: " fmt "\n",  \
+			__func__, __LINE__, ## args)
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+#define RTE_EDEV_LOG_DEBUG(fmt, args...) \
+	RTE_LOG(DEBUG, EVENTDEV, "%s() line %u: " fmt "\n",  \
+			__func__, __LINE__, ## args)
+#else
+#define RTE_EDEV_LOG_DEBUG(fmt, args...) (void)0
+#endif
+
+/* Macros to check for valid device */
+#define RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, retval) do { \
+	if (!rte_event_pmd_is_valid_dev((dev_id))) { \
+		RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
+		return retval; \
+	} \
+} while (0)
+
+#define RTE_EVENTDEV_VALID_DEVID_OR_RET(dev_id) do { \
+	if (!rte_event_pmd_is_valid_dev((dev_id))) { \
+		RTE_EDEV_LOG_ERR("Invalid dev_id=%d\n", dev_id); \
+		return; \
+	} \
+} while (0)
+
+#define RTE_EVENTDEV_DETACHED  (0)
+#define RTE_EVENTDEV_ATTACHED  (1)
+
+/** Global structure used for maintaining state of allocated event devices */
+struct rte_eventdev_global {
+	uint8_t nb_devs;	/**< Number of devices found */
+	uint8_t max_devs;	/**< Max number of devices */
+};
+
+extern struct rte_eventdev_global *rte_eventdev_globals;
+/** Pointer to global event devices data structure. */
+extern struct rte_eventdev *rte_eventdevs;
+/** The pool of rte_eventdev structures. */
+
+/**
+ * Get the rte_eventdev structure device pointer for the named device.
+ *
+ * @param name
+ *   device name to select the device structure.
+ *
+ * @return
+ *   - The rte_eventdev structure pointer for the given device ID.
+ */
+static inline struct rte_eventdev *
+rte_event_pmd_get_named_dev(const char *name)
+{
+	struct rte_eventdev *dev;
+	unsigned int i;
+
+	if (name == NULL)
+		return NULL;
+
+	for (i = 0, dev = &rte_eventdevs[i];
+			i < rte_eventdev_globals->max_devs; i++) {
+		if ((dev->attached == RTE_EVENTDEV_ATTACHED) &&
+				(strcmp(dev->data->name, name) == 0))
+			return dev;
+	}
+
+	return NULL;
+}
+
+/**
+ * Validate that the event device index refers to an attached event device.
+ *
+ * @param dev_id
+ *   Event device index.
+ *
+ * @return
+ *   - If the device index is valid (1) or not (0).
+ */
+static inline unsigned
+rte_event_pmd_is_valid_dev(uint8_t dev_id)
+{
+	struct rte_eventdev *dev;
+
+	if (dev_id >= rte_eventdev_globals->nb_devs)
+		return 0;
+
+	dev = &rte_eventdevs[dev_id];
+	if (dev->attached != RTE_EVENTDEV_ATTACHED)
+		return 0;
+	else
+		return 1;
+}
+
 /**
  * Definitions of all functions exported by a driver through the
  * the generic structure of type *event_dev_ops* supplied in the
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
new file mode 100644
index 0000000..3cae03d
--- /dev/null
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -0,0 +1,33 @@
+DPDK_17.02 {
+	global:
+
+	rte_eventdevs;
+
+	rte_event_dev_count;
+	rte_event_dev_get_dev_id;
+	rte_event_dev_socket_id;
+	rte_event_dev_info_get;
+	rte_event_dev_configure;
+	rte_event_dev_start;
+	rte_event_dev_stop;
+	rte_event_dev_close;
+	rte_event_dev_dump;
+
+	rte_event_port_default_conf_get;
+	rte_event_port_setup;
+	rte_event_port_dequeue_depth;
+	rte_event_port_enqueue_depth;
+	rte_event_port_count;
+	rte_event_port_link;
+	rte_event_port_unlink;
+	rte_event_port_links_get;
+
+	rte_event_queue_default_conf_get;
+	rte_event_queue_setup;
+	rte_event_queue_count;
+	rte_event_queue_priority;
+
+	rte_event_dequeue_timeout_ticks;
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index f75f0e2..716725a 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -93,6 +93,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NET)            += -lrte_net
 _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lrte_ethdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lrte_cryptodev
+_LDLIBS-$(CONFIG_RTE_LIBRTE_EVENTDEV)       += -lrte_eventdev
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool
 _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring
 _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v4 4/6] eventdev: implement PMD registration functions
  2016-12-21  9:25     ` [PATCH v4 " Jerin Jacob
                         ` (2 preceding siblings ...)
  2016-12-21  9:25       ` [PATCH v4 3/6] eventdev: implement the northbound APIs Jerin Jacob
@ 2016-12-21  9:25       ` Jerin Jacob
  2017-02-02 11:20         ` Nipun Gupta
  2016-12-21  9:25       ` [PATCH v4 5/6] event/skeleton: add skeleton eventdev driver Jerin Jacob
  2016-12-21  9:25       ` [PATCH v4 6/6] app/test: unit test case for eventdev APIs Jerin Jacob
  5 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2016-12-21  9:25 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

This patch adds the infrastructure for registering vdev-based or
PCI-based event devices.

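As a quick reference, a minimal sketch of how a PCI-based PMD could hook
into the registration helpers added here (the my_* names, the placeholder
vendor/device IDs and the private structure are illustrative only, not
part of this series):

    struct my_eventdev_private { int dummy; }; /* placeholder */

    /* Per-device init hook, called from rte_event_pmd_pci_probe() */
    static int
    my_eventdev_init(struct rte_eventdev *dev)
    {
            /* set dev->dev_ops and the fast path function pointers here */
            RTE_SET_USED(dev);
            return 0;
    }

    static const struct rte_pci_id my_pci_id_map[] = {
            { RTE_PCI_DEVICE(0x1234, 0x0001) }, /* placeholder IDs */
            { .vendor_id = 0, },
    };

    static struct rte_eventdev_driver my_event_pmd = {
            .pci_drv = {
                    .id_table = my_pci_id_map,
                    .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
                    .probe = rte_event_pmd_pci_probe,
                    .remove = rte_event_pmd_pci_remove,
            },
            .eventdev_init = my_eventdev_init,
            .dev_private_size = sizeof(struct my_eventdev_private),
    };

    RTE_PMD_REGISTER_PCI(my_event_pci, my_event_pmd.pci_drv);

The skeleton driver in patch 5/6 follows the same pattern; vdev-based
devices instead go through rte_event_pmd_vdev_init().
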
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/librte_eventdev/rte_eventdev.c           | 236 +++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_pmd.h       | 111 +++++++++++++
 lib/librte_eventdev/rte_eventdev_version.map |   6 +
 3 files changed, 353 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index b13eb00..c8f3e94 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -126,6 +126,8 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
 	dev_info->dequeue_timeout_ns = dev->data->dev_conf.dequeue_timeout_ns;
 
 	dev_info->pci_dev = dev->pci_dev;
+	if (dev->driver)
+		dev_info->driver_name = dev->driver->pci_drv.driver.name;
 	return 0;
 }
 
@@ -984,3 +986,237 @@ rte_event_dev_close(uint8_t dev_id)
 
 	return (*dev->dev_ops->dev_close)(dev);
 }
+
+static inline int
+rte_eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data,
+		int socket_id)
+{
+	char mz_name[RTE_EVENTDEV_NAME_MAX_LEN];
+	const struct rte_memzone *mz;
+	int n;
+
+	/* Generate memzone name */
+	n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id);
+	if (n >= (int)sizeof(mz_name))
+		return -EINVAL;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		mz = rte_memzone_reserve(mz_name,
+				sizeof(struct rte_eventdev_data),
+				socket_id, 0);
+	} else
+		mz = rte_memzone_lookup(mz_name);
+
+	if (mz == NULL)
+		return -ENOMEM;
+
+	*data = mz->addr;
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		memset(*data, 0, sizeof(struct rte_eventdev_data));
+
+	return 0;
+}
+
+static inline uint8_t
+rte_eventdev_find_free_device_index(void)
+{
+	uint8_t dev_id;
+
+	for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++) {
+		if (rte_eventdevs[dev_id].attached ==
+				RTE_EVENTDEV_DETACHED)
+			return dev_id;
+	}
+	return RTE_EVENT_MAX_DEVS;
+}
+
+struct rte_eventdev *
+rte_event_pmd_allocate(const char *name, int socket_id)
+{
+	struct rte_eventdev *eventdev;
+	uint8_t dev_id;
+
+	if (rte_event_pmd_get_named_dev(name) != NULL) {
+		RTE_EDEV_LOG_ERR("Event device with name %s already "
+				"allocated!", name);
+		return NULL;
+	}
+
+	dev_id = rte_eventdev_find_free_device_index();
+	if (dev_id == RTE_EVENT_MAX_DEVS) {
+		RTE_EDEV_LOG_ERR("Reached maximum number of event devices");
+		return NULL;
+	}
+
+	eventdev = &rte_eventdevs[dev_id];
+
+	if (eventdev->data == NULL) {
+		struct rte_eventdev_data *eventdev_data = NULL;
+
+		int retval = rte_eventdev_data_alloc(dev_id, &eventdev_data,
+				socket_id);
+
+		if (retval < 0 || eventdev_data == NULL)
+			return NULL;
+
+		eventdev->data = eventdev_data;
+
+		snprintf(eventdev->data->name, RTE_EVENTDEV_NAME_MAX_LEN,
+				"%s", name);
+
+		eventdev->data->dev_id = dev_id;
+		eventdev->data->socket_id = socket_id;
+		eventdev->data->dev_started = 0;
+
+		eventdev->attached = RTE_EVENTDEV_ATTACHED;
+
+		eventdev_globals.nb_devs++;
+	}
+
+	return eventdev;
+}
+
+int
+rte_event_pmd_release(struct rte_eventdev *eventdev)
+{
+	int ret;
+
+	if (eventdev == NULL)
+		return -EINVAL;
+
+	ret = rte_event_dev_close(eventdev->data->dev_id);
+	if (ret < 0)
+		return ret;
+
+	eventdev->attached = RTE_EVENTDEV_DETACHED;
+	eventdev_globals.nb_devs--;
+	eventdev->data = NULL;
+
+	return 0;
+}
+
+struct rte_eventdev *
+rte_event_pmd_vdev_init(const char *name, size_t dev_private_size,
+		int socket_id)
+{
+	struct rte_eventdev *eventdev;
+
+	/* Allocate device structure */
+	eventdev = rte_event_pmd_allocate(name, socket_id);
+	if (eventdev == NULL)
+		return NULL;
+
+	/* Allocate private device structure */
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		eventdev->data->dev_private =
+				rte_zmalloc_socket("eventdev device private",
+						dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						socket_id);
+
+		if (eventdev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private device"
+					" data");
+	}
+
+	return eventdev;
+}
+
+int
+rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
+			struct rte_pci_device *pci_dev)
+{
+	struct rte_eventdev_driver *eventdrv;
+	struct rte_eventdev *eventdev;
+
+	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
+
+	int retval;
+
+	eventdrv = (struct rte_eventdev_driver *)pci_drv;
+	if (eventdrv == NULL)
+		return -ENODEV;
+
+	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
+			sizeof(eventdev_name));
+
+	eventdev = rte_event_pmd_allocate(eventdev_name,
+			 pci_dev->device.numa_node);
+	if (eventdev == NULL)
+		return -ENOMEM;
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+		eventdev->data->dev_private =
+				rte_zmalloc_socket(
+						"eventdev private structure",
+						eventdrv->dev_private_size,
+						RTE_CACHE_LINE_SIZE,
+						rte_socket_id());
+
+		if (eventdev->data->dev_private == NULL)
+			rte_panic("Cannot allocate memzone for private "
+					"device data");
+	}
+
+	eventdev->pci_dev = pci_dev;
+	eventdev->driver = eventdrv;
+
+	/* Invoke PMD device initialization function */
+	retval = (*eventdrv->eventdev_init)(eventdev);
+	if (retval == 0)
+		return 0;
+
+	RTE_EDEV_LOG_ERR("driver %s: (vendor_id=0x%x device_id=0x%x)"
+			" failed", pci_drv->driver.name,
+			(unsigned int) pci_dev->id.vendor_id,
+			(unsigned int) pci_dev->id.device_id);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eventdev->data->dev_private);
+
+	eventdev->attached = RTE_EVENTDEV_DETACHED;
+	eventdev_globals.nb_devs--;
+
+	return -ENXIO;
+}
+
+int
+rte_event_pmd_pci_remove(struct rte_pci_device *pci_dev)
+{
+	const struct rte_eventdev_driver *eventdrv;
+	struct rte_eventdev *eventdev;
+	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
+	int ret;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	rte_eal_pci_device_name(&pci_dev->addr, eventdev_name,
+			sizeof(eventdev_name));
+
+	eventdev = rte_event_pmd_get_named_dev(eventdev_name);
+	if (eventdev == NULL)
+		return -ENODEV;
+
+	eventdrv = (const struct rte_eventdev_driver *)pci_dev->driver;
+	if (eventdrv == NULL)
+		return -ENODEV;
+
+	/* Invoke PMD device un-init function */
+	if (*eventdrv->eventdev_uninit) {
+		ret = (*eventdrv->eventdev_uninit)(eventdev);
+		if (ret)
+			return ret;
+	}
+
+	/* Free event device */
+	rte_event_pmd_release(eventdev);
+
+	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+		rte_free(eventdev->data->dev_private);
+
+	eventdev->pci_dev = NULL;
+	eventdev->driver = NULL;
+
+	return 0;
+}
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index e60eca9..c84c9a2 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -92,6 +92,60 @@ extern "C" {
 #define RTE_EVENTDEV_DETACHED  (0)
 #define RTE_EVENTDEV_ATTACHED  (1)
 
+/**
+ * Initialisation function of an event driver invoked for each matching
+ * event PCI device detected during the PCI probing phase.
+ *
+ * @param dev
+ *   The dev pointer is the address of the *rte_eventdev* structure associated
+ *   with the matching device and which has been [automatically] allocated in
+ *   the *rte_event_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly initialised by the driver.
+ *        In particular, the driver MUST have set up the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device initialisation failure.
+ */
+typedef int (*eventdev_init_t)(struct rte_eventdev *dev);
+
+/**
+ * Finalisation function of a driver invoked for each matching
+ * PCI device detected during the PCI closing phase.
+ *
+ * @param dev
+ *   The dev pointer is the address of the *rte_eventdev* structure associated
+ *   with the matching device and which has been [automatically] allocated in
+ *   the *rte_event_devices* array.
+ *
+ * @return
+ *   - 0: Success, the device is properly finalised by the driver.
+ *        In particular, the driver MUST free the *dev_ops* pointer
+ *        of the *dev* structure.
+ *   - <0: Error code of the device finalisation failure.
+ */
+typedef int (*eventdev_uninit_t)(struct rte_eventdev *dev);
+
+/**
+ * The structure associated with a PMD driver.
+ *
+ * Each driver acts as a PCI driver and is represented by a generic
+ * *event_driver* structure that holds:
+ *
+ * - An *rte_pci_driver* structure (which must be the first field).
+ *
+ * - The *eventdev_init* function invoked for each matching PCI device.
+ *
+ * - The size of the private data to allocate for each matching device.
+ */
+struct rte_eventdev_driver {
+	struct rte_pci_driver pci_drv;	/**< The PMD is also a PCI driver. */
+	unsigned int dev_private_size;	/**< Size of device private data. */
+
+	eventdev_init_t eventdev_init;	/**< Device init function. */
+	eventdev_uninit_t eventdev_uninit; /**< Device uninit function. */
+};
+
 /** Global structure used for maintaining state of allocated event devices */
 struct rte_eventdev_global {
 	uint8_t nb_devs;	/**< Number of devices found */
@@ -396,6 +450,63 @@ struct rte_eventdev_ops {
 	/* Dump internal information */
 };
 
+/**
+ * Allocates a new eventdev slot for an event device and returns the pointer
+ * to that slot for the driver to use.
+ *
+ * @param name
+ *   Unique identifier name for each device
+ * @param socket_id
+ *   Socket to allocate resources on.
+ * @return
+ *   - Slot in the rte_dev_devices array for a new device;
+ */
+struct rte_eventdev *
+rte_event_pmd_allocate(const char *name, int socket_id);
+
+/**
+ * Release the specified eventdev device.
+ *
+ * @param eventdev
+ * The *eventdev* pointer is the address of the *rte_eventdev* structure.
+ * @return
+ *   - 0 on success, negative on error
+ */
+int
+rte_event_pmd_release(struct rte_eventdev *eventdev);
+
+/**
+ * Creates a new virtual event device and returns the pointer to that device.
+ *
+ * @param name
+ *   PMD type name
+ * @param dev_private_size
+ *   Size of event PMDs private data
+ * @param socket_id
+ *   Socket to allocate resources on.
+ *
+ * @return
+ *   - Eventdev pointer if device is successfully created.
+ *   - NULL if device cannot be created.
+ */
+struct rte_eventdev *
+rte_event_pmd_vdev_init(const char *name, size_t dev_private_size,
+		int socket_id);
+
+
+/**
+ * Wrapper for use by pci drivers as a .probe function to attach to an event
+ * interface.
+ */
+int rte_event_pmd_pci_probe(struct rte_pci_driver *pci_drv,
+			    struct rte_pci_device *pci_dev);
+
+/**
+ * Wrapper for use by pci drivers as a .remove function to detach an event
+ * interface.
+ */
+int rte_event_pmd_pci_remove(struct rte_pci_device *pci_dev);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 3cae03d..68b8c81 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -29,5 +29,11 @@ DPDK_17.02 {
 
 	rte_event_dequeue_timeout_ticks;
 
+	rte_event_pmd_allocate;
+	rte_event_pmd_release;
+	rte_event_pmd_vdev_init;
+	rte_event_pmd_pci_probe;
+	rte_event_pmd_pci_remove;
+
 	local: *;
 };
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v4 5/6] event/skeleton: add skeleton eventdev driver
  2016-12-21  9:25     ` [PATCH v4 " Jerin Jacob
                         ` (3 preceding siblings ...)
  2016-12-21  9:25       ` [PATCH v4 4/6] eventdev: implement PMD registration functions Jerin Jacob
@ 2016-12-21  9:25       ` Jerin Jacob
  2016-12-21  9:25       ` [PATCH v4 6/6] app/test: unit test case for eventdev APIs Jerin Jacob
  5 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-21  9:25 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

The skeleton driver facilitates bootstrapping of new
eventdev drivers and provides a platform to verify
the northbound eventdev common code.

The driver supports both vdev-based and PCI-based eventdev
devices.

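As a usage sketch (the command line below is illustrative, not part of
this patch), the vdev flavour of the skeleton device can be instantiated
either through the EAL arguments of any DPDK application or
programmatically:

    # from the EAL command line
    ./your_app --vdev=event_skeleton

    /* or from application code */
    rte_eal_vdev_init("event_skeleton", NULL);

The PCI flavour instead binds to the placeholder vendor/device ID pair
defined in skeleton_eventdev.c.
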
Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 MAINTAINERS                                        |   1 +
 config/common_base                                 |   8 +
 drivers/Makefile                                   |   1 +
 drivers/event/Makefile                             |  36 ++
 drivers/event/skeleton/Makefile                    |  55 +++
 .../skeleton/rte_pmd_skeleton_event_version.map    |   4 +
 drivers/event/skeleton/skeleton_eventdev.c         | 519 +++++++++++++++++++++
 drivers/event/skeleton/skeleton_eventdev.h         |  68 +++
 mk/rte.app.mk                                      |   4 +
 9 files changed, 696 insertions(+)
 create mode 100644 drivers/event/Makefile
 create mode 100644 drivers/event/skeleton/Makefile
 create mode 100644 drivers/event/skeleton/rte_pmd_skeleton_event_version.map
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.c
 create mode 100644 drivers/event/skeleton/skeleton_eventdev.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 8e59352..a10899f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -252,6 +252,7 @@ F: examples/l2fwd-crypto/
 Eventdev API - EXPERIMENTAL
 M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
 F: lib/librte_eventdev/
+F: drivers/event/skeleton/
 
 Networking Drivers
 ------------------
diff --git a/config/common_base b/config/common_base
index 3a17dfb..650df13 100644
--- a/config/common_base
+++ b/config/common_base
@@ -418,6 +418,14 @@ CONFIG_RTE_LIBRTE_EVENTDEV=y
 CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
 CONFIG_RTE_EVENT_MAX_DEVS=16
 CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
+
+#
+# Compile PMD for skeleton event device
+#
+CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV=y
+CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV_DEBUG=n
+
+#
 # Compile librte_ring
 #
 CONFIG_RTE_LIBRTE_RING=y
diff --git a/drivers/Makefile b/drivers/Makefile
index 81c03a8..40b8347 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -33,5 +33,6 @@ include $(RTE_SDK)/mk/rte.vars.mk
 
 DIRS-y += net
 DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += crypto
+DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += event
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/event/Makefile b/drivers/event/Makefile
new file mode 100644
index 0000000..678279f
--- /dev/null
+++ b/drivers/event/Makefile
@@ -0,0 +1,36 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += skeleton
+
+include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/event/skeleton/Makefile b/drivers/event/skeleton/Makefile
new file mode 100644
index 0000000..bd22832
--- /dev/null
+++ b/drivers/event/skeleton/Makefile
@@ -0,0 +1,55 @@
+#   BSD LICENSE
+#
+#   Copyright(c) 2016 Cavium Networks. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of Cavium Networks nor the names of its
+#       contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_skeleton_event.a
+
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_skeleton_event_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += skeleton_eventdev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += lib/librte_eventdev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
new file mode 100644
index 0000000..31eca32
--- /dev/null
+++ b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
@@ -0,0 +1,4 @@
+DPDK_17.02 {
+
+	local: *;
+};
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c
new file mode 100644
index 0000000..085cb86
--- /dev/null
+++ b/drivers/event/skeleton/skeleton_eventdev.c
@@ -0,0 +1,519 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <stdint.h>
+#include <string.h>
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <rte_debug.h>
+#include <rte_dev.h>
+#include <rte_eal.h>
+#include <rte_log.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_pci.h>
+#include <rte_lcore.h>
+#include <rte_vdev.h>
+
+#include "skeleton_eventdev.h"
+
+#define EVENTDEV_NAME_SKELETON_PMD event_skeleton
+/**< Skeleton event device PMD name */
+
+static uint16_t
+skeleton_eventdev_enqueue(void *port, const struct rte_event *ev)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(port);
+
+	return 0;
+}
+
+static uint16_t
+skeleton_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
+			uint16_t nb_events)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(port);
+	RTE_SET_USED(nb_events);
+
+	return 0;
+}
+
+static uint16_t
+skeleton_eventdev_dequeue(void *port, struct rte_event *ev,
+				uint64_t timeout_ticks)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(timeout_ticks);
+
+	return 0;
+}
+
+static uint16_t
+skeleton_eventdev_dequeue_burst(void *port, struct rte_event ev[],
+		uint16_t nb_events, uint64_t timeout_ticks)
+{
+	struct skeleton_port *sp = port;
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(ev);
+	RTE_SET_USED(nb_events);
+	RTE_SET_USED(timeout_ticks);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_info_get(struct rte_eventdev *dev,
+		struct rte_event_dev_info *dev_info)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+
+	dev_info->min_dequeue_timeout_ns = 1;
+	dev_info->max_dequeue_timeout_ns = 10000;
+	dev_info->dequeue_timeout_ns = 25;
+	dev_info->max_event_queues = 64;
+	dev_info->max_event_queue_flows = (1ULL << 20);
+	dev_info->max_event_queue_priority_levels = 8;
+	dev_info->max_event_priority_levels = 8;
+	dev_info->max_event_ports = 32;
+	dev_info->max_event_port_dequeue_depth = 16;
+	dev_info->max_event_port_enqueue_depth = 16;
+	dev_info->max_num_events = (1ULL << 20);
+	dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
+					RTE_EVENT_DEV_CAP_EVENT_QOS;
+}
+
+static int
+skeleton_eventdev_configure(const struct rte_eventdev *dev)
+{
+	struct rte_eventdev_data *data = dev->data;
+	struct rte_event_dev_config *conf = &data->dev_conf;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(conf);
+	RTE_SET_USED(skel);
+
+	PMD_DRV_LOG(DEBUG, "Configured eventdev devid=%d", dev->data->dev_id);
+	return 0;
+}
+
+static int
+skeleton_eventdev_start(struct rte_eventdev *dev)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_stop(struct rte_eventdev *dev)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+}
+
+static int
+skeleton_eventdev_close(struct rte_eventdev *dev)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_queue_def_conf(struct rte_eventdev *dev, uint8_t queue_id,
+				 struct rte_event_queue_conf *queue_conf)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(queue_id);
+
+	queue_conf->nb_atomic_flows = (1ULL << 20);
+	queue_conf->nb_atomic_order_sequences = (1ULL << 20);
+	queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_DEFAULT;
+	queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
+}
+
+static void
+skeleton_eventdev_queue_release(struct rte_eventdev *dev, uint8_t queue_id)
+{
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(queue_id);
+}
+
+static int
+skeleton_eventdev_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
+			      const struct rte_event_queue_conf *queue_conf)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(queue_conf);
+	RTE_SET_USED(queue_id);
+
+	return 0;
+}
+
+static void
+skeleton_eventdev_port_def_conf(struct rte_eventdev *dev, uint8_t port_id,
+				 struct rte_event_port_conf *port_conf)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(port_id);
+
+	port_conf->new_event_threshold = 32 * 1024;
+	port_conf->dequeue_depth = 16;
+	port_conf->enqueue_depth = 16;
+}
+
+static void
+skeleton_eventdev_port_release(void *port)
+{
+	struct skeleton_port *sp = port;
+	PMD_DRV_FUNC_TRACE();
+
+	rte_free(sp);
+}
+
+static int
+skeleton_eventdev_port_setup(struct rte_eventdev *dev, uint8_t port_id,
+				const struct rte_event_port_conf *port_conf)
+{
+	struct skeleton_port *sp;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(port_conf);
+
+	/* Free memory prior to re-allocation if needed */
+	if (dev->data->ports[port_id] != NULL) {
+		PMD_DRV_LOG(DEBUG, "Freeing memory prior to re-allocation %d",
+				port_id);
+		skeleton_eventdev_port_release(dev->data->ports[port_id]);
+		dev->data->ports[port_id] = NULL;
+	}
+
+	/* Allocate event port memory */
+	sp = rte_zmalloc_socket("eventdev port",
+			sizeof(struct skeleton_port), RTE_CACHE_LINE_SIZE,
+			dev->data->socket_id);
+	if (sp == NULL) {
+		PMD_DRV_ERR("Failed to allocate sp port_id=%d", port_id);
+		return -ENOMEM;
+	}
+
+	sp->port_id = port_id;
+
+	PMD_DRV_LOG(DEBUG, "[%d] sp=%p", port_id, sp);
+
+	dev->data->ports[port_id] = sp;
+	return 0;
+}
+
+static int
+skeleton_eventdev_port_link(void *port,
+			const uint8_t queues[], const uint8_t priorities[],
+			uint16_t nb_links)
+{
+	struct skeleton_port *sp = port;
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(queues);
+	RTE_SET_USED(priorities);
+
+	/* Linked all the queues */
+	return (int)nb_links;
+}
+
+static int
+skeleton_eventdev_port_unlink(void *port, uint8_t queues[],
+				 uint16_t nb_unlinks)
+{
+	struct skeleton_port *sp = port;
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(sp);
+	RTE_SET_USED(queues);
+
+	/* Unlinked all the queues */
+	return (int)nb_unlinks;
+
+}
+
+static void
+skeleton_eventdev_timeout_ticks(struct rte_eventdev *dev, uint64_t ns,
+				 uint64_t *timeout_ticks)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+	uint32_t scale = 1;
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	*timeout_ticks = ns * scale;
+}
+
+static void
+skeleton_eventdev_dump(struct rte_eventdev *dev, FILE *f)
+{
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(dev);
+
+	PMD_DRV_FUNC_TRACE();
+
+	RTE_SET_USED(skel);
+	RTE_SET_USED(f);
+}
+
+
+/* Initialize and register event driver with DPDK Application */
+static const struct rte_eventdev_ops skeleton_eventdev_ops = {
+	.dev_infos_get    = skeleton_eventdev_info_get,
+	.dev_configure    = skeleton_eventdev_configure,
+	.dev_start        = skeleton_eventdev_start,
+	.dev_stop         = skeleton_eventdev_stop,
+	.dev_close        = skeleton_eventdev_close,
+	.queue_def_conf   = skeleton_eventdev_queue_def_conf,
+	.queue_setup      = skeleton_eventdev_queue_setup,
+	.queue_release    = skeleton_eventdev_queue_release,
+	.port_def_conf    = skeleton_eventdev_port_def_conf,
+	.port_setup       = skeleton_eventdev_port_setup,
+	.port_release     = skeleton_eventdev_port_release,
+	.port_link        = skeleton_eventdev_port_link,
+	.port_unlink      = skeleton_eventdev_port_unlink,
+	.timeout_ticks    = skeleton_eventdev_timeout_ticks,
+	.dump             = skeleton_eventdev_dump
+};
+
+static int
+skeleton_eventdev_init(struct rte_eventdev *eventdev)
+{
+	struct rte_pci_device *pci_dev;
+	struct skeleton_eventdev *skel = skeleton_pmd_priv(eventdev);
+	int ret = 0;
+
+	PMD_DRV_FUNC_TRACE();
+
+	eventdev->dev_ops       = &skeleton_eventdev_ops;
+	eventdev->schedule      = NULL;
+	eventdev->enqueue       = skeleton_eventdev_enqueue;
+	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
+	eventdev->dequeue       = skeleton_eventdev_dequeue;
+	eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst;
+
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	pci_dev = eventdev->pci_dev;
+
+	skel->reg_base = (uintptr_t)pci_dev->mem_resource[0].addr;
+	if (!skel->reg_base) {
+		PMD_DRV_ERR("Failed to map BAR0");
+		ret = -ENODEV;
+		goto fail;
+	}
+
+	skel->device_id = pci_dev->id.device_id;
+	skel->vendor_id = pci_dev->id.vendor_id;
+	skel->subsystem_device_id = pci_dev->id.subsystem_device_id;
+	skel->subsystem_vendor_id = pci_dev->id.subsystem_vendor_id;
+
+	PMD_DRV_LOG(DEBUG, "pci device (%x:%x) %u:%u:%u:%u",
+			pci_dev->id.vendor_id, pci_dev->id.device_id,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+
+	PMD_DRV_LOG(INFO, "dev_id=%d socket_id=%d (%x:%x)",
+		eventdev->data->dev_id, eventdev->data->socket_id,
+		skel->vendor_id, skel->device_id);
+
+fail:
+	return ret;
+}
+
+/* PCI based event device */
+
+#define EVENTDEV_SKEL_VENDOR_ID         0x177d
+#define EVENTDEV_SKEL_PRODUCT_ID        0x0001
+
+static const struct rte_pci_id pci_id_skeleton_map[] = {
+	{
+		RTE_PCI_DEVICE(EVENTDEV_SKEL_VENDOR_ID,
+			       EVENTDEV_SKEL_PRODUCT_ID)
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_eventdev_driver pci_eventdev_skeleton_pmd = {
+	.pci_drv = {
+		.id_table = pci_id_skeleton_map,
+		.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
+		.probe = rte_event_pmd_pci_probe,
+		.remove = rte_event_pmd_pci_remove,
+	},
+	.eventdev_init = skeleton_eventdev_init,
+	.dev_private_size = sizeof(struct skeleton_eventdev),
+};
+
+RTE_PMD_REGISTER_PCI(event_skeleton_pci, pci_eventdev_skeleton_pmd.pci_drv);
+RTE_PMD_REGISTER_PCI_TABLE(event_skeleton_pci, pci_id_skeleton_map);
+
+/* VDEV based event device */
+
+/**
+ * Global static parameter used to create a unique name for each skeleton
+ * event device.
+ */
+static unsigned int skeleton_unique_id;
+
+static inline int
+skeleton_create_unique_device_name(char *name, size_t size)
+{
+	int ret;
+
+	if (name == NULL)
+		return -EINVAL;
+
+	ret = snprintf(name, size, "%s_%u", RTE_STR(EVENTDEV_NAME_SKELETON_PMD),
+			skeleton_unique_id++);
+	if (ret < 0)
+		return ret;
+	return 0;
+}
+
+static int
+skeleton_eventdev_create(int socket_id)
+{
+	struct rte_eventdev *eventdev;
+	char eventdev_name[RTE_EVENTDEV_NAME_MAX_LEN];
+
+	/* Create a unique device name */
+	if (skeleton_create_unique_device_name(eventdev_name,
+			RTE_EVENTDEV_NAME_MAX_LEN) != 0) {
+		PMD_DRV_ERR("Failed to create unique eventdev name");
+		return -EINVAL;
+	}
+
+	eventdev = rte_event_pmd_vdev_init(eventdev_name,
+			sizeof(struct skeleton_eventdev), socket_id);
+	if (eventdev == NULL) {
+		PMD_DRV_ERR("Failed to create eventdev vdev");
+		goto fail;
+	}
+
+	eventdev->dev_ops       = &skeleton_eventdev_ops;
+	eventdev->schedule      = NULL;
+	eventdev->enqueue       = skeleton_eventdev_enqueue;
+	eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst;
+	eventdev->dequeue       = skeleton_eventdev_dequeue;
+	eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst;
+
+	return 0;
+fail:
+	return -EFAULT;
+}
+
+static int
+skeleton_eventdev_probe(const char *name, __rte_unused const char *input_args)
+{
+	RTE_LOG(INFO, PMD, "Initializing %s on NUMA node %d\n", name,
+			rte_socket_id());
+	return skeleton_eventdev_create(rte_socket_id());
+}
+
+static int
+skeleton_eventdev_remove(const char *name)
+{
+	if (name == NULL)
+		return -EINVAL;
+
+	PMD_DRV_LOG(INFO, "Closing %s on NUMA node %d", name, rte_socket_id());
+
+	return 0;
+}
+
+static struct rte_vdev_driver vdev_eventdev_skeleton_pmd = {
+	.probe = skeleton_eventdev_probe,
+	.remove = skeleton_eventdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(EVENTDEV_NAME_SKELETON_PMD, vdev_eventdev_skeleton_pmd);
diff --git a/drivers/event/skeleton/skeleton_eventdev.h b/drivers/event/skeleton/skeleton_eventdev.h
new file mode 100644
index 0000000..1ce62da
--- /dev/null
+++ b/drivers/event/skeleton/skeleton_eventdev.h
@@ -0,0 +1,68 @@
+/*
+ *   BSD LICENSE
+ *
+ *   Copyright (C) Cavium networks Ltd. 2016.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Cavium networks nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef __SKELETON_EVENTDEV_H__
+#define __SKELETON_EVENTDEV_H__
+
+#include <rte_eventdev_pmd.h>
+
+#ifdef RTE_LIBRTE_PMD_SKELETON_EVENTDEV_DEBUG
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_DRV_FUNC_TRACE() PMD_DRV_LOG(DEBUG, ">>")
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#define PMD_DRV_FUNC_TRACE() do { } while (0)
+#endif
+
+#define PMD_DRV_ERR(fmt, args...) \
+	RTE_LOG(ERR, PMD, "%s(): " fmt "\n", __func__, ## args)
+
+struct skeleton_eventdev {
+	uintptr_t reg_base;
+	uint16_t device_id;
+	uint16_t vendor_id;
+	uint16_t subsystem_device_id;
+	uint16_t subsystem_vendor_id;
+} __rte_cache_aligned;
+
+struct skeleton_port {
+	uint8_t port_id;
+} __rte_cache_aligned;
+
+static inline struct skeleton_eventdev *
+skeleton_pmd_priv(const struct rte_eventdev *eventdev)
+{
+	return eventdev->data->dev_private;
+}
+
+#endif /* __SKELETON_EVENTDEV_H__ */
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 716725a..8341c13 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -148,6 +148,10 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -lrte_pmd_zuc
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_ZUC)         += -L$(LIBSSO_ZUC_PATH)/build -lsso_zuc
 endif # CONFIG_RTE_LIBRTE_CRYPTODEV
 
+ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SKELETON_EVENTDEV) += -lrte_pmd_skeleton_event
+endif # CONFIG_RTE_LIBRTE_EVENTDEV
+
 endif # !CONFIG_RTE_BUILD_SHARED_LIBS
 
 _LDLIBS-y += --no-whole-archive
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* [PATCH v4 6/6] app/test: unit test case for eventdev APIs
  2016-12-21  9:25     ` [PATCH v4 " Jerin Jacob
                         ` (4 preceding siblings ...)
  2016-12-21  9:25       ` [PATCH v4 5/6] event/skeleton: add skeleton eventdev driver Jerin Jacob
@ 2016-12-21  9:25       ` Jerin Jacob
  5 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2016-12-21  9:25 UTC (permalink / raw)
  To: dev
  Cc: thomas.monjalon, bruce.richardson, hemant.agrawal, gage.eads,
	harry.van.haaren, Jerin Jacob

This commit adds basic unit tests for the eventdev API.

Commands to run the test app:
./build/app/test -c 2
RTE>>eventdev_common_autotest
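
If no event device is present, testsuite_setup() below falls back to
creating the event_skeleton vdev, so the same autotest can also be run
explicitly against the skeleton PMD (illustrative invocation, assuming
the skeleton PMD is enabled in the build):

./build/app/test -c 2 --vdev=event_skeleton
RTE>>eventdev_common_autotest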

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 MAINTAINERS              |   1 +
 app/test/Makefile        |   2 +
 app/test/test_eventdev.c | 778 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 781 insertions(+)
 create mode 100644 app/test/test_eventdev.c

diff --git a/MAINTAINERS b/MAINTAINERS
index a10899f..21ff4db 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -252,6 +252,7 @@ F: examples/l2fwd-crypto/
 Eventdev API - EXPERIMENTAL
 M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
 F: lib/librte_eventdev/
+F: app/test/test_eventdev*
 F: drivers/event/skeleton/
 
 Networking Drivers
diff --git a/app/test/Makefile b/app/test/Makefile
index 8af39cb..3269270 100644
--- a/app/test/Makefile
+++ b/app/test/Makefile
@@ -198,6 +198,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_blockcipher.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev_perf.c
 SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
 
+SRCS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += test_eventdev.c
+
 SRCS-$(CONFIG_RTE_LIBRTE_KVARGS) += test_kvargs.c
 
 CFLAGS += -O3
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
new file mode 100644
index 0000000..042a446
--- /dev/null
+++ b/app/test/test_eventdev.c
@@ -0,0 +1,778 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2016 Cavium networks. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *	 * Redistributions of source code must retain the above copyright
+ *	   notice, this list of conditions and the following disclaimer.
+ *	 * Redistributions in binary form must reproduce the above copyright
+ *	   notice, this list of conditions and the following disclaimer in
+ *	   the documentation and/or other materials provided with the
+ *	   distribution.
+ *	 * Neither the name of Cavium networks nor the names of its
+ *	   contributors may be used to endorse or promote products derived
+ *	   from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_common.h>
+#include <rte_hexdump.h>
+#include <rte_mbuf.h>
+#include <rte_malloc.h>
+#include <rte_memcpy.h>
+#include <rte_eventdev.h>
+#include <rte_cryptodev.h>
+
+#include "test.h"
+
+#define TEST_DEV_ID   0
+
+static int
+testsuite_setup(void)
+{
+	RTE_BUILD_BUG_ON(sizeof(struct rte_event) != 16);
+	uint8_t count;
+	count = rte_event_dev_count();
+	if (!count) {
+		printf("Failed to find a valid event device,"
+			" testing with event_skeleton device\n");
+		return rte_eal_vdev_init("event_skeleton", NULL);
+	}
+	return TEST_SUCCESS;
+}
+
+static void
+testsuite_teardown(void)
+{
+}
+
+static int
+test_eventdev_count(void)
+{
+	uint8_t count;
+	count = rte_event_dev_count();
+	TEST_ASSERT(count > 0, "Invalid eventdev count %" PRIu8, count);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_get_dev_id(void)
+{
+	int ret;
+	ret = rte_event_dev_get_dev_id("not_a_valid_eventdev_driver");
+	TEST_ASSERT_FAIL(ret, "Expected <0 for invalid dev name ret=%d", ret);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_socket_id(void)
+{
+	int socket_id;
+	socket_id = rte_event_dev_socket_id(TEST_DEV_ID);
+	TEST_ASSERT(socket_id != -EINVAL, "Failed to get socket_id %d",
+				socket_id);
+	socket_id = rte_event_dev_socket_id(RTE_EVENT_MAX_DEVS);
+	TEST_ASSERT(socket_id == -EINVAL, "Expected -EINVAL %d", socket_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_info_get(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+	ret = rte_event_dev_info_get(TEST_DEV_ID, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	TEST_ASSERT(info.max_event_ports > 0,
+			"Not enough event ports %d", info.max_event_ports);
+	TEST_ASSERT(info.max_event_queues > 0,
+			"Not enough event queues %d", info.max_event_queues);
+	return TEST_SUCCESS;
+}
+
+static inline void
+devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
+			struct rte_event_dev_info *info)
+{
+	memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
+	dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
+	dev_conf->nb_event_ports = info->max_event_ports;
+	dev_conf->nb_event_queues = info->max_event_queues;
+	dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
+	dev_conf->nb_event_port_dequeue_depth =
+			info->max_event_port_dequeue_depth;
+	dev_conf->nb_event_port_enqueue_depth =
+			info->max_event_port_enqueue_depth;
+	dev_conf->nb_events_limit =
+			info->max_num_events;
+}
+
+static int
+test_ethdev_config_run(struct rte_event_dev_config *dev_conf,
+		struct rte_event_dev_info *info,
+		void (*fn)(struct rte_event_dev_config *dev_conf,
+			struct rte_event_dev_info *info))
+{
+	devconf_set_default_sane_values(dev_conf, info);
+	fn(dev_conf, info);
+	return rte_event_dev_configure(TEST_DEV_ID, dev_conf);
+}
+
+static void
+min_dequeue_limit(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns - 1;
+}
+
+static void
+max_dequeue_limit(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->dequeue_timeout_ns = info->max_dequeue_timeout_ns + 1;
+}
+
+static void
+max_events_limit(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_events_limit  = info->max_num_events + 1;
+}
+
+static void
+max_event_ports(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_ports = info->max_event_ports + 1;
+}
+
+static void
+max_event_queues(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_queues = info->max_event_queues + 1;
+}
+
+static void
+max_event_queue_flows(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_queue_flows = info->max_event_queue_flows + 1;
+}
+
+static void
+max_event_port_dequeue_depth(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_port_dequeue_depth =
+		info->max_event_port_dequeue_depth + 1;
+}
+
+static void
+max_event_port_enqueue_depth(struct rte_event_dev_config *dev_conf,
+		  struct rte_event_dev_info *info)
+{
+	dev_conf->nb_event_port_enqueue_depth =
+		info->max_event_port_enqueue_depth + 1;
+}
+
+
+static int
+test_eventdev_configure(void)
+{
+	int ret;
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_dev_info info;
+	ret = rte_event_dev_configure(TEST_DEV_ID, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	/* Check limits */
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, min_dequeue_limit),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_dequeue_limit),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_events_limit),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_event_ports),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_event_queues),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info, max_event_queue_flows),
+		 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info,
+			max_event_port_dequeue_depth),
+			 "Config negative test failed");
+	TEST_ASSERT_EQUAL(-EINVAL,
+		test_ethdev_config_run(&dev_conf, &info,
+		max_event_port_enqueue_depth),
+		 "Config negative test failed");
+
+	/* Positive case */
+	devconf_set_default_sane_values(&dev_conf, &info);
+	ret = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	/* re-configure */
+	devconf_set_default_sane_values(&dev_conf, &info);
+	dev_conf.nb_event_ports = info.max_event_ports/2;
+	dev_conf.nb_event_queues = info.max_event_queues/2;
+	ret = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to re configure eventdev");
+
+	/* re-configure back to max_event_queues and max_event_ports */
+	devconf_set_default_sane_values(&dev_conf, &info);
+	ret = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to re-configure eventdev");
+
+	return TEST_SUCCESS;
+
+}
+
+static int
+eventdev_configure_setup(void)
+{
+	int ret;
+	struct rte_event_dev_config dev_conf;
+	struct rte_event_dev_info info;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+	devconf_set_default_sane_values(&dev_conf, &info);
+	ret = rte_event_dev_configure(TEST_DEV_ID, &dev_conf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_default_conf_get(void)
+{
+	int i, ret;
+	struct rte_event_queue_conf qconf;
+
+	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i,
+						 &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d info", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_setup(void)
+{
+	int i, ret;
+	struct rte_event_dev_info info;
+	struct rte_event_queue_conf qconf;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	/* Negative cases */
+	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 info");
+	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_ALL_TYPES &
+		 RTE_EVENT_QUEUE_CFG_TYPE_MASK);
+	qconf.nb_atomic_flows = info.max_event_queue_flows + 1;
+	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	qconf.nb_atomic_flows = info.max_event_queue_flows;
+	qconf.event_queue_cfg =	(RTE_EVENT_QUEUE_CFG_ORDERED_ONLY &
+		 RTE_EVENT_QUEUE_CFG_TYPE_MASK);
+	qconf.nb_atomic_order_sequences = info.max_event_queue_flows + 1;
+	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_queue_setup(TEST_DEV_ID, info.max_event_queues,
+					&qconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	/* Positive case */
+	ret = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get queue0 info");
+	ret = rte_event_queue_setup(TEST_DEV_ID, 0, &qconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup queue0");
+
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_count(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	TEST_ASSERT_EQUAL(rte_event_queue_count(TEST_DEV_ID),
+		 info.max_event_queues, "Wrong queue count");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_queue_priority(void)
+{
+	int i, ret;
+	struct rte_event_dev_info info;
+	struct rte_event_queue_conf qconf;
+	uint8_t priority;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_default_conf_get(TEST_DEV_ID, i,
+					&qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get queue%d def conf", i);
+		qconf.priority = i %  RTE_EVENT_DEV_PRIORITY_LOWEST;
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, &qconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		priority =  rte_event_queue_priority(TEST_DEV_ID, i);
+		if (info.event_dev_cap & RTE_EVENT_DEV_CAP_QUEUE_QOS)
+			TEST_ASSERT_EQUAL(priority,
+			 i %  RTE_EVENT_DEV_PRIORITY_LOWEST,
+			 "Wrong priority value for queue%d", i);
+		else
+			TEST_ASSERT_EQUAL(priority,
+			 RTE_EVENT_DEV_PRIORITY_NORMAL,
+			 "Wrong priority value for queue%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_port_default_conf_get(void)
+{
+	int i, ret;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID,
+			rte_event_port_count(TEST_DEV_ID) + 1, NULL);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	for (i = 0; i < rte_event_port_count(TEST_DEV_ID); i++) {
+		ret = rte_event_port_default_conf_get(TEST_DEV_ID, i,
+							&pconf);
+		TEST_ASSERT_SUCCESS(ret, "Failed to get port%d info", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_port_setup(void)
+{
+	int i, ret;
+	struct rte_event_dev_info info;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	/* Negative cases */
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	pconf.new_event_threshold = info.max_num_events + 1;
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	pconf.new_event_threshold = info.max_num_events;
+	pconf.dequeue_depth = info.max_event_port_dequeue_depth + 1;
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	pconf.dequeue_depth = info.max_event_port_dequeue_depth;
+	pconf.enqueue_depth = info.max_event_port_enqueue_depth + 1;
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	ret = rte_event_port_setup(TEST_DEV_ID, info.max_event_ports,
+					&pconf);
+	TEST_ASSERT(ret == -EINVAL, "Expected -EINVAL, %d", ret);
+
+	/* Positive case */
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+
+
+	for (i = 0; i < rte_event_port_count(TEST_DEV_ID); i++) {
+		ret = rte_event_port_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup port%d", i);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_dequeue_depth(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+
+	TEST_ASSERT_EQUAL(rte_event_port_dequeue_depth(TEST_DEV_ID, 0),
+		 pconf.dequeue_depth, "Wrong port dequeue depth");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_enqueue_depth(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+	struct rte_event_port_conf pconf;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	ret = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get port0 info");
+	ret = rte_event_port_setup(TEST_DEV_ID, 0, &pconf);
+	TEST_ASSERT_SUCCESS(ret, "Failed to setup port0");
+
+	TEST_ASSERT_EQUAL(rte_event_port_enqueue_depth(TEST_DEV_ID, 0),
+		 pconf.enqueue_depth, "Wrong port enqueue depth");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_port_count(void)
+{
+	int ret;
+	struct rte_event_dev_info info;
+
+	ret = rte_event_dev_info_get(TEST_DEV_ID, &info);
+	TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
+
+	TEST_ASSERT_EQUAL(rte_event_port_count(TEST_DEV_ID),
+		 info.max_event_ports, "Wrong port count");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_timeout_ticks(void)
+{
+	int ret;
+	uint64_t timeout_ticks;
+
+	ret = rte_event_dequeue_timeout_ticks(TEST_DEV_ID, 100, &timeout_ticks);
+	TEST_ASSERT_SUCCESS(ret, "Fail to get timeout_ticks");
+
+	return TEST_SUCCESS;
+}
+
+
+static int
+test_eventdev_start_stop(void)
+{
+	int i, ret;
+
+	ret = eventdev_configure_setup();
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < rte_event_port_count(TEST_DEV_ID); i++) {
+		ret = rte_event_port_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup port%d", i);
+	}
+
+	ret = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", TEST_DEV_ID);
+
+	rte_event_dev_stop(TEST_DEV_ID);
+	return TEST_SUCCESS;
+}
+
+
+static int
+eventdev_setup_device(void)
+{
+	int i, ret;
+
+	ret = eventdev_configure_setup();
+	TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
+
+	for (i = 0; i < rte_event_queue_count(TEST_DEV_ID); i++) {
+		ret = rte_event_queue_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup queue%d", i);
+	}
+
+	for (i = 0; i < rte_event_port_count(TEST_DEV_ID); i++) {
+		ret = rte_event_port_setup(TEST_DEV_ID, i, NULL);
+		TEST_ASSERT_SUCCESS(ret, "Failed to setup port%d", i);
+	}
+
+	ret = rte_event_dev_start(TEST_DEV_ID);
+	TEST_ASSERT_SUCCESS(ret, "Failed to start device%d", TEST_DEV_ID);
+
+	return TEST_SUCCESS;
+}
+
+static void
+eventdev_stop_device(void)
+{
+	rte_event_dev_stop(TEST_DEV_ID);
+}
+
+static int
+test_eventdev_link(void)
+{
+	int ret, nb_queues, i;
+	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+	ret = rte_event_port_link(TEST_DEV_ID, 0, NULL, NULL, 0);
+	TEST_ASSERT(ret >= 0, "Failed to link with NULL device%d",
+				 TEST_DEV_ID);
+
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	for (i = 0; i < nb_queues; i++) {
+		queues[i] = i;
+		priorities[i] = RTE_EVENT_DEV_PRIORITY_NORMAL;
+	}
+
+	ret = rte_event_port_link(TEST_DEV_ID, 0, queues,
+					priorities, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to link(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_unlink(void)
+{
+	int ret, nb_queues, i;
+	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, NULL, 0);
+	TEST_ASSERT(ret >= 0, "Failed to unlink with NULL device%d",
+				 TEST_DEV_ID);
+
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	for (i = 0; i < nb_queues; i++)
+		queues[i] = i;
+
+
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, queues, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_link_get(void)
+{
+	int ret, nb_queues, i;
+	uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+	uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+
+	/* link all queues */
+	ret = rte_event_port_link(TEST_DEV_ID, 0, NULL, NULL, 0);
+	TEST_ASSERT(ret >= 0, "Failed to link with NULL device%d",
+				 TEST_DEV_ID);
+
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	for (i = 0; i < nb_queues; i++)
+		queues[i] = i;
+
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, queues, nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+
+	ret = rte_event_port_links_get(TEST_DEV_ID, 0, queues, priorities);
+	TEST_ASSERT(ret == 0, "(%d)Wrong link get=%d", TEST_DEV_ID, ret);
+
+	/* link all queues and get the links */
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	for (i = 0; i < nb_queues; i++) {
+		queues[i] = i;
+		priorities[i] = RTE_EVENT_DEV_PRIORITY_NORMAL;
+	}
+	ret = rte_event_port_link(TEST_DEV_ID, 0, queues, priorities,
+					 nb_queues);
+	TEST_ASSERT(ret == nb_queues, "Failed to link(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	ret = rte_event_port_links_get(TEST_DEV_ID, 0, queues, priorities);
+	TEST_ASSERT(ret == nb_queues, "(%d)Wrong link get ret=%d expected=%d",
+				 TEST_DEV_ID, ret, nb_queues);
+	/* unlink all*/
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, NULL, 0);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	/* link just one queue */
+	queues[0] = 0;
+	priorities[0] = RTE_EVENT_DEV_PRIORITY_NORMAL;
+
+	ret = rte_event_port_link(TEST_DEV_ID, 0, queues, priorities, 1);
+	TEST_ASSERT(ret == 1, "Failed to link(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	ret = rte_event_port_links_get(TEST_DEV_ID, 0, queues, priorities);
+	TEST_ASSERT(ret == 1, "(%d)Wrong link get ret=%d expected=%d",
+					TEST_DEV_ID, ret, 1);
+	/* unlink all*/
+	ret = rte_event_port_unlink(TEST_DEV_ID, 0, NULL, 0);
+	TEST_ASSERT(ret == nb_queues, "Failed to unlink(device%d) ret=%d",
+				 TEST_DEV_ID, ret);
+	/* 4 links and 2 unlinks */
+	nb_queues = rte_event_queue_count(TEST_DEV_ID);
+	if (nb_queues >= 4) {
+		for (i = 0; i < 4; i++) {
+			queues[i] = i;
+			priorities[i] = 0x40;
+		}
+		ret = rte_event_port_link(TEST_DEV_ID, 0, queues, priorities,
+						4);
+		TEST_ASSERT(ret == 4, "Failed to link(device%d) ret=%d",
+					 TEST_DEV_ID, ret);
+
+		for (i = 0; i < 2; i++)
+			queues[i] = i;
+
+		ret = rte_event_port_unlink(TEST_DEV_ID, 0, queues, 2);
+		TEST_ASSERT(ret == 2, "Failed to unlink(device%d) ret=%d",
+					 TEST_DEV_ID, ret);
+		ret = rte_event_port_links_get(TEST_DEV_ID, 0,
+						queues, priorities);
+		TEST_ASSERT(ret == 2, "(%d)Wrong link get ret=%d expected=%d",
+						TEST_DEV_ID, ret, 2);
+		TEST_ASSERT(queues[0] == 2, "queues[0]=%d expected=%d",
+							queues[0], 2);
+		TEST_ASSERT(priorities[0] == 0x40, "priorities[0]=%d expected=%d",
+							priorities[0], 0x40);
+		TEST_ASSERT(queues[1] == 3, "queues[1]=%d expected=%d",
+							queues[1], 3);
+		TEST_ASSERT(priorities[1] == 0x40, "priorities[1]=%d expected=%d",
+							priorities[1], 0x40);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_eventdev_close(void)
+{
+	rte_event_dev_stop(TEST_DEV_ID);
+	return rte_event_dev_close(TEST_DEV_ID);
+}
+
+static struct unit_test_suite eventdev_common_testsuite  = {
+	.suite_name = "eventdev common code unit test suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_count),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_get_dev_id),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_socket_id),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_info_get),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_configure),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_default_conf_get),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_setup),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_count),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_queue_priority),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_port_default_conf_get),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_port_setup),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_dequeue_depth),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_enqueue_depth),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_port_count),
+		TEST_CASE_ST(eventdev_configure_setup, NULL,
+			test_eventdev_timeout_ticks),
+		TEST_CASE_ST(NULL, NULL,
+			test_eventdev_start_stop),
+		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
+			test_eventdev_link),
+		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
+			test_eventdev_unlink),
+		TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
+			test_eventdev_link_get),
+		TEST_CASE_ST(eventdev_setup_device, NULL,
+			test_eventdev_close),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_eventdev_common(void)
+{
+	return unit_test_suite_runner(&eventdev_common_testsuite);
+}
+
+REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
-- 
2.5.5

^ permalink raw reply related	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2016-12-21  9:25       ` [PATCH v4 1/6] eventdev: introduce event driven programming model Jerin Jacob
@ 2017-01-25 16:32         ` Eads, Gage
  2017-01-25 16:36           ` Richardson, Bruce
  2017-02-02 11:18         ` Nipun Gupta
  1 sibling, 1 reply; 109+ messages in thread
From: Eads, Gage @ 2017-01-25 16:32 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, Richardson, Bruce, hemant.agrawal, Van Haaren,
	Harry, McDaniel, Timothy

Hi Jerin,

See the bottom of this email for a proposed tweak to the rte_event_enqueue_burst() return value.

>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  Sent: Wednesday, December 21, 2016 3:25 AM
>  To: dev@dpdk.org
>  Cc: thomas.monjalon@6wind.com; Richardson, Bruce
>  <bruce.richardson@intel.com>; hemant.agrawal@nxp.com; Eads, Gage
>  <gage.eads@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>;
>  Jerin Jacob <jerin.jacob@caviumnetworks.com>
>  Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
>  programming model
>
>  In a polling model, lcores poll ethdev ports and associated
>  rx queues directly to look for packets. In an event driven model,
>  by contrast, lcores call the scheduler that selects packets for
>  them based on programmer-specified criteria. The eventdev library
>  adds support for an event driven programming model, which offers
>  applications automatic multicore scaling, dynamic load balancing,
>  pipelining, packet ingress order maintenance and
>  synchronization services to simplify application packet processing.
>
>  By introducing an event driven programming model, DPDK can support
>  both polling and event driven programming models for packet processing,
>  and applications are free to choose whichever model
>  (or combination of the two) best suits their needs.
>
>  This patch adds the eventdev specification header file.
>
>  Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>  Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>  ---
>   MAINTAINERS                        |    3 +
>   doc/api/doxy-api-index.md          |    1 +
>   doc/api/doxy-api.conf              |    1 +
>   lib/librte_eventdev/rte_eventdev.h | 1275 ++++++++++++++++++++++++++++++++++++
>   4 files changed, 1280 insertions(+)
>   create mode 100644 lib/librte_eventdev/rte_eventdev.h
>
>  diff --git a/MAINTAINERS b/MAINTAINERS
>  index 26d9590..8e59352 100644
>  --- a/MAINTAINERS
>  +++ b/MAINTAINERS
>  @@ -249,6 +249,9 @@ F: lib/librte_cryptodev/
>   F: app/test/test_cryptodev*
>   F: examples/l2fwd-crypto/
>
>  +Eventdev API - EXPERIMENTAL
>  +M: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>  +F: lib/librte_eventdev/
>
>   Networking Drivers
>   ------------------
>  diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
>  index 33c04ed..0ad3367 100644
>  --- a/doc/api/doxy-api-index.md
>  +++ b/doc/api/doxy-api-index.md
>  @@ -40,6 +40,7 @@ There are many libraries, so their headers may be grouped by topics:
>     [ethdev]             (@ref rte_ethdev.h),
>     [ethctrl]            (@ref rte_eth_ctrl.h),
>     [cryptodev]          (@ref rte_cryptodev.h),
>  +  [eventdev]           (@ref rte_eventdev.h),
>     [devargs]            (@ref rte_devargs.h),
>     [bond]               (@ref rte_eth_bond.h),
>     [vhost]              (@ref rte_virtio_net.h),
>  diff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf
>  index b340fcf..e030c21 100644
>  --- a/doc/api/doxy-api.conf
>  +++ b/doc/api/doxy-api.conf
>  @@ -41,6 +41,7 @@ INPUT                   = doc/api/doxy-api-index.md \
>                             lib/librte_cryptodev \
>                             lib/librte_distributor \
>                             lib/librte_ether \
>  +                          lib/librte_eventdev \
>                             lib/librte_hash \
>                             lib/librte_ip_frag \
>                             lib/librte_jobstats \
>  diff --git a/lib/librte_eventdev/rte_eventdev.h
>  b/lib/librte_eventdev/rte_eventdev.h
>  new file mode 100644
>  index 0000000..b2bc471
>  --- /dev/null
>  +++ b/lib/librte_eventdev/rte_eventdev.h
>  @@ -0,0 +1,1275 @@
>  +/*
>  + *   BSD LICENSE
>  + *
>  + *   Copyright 2016 Cavium.
>  + *   Copyright 2016 Intel Corporation.
>  + *   Copyright 2016 NXP.
>  + *
>  + *   Redistribution and use in source and binary forms, with or without
>  + *   modification, are permitted provided that the following conditions
>  + *   are met:
>  + *
>  + *     * Redistributions of source code must retain the above copyright
>  + *       notice, this list of conditions and the following disclaimer.
>  + *     * Redistributions in binary form must reproduce the above copyright
>  + *       notice, this list of conditions and the following disclaimer in
>  + *       the documentation and/or other materials provided with the
>  + *       distribution.
>  + *     * Neither the name of Cavium nor the names of its
>  + *       contributors may be used to endorse or promote products derived
>  + *       from this software without specific prior written permission.
>  + *
>  + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
>  + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
>  + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
>  + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
>  + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
>  + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
>  + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
>  + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
>  + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
>  + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
>  + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>  + */
>  +
>  +#ifndef _RTE_EVENTDEV_H_
>  +#define _RTE_EVENTDEV_H_
>  +
>  +/**
>  + * @file
>  + *
>  + * RTE Event Device API
>  + *
>  + * In a polling model, lcores poll ethdev ports and associated rx queues
>  + * directly to look for packets. In an event driven model, by contrast, lcores
>  + * call the scheduler that selects packets for them based on programmer
>  + * specified criteria. The eventdev library adds support for an event driven
>  + * programming model, which offers applications automatic multicore scaling,
>  + * dynamic load balancing, pipelining, packet ingress order maintenance and
>  + * synchronization services to simplify application packet processing.
>  + *
>  + * The Event Device API is composed of two parts:
>  + *
>  + * - The application-oriented Event API that includes functions to setup
>  + *   an event device (configure it, setup its queues, ports and start it), to
>  + *   establish the link between queues to port and to receive events, and so on.
>  + *
>  + * - The driver-oriented Event API that exports a function allowing
>  + *   an event poll Mode Driver (PMD) to simultaneously register itself as
>  + *   an event device driver.
>  + *
>  + * Event device components:
>  + *
>  + *                     +-----------------+
>  + *                     | +-------------+ |
>  + *        +-------+    | |    flow 0   | |
>  + *        |Packet |    | +-------------+ |
>  + *        |event  |    | +-------------+ |
>  + *        |       |    | |    flow 1   | |port_link(port0, queue0)
>  + *        +-------+    | +-------------+ |     |     +--------+
>  + *        +-------+    | +-------------+ o-----v-----o        |dequeue +------+
>  + *        |Crypto |    | |    flow n   | |           | event  +------->|Core 0|
>  + *        |work   |    | +-------------+ o----+      | port 0 |        |      |
>  + *        |done ev|    |  event queue 0  |    |      +--------+        +------+
>  + *        +-------+    +-----------------+    |
>  + *        +-------+                           |
>  + *        |Timer  |    +-----------------+    |      +--------+
>  + *        |expiry |    | +-------------+ |    +------o        |dequeue +------+
>  + *        |event  |    | |    flow 0   | o-----------o event  +------->|Core 1|
>  + *        +-------+    | +-------------+ |      +----o port 1 |        |      |
>  + *       Event enqueue | +-------------+ |      |    +--------+        +------+
>  + *     o-------------> | |    flow 1   | |      |
>  + *        enqueue(     | +-------------+ |      |
>  + *        queue_id,    |                 |      |    +--------+        +------+
>  + *        flow_id,     | +-------------+ |      |    |        |dequeue |Core 2|
>  + *        sched_type,  | |    flow n   | o-----------o event  +------->|      |
>  + *        event_type,  | +-------------+ |      |    | port 2 |        +------+
>  + *        subev_type,  |  event queue 1  |      |    +--------+
>  + *        event)       +-----------------+      |    +--------+
>  + *                                              |    |        |dequeue +------+
>  + *        +-------+    +-----------------+      |    | event  +------->|Core n|
>  + *        |Core   |    | +-------------+ o-----------o port n |        |      |
>  + *        |(SW)   |    | |    flow 0   | |      |    +--------+        +--+---+
>  + *        |event  |    | +-------------+ |      |                         |
>  + *        +-------+    | +-------------+ |      |                         |
>  + *            ^        | |    flow 1   | |      |                         |
>  + *            |        | +-------------+ o------+                         |
>  + *            |        | +-------------+ |                                |
>  + *            |        | |    flow n   | |                                |
>  + *            |        | +-------------+ |                                |
>  + *            |        |  event queue n  |                                |
>  + *            |        +-----------------+                                |
>  + *            |                                                           |
>  + *            +-----------------------------------------------------------+
>  + *
>  + * Event device: A hardware or software-based event scheduler.
>  + *
>  + * Event: A unit of scheduling that encapsulates a packet or other datatype,
>  + * such as a SW generated event from the CPU, a crypto work completion
>  + * notification or a timer expiry event notification, as well as metadata.
>  + * The metadata includes flow ID, scheduling type, event priority, event_type,
>  + * sub_event_type etc.
>  + *
>  + * Event queue: A queue containing events that are scheduled by the event dev.
>  + * An event queue contains events of different flows associated with
>  + * scheduling types, such as atomic, ordered, or parallel.
>  + *
>  + * Event port: An application's interface into the event dev for enqueue and
>  + * dequeue operations. Each event port can be linked with one or more
>  + * event queues for dequeue operations.
>  + *
>  + * By default, all the functions of the Event Device API exported by a PMD
>  + * are lock-free functions which are assumed not to be invoked in parallel on
>  + * different logical cores to work on the same target object. For instance,
>  + * the dequeue function of a PMD cannot be invoked in parallel on two logical
>  + * cores to operate on the same event port. Of course, this function
>  + * can be invoked in parallel by different logical cores on different ports.
>  + * It is the responsibility of the upper level application to enforce this rule.
>  + *
>  + * In all functions of the Event API, the Event device is
>  + * designated by an integer >= 0 named the device identifier *dev_id*
>  + *
>  + * At the Event driver level, Event devices are represented by a generic
>  + * data structure of type *rte_event_dev*.
>  + *
>  + * Event devices are dynamically registered during the PCI/SoC device probing
>  + * phase performed at EAL initialization time.
>  + * When an Event device is being probed, a *rte_event_dev* structure and
>  + * a new device identifier are allocated for that device. Then, the
>  + * event_dev_init() function supplied by the Event driver matching the probed
>  + * device is invoked to properly initialize the device.
>  + *
>  + * The role of the device init function consists of resetting the hardware or
>  + * software event driver implementations.
>  + *
>  + * If the device init operation is successful, the correspondence between
>  + * the device identifier assigned to the new device and its associated
>  + * *rte_event_dev* structure is effectively registered.
>  + * Otherwise, both the *rte_event_dev* structure and the device identifier are
>  + * freed.
>  + *
>  + * The functions exported by the application Event API to setup a device
>  + * designated by its device identifier must be invoked in the following order:
>  + *     - rte_event_dev_configure()
>  + *     - rte_event_queue_setup()
>  + *     - rte_event_port_setup()
>  + *     - rte_event_port_link()
>  + *     - rte_event_dev_start()
>  + *
>  + * Then, the application can invoke, in any order, the functions
>  + * exported by the Event API to schedule events, dequeue events, enqueue
>  + * events, change event queue(s) to event port [un]link establishment and
>  + * so on.
>  + *
>  + * An application may use rte_event_[queue/port]_default_conf_get() to get
>  + * the default configuration to set up an event queue or event port by
>  + * overriding a few default values.
>  + *
>  + * If the application wants to change the configuration (i.e. call
>  + * rte_event_dev_configure(), rte_event_queue_setup(), or
>  + * rte_event_port_setup()), it must call rte_event_dev_stop() first to stop the
>  + * device and then do the reconfiguration before calling rte_event_dev_start()
>  + * again. The schedule, enqueue and dequeue functions should not be invoked
>  + * when the device is stopped.
>  + *
>  + * Finally, an application can close an Event device by invoking the
>  + * rte_event_dev_close() function.
>  + *
>  + * Each function of the application Event API invokes a specific function
>  + * of the PMD that controls the target device designated by its device
>  + * identifier.
>  + *
>  + * For this purpose, all device-specific functions of an Event driver are
>  + * supplied through a set of pointers contained in a generic structure of type
>  + * *event_dev_ops*.
>  + * The address of the *event_dev_ops* structure is stored in the
>  *rte_event_dev*
>  + * structure by the device init function of the Event driver, which is
>  + * invoked during the PCI/SoC device probing phase, as explained earlier.
>  + *
>  + * In other words, each function of the Event API simply retrieves the
>  + * *rte_event_dev* structure associated with the device identifier and
>  + * performs an indirect invocation of the corresponding driver function
>  + * supplied in the *event_dev_ops* structure of the *rte_event_dev*
>  structure.
>  + *
>  + * For performance reasons, the address of the fast-path functions of the
>  + * Event driver is not contained in the *event_dev_ops* structure.
>  + * Instead, they are directly stored at the beginning of the *rte_event_dev*
>  + * structure to avoid an extra indirect memory access during their invocation.
>  + *
>  + * RTE event device drivers do not use interrupts for enqueue or dequeue
>  + * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
>  + * functions to applications.
>  + *
>  + * An event driven application has the following typical workflow on the
>  + * fastpath:
>  + * \code{.c}
>  + *  while (1) {
>  + *
>  + *          rte_event_schedule(dev_id);
>  + *
>  + *          rte_event_dequeue(...);
>  + *
>  + *          (event processing)
>  + *
>  + *          rte_event_enqueue(...);
>  + *  }
>  + * \endcode
>  + *
>  + * The events are injected to the event device through the *enqueue* operation
>  + * by event producers in the system. The typical event producers are the ethdev
>  + * subsystem for generating packet events, the CPU(SW) for generating events
>  + * based on different stages of application processing, cryptodev for
>  + * generating crypto work completion notifications, etc.
>  + *
>  + * The *dequeue* operation gets one or more events from the event ports.
>  + * The application processes the events and sends them to a downstream event
>  + * queue through rte_event_enqueue_burst() if it is an intermediate stage of
>  + * event processing; at the final stage, the application may hand them to a
>  + * different subsystem, such as ethdev, to send the packet/event on the wire
>  + * using the rte_eth_tx_burst() API.
>  + *
>  + * The point at which events are scheduled to ports depends on the device.
>  + * For hardware devices, scheduling occurs asynchronously without any
>  + * software intervention. Software schedulers can either be distributed
>  + * (each worker thread schedules events to its own port) or centralized
>  + * (a dedicated thread schedules to all ports). Distributed software schedulers
>  + * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
>  + * scheduler logic is located in rte_event_schedule().
>  + * If the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set,
>  + * the device is centralized and thus needs a dedicated scheduling
>  + * thread that repeatedly calls rte_event_schedule().
>  + *
>  + */
>  +
>  +#ifdef __cplusplus
>  +extern "C" {
>  +#endif
>  +
>  +#include <rte_common.h>
>  +#include <rte_pci.h>
>  +#include <rte_mbuf.h>
>  +
>  +/* Event device capability bitmap flags */
>  +#define RTE_EVENT_DEV_CAP_QUEUE_QOS           (1ULL << 0)
>  +/**< Event scheduling prioritization is based on the priority associated with
>  + *  each event queue.
>  + *
>  + *  @see rte_event_queue_setup()
>  + */
>  +#define RTE_EVENT_DEV_CAP_EVENT_QOS           (1ULL << 1)
>  +/**< Event scheduling prioritization is based on the priority associated with
>  + *  each event. Priority of each event is supplied in *rte_event* structure
>  + *  on each enqueue operation.
>  + *
>  + *  @see rte_event_enqueue_burst()
>  + */
>  +#define RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED   (1ULL << 2)
>  +/**< Event device operates in distributed scheduling mode.
>  + * In distributed scheduling mode, event scheduling happens in HW or
>  + * rte_event_dequeue_burst() or the combination of these two.
>  + * If the flag is not set then eventdev is centralized and thus needs a
>  + * dedicated scheduling thread that repeatedly calls rte_event_schedule().
>  + *
>  + * @see rte_event_schedule(), rte_event_dequeue_burst()
>  + */
>  +
>  +/* Event device priority levels */
>  +#define RTE_EVENT_DEV_PRIORITY_HIGHEST   0
>  +/**< Highest priority expressed across eventdev subsystem
>  + * @see rte_event_queue_setup(), rte_event_enqueue_burst()
>  + * @see rte_event_port_link()
>  + */
>  +#define RTE_EVENT_DEV_PRIORITY_NORMAL    128
>  +/**< Normal priority expressed across eventdev subsystem
>  + * @see rte_event_queue_setup(), rte_event_enqueue_burst()
>  + * @see rte_event_port_link()
>  + */
>  +#define RTE_EVENT_DEV_PRIORITY_LOWEST    255
>  +/**< Lowest priority expressed across eventdev subsystem
>  + * @see rte_event_queue_setup(), rte_event_enqueue_burst()
>  + * @see rte_event_port_link()
>  + */
>  +
>  +/**
>  + * Get the total number of event devices that have been successfully
>  + * initialised.
>  + *
>  + * @return
>  + *   The total number of usable event devices.
>  + */
>  +uint8_t
>  +rte_event_dev_count(void);
>  +
>  +/**
>  + * Get the device identifier for the named event device.
>  + *
>  + * @param name
>  + *   Event device name to select the event device identifier.
>  + *
>  + * @return
>  + *   Returns event device identifier on success.
>  + *   - <0: Failure to find named event device.
>  + */
>  +int
>  +rte_event_dev_get_dev_id(const char *name);
>  +
>  +/**
>  + * Return the NUMA socket to which a device is connected.
>  + *
>  + * @param dev_id
>  + *   The identifier of the device.
>  + * @return
>  + *   The NUMA socket id to which the device is connected or
>  + *   a default of zero if the socket could not be determined.
>  + *   -(-EINVAL)  dev_id value is out of range.
>  + */
>  +int
>  +rte_event_dev_socket_id(uint8_t dev_id);
>  +
>  +/**
>  + * Event device information
>  + */
>  +struct rte_event_dev_info {
>  +    const char *driver_name;        /**< Event driver name */
>  +    struct rte_pci_device *pci_dev; /**< PCI information */
>  +    uint32_t min_dequeue_timeout_ns;
>  +    /**< Minimum supported global dequeue timeout(ns) by this device */
>  +    uint32_t max_dequeue_timeout_ns;
>  +    /**< Maximum supported global dequeue timeout(ns) by this device */
>  +    uint32_t dequeue_timeout_ns;
>  +    /**< Configured global dequeue timeout(ns) for this device */
>  +    uint8_t max_event_queues;
>  +    /**< Maximum event_queues supported by this device */
>  +    uint32_t max_event_queue_flows;
>  +    /**< Maximum supported flows in an event queue by this device*/
>  +    uint8_t max_event_queue_priority_levels;
>  +    /**< Maximum number of event queue priority levels by this device.
>  +     * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
>  +     */
>  +    uint8_t max_event_priority_levels;
>  +    /**< Maximum number of event priority levels by this device.
>  +     * Valid when the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability
>  +     */
>  +    uint8_t max_event_ports;
>  +    /**< Maximum number of event ports supported by this device */
>  +    uint8_t max_event_port_dequeue_depth;
>  +    /**< Maximum number of events can be dequeued at a time from an
>  +     * event port by this device.
>  +     * A device that does not support bulk dequeue will set this as 1.
>  +     */
>  +    uint32_t max_event_port_enqueue_depth;
>  +    /**< Maximum number of events can be enqueued at a time from an
>  +     * event port by this device.
>  +     * A device that does not support bulk enqueue will set this as 1.
>  +     */
>  +    int32_t max_num_events;
>  +    /**< A *closed system* event dev has a limit on the number of events it
>  +     * can manage at a time. An *open system* event dev does not have a
>  +     * limit and will specify this as -1.
>  +     */
>  +    uint32_t event_dev_cap;
>  +    /**< Event device capabilities(RTE_EVENT_DEV_CAP_)*/
>  +};
>  +
>  +/**
>  + * Retrieve the contextual information of an event device.
>  + *
>  + * @param dev_id
>  + *   The identifier of the device.
>  + *
>  + * @param[out] dev_info
>  + *   A pointer to a structure of type *rte_event_dev_info* to be filled with the
>  + *   contextual information of the device.
>  + *
>  + * @return
>  + *   - 0: Success, driver updates the contextual information of the event device
>  + *   - <0: Error code returned by the driver info get function.
>  + *
>  + */
>  +int
>  +rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info);
>  +
>  +/* Event device configuration bitmap flags */
>  +#define RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT (1ULL << 0)
>  +/**< Override the global *dequeue_timeout_ns* and use per dequeue timeout
>  + *  in ns.
>  + *  @see rte_event_dequeue_timeout_ticks(), rte_event_dequeue_burst()
>  + */
>  +
>  +/** Event device configuration structure */
>  +struct rte_event_dev_config {
>  +    uint32_t dequeue_timeout_ns;
>  +    /**< rte_event_dequeue_burst() timeout on this device.
>  +     * This value should be in the range of *min_dequeue_timeout_ns* and
>  +     * *max_dequeue_timeout_ns* which previously provided in
>  +     * rte_event_dev_info_get()
>  +     * @see RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
>  +     */
>  +    int32_t nb_events_limit;
>  +    /**< Applies to *closed system* event dev only. This field indicates a
>  +     * limit to ethdev-like devices to limit the number of events injected
>  +     * into the system to not overwhelm core-to-core events.
>  +     * This value cannot exceed the *max_num_events* which previously
>  +     * provided in rte_event_dev_info_get()
>  +     */
>  +    uint8_t nb_event_queues;
>  +    /**< Number of event queues to configure on this device.
>  +     * This value cannot exceed the *max_event_queues* which previously
>  +     * provided in rte_event_dev_info_get()
>  +     */
>  +    uint8_t nb_event_ports;
>  +    /**< Number of event ports to configure on this device.
>  +     * This value cannot exceed the *max_event_ports* which previously
>  +     * provided in rte_event_dev_info_get()
>  +     */
>  +    uint32_t nb_event_queue_flows;
>  +    /**< Number of flows for any event queue on this device.
>  +     * This value cannot exceed the *max_event_queue_flows* which previously
>  +     * provided in rte_event_dev_info_get()
>  +     */
>  +    uint8_t nb_event_port_dequeue_depth;
>  +    /**< Maximum number of events can be dequeued at a time from an
>  +     * event port by this device.
>  +     * This value cannot exceed the *max_event_port_dequeue_depth*
>  +     * which previously provided in rte_event_dev_info_get()
>  +     * @see rte_event_port_setup()
>  +     */
>  +    uint32_t nb_event_port_enqueue_depth;
>  +    /**< Maximum number of events can be enqueued at a time from an
>  +     * event port by this device.
>  +     * This value cannot exceed the *max_event_port_enqueue_depth*
>  +     * which previously provided in rte_event_dev_info_get()
>  +     * @see rte_event_port_setup()
>  +     */
>  +    uint32_t event_dev_cfg;
>  +    /**< Event device config flags(RTE_EVENT_DEV_CFG_)*/
>  +};
>  +
>  +/**
>  + * Configure an event device.
>  + *
>  + * This function must be invoked first before any other function in the
>  + * API. This function can also be re-invoked when a device is in the
>  + * stopped state.
>  + *
>  + * The caller may use rte_event_dev_info_get() to get the capabilities of the
>  + * resources available for this event device.
>  + *
>  + * @param dev_id
>  + *   The identifier of the device to configure.
>  + * @param dev_conf
>  + *   The event device configuration structure.
>  + *
>  + * @return
>  + *   - 0: Success, device configured.
>  + *   - <0: Error code returned by the driver configuration function.
>  + */
>  +int
>  +rte_event_dev_configure(uint8_t dev_id,
>  +                    const struct rte_event_dev_config *dev_conf);
>  +
>  +
>  +/* Event queue specific APIs */
>  +
>  +/* Event queue configuration bitmap flags */
>  +#define RTE_EVENT_QUEUE_CFG_DEFAULT            (0)
>  +/**< Default value of *event_queue_cfg* when rte_event_queue_setup() is
>  + * invoked with queue_conf == NULL
>  + *
>  + * @see rte_event_queue_setup()
>  + */
>  +#define RTE_EVENT_QUEUE_CFG_TYPE_MASK          (3ULL << 0)
>  +/**< Mask for event queue schedule type configuration request */
>  +#define RTE_EVENT_QUEUE_CFG_ALL_TYPES          (0ULL << 0)
>  +/**< Allow ATOMIC,ORDERED,PARALLEL schedule type enqueue
>  + *
>  + * @see RTE_SCHED_TYPE_ORDERED, RTE_SCHED_TYPE_ATOMIC, RTE_SCHED_TYPE_PARALLEL
>  + * @see rte_event_enqueue_burst()
>  + */
>  +#define RTE_EVENT_QUEUE_CFG_ATOMIC_ONLY        (1ULL << 0)
>  +/**< Allow only ATOMIC schedule type enqueue
>  + *
>  + * The rte_event_enqueue_burst() result is undefined if the queue is configured
>  + * with ATOMIC only and sched_type != RTE_SCHED_TYPE_ATOMIC
>  + *
>  + * @see RTE_SCHED_TYPE_ATOMIC, rte_event_enqueue_burst()
>  + */
>  +#define RTE_EVENT_QUEUE_CFG_ORDERED_ONLY       (2ULL << 0)
>  +/**< Allow only ORDERED schedule type enqueue
>  + *
>  + * The rte_event_enqueue_burst() result is undefined if the queue is configured
>  + * with ORDERED only and sched_type != RTE_SCHED_TYPE_ORDERED
>  + *
>  + * @see RTE_SCHED_TYPE_ORDERED, rte_event_enqueue_burst()
>  + */
>  +#define RTE_EVENT_QUEUE_CFG_PARALLEL_ONLY      (3ULL << 0)
>  +/**< Allow only PARALLEL schedule type enqueue
>  + *
>  + * The rte_event_enqueue_burst() result is undefined if the queue is configured
>  + * with PARALLEL only and sched_type != RTE_SCHED_TYPE_PARALLEL
>  + *
>  + * @see RTE_SCHED_TYPE_PARALLEL, rte_event_enqueue_burst()
>  + */
>  +#define RTE_EVENT_QUEUE_CFG_SINGLE_LINK        (1ULL << 2)
>  +/**< This event queue links only to a single event port.
>  + *
>  + *  @see rte_event_port_setup(), rte_event_port_link()
>  + */
>  +
>  +/** Event queue configuration structure */
>  +struct rte_event_queue_conf {
>  +    uint32_t nb_atomic_flows;
>  +    /**< The maximum number of active flows this queue can track at any
>  +     * given time. The value must be in the range of
>  +     * [1 - nb_event_queue_flows] which was previously provided in
>  +     * rte_event_dev_info_get().
>  +     */
>  +    uint32_t nb_atomic_order_sequences;
>  +    /**< The maximum number of outstanding events waiting to be
>  +     * reordered by this queue. In other words, the number of entries in
>  +     * this queue’s reorder buffer. When the number of events in the
>  +     * reorder buffer reaches *nb_atomic_order_sequences* then the
>  +     * scheduler cannot schedule the events from this queue and an invalid
>  +     * event will be returned from dequeue until one or more entries are
>  +     * freed up/released.
>  +     * The value must be in the range of [1 - nb_event_queue_flows]
>  +     * which previously supplied to rte_event_dev_configure().
>  +     */
>  +    uint32_t event_queue_cfg; /**< Queue cfg flags(EVENT_QUEUE_CFG_) */
>  +    uint8_t priority;
>  +    /**< Priority for this event queue relative to other event queues.
>  +     * The requested priority should be in the range of
>  +     * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
>  +     * The implementation shall normalize the requested priority to
>  +     * event device supported priority value.
>  +     * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
>  +     */
>  +};
>  +
>  +/**
>  + * Retrieve the default configuration information of an event queue
>  + * designated by its *queue_id* from the event driver for an event device.
>  + *
>  + * This function is intended to be used in conjunction with
>  + * rte_event_queue_setup() where the caller needs to set up the queue by
>  + * overriding a few default values.
>  + *
>  + * @param dev_id
>  + *   The identifier of the device.
>  + * @param queue_id
>  + *   The index of the event queue to get the configuration information.
>  + *   The value must be in the range [0, nb_event_queues - 1]
>  + *   previously supplied to rte_event_dev_configure().
>  + * @param[out] queue_conf
>  + *   The pointer to the default event queue configuration data.
>  + * @return
>  + *   - 0: Success, driver updates the default event queue configuration data.
>  + *   - <0: Error code returned by the driver info get function.
>  + *
>  + * @see rte_event_queue_setup()
>  + *
>  + */
>  +int
>  +rte_event_queue_default_conf_get(uint8_t dev_id, uint8_t queue_id,
>  +                             struct rte_event_queue_conf *queue_conf);
>  +
>  +/**
>  + * Allocate and set up an event queue for an event device.
>  + *
>  + * @param dev_id
>  + *   The identifier of the device.
>  + * @param queue_id
>  + *   The index of the event queue to setup. The value must be in the range
>  + *   [0, nb_event_queues - 1] previously supplied to
>  + *   rte_event_dev_configure().
>  + * @param queue_conf
>  + *   The pointer to the configuration data to be used for the event queue.
>  + *   NULL value is allowed, in which case the default configuration is used.
>  + *
>  + * @see rte_event_queue_default_conf_get()
>  + *
>  + * @return
>  + *   - 0: Success, event queue correctly set up.
>  + *   - <0: event queue configuration failed
>  + */
>  +int
>  +rte_event_queue_setup(uint8_t dev_id, uint8_t queue_id,
>  +                  const struct rte_event_queue_conf *queue_conf);
>  +
>  +/**
>  + * Get the number of event queues on a specific event device
>  + *
>  + * @param dev_id
>  + *   Event device identifier.
>  + * @return
>  + *   - The number of configured event queues
>  + */
>  +uint8_t
>  +rte_event_queue_count(uint8_t dev_id);
>  +
>  +/**
>  + * Get the priority of the event queue on a specific event device
>  + *
>  + * @param dev_id
>  + *   Event device identifier.
>  + * @param queue_id
>  + *   Event queue identifier.
>  + * @return
>  + *   - If the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability then the
>  + *    configured priority of the event queue in
>  + *    [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST] range
>  + *    else the value RTE_EVENT_DEV_PRIORITY_NORMAL
>  + */
>  +uint8_t
>  +rte_event_queue_priority(uint8_t dev_id, uint8_t queue_id);
>  +
>  +/* Event port specific APIs */
>  +
>  +/** Event port configuration structure */
>  +struct rte_event_port_conf {
>  +    int32_t new_event_threshold;
>  +    /**< A backpressure threshold for new event enqueues on this port.
>  +     * Use for *closed system* event dev where event capacity is limited,
>  +     * and cannot exceed the capacity of the event dev.
>  +     * Configuring ports with different thresholds can make higher priority
>  +     * traffic less likely to  be backpressured.
>  +     * For example, a port used to inject NIC Rx packets into the event dev
>  +     * can have a lower threshold so as not to overwhelm the device,
>  +     * while ports used for worker pools can have a higher threshold.
>  +     * This value cannot exceed the *nb_events_limit*
>  +     * which previously supplied to rte_event_dev_configure()
>  +     */
>  +    uint8_t dequeue_depth;
>  +    /**< Configure number of bulk dequeues for this event port.
>  +     * This value cannot exceed the *nb_event_port_dequeue_depth*
>  +     * which previously supplied to rte_event_dev_configure()
>  +     */
>  +    uint8_t enqueue_depth;
>  +    /**< Configure number of bulk enqueues for this event port.
>  +     * This value cannot exceed the *nb_event_port_enqueue_depth*
>  +     * which previously supplied to rte_event_dev_configure()
>  +     */
>  +};
>  +
>  +/**
>  + * Retrieve the default configuration information of an event port designated
>  + * by its *port_id* from the event driver for an event device.
>  + *
>  + * This function is intended to be used in conjunction with
>  + * rte_event_port_setup() where the caller needs to set up the port by
>  + * overriding a few default values.
>  + *
>  + * @param dev_id
>  + *   The identifier of the device.
>  + * @param port_id
>  + *   The index of the event port to get the configuration information.
>  + *   The value must be in the range [0, nb_event_ports - 1]
>  + *   previously supplied to rte_event_dev_configure().
>  + * @param[out] port_conf
>  + *   The pointer to the default event port configuration data
>  + * @return
>  + *   - 0: Success, driver updates the default event port configuration data.
>  + *   - <0: Error code returned by the driver info get function.
>  + *
>  + * @see rte_event_port_setup()
>  + *
>  + */
>  +int
>  +rte_event_port_default_conf_get(uint8_t dev_id, uint8_t port_id,
>  +                            struct rte_event_port_conf *port_conf);
>  +
>  +/**
>  + * Allocate and set up an event port for an event device.
>  + *
>  + * @param dev_id
>  + *   The identifier of the device.
>  + * @param port_id
>  + *   The index of the event port to setup. The value must be in the range
>  + *   [0, nb_event_ports - 1] previously supplied to rte_event_dev_configure().
>  + * @param port_conf
>  + *   The pointer to the configuration data to be used for the queue.
>  + *   NULL value is allowed, in which case the default configuration is used.
>  + *
>  + * @see rte_event_port_default_conf_get()
>  + *
>  + * @return
>  + *   - 0: Success, event port correctly set up.
>  + *   - <0: Port configuration failed
>  + *   - (-EDQUOT) Quota exceeded (Application tried to link a queue
>  + *   configured with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one
>  + *   event port)
>  + */
>  +int
>  +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
>  +                 const struct rte_event_port_conf *port_conf);
>  +
>  +/**
>  + * Get the dequeue depth configured for an event port designated by its
>  + * *port_id* on a specific event device
>  + *
>  + * @param dev_id
>  + *   Event device identifier.
>  + * @param port_id
>  + *   Event port identifier.
>  + * @return
>  + *   - The configured dequeue depth
>  + *
>  + * @see rte_event_dequeue_burst()
>  + */
>  +uint8_t
>  +rte_event_port_dequeue_depth(uint8_t dev_id, uint8_t port_id);
>  +
>  +/**
>  + * Get the enqueue depth configured for an event port designated by its
>  + * *port_id* on a specific event device
>  + *
>  + * @param dev_id
>  + *   Event device identifier.
>  + * @param port_id
>  + *   Event port identifier.
>  + * @return
>  + *   - The configured enqueue depth
>  + *
>  + * @see rte_event_enqueue_burst()
>  + */
>  +uint8_t
>  +rte_event_port_enqueue_depth(uint8_t dev_id, uint8_t port_id);
>  +
>  +/**
>  + * Get the number of ports on a specific event device
>  + *
>  + * @param dev_id
>  + *   Event device identifier.
>  + * @return
>  + *   - The number of configured ports
>  + */
>  +uint8_t
>  +rte_event_port_count(uint8_t dev_id);
>  +
>  +/**
>  + * Start an event device.
>  + *
>  + * The device start step is the last one and consists of setting the event
>  + * queues to start accepting the events and schedules to event ports.
>  + *
>  + * On success, all basic functions exported by the API (event enqueue,
>  + * event dequeue and so on) can be invoked.
>  + *
>  + * @param dev_id
>  + *   Event device identifier
>  + * @return
>  + *   - 0: Success, device started.
>  + *   - <0: Error code of the driver device start function.
>  + */
>  +int
>  +rte_event_dev_start(uint8_t dev_id);
>  +
>  +/**
>  + * Stop an event device. The device can be restarted with a call to
>  + * rte_event_dev_start()
>  + *
>  + * @param dev_id
>  + *   Event device identifier.
>  + */
>  +void
>  +rte_event_dev_stop(uint8_t dev_id);
>  +
>  +/**
>  + * Close an event device. The device cannot be restarted!
>  + *
>  + * @param dev_id
>  + *   Event device identifier
>  + *
>  + * @return
>  + *  - 0 on successfully closing device
>  + *  - <0 on failure to close device
>  + *  - (-EAGAIN) if device is busy
>  + */
>  +int
>  +rte_event_dev_close(uint8_t dev_id);
>  +
>  +/* Scheduler type definitions */
>  +#define RTE_SCHED_TYPE_ORDERED          0
>  +/**< Ordered scheduling
>  + *
>  + * Events from an ordered flow of an event queue can be scheduled to
>  + * multiple ports for concurrent processing while maintaining the original
>  + * event order. This scheme enables the user to achieve high single flow
>  + * throughput by avoiding SW synchronization for ordering between ports
>  + * which are bound to cores.
>  + *
>  + * The source flow ordering from an event queue is maintained when events
>  + * are enqueued to their destination queue within the same ordered flow
>  + * context.
>  + * An event port holds the context until the application calls
>  + * rte_event_dequeue_burst() from the same port, which implicitly releases
>  + * the context.
>  + * The user may allow the scheduler to release the context earlier than that
>  + * by invoking rte_event_enqueue_burst() with the RTE_EVENT_OP_RELEASE
>  + * operation.
>  + *
>  + * Events from the source queue appear in their original order when dequeued
>  + * from a destination queue.
>  + * Event ordering is based on the received event(s), but also other
>  + * (newly allocated or stored) events are ordered when enqueued within the
>  + * same ordered context. Events not enqueued (e.g. released or stored)
>  + * within the context are considered missing from reordering and are
>  + * skipped at this time (but can be ordered again within another context).
>  + *
>  + * @see rte_event_queue_setup(), rte_event_dequeue_burst(),
>  + *      RTE_EVENT_OP_RELEASE
>  + */
>  +
>  +#define RTE_SCHED_TYPE_ATOMIC           1
>  +/**< Atomic scheduling
>  + *
>  + * Events from an atomic flow of an event queue can be scheduled only to a
>  + * single port at a time. The port is guaranteed to have exclusive (atomic)
>  + * access to the associated flow context, which enables the user to avoid SW
>  + * synchronization. Atomic flows also help to maintain event ordering
>  + * since only one port at a time can process events from a flow of an
>  + * event queue.
>  + *
>  + * The atomic queue synchronization context is dedicated to the port until
>  + * the application calls rte_event_dequeue_burst() from the same port,
>  + * which implicitly releases the context. The user may allow the scheduler to
>  + * release the context earlier than that by invoking rte_event_enqueue_burst()
>  + * with the RTE_EVENT_OP_RELEASE operation.
>  + *
>  + * @see rte_event_queue_setup(), rte_event_dequeue_burst(),
>  + *      RTE_EVENT_OP_RELEASE
>  + */
>  +
>  +#define RTE_SCHED_TYPE_PARALLEL         2
>  +/**< Parallel scheduling
>  + *
>  + * The scheduler performs priority scheduling, load balancing, etc. functions
>  + * but does not provide additional event synchronization or ordering.
>  + * It is free to schedule events from a single parallel flow of an event queue
>  + * to multiple events ports for concurrent processing.
>  + * The application is responsible for flow context synchronization and
>  + * event ordering (SW synchronization).
>  + *
>  + * @see rte_event_queue_setup(), rte_event_dequeue_burst()
>  + */
>  +
>  +/* Event types to classify the event source */
>  +#define RTE_EVENT_TYPE_ETHDEV           0x0
>  +/**< The event generated from ethdev subsystem */
>  +#define RTE_EVENT_TYPE_CRYPTODEV        0x1
>  +/**< The event generated from cryptodev subsystem */
>  +#define RTE_EVENT_TYPE_TIMERDEV         0x2
>  +/**< The event generated from timerdev subsystem */
>  +#define RTE_EVENT_TYPE_CPU              0x3
>  +/**< The event generated from cpu for pipelining.
>  + * Application may use *sub_event_type* to further classify the event
>  + */
>  +#define RTE_EVENT_TYPE_MAX              0x10
>  +/**< Maximum number of event types */
>  +
>  +/* Event enqueue operations */
>  +#define RTE_EVENT_OP_NEW                0
>  +/**< The event producers use this operation to inject a new event to the
>  + * event device.
>  + */
>  +#define RTE_EVENT_OP_FORWARD            1
>  +/**< The CPU uses this operation to forward the event to a different event
>  + * queue or change to a new application specific flow or schedule type to
>  + * enable pipelining
>  + */
>  +#define RTE_EVENT_OP_RELEASE            2
>  +/**< Release the flow context associated with the schedule type.
>  + *
>  + * If current flow's scheduler type method is *RTE_SCHED_TYPE_ATOMIC*
>  + * then this function hints the scheduler that the user has completed critical
>  + * section processing in the current atomic context.
>  + * The scheduler is now allowed to schedule events from the same flow from
>  + * an event queue to another port. However, the context may be still held
>  + * until the next rte_event_dequeue_burst() call, this call allows but does not
>  + * force the scheduler to release the context early.
>  + *
>  + * Early atomic context release may increase parallelism and thus system
>  + * performance, but the user needs to design carefully the split into critical
>  + * vs non-critical sections.
>  + *
>  + * If current flow's scheduler type method is *RTE_SCHED_TYPE_ORDERED*
>  + * then this function hints the scheduler that the user has done all that need
>  + * to maintain event order in the current ordered context.
>  + * The scheduler is allowed to release the ordered context of this port and
>  + * avoid reordering any following enqueues.
>  + *
>  + * Early ordered context release may increase parallelism and thus system
>  + * performance.
>  + *
>  + * If current flow's scheduler type method is *RTE_SCHED_TYPE_PARALLEL*
>  + * or no scheduling context is held then this function may be a NOOP,
>  + * depending on the implementation.
>  + *
>  + */
>  +
>  +/**
>  + * The generic *rte_event* structure to hold the event attributes
>  + * for dequeue and enqueue operation
>  + */
>  +struct rte_event {
>  +    /** WORD0 */
>  +    RTE_STD_C11
>  +    union {
>  +            uint64_t event;
>  +            /** Event attributes for dequeue or enqueue operation */
>  +            struct {
>  +                    uint32_t flow_id:20;
>  +                    /**< Targeted flow identifier for the enqueue and
>  +                     * dequeue operation.
>  +                     * The value must be in the range of
>  +                     * [0, nb_event_queue_flows - 1] which
>  +                     * previously supplied to rte_event_dev_configure().
>  +                     */
>  +                    uint32_t sub_event_type:8;
>  +                    /**< Sub-event types based on the event source.
>  +                     * @see RTE_EVENT_TYPE_CPU
>  +                     */
>  +                    uint32_t event_type:4;
>  +                    /**< Event type to classify the event source.
>  +                     * @see RTE_EVENT_TYPE_ETHDEV, (RTE_EVENT_TYPE_*)
>  +                     */
>  +                    uint8_t op:2;
>  +                    /**< The type of event enqueue operation -
>  +                     * new/forward/etc. This field is not preserved across
>  +                     * an instance and is undefined on dequeue.
>  +                     * @see RTE_EVENT_OP_NEW, (RTE_EVENT_OP_*)
>  +                     */
>  +                    uint8_t rsvd:4;
>  +                    /**< Reserved for future use */
>  +                    uint8_t sched_type:2;
>  +                    /**< Scheduler synchronization type (RTE_SCHED_TYPE_*)
>  +                     * associated with flow id on a given event queue
>  +                     * for the enqueue and dequeue operation.
>  +                     */
>  +                    uint8_t queue_id;
>  +                    /**< Targeted event queue identifier for the enqueue
>  +                     * or dequeue operation.
>  +                     * The value must be in the range of
>  +                     * [0, nb_event_queues - 1] which was previously
>  +                     * supplied to rte_event_dev_configure().
>  +                     */
>  +                    uint8_t priority;
>  +                    /**< Event priority relative to other events in the
>  +                     * event queue. The requested priority should be in the
>  +                     * range of [RTE_EVENT_DEV_PRIORITY_HIGHEST,
>  +                     * RTE_EVENT_DEV_PRIORITY_LOWEST].
>  +                     * The implementation shall normalize the requested
>  +                     * priority to supported priority value.
>  +                     * Valid when the device has
>  +                     * RTE_EVENT_DEV_CAP_EVENT_QOS capability.
>  +                     */
>  +                    uint8_t impl_opaque;
>  +                    /**< Implementation specific opaque value.
>  +                     * An implementation may use this field to hold
>  +                     * implementation specific value to share between
>  +                     * dequeue and enqueue operation.
>  +                     * The application should not modify this field.
>  +                     */
>  +            };
>  +    };
>  +    /** WORD1 */
>  +    RTE_STD_C11
>  +    union {
>  +            uint64_t u64;
>  +            /**< Opaque 64-bit value */
>  +            void *event_ptr;
>  +            /**< Opaque event pointer */
>  +            struct rte_mbuf *mbuf;
>  +            /**< mbuf pointer if dequeued event is associated with mbuf */
>  +    };
>  +};
>  +
>  +/**
>  + * Schedule one or more events in the event dev.
>  + *
>  + * An event dev implementation may define this as a NOOP, for instance if
>  + * the event dev performs its scheduling in hardware.
>  + *
>  + * @param dev_id
>  + *   The identifier of the device.
>  + */
>  +void
>  +rte_event_schedule(uint8_t dev_id);
>  +
>  +/**
>  + * Enqueue a burst of event objects or a single event object supplied in the
>  + * *rte_event* structure on an event device designated by its *dev_id*
>  + * through the event port specified by *port_id*. Each event object
>  + * specifies the event queue on which it will be enqueued.
>  + *
>  + * The *nb_events* parameter is the number of event objects to enqueue which
>  + * are supplied in the *ev* array of *rte_event* structure.
>  + *
>  + * The rte_event_enqueue_burst() function returns the number of
>  + * event objects it actually enqueued. A return value equal to *nb_events*
>  + * means that all event objects have been enqueued.
>  + *
>  + * @param dev_id
>  + *   The identifier of the device.
>  + * @param port_id
>  + *   The identifier of the event port.
>  + * @param ev
>  + *   Points to an array of *nb_events* objects of type *rte_event* structure
>  + *   which contain the event object enqueue operations to be processed.
>  + * @param nb_events
>  + *   The number of event objects to enqueue, typically the number of
>  + *   rte_event_port_enqueue_depth() available for this port.
>  + *
>  + * @return
>  + *   The number of event objects actually enqueued on the event device. The
>  + *   return value can be less than the value of the *nb_events* parameter
>  + *   when the event device's queue is full or if invalid parameters are
>  + *   specified in a *rte_event*. If the return value is less than
>  + *   *nb_events*, the remaining events at the end of ev[] are not consumed,
>  + *   and the caller has to take care of them.
>  + *
>  + * @see rte_event_port_enqueue_depth()
>  + */
>  +uint16_t
>  +rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>  +                    const struct rte_event ev[], uint16_t nb_events);

There are a number of reasons this operation could fail to enqueue all the events, including:
- Backpressure
- Invalid port ID
- Invalid queue ID
- Invalid sched type when a queue is configured for ATOMIC_ONLY, ORDERED_ONLY, or PARALLEL_ONLY
- ...

The current API doesn't provide a straightforward way to determine the cause of a failure. This is a particular issue on event PMDs that can backpressure, where the app may want to treat that case differently than the other failure cases.

Could we change the return type to int16_t, and define a set of error cases (e.g. -ENOSPC for backpressure, -EINVAL for an invalid argument)? (With corresponding changes needed in the PMD API) Similarly we could change rte_event_dequeue_burst() to return an int16_t, with -EINVAL as a possible error case.
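For illustration, under such a signed-return scheme a caller might look roughly like the sketch below; the signed return and the -ENOSPC/-EINVAL codes follow the proposal above (not the current API), and the handling policy is only an example:

int16_t ret = rte_event_enqueue_burst(dev_id, port_id, ev, nb_events);
if (ret == -ENOSPC) {
    /* backpressure: retry later or drop, depending on the event's importance */
} else if (ret < 0) {
    /* invalid port/queue/sched type: most likely a programming error */
    rte_panic("Enqueue returned error %d\n", ret);
} else if (ret < nb_events) {
    /* partial enqueue: ev[ret]..ev[nb_events - 1] were not consumed */
}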

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-01-25 16:32         ` Eads, Gage
@ 2017-01-25 16:36           ` Richardson, Bruce
  2017-01-25 16:53             ` Eads, Gage
  0 siblings, 1 reply; 109+ messages in thread
From: Richardson, Bruce @ 2017-01-25 16:36 UTC (permalink / raw)
  To: Eads, Gage, Jerin Jacob, dev
  Cc: thomas.monjalon, hemant.agrawal, Van Haaren, Harry, McDaniel, Timothy



> -----Original Message-----
> From: Eads, Gage
> Sent: Wednesday, January 25, 2017 4:32 PM
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>; dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> <bruce.richardson@intel.com>; hemant.agrawal@nxp.com; Van Haaren, Harry
> <harry.van.haaren@intel.com>; McDaniel, Timothy
> <timothy.mcdaniel@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> programming model
> 
> Hi Jerin,
> 
> See the bottom of this email for a proposed tweak to the
> rte_event_enqueue_burst() return value.
> 
> >  -----Original Message-----
> >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  Sent: Wednesday, December 21, 2016 3:25 AM
> >  To: dev@dpdk.org
> >  Cc: thomas.monjalon@6wind.com; Richardson, Bruce
> > <bruce.richardson@intel.com>; hemant.agrawal@nxp.com; Eads, Gage
> > <gage.eads@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>;
> > Jerin Jacob <jerin.jacob@caviumnetworks.com>
> >  Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> > programming model
> >
<message truncated for brevity>
> 
> There are a number of reasons this operation could fail to enqueue all the
> events, including:
> - Backpressure
> - Invalid port ID
> - Invalid queue ID
> - Invalid sched type when a queue is configured for ATOMIC_ONLY,
> ORDERED_ONLY, or PARALLEL_ONLY
> - ...
> 
> The current API doesn't provide a straightforward way to determine the
> cause of a failure. This is a particular issue on event PMDs that can
> backpressure, where the app may want to treat that case differently than
> the other failure cases.
> 
> Could we change the return type to int16_t, and define a set of error
> cases (e.g. -ENOSPC for backpressure, -EINVAL for an invalid argument)?
> (With corresponding changes needed in the PMD API) Similarly we could
> change rte_event_dequeue_burst() to return an int16_t, with -EINVAL as a
> possible error case.

Use rte_errno instead, I suggest. That's what it's there for.

/Bruce
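For illustration, with this approach a driver's enqueue path could keep the uint16_t return and report the cause out of band; a rough sketch only (the PMD internals and the port_is_full() helper are invented here, the rte_errno usage is the point):

static uint16_t
dummy_enqueue_burst(void *port, const struct rte_event ev[], uint16_t nb_events)
{
    uint16_t i;

    for (i = 0; i < nb_events; i++) {
        if (port_is_full(port)) {       /* hypothetical helper */
            rte_errno = ENOSPC;         /* backpressure */
            break;
        }
        /* ... enqueue ev[i] ... */
    }
    return i; /* number actually enqueued; rte_errno explains any shortfall */
}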

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-01-25 16:36           ` Richardson, Bruce
@ 2017-01-25 16:53             ` Eads, Gage
  2017-01-25 22:36               ` Eads, Gage
  0 siblings, 1 reply; 109+ messages in thread
From: Eads, Gage @ 2017-01-25 16:53 UTC (permalink / raw)
  To: Richardson, Bruce, Jerin Jacob, dev
  Cc: thomas.monjalon, hemant.agrawal, Van Haaren, Harry, McDaniel, Timothy



>  -----Original Message-----
>  From: Richardson, Bruce
>  Sent: Wednesday, January 25, 2017 10:36 AM
>  To: Eads, Gage <gage.eads@intel.com>; Jerin Jacob
>  <jerin.jacob@caviumnetworks.com>; dev@dpdk.org
>  Cc: thomas.monjalon@6wind.com; hemant.agrawal@nxp.com; Van Haaren,
>  Harry <harry.van.haaren@intel.com>; McDaniel, Timothy
>  <timothy.mcdaniel@intel.com>
>  Subject: RE: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
>  programming model
>  
>  
>  
>  > -----Original Message-----
>  > From: Eads, Gage
>  > Sent: Wednesday, January 25, 2017 4:32 PM
>  > To: Jerin Jacob <jerin.jacob@caviumnetworks.com>; dev@dpdk.org
>  > Cc: thomas.monjalon@6wind.com; Richardson, Bruce
>  > <bruce.richardson@intel.com>; hemant.agrawal@nxp.com; Van Haaren,
>  > Harry <harry.van.haaren@intel.com>; McDaniel, Timothy
>  > <timothy.mcdaniel@intel.com>
>  > Subject: RE: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event
>  > driven programming model
>  >
>  > Hi Jerin,
>  >
>  > See the bottom of this email for a proposed tweak to the
>  > rte_event_enqueue_burst() return value.
>  >
>  > >  -----Original Message-----
>  > >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  > >  Sent: Wednesday, December 21, 2016 3:25 AM
>  > >  To: dev@dpdk.org
>  > >  Cc: thomas.monjalon@6wind.com; Richardson, Bruce
>  > > <bruce.richardson@intel.com>; hemant.agrawal@nxp.com; Eads, Gage
>  > > <gage.eads@intel.com>; Van Haaren, Harry
>  > > <harry.van.haaren@intel.com>; Jerin Jacob
>  > > <jerin.jacob@caviumnetworks.com>
>  > >  Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
>  > > programming model
>  > >
>  <message truncated for brevity>
>  >
>  > There are a number of reasons this operation could fail to enqueue all
>  > the events, including:
>  > - Backpressure
>  > - Invalid port ID
>  > - Invalid queue ID
>  > - Invalid sched type when a queue is configured for ATOMIC_ONLY,
>  > ORDERED_ONLY, or PARALLEL_ONLY
>  > - ...
>  >
>  > The current API doesn't provide a straightforward way to determine the
>  > cause of a failure. This is a particular issue on event PMDs that can
>  > backpressure, where the app may want to treat that case differently
>  > than the other failure cases.
>  >
>  > Could we change the return type to int16_t, and define a set of error
>  > cases (e.g. -ENOSPC for backpressure, -EINVAL for an invalid argument)?
>  > (With corresponding changes needed in the PMD API) Similarly we could
>  > change rte_event_dequeue_burst() to return an int16_t, with -EINVAL as
>  > a possible error case.
>  
>  Use rte_errno instead, I suggest. That's what it's there for.
>  
>  /Bruce

That makes sense. In that case, the API comment could be tweaked like so:

  * If the return value is less than *nb_events*, the remaining events at the
  * end of ev[] are not consumed and the caller has to take care of them, and
  * rte_errno is set accordingly. Possible errno values include:
  * - EINVAL - The port ID is invalid, an event's queue ID is invalid, or an
  *            event's sched type doesn't match the capabilities of the
  *            destination queue.
  * - ENOSPC - The event port was backpressured and unable to enqueue one or
  *            more events.
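For illustration, a caller under this rte_errno convention might look roughly like the sketch below (the retry-on-backpressure/drop-on-error policy is just an example, not part of the proposal):

uint16_t sent = rte_event_enqueue_burst(dev_id, port_id, ev, nb_events);
while (sent < nb_events) {
    if (rte_errno == ENOSPC) {
        /* backpressure: retry the unsent tail of ev[] */
        sent += rte_event_enqueue_burst(dev_id, port_id,
                                        &ev[sent], nb_events - sent);
    } else {
        /* EINVAL or other error: drop the remaining events */
        break;
    }
}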

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-01-25 16:53             ` Eads, Gage
@ 2017-01-25 22:36               ` Eads, Gage
  2017-01-26  9:39                 ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Eads, Gage @ 2017-01-25 22:36 UTC (permalink / raw)
  To: Richardson, Bruce, 'Jerin Jacob', 'dev@dpdk.org'
  Cc: 'thomas.monjalon@6wind.com',
	'hemant.agrawal@nxp.com',
	Van Haaren, Harry, McDaniel, Timothy



>  -----Original Message-----
>  From: Eads, Gage
>  Sent: Wednesday, January 25, 2017 10:54 AM
>  To: Richardson, Bruce <bruce.richardson@intel.com>; Jerin Jacob
>  <jerin.jacob@caviumnetworks.com>; dev@dpdk.org
>  Cc: thomas.monjalon@6wind.com; hemant.agrawal@nxp.com; Van Haaren,
>  Harry <harry.van.haaren@intel.com>; McDaniel, Timothy
>  <timothy.mcdaniel@intel.com>
>  Subject: RE: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
>  programming model
>  
>  
>  
>  >  -----Original Message-----
>  >  From: Richardson, Bruce
>  >  Sent: Wednesday, January 25, 2017 10:36 AM
>  >  To: Eads, Gage <gage.eads@intel.com>; Jerin Jacob
>  > <jerin.jacob@caviumnetworks.com>; dev@dpdk.org
>  >  Cc: thomas.monjalon@6wind.com; hemant.agrawal@nxp.com; Van Haaren,
>  > Harry <harry.van.haaren@intel.com>; McDaniel, Timothy
>  > <timothy.mcdaniel@intel.com>
>  >  Subject: RE: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event
>  > driven  programming model
>  >
>  >
>  >
>  >  <message truncated for brevity>
>  >
>  >  Use rte_errno instead, I suggest. That's what it's there for.
>  >
>  >  /Bruce
>  
>  That makes sense. In that case, the API comment could be tweaked like so:
>  
>    * If the return value is less than *nb_events*, the remaining events at the
>    * end of ev[] are not consumed and the caller has to take care of them, and
>    * rte_errno is set accordingly. Possible errno values include:
>    * - EINVAL - The port ID is invalid, an event's queue ID is invalid, or an
>    *            event's sched type doesn't match the capabilities of the
>    *            destination queue.
>    * - ENOSPC - The event port was backpressured and unable to enqueue one or
>    *            more events.

However it seems better to use a signed integer for the dequeue burst return value, if it is to use rte_errno. Application code could be simplified:

(signed return value)
ret = rte_event_dequeue_burst(...);
if (ret < 0)
    rte_panic("Dequeued returned errno %d\n", rte_errno);

vs.

(unsigned return value)
ret = rte_event_dequeue_burst(...);
if (ret == 0 && rte_errno != 0)
    rte_panic("Dequeued returned errno %d\n", rte_errno);

And with an unsigned return value, all dequeue implementations would have to clear rte_errno when no events are dequeued.

Thanks,
Gage

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-01-25 22:36               ` Eads, Gage
@ 2017-01-26  9:39                 ` Jerin Jacob
  2017-01-26 20:39                   ` Eads, Gage
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2017-01-26  9:39 UTC (permalink / raw)
  To: Eads, Gage
  Cc: Richardson, Bruce, 'dev@dpdk.org',
	'thomas.monjalon@6wind.com',
	'hemant.agrawal@nxp.com',
	Van Haaren, Harry, McDaniel, Timothy

On Wed, Jan 25, 2017 at 10:36:21PM +0000, Eads, Gage wrote:
> >  <message truncated for brevity>
> >  >
> >  >  Use rte_errno instead, I suggest. That's what it's there for.
> >  >
> >  >  /Bruce
> >  
> >  That makes sense. In that case, the API comment could be tweaked like so:
> >  
> >    * If the return value is less than *nb_events*, the remaining events at the
> >    * end of ev[] are not consumed and the caller has to take care of them, and
> >    * rte_errno is set accordingly. Possible errno values include:
> >    * - EINVAL - The port ID is invalid, an event's queue ID is invalid, or an
> >    *            event's sched type doesn't match the capabilities of the
> >    *            destination queue.
> >    * - ENOSPC - The event port was backpressured and unable to enqueue one or
> >    *            more events.
> 
> However it seems better to use a signed integer for the dequeue burst return value, if it is to use rte_errno. Application code could be simplified:
> 
> (signed return value)
> ret = rte_event_dequeue_burst(...);
> if (ret < 0)
>     rte_panic("Dequeued returned errno %d\n", rte_errno);
> 
> vs.
> 
> (unsigned return value)
> ret = rte_event_dequeue_burst(...);
> if (ret == 0 && rte_errno != 0)
>     rte_panic("Dequeued returned errno %d\n", rte_errno);
> 
> And with an unsigned return value, all dequeue implementations would have to clear rte_errno when no events are dequeued.

Gage,

Just to understand, what is the expected application behavior if the
implementation returns -ENOSPC?

Apart from the above SW driver behavior, I think, a HW implementation has two
more different behaviors:
a) The implementation makes sure that it never returns -ENOSPC by allocating
more space on the fly or by any other scheme
b) Tail drop

Considering that different implementations have different behaviors, how about
enumerating the overflow policy at port configuration time, and letting the
implementation act accordingly, to avoid fast-path changes in the application
(this affects all implementations irrespective of the capability)?

Possible enumerated values at port configuration time (sketched below):
- PANIC or a similar scheme to denote that it cannot proceed
- TAIL DROP
or any other expected application behavior you want to add
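For illustration only, such a knob could look something like the sketch below; none of these names exist in the proposed API, they are purely hypothetical:

/* Hypothetical per-port overflow policy, chosen at port configuration time */
enum rte_event_port_overflow_policy {
	RTE_EVENT_PORT_OVERFLOW_PANIC,     /* cannot proceed, treat as fatal */
	RTE_EVENT_PORT_OVERFLOW_TAIL_DROP, /* drop events that do not fit */
};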

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-01-26  9:39                 ` Jerin Jacob
@ 2017-01-26 20:39                   ` Eads, Gage
  2017-01-27 10:03                     ` Bruce Richardson
  2017-01-30 10:42                     ` Jerin Jacob
  0 siblings, 2 replies; 109+ messages in thread
From: Eads, Gage @ 2017-01-26 20:39 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: Richardson, Bruce, 'dev@dpdk.org',
	'thomas.monjalon@6wind.com',
	'hemant.agrawal@nxp.com',
	Van Haaren, Harry, McDaniel, Timothy



>  -----Original Message-----
>  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
>  Sent: Thursday, January 26, 2017 3:39 AM
>  To: Eads, Gage <gage.eads@intel.com>
>  Cc: Richardson, Bruce <bruce.richardson@intel.com>; 'dev@dpdk.org'
>  <dev@dpdk.org>; 'thomas.monjalon@6wind.com'
>  <thomas.monjalon@6wind.com>; 'hemant.agrawal@nxp.com'
>  <hemant.agrawal@nxp.com>; Van Haaren, Harry
>  <harry.van.haaren@intel.com>; McDaniel, Timothy
>  <timothy.mcdaniel@intel.com>
>  Subject: Re: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
>  programming model
>  
>  On Wed, Jan 25, 2017 at 10:36:21PM +0000, Eads, Gage wrote:
>  >  <message truncated for brevity>
>  >
>  > However it seems better to use a signed integer for the dequeue burst return
>  value, if it is to use rte_errno. Application code could be simplified:
>  >
>  > (signed return value)
>  > ret = rte_event_dequeue_burst(...);
>  > if (ret < 0)
>  >     rte_panic("Dequeued returned errno %d\n", rte_errno);
>  >
>  > vs.
>  >
>  > (unsigned return value)
>  > ret = rte_event_dequeue_burst(...);
>  > if (ret == 0 && rte_errno != 0)
>  >     rte_panic("Dequeued returned errno %d\n", rte_errno);
>  >
>  > And with an unsigned return value, all dequeue implementations would have
>  to clear rte_errno when no events are dequeued.

After some internal discussion, I don't think the signed return value is necessary for burst dequeue. Burst enqueue is the more interesting case...

>  
>  Gage,
>  
>  Just to understand, what is the expected application behavior if the
>  implementation returns -ENOSPC

It's application-dependent -- depending on the importance of the event, the application could decide to retry the enqueue some number of times or decide to drop the event.

>  
>  Apart for the above SW driver behavior, I think, HW implementation has two
>  more different behavior
>  a) Implementation make sure that it never returns -ENOSPC by allocating more
>  space on the fly or any other scheme
>  b) Tail drop
>  

By "tail drop," do you mean the hardware drops the event (and presumably frees any memory it points to)? Or the enqueue is unsuccessful and the application drops the event?

>  Considering different implementation has different behaviors, How about
>  enumerating the overflow policy at the port configuration time? and let
>  implementation act accordingly to avoid fast-patch change in
>  application(effects in all implementation irrespective of the capability)
>  
>  possible enumerating value at the port configuration time,
>  - PANIC or similar scheme to denote it cannot proceed
>  - TAIL DROP
>  or any expected application behavior you want to add

I wonder if that's necessary? Hardware behavior a) means the function will always return nb_events. If hardware drops the event(s), I assume enqueue_burst would still return nb_events and the app behaves as if all events were sent. If the enqueue fails (ret < nb_events), app software could check rte_errno and take the action it deems necessary. So all fast-path enqueue code could look like:

ret = rte_event_enqueue_burst(..., nb_events);
if (ret < nb_events) {
    ....
}
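One purely illustrative way that branch could be filled in, assuming the EINVAL/ENOSPC rte_errno values discussed above (the fatal-exit policy is an example, not a recommendation):

ret = rte_event_enqueue_burst(dev_id, port_id, ev, nb_events);
if (ret < nb_events) {
    if (rte_errno == ENOSPC) {
        /* backpressure: retry or drop ev[ret]..ev[nb_events - 1] later */
    } else {
        /* EINVAL etc.: likely a configuration bug, treat as fatal */
        rte_exit(EXIT_FAILURE, "enqueue failed, rte_errno %d\n", rte_errno);
    }
}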

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-01-26 20:39                   ` Eads, Gage
@ 2017-01-27 10:03                     ` Bruce Richardson
  2017-01-30 10:42                     ` Jerin Jacob
  1 sibling, 0 replies; 109+ messages in thread
From: Bruce Richardson @ 2017-01-27 10:03 UTC (permalink / raw)
  To: Eads, Gage
  Cc: Jerin Jacob, 'dev@dpdk.org',
	'thomas.monjalon@6wind.com',
	'hemant.agrawal@nxp.com',
	Van Haaren, Harry, McDaniel, Timothy

On Thu, Jan 26, 2017 at 08:39:57PM +0000, Eads, Gage wrote:
> 
> 
> >  <message truncated for brevity>
> >  >
> >  > However it seems better to use a signed integer for the dequeue burst return
> >  value, if it is to use rte_errno. Application code could be simplified:
> >  >
> >  > (signed return value)
> >  > ret = rte_event_dequeue_burst(...);
> >  > if (ret < 0)
> >  >     rte_panic("Dequeued returned errno %d\n", rte_errno);
> >  >
> >  > vs.
> >  >
> >  > (unsigned return value)
> >  > ret = rte_event_dequeue_burst(...);
> >  > if (ret == 0 && rte_errno != 0)
> >  >     rte_panic("Dequeued returned errno %d\n", rte_errno);
> >  >
> >  > And with an unsigned return value, all dequeue implementations would have
> >  to clear rte_errno when no events are dequeued.
> 
> After some internal discussion, I don't think the signed return value is necessary for burst dequeue. Burst enqueue is the more interesting case...
> 
> >  
> >  Gage,
> >  
> >  Just to understand, what is the expected application behavior if the
> >  implementation returns -ENOSPC
> 
> It's application-dependent -- depending on the importance of the event, the application could decide to retry the enqueue some number of times or decide to drop the event.
> 
> >  
> >  Apart for the above SW driver behavior, I think, HW implementation has two
> >  more different behavior
> >  a) Implementation make sure that it never returns -ENOSPC by allocating more
> >  space on the fly or any other scheme
> >  b) Tail drop
> >  
> 
> By "tail drop," do you mean the hardware drops the event (and presumably frees any memory it points to)? Or the enqueue is unsuccessful and the application drops the event?
> 
> >  Considering different implementation has different behaviors, How about
> >  enumerating the overflow policy at the port configuration time? and let
> >  implementation act accordingly to avoid fast-patch change in
> >  application(effects in all implementation irrespective of the capability)
> >  
> >  possible enumerating value at the port configuration time,
> >  - PANIC or similar scheme to denote it cannot proceed
> >  - TAIL DROP
> >  or any expected application behavior you want to add
> 
> I wonder if that's necessary? Hardware behavior a) means the function will always return nb_events. If hardware drops the event(s), I assume enqueue_burst would still return nb_events and the app behaves as if all events were sent. If the enqueue fails (ret < nb_events), app software could check rte_errno and take the action it deems necessary. So all fast-path enqueue code could look like:
> 
> ret = rte_event_enqueue_burst(..., nb_events);
> if (ret < nb_events) {
>     ....
> }

I would agree with that.
I think both enqueue and dequeue should have unsigned return values.
Both should set rte_errno on unsuccessful or partially successful
operation i.e.:
	enqueue: sets errno where ret < nb_events
	dequeue: sets errno where ret == 0 (errno may be set to no-error
		if queue is just empty)

	/Bruce
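For illustration, a receive loop under this convention might look like the sketch below (BURST_SIZE, timeout_ticks and the panic-on-error policy are assumptions of the sketch):

struct rte_event ev[BURST_SIZE];
uint16_t n = rte_event_dequeue_burst(dev_id, port_id, ev, BURST_SIZE,
                                     timeout_ticks);
if (n == 0) {
    if (rte_errno != 0)
        rte_panic("Dequeue returned errno %d\n", rte_errno);
    /* otherwise the queue was simply empty */
}
/* process ev[0] .. ev[n - 1] */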

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-01-26 20:39                   ` Eads, Gage
  2017-01-27 10:03                     ` Bruce Richardson
@ 2017-01-30 10:42                     ` Jerin Jacob
  1 sibling, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2017-01-30 10:42 UTC (permalink / raw)
  To: Eads, Gage
  Cc: Richardson, Bruce, 'dev@dpdk.org',
	'thomas.monjalon@6wind.com',
	'hemant.agrawal@nxp.com',
	Van Haaren, Harry, McDaniel, Timothy

On Thu, Jan 26, 2017 at 08:39:57PM +0000, Eads, Gage wrote:
> 
> 
> >  -----Original Message-----
> >  From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> >  Sent: Thursday, January 26, 2017 3:39 AM
> >  To: Eads, Gage <gage.eads@intel.com>
> >  Cc: Richardson, Bruce <bruce.richardson@intel.com>; 'dev@dpdk.org'
> >  <dev@dpdk.org>; 'thomas.monjalon@6wind.com'
> >  <thomas.monjalon@6wind.com>; 'hemant.agrawal@nxp.com'
> >  <hemant.agrawal@nxp.com>; Van Haaren, Harry
> >  <harry.van.haaren@intel.com>; McDaniel, Timothy
> >  <timothy.mcdaniel@intel.com>
> >  Subject: Re: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> >  programming model
> >  
> >  Considering different implementation has different behaviors, How about
> >  enumerating the overflow policy at the port configuration time? and let
> >  implementation act accordingly to avoid fast-patch change in
> >  application(effects in all implementation irrespective of the capability)
> >  
> >  possible enumerating value at the port configuration time,
> >  - PANIC or similar scheme to denote it cannot proceed
> >  - TAIL DROP
> >  or any expected application behavior you want to add
> 
> I wonder if that's necessary? Hardware behavior a) means the function will always return nb_events. If hardware drops the event(s), I assume enqueue_burst would still return nb_events and the app behaves as if all events were sent. If the enqueue fails (ret < nb_events), app software could check rte_errno and take the action it deems necessary. So all fast-path enqueue code could look like:
> 
> ret = rte_event_enqueue_burst(..., nb_events);
> if (ret < nb_events) {

I was concerned about this section of the application code getting bloated with
driver-specific actions. But if we want the actions to be decided per event,
then I think it makes sense to update the specification with new rte_errno
values for enqueue.

>     ....
> }

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2016-12-21  9:25       ` [PATCH v4 1/6] eventdev: introduce event driven programming model Jerin Jacob
  2017-01-25 16:32         ` Eads, Gage
@ 2017-02-02 11:18         ` Nipun Gupta
  2017-02-02 14:09           ` Jerin Jacob
  1 sibling, 1 reply; 109+ messages in thread
From: Nipun Gupta @ 2017-02-02 11:18 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Hemant Agrawal, gage.eads,
	harry.van.haaren

Hi,

I had a few queries/comments regarding the eventdev patches.

Please see inline.

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Wednesday, December 21, 2016 14:55
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> harry.van.haaren@intel.com; Jerin Jacob <jerin.jacob@caviumnetworks.com>	
> Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> programming model
> 
> In a polling model, lcores poll ethdev ports and associated
> rx queues directly to look for packets. In an event driven model,
> by contrast, lcores call the scheduler that selects packets for
> them based on programmer-specified criteria. The eventdev library
> adds support for the event driven programming model, which offers
> applications automatic multicore scaling, dynamic load balancing,
> pipelining, packet ingress order maintenance and
> synchronization services to simplify application packet processing.
> 
> By introducing event driven programming model, DPDK can support
> both polling and event driven programming models for packet processing,
> and applications are free to choose whatever model
> (or combination of the two) that best suits their needs.
> 
> This patch adds the eventdev specification header file.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  MAINTAINERS                        |    3 +
>  doc/api/doxy-api-index.md          |    1 +
>  doc/api/doxy-api.conf              |    1 +
>  lib/librte_eventdev/rte_eventdev.h | 1275
> ++++++++++++++++++++++++++++++++++++
>  4 files changed, 1280 insertions(+)
>  create mode 100644 lib/librte_eventdev/rte_eventdev.h

<snip>

> +
> +/**
> + * Event device information
> + */
> +struct rte_event_dev_info {
> +	const char *driver_name;	/**< Event driver name */
> +	struct rte_pci_device *pci_dev;	/**< PCI information */

With 'rte_device' in place (rte_dev.h), should we not have 'rte_device' instead of 'rte_pci_device' here?

> +	uint32_t min_dequeue_timeout_ns;
> +	/**< Minimum supported global dequeue timeout(ns) by this device */
> +	uint32_t max_dequeue_timeout_ns;
> +	/**< Maximum supported global dequeue timeout(ns) by this device */
> +	uint32_t dequeue_timeout_ns;
> +	/**< Configured global dequeue timeout(ns) for this device */
> +	uint8_t max_event_queues;
> +	/**< Maximum event_queues supported by this device */
> +	uint32_t max_event_queue_flows;
> +	/**< Maximum supported flows in an event queue by this device*/
> +	uint8_t max_event_queue_priority_levels;
> +	/**< Maximum number of event queue priority levels by this device.
> +	 * Valid when the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability
> +	 */

<snip>

> +/**
> + * Dequeue a burst of event objects or a single event object from the event
> + * port designated by its *event_port_id*, on an event device designated
> + * by its *dev_id*.
> + *
> + * rte_event_dequeue_burst() does not dictate the specifics of the scheduling
> + * algorithm, as each eventdev driver may have different criteria to schedule
> + * an event. However, in general, from an application perspective the
> + * scheduler may use the following scheme to dispatch an event to the port.
> + *
> + * 1) Selection of event queue based on
> + *   a) The list of event queues are linked to the event port.
> + *   b) If the device has RTE_EVENT_DEV_CAP_QUEUE_QOS capability then event
> + *   queue selection from list is based on event queue priority relative to
> + *   other event queue supplied as *priority* in rte_event_queue_setup()
> + *   c) If the device has RTE_EVENT_DEV_CAP_EVENT_QOS capability then event
> + *   queue selection from the list is based on event priority supplied as
> + *   *priority* in rte_event_enqueue_burst()
> + * 2) Selection of event
> + *   a) The number of flows available in selected event queue.
> + *   b) Schedule type method associated with the event
> + *
> + * The *nb_events* parameter is the maximum number of event objects to dequeue
> + * which are returned in the *ev* array of *rte_event* structure.
> + *
> + * The rte_event_dequeue_burst() function returns the number of event objects
> + * it actually dequeued. A return value equal to *nb_events* means that all
> + * event objects have been dequeued.
> + *
> + * The number of events dequeued is the number of scheduler contexts held by
> + * this port. These contexts are automatically released in the next
> + * rte_event_dequeue_burst() invocation, or invoking rte_event_enqueue_burst()
> + * with RTE_EVENT_OP_RELEASE operation can be used to release the
> + * contexts early.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param port_id
> + *   The identifier of the event port.
> + * @param[out] ev
> + *   Points to an array of *nb_events* objects of type *rte_event* structure
> + *   for output to be populated with the dequeued event objects.
> + * @param nb_events
> + *   The maximum number of event objects to dequeue, typically the number of
> + *   rte_event_port_dequeue_depth() available for this port.
> + *
> + * @param timeout_ticks
> + *   - 0 no-wait, returns immediately if there is no event.
> + *   - >0 wait for the event, if the device is configured with
> + *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function will wait until
> + *   the event available or *timeout_ticks* time.

Just for understanding - is the expectation that rte_event_dequeue_burst() will
wait till the timeout unless the requested number of events (nb_events) is
received on the event port?

> + *   if the device is not configured with RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
> + *   then this function will wait until the event available or
> + *   *dequeue_timeout_ns* ns which was previously supplied to
> + *   rte_event_dev_configure()
> + *
> + * @return
> + * The number of event objects actually dequeued from the port. The return
> + * value can be less than the value of the *nb_events* parameter when the
> + * event port's queue is not full.
> + *
> + * @see rte_event_port_dequeue_depth()
> + */
> +uint16_t
> +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
> +			uint16_t nb_events, uint64_t timeout_ticks);
> +

<Snip>

Regards,
Nipun

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 3/6] eventdev: implement the northbound APIs
  2016-12-21  9:25       ` [PATCH v4 3/6] eventdev: implement the northbound APIs Jerin Jacob
@ 2017-02-02 11:19         ` Nipun Gupta
  2017-02-02 14:32           ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Nipun Gupta @ 2017-02-02 11:19 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Hemant Agrawal, gage.eads,
	harry.van.haaren



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Wednesday, December 21, 2016 14:55
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> harry.van.haaren@intel.com; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v4 3/6] eventdev: implement the northbound APIs
> 
> This patch implements northbound eventdev API interface using southbond
> driver interface
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  config/common_base                           |   6 +
>  lib/Makefile                                 |   1 +
>  lib/librte_eal/common/include/rte_log.h      |   1 +
>  lib/librte_eventdev/Makefile                 |  57 ++
>  lib/librte_eventdev/rte_eventdev.c           | 986
> +++++++++++++++++++++++++++
>  lib/librte_eventdev/rte_eventdev.h           | 106 ++-
>  lib/librte_eventdev/rte_eventdev_pmd.h       | 109 +++
>  lib/librte_eventdev/rte_eventdev_version.map |  33 +
>  mk/rte.app.mk                                |   1 +
>  9 files changed, 1294 insertions(+), 6 deletions(-)  create mode 100644
> lib/librte_eventdev/Makefile  create mode 100644
> lib/librte_eventdev/rte_eventdev.c
>  create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
> 

<Snip>

> +static inline int
> +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports) {
> +	uint8_t old_nb_ports = dev->data->nb_ports;
> +	void **ports;
> +	uint16_t *links_map;
> +	uint8_t *ports_dequeue_depth;
> +	uint8_t *ports_enqueue_depth;
> +	unsigned int i;
> +
> +	RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
> +			 dev->data->dev_id);
> +
> +	/* First time configuration */
> +	if (dev->data->ports == NULL && nb_ports != 0) {
> +		dev->data->ports = rte_zmalloc_socket("eventdev->data->ports",
> +				sizeof(dev->data->ports[0]) * nb_ports,
> +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> +		if (dev->data->ports == NULL) {
> +			dev->data->nb_ports = 0;
> +			RTE_EDEV_LOG_ERR("failed to get mem for port meta data,"
> +					"nb_ports %u", nb_ports);
> +			return -(ENOMEM);
> +		}
> +
> +		/* Allocate memory to store ports dequeue depth */
> +		dev->data->ports_dequeue_depth =
> +			rte_zmalloc_socket("eventdev->ports_dequeue_depth",
> +			sizeof(dev->data->ports_dequeue_depth[0]) * nb_ports,
> +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> +		if (dev->data->ports_dequeue_depth == NULL) {
> +			dev->data->nb_ports = 0;
> +			RTE_EDEV_LOG_ERR("failed to get mem for port deq meta,"
> +					"nb_ports %u", nb_ports);
> +			return -(ENOMEM);
> +		}
> +
> +		/* Allocate memory to store ports enqueue depth */
> +		dev->data->ports_enqueue_depth =
> +			rte_zmalloc_socket("eventdev->ports_enqueue_depth",
> +			sizeof(dev->data->ports_enqueue_depth[0]) * nb_ports,
> +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> +		if (dev->data->ports_enqueue_depth == NULL) {
> +			dev->data->nb_ports = 0;
> +			RTE_EDEV_LOG_ERR("failed to get mem for port enq meta,"
> +					"nb_ports %u", nb_ports);
> +			return -(ENOMEM);
> +		}
> +
> +		/* Allocate memory to store queue to port link connection */
> +		dev->data->links_map =
> +			rte_zmalloc_socket("eventdev->links_map",
> +			sizeof(dev->data->links_map[0]) * nb_ports *
> +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> +		if (dev->data->links_map == NULL) {
> +			dev->data->nb_ports = 0;
> +			RTE_EDEV_LOG_ERR("failed to get mem for port_map area,"
> +					"nb_ports %u", nb_ports);
> +			return -(ENOMEM);
> +		}

I think we also need to set all the 'links_map' entries to EVENT_QUEUE_SERVICE_PRIORITY_INVALID
after the zmalloc.

> +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
> +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -
> ENOTSUP);
> +
> +		ports = dev->data->ports;
> +		ports_dequeue_depth = dev->data->ports_dequeue_depth;
> +		ports_enqueue_depth = dev->data->ports_enqueue_depth;
> +		links_map = dev->data->links_map;
> +

<Snip>

> +int
> +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> +		     const struct rte_event_port_conf *port_conf) {
> +	struct rte_eventdev *dev;
> +	struct rte_event_port_conf def_conf;
> +	int diag;
> +
> +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> +	dev = &rte_eventdevs[dev_id];
> +
> +	if (!is_valid_port(dev, port_id)) {
> +		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> +		return -EINVAL;
> +	}
> +
> +	/* Check new_event_threshold limit */
> +	if ((port_conf && !port_conf->new_event_threshold) ||
> +			(port_conf && port_conf->new_event_threshold >
> +				 dev->data->dev_conf.nb_events_limit)) {

As mentioned in 'rte_eventdev.h', the 'new_event_threshold' is valid for *closed systems*,
so is the above check valid for *open systems*?
Or is it implicit that for open systems the 'port_conf->new_event_threshold' should be
set to '-1' by the application just as it is for 'max_num_events' of 'struct rte_event_dev_info'.

> +		RTE_EDEV_LOG_ERR(
> +		   "dev%d port%d Invalid event_threshold=%d
> nb_events_limit=%d",
> +			dev_id, port_id, port_conf->new_event_threshold,
> +			dev->data->dev_conf.nb_events_limit);
> +		return -EINVAL;
> +	}
> +

<Snip>

Regards,
Nipun

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 2/6] eventdev: define southbound driver interface
  2016-12-21  9:25       ` [PATCH v4 2/6] eventdev: define southbound driver interface Jerin Jacob
@ 2017-02-02 11:19         ` Nipun Gupta
  2017-02-02 11:34           ` Bruce Richardson
  0 siblings, 1 reply; 109+ messages in thread
From: Nipun Gupta @ 2017-02-02 11:19 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Hemant Agrawal, gage.eads,
	harry.van.haaren



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Wednesday, December 21, 2016 14:55
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> harry.van.haaren@intel.com; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v4 2/6] eventdev: define southbound driver
> interface
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  lib/librte_eventdev/rte_eventdev.h     |  38 +++++
>  lib/librte_eventdev/rte_eventdev_pmd.h | 294
> +++++++++++++++++++++++++++++++++
>  2 files changed, 332 insertions(+)
>  create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
> 

<snip>

> +typedef int (*eventdev_port_link_t)(void *port,
> +		const uint8_t queues[], const uint8_t priorities[],
> +		uint16_t nb_links);

I think having the event device as an input parameter to port_link & port_unlink will
be required so that the queue configuration can be fetched from the event device.

> +
> +/**
> + * Unlink multiple source event queues from destination event port.
> + *

<snip>

Regards,
Nipun

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 4/6] eventdev: implement PMD registration functions
  2016-12-21  9:25       ` [PATCH v4 4/6] eventdev: implement PMD registration functions Jerin Jacob
@ 2017-02-02 11:20         ` Nipun Gupta
  2017-02-05 13:04           ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Nipun Gupta @ 2017-02-02 11:20 UTC (permalink / raw)
  To: Jerin Jacob, dev
  Cc: thomas.monjalon, bruce.richardson, Hemant Agrawal, gage.eads,
	harry.van.haaren



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> Sent: Wednesday, December 21, 2016 14:55
> To: dev@dpdk.org
> Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> harry.van.haaren@intel.com; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Subject: [dpdk-dev] [PATCH v4 4/6] eventdev: implement PMD registration
> functions
> 
> This patch adds infrastructure for registering the vdev or
> the PCI based event device.
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>  lib/librte_eventdev/rte_eventdev.c           | 236
> +++++++++++++++++++++++++++
>  lib/librte_eventdev/rte_eventdev_pmd.h       | 111 +++++++++++++
>  lib/librte_eventdev/rte_eventdev_version.map |   6 +
>  3 files changed, 353 insertions(+)
> 

<snip>

> +
> +struct rte_eventdev *
> +rte_event_pmd_vdev_init(const char *name, size_t dev_private_size,
> +		int socket_id)

Isn't there any requirement to have a clean-up function corresponding to
rte_event_pmd_vdev_init?

> +{
> +	struct rte_eventdev *eventdev;
> +
> +	/* Allocate device structure */
> +	eventdev = rte_event_pmd_allocate(name, socket_id);
> +	if (eventdev == NULL)
> +		return NULL;
> +

<snip>

Regards,
Nipun

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 2/6] eventdev: define southbound driver interface
  2017-02-02 11:19         ` Nipun Gupta
@ 2017-02-02 11:34           ` Bruce Richardson
  2017-02-02 12:53             ` Nipun Gupta
  0 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2017-02-02 11:34 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: Jerin Jacob, dev, thomas.monjalon, Hemant Agrawal, gage.eads,
	harry.van.haaren

On Thu, Feb 02, 2017 at 11:19:51AM +0000, Nipun Gupta wrote:
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Wednesday, December 21, 2016 14:55
> > To: dev@dpdk.org
> > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > harry.van.haaren@intel.com; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > Subject: [dpdk-dev] [PATCH v4 2/6] eventdev: define southbound driver
> > interface
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >  lib/librte_eventdev/rte_eventdev.h     |  38 +++++
> >  lib/librte_eventdev/rte_eventdev_pmd.h | 294
> > +++++++++++++++++++++++++++++++++
> >  2 files changed, 332 insertions(+)
> >  create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
> > 
> 
> <snip>
> 
> > +typedef int (*eventdev_port_link_t)(void *port,
> > +		const uint8_t queues[], const uint8_t priorities[],
> > +		uint16_t nb_links);
> 
> I think having event device as input parameter to the port_link & port_unlink will
> be required so that queue configuration can be fetched from the event device.
> 
Or each port structure in each driver can have a pointer back to its
containing eventdev. That is what we have done in our SW eventdev
driver.
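
A minimal sketch of that back-pointer approach (the structure and field
names below are illustrative only, not the actual SW eventdev driver code):

	#include <rte_common.h>
	#include <rte_eventdev.h>

	/* Driver-private port keeps a reference to the eventdev that owns it,
	 * so a port-only callback such as port_link() can still reach the
	 * device-level queue configuration without an extra API parameter.
	 */
	struct sw_port {
		struct rte_eventdev *dev;	/* back-pointer to owning device */
		uint8_t port_id;
		/* ... other per-port state ... */
	};

	static int
	sw_port_link(void *port, const uint8_t queues[],
		     const uint8_t priorities[], uint16_t nb_links)
	{
		struct sw_port *p = port;
		struct rte_eventdev *dev = p->dev;

		/* per-queue configuration is reachable via dev->data here */
		RTE_SET_USED(dev);
		RTE_SET_USED(queues);
		RTE_SET_USED(priorities);
		return nb_links;
	}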

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 2/6] eventdev: define southbound driver interface
  2017-02-02 11:34           ` Bruce Richardson
@ 2017-02-02 12:53             ` Nipun Gupta
  2017-02-02 13:58               ` Bruce Richardson
  0 siblings, 1 reply; 109+ messages in thread
From: Nipun Gupta @ 2017-02-02 12:53 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Jerin Jacob, dev, thomas.monjalon, Hemant Agrawal, gage.eads,
	harry.van.haaren



> -----Original Message-----
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Thursday, February 02, 2017 17:04
> To: Nipun Gupta <nipun.gupta@nxp.com>
> Cc: Jerin Jacob <jerin.jacob@caviumnetworks.com>; dev@dpdk.org;
> thomas.monjalon@6wind.com; Hemant Agrawal <hemant.agrawal@nxp.com>;
> gage.eads@intel.com; harry.van.haaren@intel.com
> Subject: Re: [dpdk-dev] [PATCH v4 2/6] eventdev: define southbound driver
> interface
> 
> On Thu, Feb 02, 2017 at 11:19:51AM +0000, Nipun Gupta wrote:
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > > Sent: Wednesday, December 21, 2016 14:55
> > > To: dev@dpdk.org
> > > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> > > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > > harry.van.haaren@intel.com; Jerin Jacob
> > > <jerin.jacob@caviumnetworks.com>
> > > Subject: [dpdk-dev] [PATCH v4 2/6] eventdev: define southbound
> > > driver interface
> > >
> > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > >  lib/librte_eventdev/rte_eventdev.h     |  38 +++++
> > >  lib/librte_eventdev/rte_eventdev_pmd.h | 294
> > > +++++++++++++++++++++++++++++++++
> > >  2 files changed, 332 insertions(+)
> > >  create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
> > >
> >
> > <snip>
> >
> > > +typedef int (*eventdev_port_link_t)(void *port,
> > > +		const uint8_t queues[], const uint8_t priorities[],
> > > +		uint16_t nb_links);
> >
> > I think having event device as input parameter to the port_link &
> > port_unlink will be required so that queue configuration can be fetched from
> the event device.
> >
> Or each port structure in each driver can have a pointer back to its containing
> eventdev. That is what we have done in our SW eventdev driver.

That's one solution, but I think having the device in the API will be cleaner here, just like
it is provided in other configuration APIs?
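
For illustration, the proposed change to the callback prototype would look
roughly like this (a sketch of the suggestion, not the patch that was
eventually applied):

	/* Pass the owning device explicitly, as other driver callbacks do,
	 * so the implementation can read queue configuration from dev->data.
	 */
	typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port,
			const uint8_t queues[], const uint8_t priorities[],
			uint16_t nb_links);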

Thanks,
Nipun

> 
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 2/6] eventdev: define southbound driver interface
  2017-02-02 12:53             ` Nipun Gupta
@ 2017-02-02 13:58               ` Bruce Richardson
  2017-02-03  5:59                 ` Nipun Gupta
  0 siblings, 1 reply; 109+ messages in thread
From: Bruce Richardson @ 2017-02-02 13:58 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: Jerin Jacob, dev, thomas.monjalon, Hemant Agrawal, gage.eads,
	harry.van.haaren

On Thu, Feb 02, 2017 at 12:53:17PM +0000, Nipun Gupta wrote:
> 
> 
> > -----Original Message-----
> > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > Sent: Thursday, February 02, 2017 17:04
> > To: Nipun Gupta <nipun.gupta@nxp.com>
> > Cc: Jerin Jacob <jerin.jacob@caviumnetworks.com>; dev@dpdk.org;
> > thomas.monjalon@6wind.com; Hemant Agrawal <hemant.agrawal@nxp.com>;
> > gage.eads@intel.com; harry.van.haaren@intel.com
> > Subject: Re: [dpdk-dev] [PATCH v4 2/6] eventdev: define southbound driver
> > interface
> > 
> > On Thu, Feb 02, 2017 at 11:19:51AM +0000, Nipun Gupta wrote:
> > >
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > > > Sent: Wednesday, December 21, 2016 14:55
> > > > To: dev@dpdk.org
> > > > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> > > > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > > > harry.van.haaren@intel.com; Jerin Jacob
> > > > <jerin.jacob@caviumnetworks.com>
> > > > Subject: [dpdk-dev] [PATCH v4 2/6] eventdev: define southbound
> > > > driver interface
> > > >
> > > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > ---
> > > >  lib/librte_eventdev/rte_eventdev.h     |  38 +++++
> > > >  lib/librte_eventdev/rte_eventdev_pmd.h | 294
> > > > +++++++++++++++++++++++++++++++++
> > > >  2 files changed, 332 insertions(+)
> > > >  create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
> > > >
> > >
> > > <snip>
> > >
> > > > +typedef int (*eventdev_port_link_t)(void *port,
> > > > +		const uint8_t queues[], const uint8_t priorities[],
> > > > +		uint16_t nb_links);
> > >
> > > I think having event device as input parameter to the port_link &
> > > port_unlink will be required so that queue configuration can be fetched from
> > the event device.
> > >
> > Or each port structure in each driver can have a pointer back to its containing
> > eventdev. That is what we have done in our SW eventdev driver.
> 
> That's one solution, but I think having device in the API will be more cleaner here, just like
> it is provided in other configuration API's?
> 
> Thanks,
> Nipun
> 
Sure. Will you do up a patch to make this change, since the code is
already applied to next-event tree?

/Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-02-02 11:18         ` Nipun Gupta
@ 2017-02-02 14:09           ` Jerin Jacob
  2017-02-03  6:38             ` Nipun Gupta
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2017-02-02 14:09 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: dev, thomas.monjalon, bruce.richardson, Hemant Agrawal,
	gage.eads, harry.van.haaren

On Thu, Feb 02, 2017 at 11:18:52AM +0000, Nipun Gupta wrote:
> Hi,
> 
> I had a few queries/comments regarding the eventdev patches.
> 
> Please see inline.
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Wednesday, December 21, 2016 14:55
> > To: dev@dpdk.org
> > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > harry.van.haaren@intel.com; Jerin Jacob <jerin.jacob@caviumnetworks.com>	
> > Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> > programming model
> > 
> > In a polling model, lcores poll ethdev ports and associated
> > rx queues directly to look for packet. In an event driven model,
> > by contrast, lcores call the scheduler that selects packets for
> > them based on programmer-specified criteria. Eventdev library
> > adds support for event driven programming model, which offer
> > applications automatic multicore scaling, dynamic load balancing,
> > pipelining, packet ingress order maintenance and
> > synchronization services to simplify application packet processing.
> > 
> > By introducing event driven programming model, DPDK can support
> > both polling and event driven programming models for packet processing,
> > and applications are free to choose whatever model
> > (or combination of the two) that best suits their needs.
> > 
> > This patch adds the eventdev specification header file.
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >  MAINTAINERS                        |    3 +
> >  doc/api/doxy-api-index.md          |    1 +
> >  doc/api/doxy-api.conf              |    1 +
> >  lib/librte_eventdev/rte_eventdev.h | 1275
> > ++++++++++++++++++++++++++++++++++++
> >  4 files changed, 1280 insertions(+)
> >  create mode 100644 lib/librte_eventdev/rte_eventdev.h
> 
> <snip>
> 
> > +
> > +/**
> > + * Event device information
> > + */
> > +struct rte_event_dev_info {
> > +	const char *driver_name;	/**< Event driver name */
> > +	struct rte_pci_device *pci_dev;	/**< PCI information */
> 
> With 'rte_device' in place (rte_dev.h), should we not have 'rte_device' instead of 'rte_pci_device' here?

Yes. Please post a patch to fix this. At the time of merging to the
next-eventdev tree this was not the case.

> 
> > + * The number of events dequeued is the number of scheduler contexts held by
> > + * this port. These contexts are automatically released in the next
> > + * rte_event_dequeue_burst() invocation, or invoking
> > rte_event_enqueue_burst()
> > + * with RTE_EVENT_OP_RELEASE operation can be used to release the
> > + * contexts early.
> > + *
> > + * @param dev_id
> > + *   The identifier of the device.
> > + * @param port_id
> > + *   The identifier of the event port.
> > + * @param[out] ev
> > + *   Points to an array of *nb_events* objects of type *rte_event* structure
> > + *   for output to be populated with the dequeued event objects.
> > + * @param nb_events
> > + *   The maximum number of event objects to dequeue, typically number of
> > + *   rte_event_port_dequeue_depth() available for this port.
> > + *
> > + * @param timeout_ticks
> > + *   - 0 no-wait, returns immediately if there is no event.
> > + *   - >0 wait for the event, if the device is configured with
> > + *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function will
> > wait until
> > + *   the event available or *timeout_ticks* time.
> 
> Just for understanding - Is expectation that rte_event_dequeue_burst() will wait till timeout
> unless requested number of events (nb_events) are not received on the event port?

Yes. If you need any change then send an RFC patch for the header file
change.

> 
> > + *   if the device is not configured with
> > RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
> > + *   then this function will wait until the event available or
> > + *   *dequeue_timeout_ns* ns which was previously supplied to
> > + *   rte_event_dev_configure()
> > + *
> > + * @return
> > + * The number of event objects actually dequeued from the port. The return
> > + * value can be less than the value of the *nb_events* parameter when the
> > + * event port's queue is not full.
> > + *
> > + * @see rte_event_port_dequeue_depth()
> > + */
> > +uint16_t
> > +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event
> > ev[],
> > +			uint16_t nb_events, uint64_t timeout_ticks);
> > +
> 
> <Snip>
> 
> Regards,
> Nipun

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 3/6] eventdev: implement the northbound APIs
  2017-02-02 11:19         ` Nipun Gupta
@ 2017-02-02 14:32           ` Jerin Jacob
  2017-02-03  6:59             ` Nipun Gupta
  0 siblings, 1 reply; 109+ messages in thread
From: Jerin Jacob @ 2017-02-02 14:32 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: dev, thomas.monjalon, bruce.richardson, Hemant Agrawal,
	gage.eads, harry.van.haaren

On Thu, Feb 02, 2017 at 11:19:45AM +0000, Nipun Gupta wrote:
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Wednesday, December 21, 2016 14:55
> > To: dev@dpdk.org
> > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > harry.van.haaren@intel.com; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > Subject: [dpdk-dev] [PATCH v4 3/6] eventdev: implement the northbound APIs
> > 
> > This patch implements northbound eventdev API interface using southbond
> > driver interface
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >  config/common_base                           |   6 +
> >  lib/Makefile                                 |   1 +
> >  lib/librte_eal/common/include/rte_log.h      |   1 +
> >  lib/librte_eventdev/Makefile                 |  57 ++
> >  lib/librte_eventdev/rte_eventdev.c           | 986
> > +++++++++++++++++++++++++++
> >  lib/librte_eventdev/rte_eventdev.h           | 106 ++-
> >  lib/librte_eventdev/rte_eventdev_pmd.h       | 109 +++
> >  lib/librte_eventdev/rte_eventdev_version.map |  33 +
> >  mk/rte.app.mk                                |   1 +
> >  9 files changed, 1294 insertions(+), 6 deletions(-)  create mode 100644
> > lib/librte_eventdev/Makefile  create mode 100644
> > lib/librte_eventdev/rte_eventdev.c
> >  create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
> > 
> 
> <Snip>
> 
> > +static inline int
> > +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports) {
> > +	uint8_t old_nb_ports = dev->data->nb_ports;
> > +	void **ports;
> > +	uint16_t *links_map;
> > +	uint8_t *ports_dequeue_depth;
> > +	uint8_t *ports_enqueue_depth;
> > +	unsigned int i;
> > +
> > +	RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
> > +			 dev->data->dev_id);
> > +
> > +	/* First time configuration */
> > +	if (dev->data->ports == NULL && nb_ports != 0) {
> > +		dev->data->ports = rte_zmalloc_socket("eventdev->data-
> > >ports",
> > +				sizeof(dev->data->ports[0]) * nb_ports,
> > +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > +		if (dev->data->ports == NULL) {
> > +			dev->data->nb_ports = 0;
> > +			RTE_EDEV_LOG_ERR("failed to get mem for port meta
> > data,"
> > +					"nb_ports %u", nb_ports);
> > +			return -(ENOMEM);
> > +		}
> > +
> > +		/* Allocate memory to store ports dequeue depth */
> > +		dev->data->ports_dequeue_depth =
> > +			rte_zmalloc_socket("eventdev-
> > >ports_dequeue_depth",
> > +			sizeof(dev->data->ports_dequeue_depth[0]) *
> > nb_ports,
> > +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > +		if (dev->data->ports_dequeue_depth == NULL) {
> > +			dev->data->nb_ports = 0;
> > +			RTE_EDEV_LOG_ERR("failed to get mem for port deq
> > meta,"
> > +					"nb_ports %u", nb_ports);
> > +			return -(ENOMEM);
> > +		}
> > +
> > +		/* Allocate memory to store ports enqueue depth */
> > +		dev->data->ports_enqueue_depth =
> > +			rte_zmalloc_socket("eventdev-
> > >ports_enqueue_depth",
> > +			sizeof(dev->data->ports_enqueue_depth[0]) *
> > nb_ports,
> > +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > +		if (dev->data->ports_enqueue_depth == NULL) {
> > +			dev->data->nb_ports = 0;
> > +			RTE_EDEV_LOG_ERR("failed to get mem for port enq
> > meta,"
> > +					"nb_ports %u", nb_ports);
> > +			return -(ENOMEM);
> > +		}
> > +
> > +		/* Allocate memory to store queue to port link connection */
> > +		dev->data->links_map =
> > +			rte_zmalloc_socket("eventdev->links_map",
> > +			sizeof(dev->data->links_map[0]) * nb_ports *
> > +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> > +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > +		if (dev->data->links_map == NULL) {
> > +			dev->data->nb_ports = 0;
> > +			RTE_EDEV_LOG_ERR("failed to get mem for port_map
> > area,"
> > +					"nb_ports %u", nb_ports);
> > +			return -(ENOMEM);
> > +		}
> 
> I think we also need to set all the 'links map' to EVENT_QUEUE_SERVICE_PRIORITY_INVALID
> on zmalloc.

Just after the port_setup, we set the entries to
EVENT_QUEUE_SERVICE_PRIORITY_INVALID in rte_event_port_unlink(). So it looks OK to me.

        diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);

        /* Unlink all the queues from this port(default state after
	 * setup) */
        if (!diag)
                diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);


> 
> > +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
> > +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -
> > ENOTSUP);
> > +
> > +		ports = dev->data->ports;
> > +		ports_dequeue_depth = dev->data->ports_dequeue_depth;
> > +		ports_enqueue_depth = dev->data->ports_enqueue_depth;
> > +		links_map = dev->data->links_map;
> > +
> 
> <Snip>
> 
> > +int
> > +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> > +		     const struct rte_event_port_conf *port_conf) {
> > +	struct rte_eventdev *dev;
> > +	struct rte_event_port_conf def_conf;
> > +	int diag;
> > +
> > +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > +	dev = &rte_eventdevs[dev_id];
> > +
> > +	if (!is_valid_port(dev, port_id)) {
> > +		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> > +		return -EINVAL;
> > +	}
> > +
> > +	/* Check new_event_threshold limit */
> > +	if ((port_conf && !port_conf->new_event_threshold) ||
> > +			(port_conf && port_conf->new_event_threshold >
> > +				 dev->data->dev_conf.nb_events_limit)) {
> 
> As mentioned in 'rte_eventdev.h', the 'new_event_threshold' is valid for *closed systems*,
> so is the above check valid for *open systems*?

new_event_threshold is valid only for *closed systems*. If you need any
change then please suggest.

> Or is it implicit that for open systems the 'port_conf->new_event_threshold' should be
> set to '-1' by the application just as it is for 'max_num_events' of 'struct rte_event_dev_info'.



> 
> > +		RTE_EDEV_LOG_ERR(
> > +		   "dev%d port%d Invalid event_threshold=%d
> > nb_events_limit=%d",
> > +			dev_id, port_id, port_conf->new_event_threshold,
> > +			dev->data->dev_conf.nb_events_limit);
> > +		return -EINVAL;
> > +	}
> > +
> 
> <Snip>
> 
> Regards,
> Nipun

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 2/6] eventdev: define southbound driver interface
  2017-02-02 13:58               ` Bruce Richardson
@ 2017-02-03  5:59                 ` Nipun Gupta
  0 siblings, 0 replies; 109+ messages in thread
From: Nipun Gupta @ 2017-02-03  5:59 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Jerin Jacob, dev, thomas.monjalon, Hemant Agrawal, gage.eads,
	harry.van.haaren



> -----Original Message-----
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Thursday, February 02, 2017 19:29
> To: Nipun Gupta <nipun.gupta@nxp.com>
> Cc: Jerin Jacob <jerin.jacob@caviumnetworks.com>; dev@dpdk.org;
> thomas.monjalon@6wind.com; Hemant Agrawal <hemant.agrawal@nxp.com>;
> gage.eads@intel.com; harry.van.haaren@intel.com
> Subject: Re: [dpdk-dev] [PATCH v4 2/6] eventdev: define southbound driver
> interface
> 
> On Thu, Feb 02, 2017 at 12:53:17PM +0000, Nipun Gupta wrote:
> >
> >
> > > -----Original Message-----
> > > From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> > > Sent: Thursday, February 02, 2017 17:04
> > > To: Nipun Gupta <nipun.gupta@nxp.com>
> > > Cc: Jerin Jacob <jerin.jacob@caviumnetworks.com>; dev@dpdk.org;
> > > thomas.monjalon@6wind.com; Hemant Agrawal
> <hemant.agrawal@nxp.com>;
> > > gage.eads@intel.com; harry.van.haaren@intel.com
> > > Subject: Re: [dpdk-dev] [PATCH v4 2/6] eventdev: define southbound driver
> > > interface
> > >
> > > On Thu, Feb 02, 2017 at 11:19:51AM +0000, Nipun Gupta wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > > > > Sent: Wednesday, December 21, 2016 14:55
> > > > > To: dev@dpdk.org
> > > > > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com;
> Hemant
> > > > > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > > > > harry.van.haaren@intel.com; Jerin Jacob
> > > > > <jerin.jacob@caviumnetworks.com>
> > > > > Subject: [dpdk-dev] [PATCH v4 2/6] eventdev: define southbound
> > > > > driver interface
> > > > >
> > > > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > > ---
> > > > >  lib/librte_eventdev/rte_eventdev.h     |  38 +++++
> > > > >  lib/librte_eventdev/rte_eventdev_pmd.h | 294
> > > > > +++++++++++++++++++++++++++++++++
> > > > >  2 files changed, 332 insertions(+)
> > > > >  create mode 100644 lib/librte_eventdev/rte_eventdev_pmd.h
> > > > >
> > > >
> > > > <snip>
> > > >
> > > > > +typedef int (*eventdev_port_link_t)(void *port,
> > > > > +		const uint8_t queues[], const uint8_t priorities[],
> > > > > +		uint16_t nb_links);
> > > >
> > > > I think having event device as input parameter to the port_link &
> > > > port_unlink will be required so that queue configuration can be fetched
> from
> > > the event device.
> > > >
> > > Or each port structure in each driver can have a pointer back to its
> containing
> > > eventdev. That is what we have done in our SW eventdev driver.
> >
> > That's one solution, but I think having device in the API will be more cleaner
> here, just like
> > it is provided in other configuration API's?
> >
> > Thanks,
> > Nipun
> >
> Sure. Will you do up a patch to make this change, since the code is
> already applied to next-event tree?

Sure. I'll send a patch regarding the same :)

Regards,
Nipun

> 
> /Bruce

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-02-02 14:09           ` Jerin Jacob
@ 2017-02-03  6:38             ` Nipun Gupta
  2017-02-03 10:58               ` Hemant Agrawal
  0 siblings, 1 reply; 109+ messages in thread
From: Nipun Gupta @ 2017-02-03  6:38 UTC (permalink / raw)
  To: Jerin Jacob, bruce.richardson, gage.eads
  Cc: dev, thomas.monjalon, Hemant Agrawal, harry.van.haaren



> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, February 02, 2017 19:39
> To: Nipun Gupta <nipun.gupta@nxp.com>
> Cc: dev@dpdk.org; thomas.monjalon@6wind.com;
> bruce.richardson@intel.com; Hemant Agrawal <hemant.agrawal@nxp.com>;
> gage.eads@intel.com; harry.van.haaren@intel.com
> Subject: Re: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> programming model
> 
> On Thu, Feb 02, 2017 at 11:18:52AM +0000, Nipun Gupta wrote:
> > Hi,
> >
> > I had a few queries/comments regarding the eventdev patches.
> >
> > Please see inline.
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > > Sent: Wednesday, December 21, 2016 14:55
> > > To: dev@dpdk.org
> > > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> > > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > > harry.van.haaren@intel.com; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>
> > > Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> > > programming model
> > >
> > > In a polling model, lcores poll ethdev ports and associated
> > > rx queues directly to look for packet. In an event driven model,
> > > by contrast, lcores call the scheduler that selects packets for
> > > them based on programmer-specified criteria. Eventdev library
> > > adds support for event driven programming model, which offer
> > > applications automatic multicore scaling, dynamic load balancing,
> > > pipelining, packet ingress order maintenance and
> > > synchronization services to simplify application packet processing.
> > >
> > > By introducing event driven programming model, DPDK can support
> > > both polling and event driven programming models for packet processing,
> > > and applications are free to choose whatever model
> > > (or combination of the two) that best suits their needs.
> > >
> > > This patch adds the eventdev specification header file.
> > >
> > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > >  MAINTAINERS                        |    3 +
> > >  doc/api/doxy-api-index.md          |    1 +
> > >  doc/api/doxy-api.conf              |    1 +
> > >  lib/librte_eventdev/rte_eventdev.h | 1275
> > > ++++++++++++++++++++++++++++++++++++
> > >  4 files changed, 1280 insertions(+)
> > >  create mode 100644 lib/librte_eventdev/rte_eventdev.h
> >
> > <snip>
> >
> > > +
> > > +/**
> > > + * Event device information
> > > + */
> > > +struct rte_event_dev_info {
> > > +	const char *driver_name;	/**< Event driver name */
> > > +	struct rte_pci_device *pci_dev;	/**< PCI information */
> >
> > With 'rte_device' in place (rte_dev.h), should we not have 'rte_device' instead
> of 'rte_pci_device' here?
> 
> Yes. Please post a patch to fix this. As the time of merging to
> next-eventdev tree it was not the case.

Sure. I'll send a patch regarding this.

> 
> >
> > > + * The number of events dequeued is the number of scheduler contexts held
> by
> > > + * this port. These contexts are automatically released in the next
> > > + * rte_event_dequeue_burst() invocation, or invoking
> > > rte_event_enqueue_burst()
> > > + * with RTE_EVENT_OP_RELEASE operation can be used to release the
> > > + * contexts early.
> > > + *
> > > + * @param dev_id
> > > + *   The identifier of the device.
> > > + * @param port_id
> > > + *   The identifier of the event port.
> > > + * @param[out] ev
> > > + *   Points to an array of *nb_events* objects of type *rte_event* structure
> > > + *   for output to be populated with the dequeued event objects.
> > > + * @param nb_events
> > > + *   The maximum number of event objects to dequeue, typically number of
> > > + *   rte_event_port_dequeue_depth() available for this port.
> > > + *
> > > + * @param timeout_ticks
> > > + *   - 0 no-wait, returns immediately if there is no event.
> > > + *   - >0 wait for the event, if the device is configured with
> > > + *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function will
> > > wait until
> > > + *   the event available or *timeout_ticks* time.
> >
> > Just for understanding - Is expectation that rte_event_dequeue_burst() will
> wait till timeout
> > unless requested number of events (nb_events) are not received on the event
> port?
> 
> Yes. If you need any change then a send RFC patch for the header file
> change.
> 
> >
> > > + *   if the device is not configured with
> > > RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
> > > + *   then this function will wait until the event available or
> > > + *   *dequeue_timeout_ns* ns which was previously supplied to
> > > + *   rte_event_dev_configure()
> > > + *
> > > + * @return
> > > + * The number of event objects actually dequeued from the port. The return
> > > + * value can be less than the value of the *nb_events* parameter when the
> > > + * event port's queue is not full.
> > > + *
> > > + * @see rte_event_port_dequeue_depth()
> > > + */
> > > +uint16_t
> > > +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event
> > > ev[],
> > > +			uint16_t nb_events, uint64_t timeout_ticks);
> > > +
> >
> > <Snip>
> >
> > Regards,
> > Nipun

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 3/6] eventdev: implement the northbound APIs
  2017-02-02 14:32           ` Jerin Jacob
@ 2017-02-03  6:59             ` Nipun Gupta
  0 siblings, 0 replies; 109+ messages in thread
From: Nipun Gupta @ 2017-02-03  6:59 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dev, thomas.monjalon, bruce.richardson, Hemant Agrawal,
	gage.eads, harry.van.haaren



> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Thursday, February 02, 2017 20:02
> To: Nipun Gupta <nipun.gupta@nxp.com>
> Cc: dev@dpdk.org; thomas.monjalon@6wind.com;
> bruce.richardson@intel.com; Hemant Agrawal <hemant.agrawal@nxp.com>;
> gage.eads@intel.com; harry.van.haaren@intel.com
> Subject: Re: [dpdk-dev] [PATCH v4 3/6] eventdev: implement the northbound
> APIs
> 
> On Thu, Feb 02, 2017 at 11:19:45AM +0000, Nipun Gupta wrote:
> >
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > > Sent: Wednesday, December 21, 2016 14:55
> > > To: dev@dpdk.org
> > > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> > > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > > harry.van.haaren@intel.com; Jerin Jacob
> <jerin.jacob@caviumnetworks.com>
> > > Subject: [dpdk-dev] [PATCH v4 3/6] eventdev: implement the northbound
> APIs
> > >
> > > This patch implements northbound eventdev API interface using southbond
> > > driver interface
> > >
> > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > > ---
> > >  config/common_base                           |   6 +
> > >  lib/Makefile                                 |   1 +
> > >  lib/librte_eal/common/include/rte_log.h      |   1 +
> > >  lib/librte_eventdev/Makefile                 |  57 ++
> > >  lib/librte_eventdev/rte_eventdev.c           | 986
> > > +++++++++++++++++++++++++++
> > >  lib/librte_eventdev/rte_eventdev.h           | 106 ++-
> > >  lib/librte_eventdev/rte_eventdev_pmd.h       | 109 +++
> > >  lib/librte_eventdev/rte_eventdev_version.map |  33 +
> > >  mk/rte.app.mk                                |   1 +
> > >  9 files changed, 1294 insertions(+), 6 deletions(-)  create mode 100644
> > > lib/librte_eventdev/Makefile  create mode 100644
> > > lib/librte_eventdev/rte_eventdev.c
> > >  create mode 100644 lib/librte_eventdev/rte_eventdev_version.map
> > >
> >
> > <Snip>
> >
> > > +static inline int
> > > +rte_event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports) {
> > > +	uint8_t old_nb_ports = dev->data->nb_ports;
> > > +	void **ports;
> > > +	uint16_t *links_map;
> > > +	uint8_t *ports_dequeue_depth;
> > > +	uint8_t *ports_enqueue_depth;
> > > +	unsigned int i;
> > > +
> > > +	RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports,
> > > +			 dev->data->dev_id);
> > > +
> > > +	/* First time configuration */
> > > +	if (dev->data->ports == NULL && nb_ports != 0) {
> > > +		dev->data->ports = rte_zmalloc_socket("eventdev->data-
> > > >ports",
> > > +				sizeof(dev->data->ports[0]) * nb_ports,
> > > +				RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > > +		if (dev->data->ports == NULL) {
> > > +			dev->data->nb_ports = 0;
> > > +			RTE_EDEV_LOG_ERR("failed to get mem for port meta
> > > data,"
> > > +					"nb_ports %u", nb_ports);
> > > +			return -(ENOMEM);
> > > +		}
> > > +
> > > +		/* Allocate memory to store ports dequeue depth */
> > > +		dev->data->ports_dequeue_depth =
> > > +			rte_zmalloc_socket("eventdev-
> > > >ports_dequeue_depth",
> > > +			sizeof(dev->data->ports_dequeue_depth[0]) *
> > > nb_ports,
> > > +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > > +		if (dev->data->ports_dequeue_depth == NULL) {
> > > +			dev->data->nb_ports = 0;
> > > +			RTE_EDEV_LOG_ERR("failed to get mem for port deq
> > > meta,"
> > > +					"nb_ports %u", nb_ports);
> > > +			return -(ENOMEM);
> > > +		}
> > > +
> > > +		/* Allocate memory to store ports enqueue depth */
> > > +		dev->data->ports_enqueue_depth =
> > > +			rte_zmalloc_socket("eventdev-
> > > >ports_enqueue_depth",
> > > +			sizeof(dev->data->ports_enqueue_depth[0]) *
> > > nb_ports,
> > > +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > > +		if (dev->data->ports_enqueue_depth == NULL) {
> > > +			dev->data->nb_ports = 0;
> > > +			RTE_EDEV_LOG_ERR("failed to get mem for port enq
> > > meta,"
> > > +					"nb_ports %u", nb_ports);
> > > +			return -(ENOMEM);
> > > +		}
> > > +
> > > +		/* Allocate memory to store queue to port link connection */
> > > +		dev->data->links_map =
> > > +			rte_zmalloc_socket("eventdev->links_map",
> > > +			sizeof(dev->data->links_map[0]) * nb_ports *
> > > +			RTE_EVENT_MAX_QUEUES_PER_DEV,
> > > +			RTE_CACHE_LINE_SIZE, dev->data->socket_id);
> > > +		if (dev->data->links_map == NULL) {
> > > +			dev->data->nb_ports = 0;
> > > +			RTE_EDEV_LOG_ERR("failed to get mem for port_map
> > > area,"
> > > +					"nb_ports %u", nb_ports);
> > > +			return -(ENOMEM);
> > > +		}
> >
> > I think we also need to set all the 'links map' to
> EVENT_QUEUE_SERVICE_PRIORITY_INVALID
> > on zmalloc.
> 
> Just after the port_setup, we are setting to
> EVENT_QUEUE_SERVICE_PRIORITY_INVALID in
> rte_event_port_unlink(). So it looks OK to me.
> 
>         diag = (*dev->dev_ops->port_setup)(dev, port_id, port_conf);
> 
>         /* Unlink all the queues from this port(default state after
> 	 * setup) */
>         if (!diag)
>                 diag = rte_event_port_unlink(dev_id, port_id, NULL, 0);
> 
> 

When NULL is passed as the queues parameter to rte_event_port_unlink(),
the number of 'links_map' entries which get set to
EVENT_QUEUE_SERVICE_PRIORITY_INVALID equals the number of
configured queues:

	if (queues == NULL) {
		for (i = 0; i < dev->data->nb_queues; i++)
			all_queues[i] = i;
		queues = all_queues;
		nb_unlinks = dev->data->nb_queues;
	}

So EVENT_QUEUE_SERVICE_PRIORITY_INVALID does not get set for the
complete 'links_map' memory.

The API rte_event_port_links_get() will probably return a wrong number of links,
as the loop there runs over RTE_EVENT_MAX_QUEUES_PER_DEV:

	for (i = 0; i < RTE_EVENT_MAX_QUEUES_PER_DEV; i++) {
		if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) {
			queues[count] = i;
			priorities[count] = (uint8_t)links_map[i];
			++count;
		}
	}
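
The suggested fix would amount to something like the following right after
the links_map allocation in rte_event_dev_port_config() (a sketch only, not
the eventual patch):

	/* Initialise the whole links_map after zmalloc, so that entries for
	 * queues which are never linked are not left at 0, which is a valid
	 * priority value.
	 */
	for (i = 0; i < nb_ports * RTE_EVENT_MAX_QUEUES_PER_DEV; i++)
		dev->data->links_map[i] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID;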

> >
> > > +	} else if (dev->data->ports != NULL && nb_ports != 0) {/* re-config */
> > > +		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->port_release, -
> > > ENOTSUP);
> > > +
> > > +		ports = dev->data->ports;
> > > +		ports_dequeue_depth = dev->data->ports_dequeue_depth;
> > > +		ports_enqueue_depth = dev->data->ports_enqueue_depth;
> > > +		links_map = dev->data->links_map;
> > > +
> >
> > <Snip>
> >
> > > +int
> > > +rte_event_port_setup(uint8_t dev_id, uint8_t port_id,
> > > +		     const struct rte_event_port_conf *port_conf) {
> > > +	struct rte_eventdev *dev;
> > > +	struct rte_event_port_conf def_conf;
> > > +	int diag;
> > > +
> > > +	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
> > > +	dev = &rte_eventdevs[dev_id];
> > > +
> > > +	if (!is_valid_port(dev, port_id)) {
> > > +		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
> > > +		return -EINVAL;
> > > +	}
> > > +
> > > +	/* Check new_event_threshold limit */
> > > +	if ((port_conf && !port_conf->new_event_threshold) ||
> > > +			(port_conf && port_conf->new_event_threshold >
> > > +				 dev->data->dev_conf.nb_events_limit)) {
> >
> > As mentioned in 'rte_eventdev.h', the 'new_event_threshold' is valid for
> *closed systems*,
> > so is the above check valid for *open systems*?
> 
> new_event_threshold is valid  only for *closed systems*. If you need any
> change then please suggest.

This is fine, but I think we also need to mention in the new_event_threshold
description in 'rte_eventdev.h' that for open systems this needs to be set to
'-1', because otherwise the check here will fail.

And the same applies to 'struct rte_event_dev_config'->'nb_events_limit', which
the application is required to set to '-1' for open systems. Right?

I'll send a patch updating this in the rte_eventdev.h file.
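
For an open system, the application side would then look roughly like this
(a sketch with hypothetical values; only the two fields under discussion are
shown):

	struct rte_event_dev_config dev_conf = {
		.nb_events_limit = -1,		/* open system: no global event limit */
		/* ... other fields as required ... */
	};

	struct rte_event_port_conf port_conf = {
		.new_event_threshold = -1,	/* open system: no per-port threshold */
		/* ... other fields as required ... */
	};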

Regards,
Nipun

> 
> > Or is it implicit that for open systems the 'port_conf->new_event_threshold'
> should be
> > set to '-1' by the application just as it is for 'max_num_events' of 'struct
> rte_event_dev_info'.
> 
> 
> 
> >
> > > +		RTE_EDEV_LOG_ERR(
> > > +		   "dev%d port%d Invalid event_threshold=%d
> > > nb_events_limit=%d",
> > > +			dev_id, port_id, port_conf->new_event_threshold,
> > > +			dev->data->dev_conf.nb_events_limit);
> > > +		return -EINVAL;
> > > +	}
> > > +
> >
> > <Snip>
> >
> > Regards,
> > Nipun

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-02-03  6:38             ` Nipun Gupta
@ 2017-02-03 10:58               ` Hemant Agrawal
  2017-02-07  4:59                 ` Jerin Jacob
  0 siblings, 1 reply; 109+ messages in thread
From: Hemant Agrawal @ 2017-02-03 10:58 UTC (permalink / raw)
  To: Nipun Gupta, Jerin Jacob, bruce.richardson, gage.eads
  Cc: dev, thomas.monjalon, harry.van.haaren

On 2/3/2017 12:08 PM, Nipun Gupta wrote:
>>>> -----Original Message-----
>>>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
>>>> Sent: Wednesday, December 21, 2016 14:55
>>>> To: dev@dpdk.org
>>>> Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
>>>> Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
>>>> harry.van.haaren@intel.com; Jerin Jacob
>> <jerin.jacob@caviumnetworks.com>
>>>> Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
>>>> programming model
>>>>
>>>> In a polling model, lcores poll ethdev ports and associated
>>>> rx queues directly to look for packet. In an event driven model,
>>>> by contrast, lcores call the scheduler that selects packets for
>>>> them based on programmer-specified criteria. Eventdev library
>>>> adds support for event driven programming model, which offer
>>>> applications automatic multicore scaling, dynamic load balancing,
>>>> pipelining, packet ingress order maintenance and
>>>> synchronization services to simplify application packet processing.
>>>>
>>>> By introducing event driven programming model, DPDK can support
>>>> both polling and event driven programming models for packet processing,
>>>> and applications are free to choose whatever model
>>>> (or combination of the two) that best suits their needs.
>>>>
>>>> This patch adds the eventdev specification header file.
>>>>
>>>> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
>>>> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
>>>> ---
>>>>  MAINTAINERS                        |    3 +
>>>>  doc/api/doxy-api-index.md          |    1 +
>>>>  doc/api/doxy-api.conf              |    1 +
>>>>  lib/librte_eventdev/rte_eventdev.h | 1275
>>>> ++++++++++++++++++++++++++++++++++++
>>>>  4 files changed, 1280 insertions(+)
>>>>  create mode 100644 lib/librte_eventdev/rte_eventdev.h
>>>
>>> <snip>
>>>
>>>> +
>>>> +/**
>>>> + * Event device information
>>>> + */
>>>> +struct rte_event_dev_info {
>>>> +	const char *driver_name;	/**< Event driver name */
>>>> +	struct rte_pci_device *pci_dev;	/**< PCI information */
>>>
>>> With 'rte_device' in place (rte_dev.h), should we not have 'rte_device' instead
>> of 'rte_pci_device' here?
>>
>> Yes. Please post a patch to fix this. As the time of merging to
>> next-eventdev tree it was not the case.
>
> Sure. I'll send a patch regarding this.
>
>>
>>>
>>>> + * The number of events dequeued is the number of scheduler contexts held
>> by
>>>> + * this port. These contexts are automatically released in the next
>>>> + * rte_event_dequeue_burst() invocation, or invoking
>>>> rte_event_enqueue_burst()
>>>> + * with RTE_EVENT_OP_RELEASE operation can be used to release the
>>>> + * contexts early.
>>>> + *
>>>> + * @param dev_id
>>>> + *   The identifier of the device.
>>>> + * @param port_id
>>>> + *   The identifier of the event port.
>>>> + * @param[out] ev
>>>> + *   Points to an array of *nb_events* objects of type *rte_event* structure
>>>> + *   for output to be populated with the dequeued event objects.
>>>> + * @param nb_events
>>>> + *   The maximum number of event objects to dequeue, typically number of
>>>> + *   rte_event_port_dequeue_depth() available for this port.
>>>> + *
>>>> + * @param timeout_ticks
>>>> + *   - 0 no-wait, returns immediately if there is no event.
>>>> + *   - >0 wait for the event, if the device is configured with
>>>> + *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function will
>>>> wait until
>>>> + *   the event available or *timeout_ticks* time.
>>>
>>> Just for understanding - Is expectation that rte_event_dequeue_burst() will
>> wait till timeout
>>> unless requested number of events (nb_events) are not received on the event
>> port?
>>
>> Yes. If you need any change then a send RFC patch for the header file
>> change.

"at least one event available"

The API should not wait if at least one event is available; in that case the
timeout value is discarded.

The *timeout* is valid only until the first event is received (even when
multiple events are requested); after that the driver only checks for further
event availability and returns as many events as it is able to get in its
processing loop.
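
In rough pseudo-code (poll_events() and timeout_expired() are hypothetical
helpers, this is not any driver's implementation), the behaviour described
above is:

	static uint16_t
	dequeue_burst_semantics(struct rte_event ev[], uint16_t nb_events,
				uint64_t timeout_ticks)
	{
		uint16_t count = 0;

		/* the timeout only bounds the wait for the first event */
		while (count == 0 && !timeout_expired(timeout_ticks))
			count = poll_events(ev, nb_events);

		/* after the first event, no further waiting: take only what
		 * is already available, up to nb_events
		 */
		while (count > 0 && count < nb_events) {
			uint16_t more = poll_events(&ev[count], nb_events - count);
			if (more == 0)
				break;
			count += more;
		}
		return count;
	}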


>>
>>>
>>>> + *   if the device is not configured with
>>>> RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
>>>> + *   then this function will wait until the event available or
>>>> + *   *dequeue_timeout_ns* ns which was previously supplied to
>>>> + *   rte_event_dev_configure()
>>>> + *
>>>> + * @return
>>>> + * The number of event objects actually dequeued from the port. The return
>>>> + * value can be less than the value of the *nb_events* parameter when the
>>>> + * event port's queue is not full.
>>>> + *
>>>> + * @see rte_event_port_dequeue_depth()
>>>> + */
>>>> +uint16_t
>>>> +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event
>>>> ev[],
>>>> +			uint16_t nb_events, uint64_t timeout_ticks);
>>>> +
>>>
>>> <Snip>
>>>
>>> Regards,
>>> Nipun
>

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 4/6] eventdev: implement PMD registration functions
  2017-02-02 11:20         ` Nipun Gupta
@ 2017-02-05 13:04           ` Jerin Jacob
  0 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2017-02-05 13:04 UTC (permalink / raw)
  To: Nipun Gupta
  Cc: dev, thomas.monjalon, bruce.richardson, Hemant Agrawal,
	gage.eads, harry.van.haaren

On Thu, Feb 02, 2017 at 11:20:09AM +0000, Nipun Gupta wrote:
> 
> 
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > Sent: Wednesday, December 21, 2016 14:55
> > To: dev@dpdk.org
> > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > harry.van.haaren@intel.com; Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > Subject: [dpdk-dev] [PATCH v4 4/6] eventdev: implement PMD registration
> > functions
> > 
> > This patch adds infrastructure for registering the vdev or
> > the PCI based event device.
> > 
> > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >  lib/librte_eventdev/rte_eventdev.c           | 236
> > +++++++++++++++++++++++++++
> >  lib/librte_eventdev/rte_eventdev_pmd.h       | 111 +++++++++++++
> >  lib/librte_eventdev/rte_eventdev_version.map |   6 +
> >  3 files changed, 353 insertions(+)
> > 
> 
> <snip>
> 
> > +
> > +struct rte_eventdev *
> > +rte_event_pmd_vdev_init(const char *name, size_t dev_private_size,
> > +		int socket_id)
> 
> Isn't there any requirement to have a clean-up function corresponding to
> rte_event_pmd_vdev_init?

I can add one for completeness. I will send a patch on this.
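
Such a cleanup counterpart could look roughly like this (the name and body
are illustrative only, not the patch that was later sent; assumes
rte_eventdev_pmd.h and errno.h are included):

	/* Undo rte_event_pmd_vdev_init(): look up the named device and
	 * release the slot and memory the init helper allocated.
	 */
	static inline int
	rte_event_pmd_vdev_uninit(const char *name)
	{
		struct rte_eventdev *eventdev;

		if (name == NULL)
			return -EINVAL;

		eventdev = rte_event_pmd_get_named_dev(name);
		if (eventdev == NULL)
			return -ENODEV;

		return rte_event_pmd_release(eventdev);
	}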

> 
> > +{
> > +	struct rte_eventdev *eventdev;
> > +
> > +	/* Allocate device structure */
> > +	eventdev = rte_event_pmd_allocate(name, socket_id);
> > +	if (eventdev == NULL)
> > +		return NULL;
> > +
> 
> <snip>
> 
> Regards,
> Nipun

^ permalink raw reply	[flat|nested] 109+ messages in thread

* Re: [PATCH v4 1/6] eventdev: introduce event driven programming model
  2017-02-03 10:58               ` Hemant Agrawal
@ 2017-02-07  4:59                 ` Jerin Jacob
  0 siblings, 0 replies; 109+ messages in thread
From: Jerin Jacob @ 2017-02-07  4:59 UTC (permalink / raw)
  To: Hemant Agrawal
  Cc: Nipun Gupta, bruce.richardson, gage.eads, dev, thomas.monjalon,
	harry.van.haaren

On Fri, Feb 03, 2017 at 04:28:15PM +0530, Hemant Agrawal wrote:
> On 2/3/2017 12:08 PM, Nipun Gupta wrote:
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Jerin Jacob
> > > > > Sent: Wednesday, December 21, 2016 14:55
> > > > > To: dev@dpdk.org
> > > > > Cc: thomas.monjalon@6wind.com; bruce.richardson@intel.com; Hemant
> > > > > Agrawal <hemant.agrawal@nxp.com>; gage.eads@intel.com;
> > > > > harry.van.haaren@intel.com; Jerin Jacob
> > > <jerin.jacob@caviumnetworks.com>
> > > > > Subject: [dpdk-dev] [PATCH v4 1/6] eventdev: introduce event driven
> > > > > programming model
> > > > > 
> > > > > In a polling model, lcores poll ethdev ports and associated
> > > > > rx queues directly to look for packet. In an event driven model,
> > > > > by contrast, lcores call the scheduler that selects packets for
> > > > > them based on programmer-specified criteria. Eventdev library
> > > > > adds support for event driven programming model, which offer
> > > > > applications automatic multicore scaling, dynamic load balancing,
> > > > > pipelining, packet ingress order maintenance and
> > > > > synchronization services to simplify application packet processing.
> > > > > 
> > > > > By introducing event driven programming model, DPDK can support
> > > > > both polling and event driven programming models for packet processing,
> > > > > and applications are free to choose whatever model
> > > > > (or combination of the two) that best suits their needs.
> > > > > 
> > > > > This patch adds the eventdev specification header file.
> > > > > 
> > > > > Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> > > > > Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> > > > > ---
> > > > >  MAINTAINERS                        |    3 +
> > > > >  doc/api/doxy-api-index.md          |    1 +
> > > > >  doc/api/doxy-api.conf              |    1 +
> > > > >  lib/librte_eventdev/rte_eventdev.h | 1275
> > > > > ++++++++++++++++++++++++++++++++++++
> > > > >  4 files changed, 1280 insertions(+)
> > > > >  create mode 100644 lib/librte_eventdev/rte_eventdev.h
> > > > 
> > > > <snip>
> > > > 
> > > > > +
> > > > > +/**
> > > > > + * Event device information
> > > > > + */
> > > > > +struct rte_event_dev_info {
> > > > > +	const char *driver_name;	/**< Event driver name */
> > > > > +	struct rte_pci_device *pci_dev;	/**< PCI information */
> > > > 
> > > > With 'rte_device' in place (rte_dev.h), should we not have 'rte_device' instead
> > > of 'rte_pci_device' here?
> > > 
> > > Yes. Please post a patch to fix this. As the time of merging to
> > > next-eventdev tree it was not the case.
> > 
> > Sure. I'll send a patch regarding this.
> > 
> > > 
> > > > 
> > > > > + * The number of events dequeued is the number of scheduler contexts held
> > > by
> > > > > + * this port. These contexts are automatically released in the next
> > > > > + * rte_event_dequeue_burst() invocation, or invoking
> > > > > rte_event_enqueue_burst()
> > > > > + * with RTE_EVENT_OP_RELEASE operation can be used to release the
> > > > > + * contexts early.
> > > > > + *
> > > > > + * @param dev_id
> > > > > + *   The identifier of the device.
> > > > > + * @param port_id
> > > > > + *   The identifier of the event port.
> > > > > + * @param[out] ev
> > > > > + *   Points to an array of *nb_events* objects of type *rte_event* structure
> > > > > + *   for output to be populated with the dequeued event objects.
> > > > > + * @param nb_events
> > > > > + *   The maximum number of event objects to dequeue, typically number of
> > > > > + *   rte_event_port_dequeue_depth() available for this port.
> > > > > + *
> > > > > + * @param timeout_ticks
> > > > > + *   - 0 no-wait, returns immediately if there is no event.
> > > > > + *   - >0 wait for the event, if the device is configured with
> > > > > + *   RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT then this function will
> > > > > wait until
> > > > > + *   the event available or *timeout_ticks* time.
> > > > 
> > > > Just for understanding - Is expectation that rte_event_dequeue_burst() will
> > > wait till timeout
> > > > unless requested number of events (nb_events) are not received on the event
> > > port?
> > > 
> > > Yes. If you need any change then a send RFC patch for the header file
> > > change.
> 
> "at least one event available"

Looks good to me. If there are no objections then you can send a patch to
update the header file.

> 
> The API should not wait, if at least one event is available to discard the
> timeout value.
> 
> the *timeout* is valid only until the first event is received (even when
> multiple events are requested) and driver will only checking for further
> events availability and return as many events as it is able to get in its
> processing loop.
> 
> 
> > > 
> > > > 
> > > > > + *   if the device is not configured with
> > > > > RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT
> > > > > + *   then this function will wait until the event available or
> > > > > + *   *dequeue_timeout_ns* ns which was previously supplied to
> > > > > + *   rte_event_dev_configure()
> > > > > + *
> > > > > + * @return
> > > > > + * The number of event objects actually dequeued from the port. The return
> > > > > + * value can be less than the value of the *nb_events* parameter when the
> > > > > + * event port's queue is not full.
> > > > > + *
> > > > > + * @see rte_event_port_dequeue_depth()
> > > > > + */
> > > > > +uint16_t
> > > > > +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event
> > > > > ev[],
> > > > > +			uint16_t nb_events, uint64_t timeout_ticks);
> > > > > +
> > > > 
> > > > <Snip>
> > > > 
> > > > Regards,
> > > > Nipun
> > 
> 
> 

^ permalink raw reply	[flat|nested] 109+ messages in thread

end of thread, other threads:[~2017-02-07  5:00 UTC | newest]

Thread overview: 109+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-18  5:44 [PATCH 0/4] libeventdev API and northbound implementation Jerin Jacob
2016-11-18  5:44 ` [PATCH 1/4] eventdev: introduce event driven programming model Jerin Jacob
2016-11-23 18:39   ` Thomas Monjalon
2016-11-24  1:59     ` Jerin Jacob
2016-11-24 12:26       ` Bruce Richardson
2016-11-24 15:35       ` Thomas Monjalon
2016-11-25  0:23         ` Jerin Jacob
2016-11-25 11:00           ` Bruce Richardson
2016-11-25 13:09             ` Thomas Monjalon
2016-11-26  0:57               ` Jerin Jacob
2016-11-28  9:10                 ` Bruce Richardson
2016-11-26  2:54             ` Jerin Jacob
2016-11-28  9:16               ` Bruce Richardson
2016-11-28 11:30                 ` Thomas Monjalon
2016-11-29  4:01                 ` Jerin Jacob
2016-11-29 10:00                   ` Bruce Richardson
2016-11-25 11:59           ` Van Haaren, Harry
2016-11-25 12:09             ` Richardson, Bruce
2016-11-24 16:24   ` Bruce Richardson
2016-11-24 19:30     ` Jerin Jacob
2016-12-06  3:52   ` [PATCH v2 0/6] libeventdev API and northbound implementation Jerin Jacob
2016-12-06  3:52     ` [PATCH v2 1/6] eventdev: introduce event driven programming model Jerin Jacob
2016-12-06 16:51       ` Bruce Richardson
2016-12-07 18:53         ` Jerin Jacob
2016-12-08  9:30           ` Bruce Richardson
2016-12-08 20:41             ` Jerin Jacob
2016-12-09 15:11               ` Bruce Richardson
2016-12-14  6:55                 ` Jerin Jacob
2016-12-07 10:57       ` Van Haaren, Harry
2016-12-08  1:24         ` Jerin Jacob
2016-12-08 11:02           ` Van Haaren, Harry
2016-12-14 13:13             ` Jerin Jacob
2016-12-14 15:15               ` Bruce Richardson
2016-12-15 16:54               ` Van Haaren, Harry
2016-12-07 11:12       ` Bruce Richardson
2016-12-08  1:48         ` Jerin Jacob
2016-12-08  9:57           ` Bruce Richardson
2016-12-14  6:40             ` Jerin Jacob
2016-12-14 15:19       ` Bruce Richardson
2016-12-15 13:39         ` Jerin Jacob
2016-12-06  3:52     ` [PATCH v2 2/6] eventdev: define southbound driver interface Jerin Jacob
2016-12-06  3:52     ` [PATCH v2 3/6] eventdev: implement the northbound APIs Jerin Jacob
2016-12-06 17:17       ` Bruce Richardson
2016-12-07 17:02         ` Jerin Jacob
2016-12-08  9:59           ` Bruce Richardson
2016-12-14  6:28             ` Jerin Jacob
2016-12-06  3:52     ` [PATCH v2 4/6] eventdev: implement PMD registration functions Jerin Jacob
2016-12-06  3:52     ` [PATCH v2 5/6] event/skeleton: add skeleton eventdev driver Jerin Jacob
2016-12-06  3:52     ` [PATCH v2 6/6] app/test: unit test case for eventdev APIs Jerin Jacob
2016-12-06 16:46     ` [PATCH v2 0/6] libeventdev API and northbound implementation Bruce Richardson
2016-12-21  9:25     ` [PATCH v4 " Jerin Jacob
2016-12-21  9:25       ` [PATCH v4 1/6] eventdev: introduce event driven programming model Jerin Jacob
2017-01-25 16:32         ` Eads, Gage
2017-01-25 16:36           ` Richardson, Bruce
2017-01-25 16:53             ` Eads, Gage
2017-01-25 22:36               ` Eads, Gage
2017-01-26  9:39                 ` Jerin Jacob
2017-01-26 20:39                   ` Eads, Gage
2017-01-27 10:03                     ` Bruce Richardson
2017-01-30 10:42                     ` Jerin Jacob
2017-02-02 11:18         ` Nipun Gupta
2017-02-02 14:09           ` Jerin Jacob
2017-02-03  6:38             ` Nipun Gupta
2017-02-03 10:58               ` Hemant Agrawal
2017-02-07  4:59                 ` Jerin Jacob
2016-12-21  9:25       ` [PATCH v4 2/6] eventdev: define southbound driver interface Jerin Jacob
2017-02-02 11:19         ` Nipun Gupta
2017-02-02 11:34           ` Bruce Richardson
2017-02-02 12:53             ` Nipun Gupta
2017-02-02 13:58               ` Bruce Richardson
2017-02-03  5:59                 ` Nipun Gupta
2016-12-21  9:25       ` [PATCH v4 3/6] eventdev: implement the northbound APIs Jerin Jacob
2017-02-02 11:19         ` Nipun Gupta
2017-02-02 14:32           ` Jerin Jacob
2017-02-03  6:59             ` Nipun Gupta
2016-12-21  9:25       ` [PATCH v4 4/6] eventdev: implement PMD registration functions Jerin Jacob
2017-02-02 11:20         ` Nipun Gupta
2017-02-05 13:04           ` Jerin Jacob
2016-12-21  9:25       ` [PATCH v4 5/6] event/skeleton: add skeleton eventdev driver Jerin Jacob
2016-12-21  9:25       ` [PATCH v4 6/6] app/test: unit test case for eventdev APIs Jerin Jacob
2016-11-18  5:45 ` [PATCH 2/4] eventdev: implement the northbound APIs Jerin Jacob
2016-11-21 17:45   ` Eads, Gage
2016-11-21 19:13     ` Jerin Jacob
2016-11-21 19:31       ` Jerin Jacob
2016-11-22 15:15         ` Eads, Gage
2016-11-22 18:19           ` Jerin Jacob
2016-11-22 19:43             ` Eads, Gage
2016-11-22 20:00               ` Jerin Jacob
2016-11-22 22:48                 ` Eads, Gage
2016-11-22 23:43                   ` Jerin Jacob
2016-11-28 15:53                     ` Eads, Gage
2016-11-29  2:01                       ` Jerin Jacob
2016-11-29  3:43                       ` Jerin Jacob
2016-11-29  5:46                         ` Eads, Gage
2016-11-23  9:57           ` Bruce Richardson
2016-11-23 19:18   ` Thomas Monjalon
2016-11-25  4:17     ` Jerin Jacob
2016-11-25  9:55       ` Richardson, Bruce
2016-11-25 23:08         ` Jerin Jacob
2016-11-18  5:45 ` [PATCH 3/4] event/skeleton: add skeleton eventdev driver Jerin Jacob
2016-11-18  5:45 ` [PATCH 4/4] app/test: unit test case for eventdev APIs Jerin Jacob
2016-11-18 15:25 ` [PATCH 0/4] libeventdev API and northbound implementation Bruce Richardson
2016-11-18 16:04   ` Bruce Richardson
2016-11-18 19:27     ` Jerin Jacob
2016-11-21  9:40       ` Thomas Monjalon
2016-11-21  9:57         ` Bruce Richardson
2016-11-22  0:11           ` Thomas Monjalon
2016-11-22  2:00       ` Yuanhan Liu
2016-11-22  9:05         ` Shreyansh Jain
