* Re: [PATCH v5 6/7] eventdev: add eth Rx adapter implementation
  2017-10-06 21:10 ` [PATCH v5 6/7] eventdev: add eth Rx adapter implementation Nikhil Rao
@ 2017-10-06 14:34   ` Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 17+ messages in thread
From: Pavan Nikhilesh Bhagavatula @ 2017-10-06 14:34 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: dev

Hi Nikhil,

I have verified the series with the OcteonTx HW and found a few issues,
mentioned below. I will send the HW driver in a while.

On Sat, Oct 07, 2017 at 02:40:00AM +0530, Nikhil Rao wrote:
> The adapter implementation uses eventdev PMDs to configure the packet
> transfer if HW support is available and if not, it uses an EAL service
> function that reads packets from ethernet Rx queues and injects these
> as events into the event device.
>
> Signed-off-by: Gage Eads <gage.eads@intel.com>
> Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> ---
>  lib/librte_eventdev/rte_event_eth_rx_adapter.c | 1237 ++++++++++++++++++++++++
>  lib/Makefile                                   |    2 +-
>  lib/librte_eventdev/Makefile                   |    1 +
>  lib/librte_eventdev/rte_eventdev_version.map   |    9 +
>  4 files changed, 1248 insertions(+), 1 deletion(-)
>  create mode 100644 lib/librte_eventdev/rte_event_eth_rx_adapter.c
>
> diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> new file mode 100644
> index 000000000..0823aee16
> --- /dev/null
> +++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
> @@ -0,0 +1,1237 @@
> +#include <rte_cycles.h>
> +#include <rte_common.h>
> +#include <rte_dev.h>
> +#include <rte_errno.h>
> +#include <rte_ethdev.h>
> +#include <rte_log.h>
> +#include <rte_malloc.h>
> +#include <rte_service_component.h>
> +#include <rte_thash.h>
> +
> +#include "rte_eventdev.h"
> +#include "rte_eventdev_pmd.h"
> +#include "rte_event_eth_rx_adapter.h"
> +
> +#define BATCH_SIZE		32
> +#define BLOCK_CNT_THRESHOLD	10
> +#define ETH_EVENT_BUFFER_SIZE	(4*BATCH_SIZE)
> +
> +#define ETH_RX_ADAPTER_SERVICE_NAME_LEN	32
> +#define ETH_RX_ADAPTER_MEM_NAME_LEN	32
> +
> +/*
> + * There is an instance of this struct per polled Rx queue added to the
> + * adapter
> + */
> +struct eth_rx_poll_entry {
> +	/* Eth port to poll */
> +	uint8_t eth_dev_id;
> +	/* Eth rx queue to poll */
> +	uint16_t eth_rx_qid;
> +};
> +
> +/* Instance per adapter */
> +struct rte_eth_event_enqueue_buffer {
> +	/* Count of events in this buffer */
> +	uint16_t count;
> +	/* Array of events in this buffer */
> +	struct rte_event events[ETH_EVENT_BUFFER_SIZE];
> +};
> +
> +struct rte_event_eth_rx_adapter {
> +	/* RSS key */
> +	uint8_t rss_key_be[40];

Use a #define or a compile-time config parameter instead of hardcoding 40
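e.g., a minimal standalone illustration; the macro name below is just a
placeholder, pick whatever fits the eventdev naming conventions:

```c
#include <stdint.h>

/* Placeholder name for illustration only */
#define RSS_KEY_BYTE_LEN 40

struct rx_adapter_sketch {
	/* sized by a named constant instead of a bare 40 */
	uint8_t rss_key_be[RSS_KEY_BYTE_LEN];
};
```

That keeps the Toeplitz key length in one place if it ever needs to change.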

> +	/* Event device identifier */
> +	uint8_t eventdev_id;
> +	/* Per ethernet device structure */
> +	struct eth_device_info *eth_devices;
> +	/* Event port identifier */
> +	uint8_t event_port_id;
> +	/* Lock to serialize config updates with service function */
> +	rte_spinlock_t rx_lock;
> +	/* Max mbufs processed in any service function invocation */
> +	uint32_t max_nb_rx;
> +	/* Receive queues that need to be polled */
> +	struct eth_rx_poll_entry *eth_rx_poll;
> +

<snip>

> +static int add_rx_queue(struct rte_event_eth_rx_adapter *rx_adapter,
> +		uint8_t eth_dev_id,
> +		int rx_queue_id,
> +		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
> +{
> +	struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id];
> +	uint32_t i;
> +	int ret;
> +
> +	if (queue_conf->servicing_weight == 0) {
> +		struct rte_event_eth_rx_adapter_queue_conf temp_conf;

temp_conf should be declared outside the if block. It goes out of scope at
the end of the block, so after the assignment queue_conf = &temp_conf,
dereferencing queue_conf is undefined behavior.

> +
> +		struct rte_eth_dev_data *data = dev_info->dev->data;
> +		if (data->dev_conf.intr_conf.rxq) {
> +			RTE_EDEV_LOG_ERR("Interrupt driven queues"
> +					" not supported");
> +			return -ENOTSUP;
> +		}
> +		temp_conf = *queue_conf;
> +		temp_conf.servicing_weight = 1;
> +		/* If Rx interrupts are disabled set wt = 1 */
> +		queue_conf = &temp_conf;
> +	}
> +
> +	if (dev_info->rx_queue == NULL) {
<snip>

Thanks,
Pavan

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter
@ 2017-10-06 21:09 Nikhil Rao
  2017-10-06 21:09 ` [PATCH v5 1/7] eventdev: add caps API and PMD callback for " Nikhil Rao
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Nikhil Rao @ 2017-10-06 21:09 UTC (permalink / raw)
  To: jerin.jacob, bruce.richardson; +Cc: dev

Eventdev-based networking applications require a component to dequeue
packets from NIC Rx queues and inject them into eventdev queues[1]. While
some platforms (e.g. Cavium Octeontx) do this operation in hardware, other
platforms use software.

This patchset introduces an ethernet Rx event adapter that dequeues packets
from ethernet devices and enqueues them to event devices. It is based on
a previous RFC[2] and supersedes [3]; the main difference is that this
version implements a common abstraction for HW- and SW-based packet
transfers.

The adapter is designed to work with the EAL service core[4] for SW-based
packet transfers. An eventdev PMD callback is used to determine whether a
SW-based packet transfer service is required. The application can discover
and configure the service with a core mask using the rte_service APIs.

The adapter can service multiple ethernet devices and queues. For SW-based
packet transfers, each queue is configured with a servicing weight to
control the relative frequency with which the adapter polls the queue,
and with the event fields to use when constructing packet events. The
adapter has two modes for programming an event's flow ID: use a static
per-queue user-specified value, or use the RSS hash.

A detailed description of the adapter is contained in the header's
comments.

[1] http://dpdk.org/ml/archives/dev/2017-May/065341.html
[2] http://dpdk.org/ml/archives/dev/2017-May/065539.html
[3] http://dpdk.org/ml/archives/dev/2017-July/070452.html
[4] http://dpdk.org/ml/archives/dev/2017-July/069782.html

v3:
- This patch extends the V2 implementation with some changes to
provide a common API across HW & SW packet transfer mechanisms from
the eth devices to the event devices.
- Introduces a caps API that is used by the apps and the Rx
adapter to configure the packet transfer path.
- Adds an API to retrieve the service ID of the service function
(if one is used by the adapter)

v4:
- Unit test fixes (Santosh Shukla)
- Pass ethernet device pointer to eventdev callback functions
instead of ethernet port ID. (Nipun Gupta)
- Check for existence of stats callback before invocation (Nipun Gupta)
- Various code cleanups (Harry Van Haaren)

v5:
- Add RTE_EVENT_TYPE_ETH_RX_ADAPTER (Jerin)
- Add header to documentation build (Jerin)
- Split patch series into smaller patches (Jerin)
- Replace RTE_EVENT_ETH_RX_ADAPTER_CAP_FLOW_ID with
RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID (Jerin)
- If the eth_rx_adapter_caps_get PMD callback isn't implemented,
return zeroed caps (Jerin)
- Replace RTE_EVENT_ETH_RX_ADAPTER_CAP_SINGLE_EVENTQ with
RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ (Jerin)
- Replace RTE_MAX_EVENT_ETH_RX_ADAPTER_INSTANCE with
RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE (Jerin)
- Move rss_key_be to per adapter memory
- Various other fixes (Jerin)
- Test fixes for mempool (Santosh)
- Change order for event v/s eth port init in test app (Pavan)

Nikhil Rao (7):
  eventdev: add caps API and PMD callback for eth Rx adapter
  eventdev: add PMD callbacks for eth Rx adapter
  eventdev: add eth Rx adapter caps function to SW PMD
  eventdev: add eth Rx adapter API header
  eventdev: add event type for eth rx adapter
  eventdev: add eth Rx adapter implementation
  eventdev: add tests for eth Rx adapter APIs

 lib/librte_eventdev/rte_event_eth_rx_adapter.h |  392 ++++++++
 lib/librte_eventdev/rte_eventdev.h             |   41 +
 lib/librte_eventdev/rte_eventdev_pmd.h         |  182 ++++
 drivers/event/sw/sw_evdev.c                    |   15 +
 lib/librte_eventdev/rte_event_eth_rx_adapter.c | 1237 ++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev.c             |   23 +
 test/test/test_event_eth_rx_adapter.c          |  453 +++++++++
 MAINTAINERS                                    |    4 +
 doc/api/doxy-api-index.md                      |    1 +
 lib/Makefile                                   |    2 +-
 lib/librte_eventdev/Makefile                   |    2 +
 lib/librte_eventdev/rte_eventdev_version.map   |   15 +
 test/test/Makefile                             |    1 +
 13 files changed, 2367 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_rx_adapter.h
 create mode 100644 lib/librte_eventdev/rte_event_eth_rx_adapter.c
 create mode 100644 test/test/test_event_eth_rx_adapter.c

-- 
2.14.1.145.gb3622a4


* [PATCH v5 1/7] eventdev: add caps API and PMD callback for eth Rx adapter
  2017-10-06 21:09 [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Nikhil Rao
@ 2017-10-06 21:09 ` Nikhil Rao
  2017-10-09 12:03   ` Jerin Jacob
  2017-10-06 21:09 ` [PATCH v5 2/7] eventdev: add PMD callbacks " Nikhil Rao
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Nikhil Rao @ 2017-10-06 21:09 UTC (permalink / raw)
  To: jerin.jacob, bruce.richardson; +Cc: dev

The caps API allows the application to retrieve the capability
information needed to configure the ethernet Rx adapter for an
eventdev and ethdev pair.

For example, the ethdev-eventdev pairing may be such that all of the
ethdev Rx queues can only be connected to a single event queue; in
this case the application is required to pass in -1 as the queue id
when adding a receive queue to the adapter.
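A sketch of the decision the application makes based on the returned caps
(the flag values and names below are stand-ins for the
RTE_EVENT_ETH_RX_ADAPTER_CAP_* definitions, purely for illustration):

```c
#include <stdint.h>

#define CAP_INTERNAL_PORT 0x1	/* stand-in for ..._CAP_INTERNAL_PORT */
#define CAP_MULTI_EVENTQ  0x2	/* stand-in for ..._CAP_MULTI_EVENTQ */

/* If the pairing cannot map Rx queues to distinct event queues
 * (MULTI_EVENTQ not set), pass -1 instead of a specific queue id
 * when adding a receive queue to the adapter. */
static int32_t choose_rx_queue_id(uint32_t caps, int32_t wanted)
{
	return (caps & CAP_MULTI_EVENTQ) ? wanted : -1;
}
```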

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 lib/librte_eventdev/rte_eventdev.h           | 39 ++++++++++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev_pmd.h       | 29 +++++++++++++++++++++
 lib/librte_eventdev/rte_eventdev.c           | 23 ++++++++++++++++
 lib/Makefile                                 |  2 +-
 lib/librte_eventdev/rte_eventdev_version.map |  6 +++++
 5 files changed, 98 insertions(+), 1 deletion(-)

diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 128bc5221..84143a120 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -990,6 +990,45 @@ struct rte_event {
 	};
 };
 
+/* Ethdev Rx adapter capability bitmap flags */
+#define RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT	0x1
+/**< This flag is sent when the packet transfer mechanism is in HW.
+ * Ethdev can send packets to the event device using internal event port.
+ */
+#define RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ	0x2
+/**< Adapter supports multiple event queues per ethdev. Every ethdev
+ * Rx queue can be connected to a unique event queue.
+ */
+#define RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID	0x4
+/**< The application can override the adapter generated flow ID in the
+ * event. This flow ID can be specified when adding an ethdev Rx queue
+ * to the adapter using the ev member of struct rte_event_eth_rx_adapter
+ * @see struct rte_event_eth_rx_adapter_queue_conf::ev
+ * @see struct rte_event_eth_rx_adapter_queue_conf::rx_queue_flags
+ */
+
+/**
+ * Retrieve the event device's ethdev Rx adapter capabilities for the
+ * specified ethernet port
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param eth_port_id
+ *   The identifier of the ethernet device.
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with Rx event adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides Rx event adapter capabilities for the
+ *	ethernet device.
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+int
+rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint8_t eth_port_id,
+				uint32_t *caps);
 
 struct rte_eventdev_driver;
 struct rte_eventdev_ops;
diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 3d72acf3a..0836f9af5 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -86,6 +86,8 @@ extern "C" {
 #define RTE_EVENTDEV_DETACHED  (0)
 #define RTE_EVENTDEV_ATTACHED  (1)
 
+struct rte_eth_dev;
+
 /** Global structure used for maintaining state of allocated event devices */
 struct rte_eventdev_global {
 	uint8_t nb_devs;	/**< Number of devices found */
@@ -429,6 +431,30 @@ typedef int (*eventdev_xstats_get_names_t)(const struct rte_eventdev *dev,
 typedef uint64_t (*eventdev_xstats_get_by_name)(const struct rte_eventdev *dev,
 		const char *name, unsigned int *id);
 
+
+/**
+ * Retrieve the event device's ethdev Rx adapter capabilities for the
+ * specified ethernet port
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param[out] caps
+ *   A pointer to memory filled with Rx event adapter capabilities.
+ *
+ * @return
+ *   - 0: Success, driver provides Rx event adapter capabilities for the
+ *	ethernet device.
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+typedef int (*eventdev_eth_rx_adapter_caps_get_t)
+					(const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					uint32_t *caps);
 /** Event device operations function pointer table */
 struct rte_eventdev_ops {
 	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
@@ -468,6 +494,9 @@ struct rte_eventdev_ops {
 	/**< Get one value by name. */
 	eventdev_xstats_reset_t xstats_reset;
 	/**< Reset the statistics values in xstats. */
+
+	eventdev_eth_rx_adapter_caps_get_t eth_rx_adapter_caps_get;
+	/**< Get ethernet Rx adapter capabilities */
 };
 
 /**
diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c
index bbb380502..d4bd37e0a 100644
--- a/lib/librte_eventdev/rte_eventdev.c
+++ b/lib/librte_eventdev/rte_eventdev.c
@@ -56,6 +56,7 @@
 #include <rte_common.h>
 #include <rte_malloc.h>
 #include <rte_errno.h>
+#include <rte_ethdev.h>
 
 #include "rte_eventdev.h"
 #include "rte_eventdev_pmd.h"
@@ -128,6 +129,28 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info)
 	return 0;
 }
 
+int
+rte_event_eth_rx_adapter_caps_get(uint8_t dev_id, uint8_t eth_port_id,
+				uint32_t *caps)
+{
+	struct rte_eventdev *dev;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_port_id, -EINVAL);
+
+	dev = &rte_eventdevs[dev_id];
+
+	if (caps == NULL)
+		return -EINVAL;
+	*caps = 0;
+
+	return dev->dev_ops->eth_rx_adapter_caps_get ?
+				(*dev->dev_ops->eth_rx_adapter_caps_get)(dev,
+						&rte_eth_devices[eth_port_id],
+						caps)
+				: 0;
+}
+
 static inline int
 rte_event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {
diff --git a/lib/Makefile b/lib/Makefile
index 86caba17b..ccff22c39 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -52,7 +52,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
 DEPDIRS-librte_cryptodev += librte_kvargs
 DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
-DEPDIRS-librte_eventdev := librte_eal librte_ring
+DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DEPDIRS-librte_vhost := librte_eal librte_mempool librte_mbuf librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 4c48e5f0a..c181fab95 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -51,3 +51,9 @@ DPDK_17.08 {
 	rte_event_ring_init;
 	rte_event_ring_lookup;
 } DPDK_17.05;
+
+DPDK_17.11 {
+	global:
+
+	rte_event_eth_rx_adapter_caps_get;
+} DPDK_17.08;
-- 
2.14.1.145.gb3622a4


* [PATCH v5 2/7] eventdev: add PMD callbacks for eth Rx adapter
  2017-10-06 21:09 [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Nikhil Rao
  2017-10-06 21:09 ` [PATCH v5 1/7] eventdev: add caps API and PMD callback for " Nikhil Rao
@ 2017-10-06 21:09 ` Nikhil Rao
  2017-10-09 12:05   ` Jerin Jacob
  2017-10-06 21:09 ` [PATCH v5 3/7] eventdev: add eth Rx adapter caps function to SW PMD Nikhil Rao
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Nikhil Rao @ 2017-10-06 21:09 UTC (permalink / raw)
  To: jerin.jacob, bruce.richardson; +Cc: dev

The PMD callbacks are used by the rte_event_eth_rx_xxx() APIs to
configure and control the ethernet receive adapter when packet transfers
from the ethdev to the eventdev are implemented in hardware.
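The dispatch pattern for these optional callbacks can be sketched as
follows (simplified, with hypothetical names; the real table is
struct rte_eventdev_ops and a missing callback means zeroed caps):

```c
#include <errno.h>
#include <stdint.h>
#include <stddef.h>

typedef int (*caps_get_t)(uint32_t *caps);

struct ops_sketch { caps_get_t eth_rx_adapter_caps_get; };

/* Zero the caps first, then invoke the PMD callback only if the driver
 * implements it; an absent callback yields zeroed caps and success,
 * so a SW-based transfer path is assumed. */
static int caps_get(const struct ops_sketch *ops, uint32_t *caps)
{
	if (caps == NULL)
		return -EINVAL;
	*caps = 0;
	return ops->eth_rx_adapter_caps_get ?
			ops->eth_rx_adapter_caps_get(caps) : 0;
}
```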

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 lib/librte_eventdev/rte_eventdev_pmd.h | 145 +++++++++++++++++++++++++++++++++
 1 file changed, 145 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 0836f9af5..9f3188fc8 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -455,6 +455,139 @@ typedef int (*eventdev_eth_rx_adapter_caps_get_t)
 					(const struct rte_eventdev *dev,
 					const struct rte_eth_dev *eth_dev,
 					uint32_t *caps);
+
+struct rte_event_eth_rx_adapter_queue_conf *queue_conf;
+
+/**
+ * Add ethernet Rx queues to event device. This callback is invoked if
+ * the caps returned from rte_eventdev_eth_rx_adapter_caps_get(, eth_port_id)
+ * has RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT set.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param rx_queue_id
+ *   Ethernet device receive queue index
+ *
+ * @param queue_conf
+ *  Additional configuration structure
+ *
+ * @return
+ *   - 0: Success, ethernet receive queue added successfully.
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+typedef int (*eventdev_eth_rx_adapter_queue_add_t)(
+		const struct rte_eventdev *dev,
+		const struct rte_eth_dev *eth_dev,
+		int32_t rx_queue_id,
+		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
+
+/**
+ * Delete ethernet Rx queues from event device. This callback is invoked if
+ * the caps returned from eventdev_eth_rx_adapter_caps_get(, eth_port_id)
+ * has RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT set.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param rx_queue_id
+ *   Ethernet device receive queue index
+ *
+ * @return
+ *   - 0: Success, ethernet receive queue deleted successfully.
+ *   - <0: Error code returned by the driver function.
+ *
+ */
+typedef int (*eventdev_eth_rx_adapter_queue_del_t)
+					(const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev,
+					int32_t rx_queue_id);
+
+/**
+ * Start ethernet Rx adapter. This callback is invoked if
+ * the caps returned from eventdev_eth_rx_adapter_caps_get(.., eth_port_id)
+ * has RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT set and Rx queues
+ * from eth_port_id have been added to the event device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @return
+ *   - 0: Success, ethernet Rx adapter started successfully.
+ *   - <0: Error code returned by the driver function.
+ */
+typedef int (*eventdev_eth_rx_adapter_start_t)
+					(const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev);
+
+/**
+ * Stop ethernet Rx adapter. This callback is invoked if
+ * the caps returned from eventdev_eth_rx_adapter_caps_get(..,eth_port_id)
+ * has RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT set and Rx queues
+ * from eth_port_id have been added to the event device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @return
+ *   - 0: Success, ethernet Rx adapter stopped successfully.
+ *   - <0: Error code returned by the driver function.
+ */
+typedef int (*eventdev_eth_rx_adapter_stop_t)
+					(const struct rte_eventdev *dev,
+					const struct rte_eth_dev *eth_dev);
+
+struct rte_event_eth_rx_adapter_stats *stats;
+
+/**
+ * Retrieve ethernet Rx adapter statistics.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @param[out] stats
+ *   Pointer to stats structure
+ *
+ * @return
+ *   Return 0 on success.
+ */
+
+typedef int (*eventdev_eth_rx_adapter_stats_get)
+			(const struct rte_eventdev *dev,
+			const struct rte_eth_dev *eth_dev,
+			struct rte_event_eth_rx_adapter_stats *stats);
+/**
+ * Reset ethernet Rx adapter statistics.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param eth_dev
+ *   Ethernet device pointer
+ *
+ * @return
+ *   Return 0 on success.
+ */
+typedef int (*eventdev_eth_rx_adapter_stats_reset)
+			(const struct rte_eventdev *dev,
+			const struct rte_eth_dev *eth_dev);
+
 /** Event device operations function pointer table */
 struct rte_eventdev_ops {
 	eventdev_info_get_t dev_infos_get;	/**< Get device info. */
@@ -497,6 +630,18 @@ struct rte_eventdev_ops {
 
 	eventdev_eth_rx_adapter_caps_get_t eth_rx_adapter_caps_get;
 	/**< Get ethernet Rx adapter capabilities */
+	eventdev_eth_rx_adapter_queue_add_t eth_rx_adapter_queue_add;
+	/**< Add Rx queues to ethernet Rx adapter */
+	eventdev_eth_rx_adapter_queue_del_t eth_rx_adapter_queue_del;
+	/**< Delete Rx queues from ethernet Rx adapter */
+	eventdev_eth_rx_adapter_start_t eth_rx_adapter_start;
+	/**< Start ethernet Rx adapter */
+	eventdev_eth_rx_adapter_stop_t eth_rx_adapter_stop;
+	/**< Stop ethernet Rx adapter */
+	eventdev_eth_rx_adapter_stats_get eth_rx_adapter_stats_get;
+	/**< Get ethernet Rx stats */
+	eventdev_eth_rx_adapter_stats_reset eth_rx_adapter_stats_reset;
+	/**< Reset ethernet Rx stats */
 };
 
 /**
-- 
2.14.1.145.gb3622a4


* [PATCH v5 3/7] eventdev: add eth Rx adapter caps function to SW PMD
  2017-10-06 21:09 [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Nikhil Rao
  2017-10-06 21:09 ` [PATCH v5 1/7] eventdev: add caps API and PMD callback for " Nikhil Rao
  2017-10-06 21:09 ` [PATCH v5 2/7] eventdev: add PMD callbacks " Nikhil Rao
@ 2017-10-06 21:09 ` Nikhil Rao
  2017-10-09 12:06   ` Jerin Jacob
  2017-10-06 21:09 ` [PATCH v5 4/7] eventdev: add eth Rx adapter API header Nikhil Rao
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Nikhil Rao @ 2017-10-06 21:09 UTC (permalink / raw)
  To: jerin.jacob, bruce.richardson; +Cc: dev

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 lib/librte_eventdev/rte_eventdev_pmd.h |  8 ++++++++
 drivers/event/sw/sw_evdev.c            | 15 +++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
index 9f3188fc8..4369d9b8c 100644
--- a/lib/librte_eventdev/rte_eventdev_pmd.h
+++ b/lib/librte_eventdev/rte_eventdev_pmd.h
@@ -83,6 +83,14 @@ extern "C" {
 	} \
 } while (0)
 
+#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
+		((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
+			(RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ))
+
+/**< Ethernet Rx adapter cap to return if the packet transfers from
+ * the ethdev to the eventdev use a SW service function
+ */
+
 #define RTE_EVENTDEV_DETACHED  (0)
 #define RTE_EVENTDEV_ATTACHED  (1)
 
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index da6ac30f4..aed8b728f 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -437,6 +437,19 @@ sw_dev_configure(const struct rte_eventdev *dev)
 	return 0;
 }
 
+struct rte_eth_dev;
+
+static int
+sw_eth_rx_adapter_caps_get(const struct rte_eventdev *dev,
+			const struct rte_eth_dev *eth_dev,
+			uint32_t *caps)
+{
+	RTE_SET_USED(dev);
+	RTE_SET_USED(eth_dev);
+	*caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
+	return 0;
+}
+
 static void
 sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
 {
@@ -751,6 +764,8 @@ sw_probe(struct rte_vdev_device *vdev)
 			.port_link = sw_port_link,
 			.port_unlink = sw_port_unlink,
 
+			.eth_rx_adapter_caps_get = sw_eth_rx_adapter_caps_get,
+
 			.xstats_get = sw_xstats_get,
 			.xstats_get_names = sw_xstats_get_names,
 			.xstats_get_by_name = sw_xstats_get_by_name,
-- 
2.14.1.145.gb3622a4


* [PATCH v5 4/7] eventdev: add eth Rx adapter API header
  2017-10-06 21:09 [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Nikhil Rao
                   ` (2 preceding siblings ...)
  2017-10-06 21:09 ` [PATCH v5 3/7] eventdev: add eth Rx adapter caps function to SW PMD Nikhil Rao
@ 2017-10-06 21:09 ` Nikhil Rao
  2017-10-09 12:27   ` Jerin Jacob
  2017-10-06 21:09 ` [PATCH v5 5/7] eventdev: add event type for eth rx adapter Nikhil Rao
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Nikhil Rao @ 2017-10-06 21:09 UTC (permalink / raw)
  To: jerin.jacob, bruce.richardson; +Cc: dev

Add common APIs for configuring packet transfer from ethernet Rx
queues to event devices across HW & SW packet transfer mechanisms.
A detailed description of the adapter is contained in the header's
comments.

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 lib/librte_eventdev/rte_event_eth_rx_adapter.h | 392 +++++++++++++++++++++++++
 MAINTAINERS                                    |   3 +
 doc/api/doxy-api-index.md                      |   1 +
 lib/librte_eventdev/Makefile                   |   1 +
 4 files changed, 397 insertions(+)
 create mode 100644 lib/librte_eventdev/rte_event_eth_rx_adapter.h

diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.h b/lib/librte_eventdev/rte_event_eth_rx_adapter.h
new file mode 100644
index 000000000..a94aa1c2d
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.h
@@ -0,0 +1,392 @@
+/*
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_EVENT_ETH_RX_ADAPTER_
+#define _RTE_EVENT_ETH_RX_ADAPTER_
+
+/**
+ * @file
+ *
+ * RTE Event Ethernet Rx Adapter
+ *
+ * An eventdev-based packet processing application enqueues/dequeues mbufs
+ * to/from the event device. Packet flow from the ethernet device to the event
+ * device can be accomplished using either HW or SW mechanisms depending on the
+ * platform and the particular combination of ethernet and event devices. The
+ * event ethernet Rx adapter provides common APIs to configure the packet flow
+ * from the ethernet devices to event devices across both these transfer
+ * mechanisms.
+ *
+ * The adapter uses an EAL service core function for SW-based packet transfer
+ * and uses the eventdev PMD functions to configure HW based packet transfer
+ * between the ethernet device and the event device.
+ *
+ * The ethernet Rx event adapter's functions are:
+ *  - rte_event_eth_rx_adapter_create_ext()
+ *  - rte_event_eth_rx_adapter_create()
+ *  - rte_event_eth_rx_adapter_free()
+ *  - rte_event_eth_rx_adapter_queue_add()
+ *  - rte_event_eth_rx_adapter_queue_del()
+ *  - rte_event_eth_rx_adapter_start()
+ *  - rte_event_eth_rx_adapter_stop()
+ *  - rte_event_eth_rx_adapter_stats_get()
+ *  - rte_event_eth_rx_adapter_stats_reset()
+ *
+ * The application creates an ethernet to event adapter using
+ * rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
+ * functions.
+ * The adapter needs to know which ethernet rx queues to poll for mbufs as well
+ * as event device parameters such as the event queue identifier, event
+ * priority and scheduling type that the adapter should use when constructing
+ * events. The rte_event_eth_rx_adapter_queue_add() function is provided for
+ * this purpose.
+ * The servicing weight parameter in the rte_event_eth_rx_adapter_queue_conf
+ * is applicable when the Rx adapter uses a service core function and is
+ * intended to provide application control of the frequency of polling ethernet
+ * device receive queues, for example, the application may want to poll higher
+ * priority queues with a higher frequency but at the same time not starve
+ * lower priority queues completely. If this parameter is zero and the receive
+ * interrupt is enabled when configuring the device, the receive queue is
+ * interrupt driven; else, the queue is assigned a servicing weight of one.
+ *
+ * The application can start/stop the adapter using the
+ * rte_event_eth_rx_adapter_start() and the rte_event_eth_rx_adapter_stop()
+ * functions. If the adapter uses a rte_service function, then the application
+ * is also required to assign a core to the service function and control the
+ * service core using the rte_service APIs. The
+ * rte_event_eth_rx_adapter_service_id_get() function can be used to retrieve
+ * the service function ID of the adapter in this case.
+ *
+ * Note: Interrupt driven receive queues are currently unimplemented.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+#include <rte_service.h>
+
+#include "rte_eventdev.h"
+
+#define RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE 32
+
+/* struct rte_event_eth_rx_adapter_queue_conf flags definitions */
+#define RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID	0x1
+/**< This flag indicates the flow identifier is valid
+ * @see rte_event_eth_rx_adapter_queue_conf::rx_queue_flags
+ */
+
+struct rte_event_eth_rx_adapter_conf {
+	uint8_t event_port_id;
+	/**< Event port identifier, the adapter enqueues mbuf events to this
+	 * port.
+	 */
+	uint32_t max_nb_rx;
+	/**< The adapter can return early if it has processed at least
+	 * max_nb_rx mbufs. This isn't treated as a requirement; batching may
+	 * cause the adapter to process more than max_nb_rx mbufs.
+	 */
+};
+
+/**
+ * Function type used for adapter configuration callback. The callback is
+ * used to fill in members of the struct rte_event_eth_rx_adapter_conf and
+ * is invoked when creating a SW service for packet transfer from ethdev
+ * queues to the event device. The SW service is created within the
+ * rte_event_eth_rx_adapter_queue_add() function if SW based packet transfers
+ * from ethdev queues to the event device are required.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param dev_id
+ *  Event device identifier.
+ *
+ * @param [out] conf
+ *  Structure that needs to be populated by this callback.
+ *
+ * @param arg
+ *  Argument to the callback. This is the same as the conf_arg passed to
+ *  rte_event_eth_rx_adapter_create_ext().
+ */
+typedef int (*rte_event_eth_rx_adapter_conf_cb) (uint8_t id, uint8_t dev_id,
+			struct rte_event_eth_rx_adapter_conf *conf,
+			void *arg);
+
+/** Rx queue configuration structure */
+struct rte_event_eth_rx_adapter_queue_conf {
+	uint32_t rx_queue_flags;
+	 /**< Flags for handling received packets
+	  * @see RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID
+	  */
+	uint16_t servicing_weight;
+	/**< Relative polling frequency of the ethernet receive queue when the
+	 * adapter uses a service core function for ethernet to event device
+	 * transfers. If set to zero, the Rx queue is interrupt driven,
+	 * provided Rx queue interrupts have been enabled for the ethernet
+	 * device.
+	 */
+	struct rte_event ev;
+	/**<
+	 *  The values from the following event fields will be used when
+	 *  queuing mbuf events:
+	 *   - event_queue_id: Targeted event queue ID for received packets.
+	 *   - event_priority: Event priority of packets from this Rx queue in
+	 *                     the event queue relative to other events.
+	 *   - sched_type: Scheduling type for packets from this Rx queue.
+	 *   - flow_id: If the RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID
+	 *		bit is set in rx_queue_flags, this flow_id is used for
+	 *		all packets received from this queue. Otherwise the
+	 *		flow ID is set to the RSS hash of the src and dst
+	 *		IPv4/6 addresses.
+	 *
+	 * The event adapter sets ev.event_type to RTE_EVENT_TYPE_ETHDEV in the
+	 * enqueued event.
+	 */
+};
+
+struct rte_event_eth_rx_adapter_stats {
+	uint64_t rx_poll_count;
+	/**< Receive queue poll count */
+	uint64_t rx_packets;
+	/**< Received packet count */
+	uint64_t rx_enq_count;
+	/**< Eventdev enqueue count */
+	uint64_t rx_enq_retry;
+	/**< Eventdev enqueue retry count */
+	uint64_t rx_enq_start_ts;
+	/**< Rx enqueue start timestamp */
+	uint64_t rx_enq_block_cycles;
+	/**< Cycles for which the service is blocked by the event device,
+	 * i.e., the service fails to enqueue to the event device.
+	 */
+	uint64_t rx_enq_end_ts;
+	/**< Latest timestamp at which the service is unblocked
+	 * by the event device. The start, end timestamps and
+	 * block cycles can be used to compute the percentage of
+	 * cycles the service is blocked by the event device.
+	 */
+};
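As the doc comment above notes, the start and end timestamps together with the block cycles can be turned into a blocked-percentage figure. A minimal sketch of that arithmetic (the function name and sample values are mine, not part of the patch; field semantics mirror rx_enq_start_ts, rx_enq_end_ts and rx_enq_block_cycles above):

```c
#include <stdint.h>

/* Fraction of observed cycles the service spent blocked, in percent.
 * 'start_ts'/'end_ts' bound the observation window in TSC cycles and
 * 'block_cycles' is the time spent failing to enqueue within it.
 */
static double
blocked_pct(uint64_t start_ts, uint64_t end_ts, uint64_t block_cycles)
{
	uint64_t total = end_ts - start_ts;

	return total ? (100.0 * block_cycles) / total : 0.0;
}
```

For example, an adapter observed over 1000 cycles with 250 blocked cycles would report 25% blocked.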
+
+/**
+ * Create a new ethernet Rx event adapter with the specified identifier.
+ *
+ * @param id
+ *  The identifier of the ethernet Rx event adapter.
+ *
+ * @param dev_id
+ *  The identifier of the device to configure.
+ *
+ * @param conf_cb
+ *  Callback function that fills in members of a
+ *  struct rte_event_eth_rx_adapter_conf struct passed into
+ *  it.
+ *
+ * @param conf_arg
+ *  Argument that is passed to the conf_cb function.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_rx_adapter_conf_cb conf_cb,
+				void *conf_arg);
+
+/**
+ * Create a new ethernet Rx event adapter with the specified identifier.
+ * This function uses an internal configuration function that creates an event
+ * port. This default function reconfigures the event device with an
+ * additional event port and sets up the event port using the port_config
+ * parameter passed into this function. If the application needs more
+ * control over the configuration of the service, it should use the
+ * rte_event_eth_rx_adapter_create_ext() version.
+ *
+ * @param id
+ *  The identifier of the ethernet Rx event adapter.
+ *
+ * @param dev_id
+ *  The identifier of the device to configure.
+ *
+ * @param port_config
+ *  Argument of type *rte_event_port_conf* that is passed to the conf_cb
+ *  function.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
+				struct rte_event_port_conf *port_config);
+
+/**
+ * Free an event adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure. If the adapter still has Rx queues
+ *      added to it, the function returns -EBUSY.
+ */
+int rte_event_eth_rx_adapter_free(uint8_t id);
+
+/**
+ * Add receive queue to an event adapter. After a queue has been
+ * added to the event adapter, the result of the application calling
+ * rte_eth_rx_burst(eth_dev_id, rx_queue_id, ..) is undefined.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param eth_dev_id
+ *  Port identifier of Ethernet device.
+ *
+ * @param rx_queue_id
+ *  Ethernet device receive queue index.
+ *  If rx_queue_id is -1, then all Rx queues configured for
+ *  the device are added. If the ethdev Rx queues can only be
+ *  connected to a single event queue then rx_queue_id is
+ *  required to be -1.
+ * @see RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ
+ *
+ * @param conf
+ *  Additional configuration structure of type
+ *  *rte_event_eth_rx_adapter_queue_conf*
+ *
+ * @return
+ *  - 0: Success, Receive queue added correctly.
+ *  - <0: Error code on failure.
+ */
+int rte_event_eth_rx_adapter_queue_add(uint8_t id,
+			uint8_t eth_dev_id,
+			int32_t rx_queue_id,
+			const struct rte_event_eth_rx_adapter_queue_conf *conf);
+
+/**
+ * Delete receive queue from an event adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param eth_dev_id
+ *  Port identifier of Ethernet device.
+ *
+ * @param rx_queue_id
+ *  Ethernet device receive queue index.
+ *  If rx_queue_id is -1, then all Rx queues configured for
+ *  the device are deleted. If the ethdev Rx queues can only be
+ *  connected to a single event queue then rx_queue_id is
+ *  required to be -1.
+ * @see RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ
+ *
+ * @return
+ *  - 0: Success, Receive queue deleted correctly.
+ *  - <0: Error code on failure.
+ */
+int rte_event_eth_rx_adapter_queue_del(uint8_t id, uint8_t eth_dev_id,
+				       int32_t rx_queue_id);
+
+/**
+ * Start ethernet Rx event adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @return
+ *  - 0: Success, Adapter started correctly.
+ *  - <0: Error code on failure.
+ */
+int rte_event_eth_rx_adapter_start(uint8_t id);
+
+/**
+ * Stop ethernet Rx event adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @return
+ *  - 0: Success, Adapter stopped correctly.
+ *  - <0: Error code on failure.
+ */
+int rte_event_eth_rx_adapter_stop(uint8_t id);
+
+/**
+ * Retrieve statistics for an adapter
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param [out] stats
+ *  A pointer to structure used to retrieve statistics for an adapter.
+ *
+ * @return
+ *  - 0: Success, retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+int rte_event_eth_rx_adapter_stats_get(uint8_t id,
+				struct rte_event_eth_rx_adapter_stats *stats);
+
+/**
+ * Reset statistics for an adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+int rte_event_eth_rx_adapter_stats_reset(uint8_t id);
+
+/**
+ * Retrieve the service ID of an adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param [out] service_id
+ *  A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure. If the adapter doesn't use an rte_service
+ * function, this function returns -ESRCH.
+ */
+int rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id);
+
+#ifdef __cplusplus
+}
+#endif
+#endif	/* _RTE_EVENT_ETH_RX_ADAPTER_ */
diff --git a/MAINTAINERS b/MAINTAINERS
index 8df2a7f2a..53fd50e1f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -273,6 +273,9 @@ F: lib/librte_eventdev/
 F: drivers/event/skeleton/
 F: test/test/test_eventdev.c
 
+Event Ethdev Rx Adapter API - EXPERIMENTAL
+M: Nikhil Rao <nikhil.rao@intel.com>
+F: lib/librte_eventdev/*eth_rx_adapter*
 
 Networking Drivers
 ------------------
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 19e0d4f3d..0d2102a09 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -43,6 +43,7 @@ The public API headers are grouped by topics:
   [rte_tm]             (@ref rte_tm.h),
   [cryptodev]          (@ref rte_cryptodev.h),
   [eventdev]           (@ref rte_eventdev.h),
+  [event_eth_rx_adapter]        (@ref rte_event_eth_rx_adapter.h),
   [metrics]            (@ref rte_metrics.h),
   [bitrate]            (@ref rte_bitrate.h),
   [latency]            (@ref rte_latencystats.h),
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
index 410578a14..eb1467d56 100644
--- a/lib/librte_eventdev/Makefile
+++ b/lib/librte_eventdev/Makefile
@@ -50,6 +50,7 @@ SYMLINK-y-include += rte_eventdev_pmd.h
 SYMLINK-y-include += rte_eventdev_pmd_pci.h
 SYMLINK-y-include += rte_eventdev_pmd_vdev.h
 SYMLINK-y-include += rte_event_ring.h
+SYMLINK-y-include += rte_event_eth_rx_adapter.h
 
 # versioning export map
 EXPORT_MAP := rte_eventdev_version.map
-- 
2.14.1.145.gb3622a4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v5 5/7] eventdev: add event type for eth rx adapter
  2017-10-06 21:09 [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Nikhil Rao
                   ` (3 preceding siblings ...)
  2017-10-06 21:09 ` [PATCH v5 4/7] eventdev: add eth Rx adapter API header Nikhil Rao
@ 2017-10-06 21:09 ` Nikhil Rao
  2017-10-09 12:31   ` Jerin Jacob
  2017-10-06 21:10 ` [PATCH v5 6/7] eventdev: add eth Rx adapter implementation Nikhil Rao
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Nikhil Rao @ 2017-10-06 21:09 UTC (permalink / raw)
  To: jerin.jacob, bruce.richardson; +Cc: dev

Add RTE_EVENT_TYPE_ETH_RX_ADAPTER event type. Certain platforms (e.g.,
octeontx), in the event dequeue function, need to identify events
injected from ethernet hardware into eventdev so that DPDK mbuf can be
populated from the HW descriptor.

Events injected from ethernet hardware would use an event type of
RTE_EVENT_TYPE_ETHDEV and events injected from the rx adapter service
function would use an event type of RTE_EVENT_TYPE_ETH_RX_ADAPTER to
help the event dequeue function differentiate between these two event
sources.
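To illustrate the distinction, a dequeue loop might dispatch on the event type roughly as sketched below. This is a hedged, self-contained illustration: only the 0x4 value for RTE_EVENT_TYPE_ETH_RX_ADAPTER comes from this patch; the ETHDEV value, the local struct and the function name are placeholders, not the real rte_eventdev.h definitions.

```c
#include <stdint.h>

/* Illustrative values; 0x4 is the one added by this patch, the
 * ETHDEV value is assumed here for the sake of the example.
 */
#define EVENT_TYPE_ETHDEV         0x0
#define EVENT_TYPE_ETH_RX_ADAPTER 0x4

/* Stand-in for the event_type field of struct rte_event */
struct event {
	uint8_t event_type;
};

/* Returns 1 when the mbuf still has to be populated from the HW
 * descriptor, i.e. the event was injected by ethernet hardware
 * rather than by the Rx adapter service function.
 */
static int
needs_mbuf_conversion(const struct event *ev)
{
	return ev->event_type == EVENT_TYPE_ETHDEV;
}
```

Events produced by the SW adapter service carry RTE_EVENT_TYPE_ETH_RX_ADAPTER and already reference a fully populated mbuf, so they skip the conversion path.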

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 lib/librte_eventdev/rte_eventdev.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index 84143a120..4c46c425c 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -871,6 +871,8 @@ rte_event_dev_close(uint8_t dev_id);
 /**< The event generated from cpu for pipelining.
  * Application may use *sub_event_type* to further classify the event
  */
+#define RTE_EVENT_TYPE_ETH_RX_ADAPTER   0x4
+/**< The event generated from event eth Rx adapter */
 #define RTE_EVENT_TYPE_MAX              0x10
 /**< Maximum number of event types */
 
-- 
2.14.1.145.gb3622a4


* [PATCH v5 6/7] eventdev: add eth Rx adapter implementation
  2017-10-06 21:09 [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Nikhil Rao
                   ` (4 preceding siblings ...)
  2017-10-06 21:09 ` [PATCH v5 5/7] eventdev: add event type for eth rx adapter Nikhil Rao
@ 2017-10-06 21:10 ` Nikhil Rao
  2017-10-06 14:34   ` Pavan Nikhilesh Bhagavatula
  2017-10-06 21:10 ` [PATCH v5 7/7] eventdev: add tests for eth Rx adapter APIs Nikhil Rao
  2017-10-09 12:42 ` [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Jerin Jacob
  7 siblings, 1 reply; 17+ messages in thread
From: Nikhil Rao @ 2017-10-06 21:10 UTC (permalink / raw)
  To: jerin.jacob, bruce.richardson; +Cc: dev

The adapter implementation uses eventdev PMDs to configure the packet
transfer if HW support is available and if not, it uses an EAL service
function that reads packets from ethernet Rx queues and injects these
as events into the event device.

Signed-off-by: Gage Eads <gage.eads@intel.com>
Signed-off-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 lib/librte_eventdev/rte_event_eth_rx_adapter.c | 1237 ++++++++++++++++++++++++
 lib/Makefile                                   |    2 +-
 lib/librte_eventdev/Makefile                   |    1 +
 lib/librte_eventdev/rte_eventdev_version.map   |    9 +
 4 files changed, 1248 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_eventdev/rte_event_eth_rx_adapter.c

diff --git a/lib/librte_eventdev/rte_event_eth_rx_adapter.c b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
new file mode 100644
index 000000000..0823aee16
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_eth_rx_adapter.c
@@ -0,0 +1,1237 @@
+#include <rte_cycles.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_errno.h>
+#include <rte_ethdev.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_service_component.h>
+#include <rte_thash.h>
+
+#include "rte_eventdev.h"
+#include "rte_eventdev_pmd.h"
+#include "rte_event_eth_rx_adapter.h"
+
+#define BATCH_SIZE		32
+#define BLOCK_CNT_THRESHOLD	10
+#define ETH_EVENT_BUFFER_SIZE	(4*BATCH_SIZE)
+
+#define ETH_RX_ADAPTER_SERVICE_NAME_LEN	32
+#define ETH_RX_ADAPTER_MEM_NAME_LEN	32
+
+/*
+ * There is an instance of this struct per polled Rx queue added to the
+ * adapter
+ */
+struct eth_rx_poll_entry {
+	/* Eth port to poll */
+	uint8_t eth_dev_id;
+	/* Eth rx queue to poll */
+	uint16_t eth_rx_qid;
+};
+
+/* Instance per adapter */
+struct rte_eth_event_enqueue_buffer {
+	/* Count of events in this buffer */
+	uint16_t count;
+	/* Array of events in this buffer */
+	struct rte_event events[ETH_EVENT_BUFFER_SIZE];
+};
+
+struct rte_event_eth_rx_adapter {
+	/* RSS key */
+	uint8_t rss_key_be[40];
+	/* Event device identifier */
+	uint8_t eventdev_id;
+	/* Per ethernet device structure */
+	struct eth_device_info *eth_devices;
+	/* Event port identifier */
+	uint8_t event_port_id;
+	/* Lock to serialize config updates with service function */
+	rte_spinlock_t rx_lock;
+	/* Max mbufs processed in any service function invocation */
+	uint32_t max_nb_rx;
+	/* Receive queues that need to be polled */
+	struct eth_rx_poll_entry *eth_rx_poll;
+	/* Size of the eth_rx_poll array */
+	uint16_t num_rx_polled;
+	/* Weighted round robin schedule */
+	uint32_t *wrr_sched;
+	/* wrr_sched[] size */
+	uint32_t wrr_len;
+	/* Next entry in wrr[] to begin polling */
+	uint32_t wrr_pos;
+	/* Event burst buffer */
+	struct rte_eth_event_enqueue_buffer event_enqueue_buffer;
+	/* Per adapter stats */
+	struct rte_event_eth_rx_adapter_stats stats;
+	/* Block count, counts up to BLOCK_CNT_THRESHOLD */
+	uint16_t enq_block_count;
+	/* Block start ts */
+	uint64_t rx_enq_block_start_ts;
+	/* Configuration callback for rte_service configuration */
+	rte_event_eth_rx_adapter_conf_cb conf_cb;
+	/* Configuration callback argument */
+	void *conf_arg;
+	/* Set if default_cb is being used */
+	int default_cb_arg;
+	/* Service initialization state */
+	uint8_t service_inited;
+	/* Total count of Rx queues in adapter */
+	uint32_t nb_queues;
+	/* Memory allocation name */
+	char mem_name[ETH_RX_ADAPTER_MEM_NAME_LEN];
+	/* Socket identifier cached from eventdev */
+	int socket_id;
+	/* Per adapter EAL service */
+	uint32_t service_id;
+} __rte_cache_aligned;
+
+/* Per eth device */
+struct eth_device_info {
+	struct rte_eth_dev *dev;
+	struct eth_rx_queue_info *rx_queue;
+	/* Set if ethdev->eventdev packet transfer uses a
+	 * hardware mechanism
+	 */
+	uint8_t internal_event_port;
+	/* Set if the adapter is processing Rx queues for
+	 * this eth device and packet processing has been
+	 * started; lets the code know whether the PMD
+	 * rx_adapter_stop callback needs to be invoked
+	 */
+	uint8_t dev_rx_started;
+	/* If nb_dev_queues > 0, the start callback will
+	 * be invoked if not already invoked
+	 */
+	uint16_t nb_dev_queues;
+};
+
+/* Per Rx queue */
+struct eth_rx_queue_info {
+	int queue_enabled;	/* True if added */
+	uint16_t wt;		/* Polling weight */
+	uint8_t event_queue_id;	/* Event queue to enqueue packets to */
+	uint8_t sched_type;	/* Sched type for events */
+	uint8_t priority;	/* Event priority */
+	uint32_t flow_id;	/* App provided flow identifier */
+	uint32_t flow_id_mask;	/* Set to ~0 if app provides flow id else 0 */
+};
+
+static struct rte_event_eth_rx_adapter **event_eth_rx_adapter;
+
+static inline int
+valid_id(uint8_t id)
+{
+	return id < RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE;
+}
+
+#define RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \
+	if (!valid_id(id)) { \
+		RTE_EDEV_LOG_ERR("Invalid eth Rx adapter id = %d\n", id); \
+		return retval; \
+	} \
+} while (0)
+
+static inline int
+sw_rx_adapter_queue_count(struct rte_event_eth_rx_adapter *rx_adapter)
+{
+	return rx_adapter->num_rx_polled;
+}
+
+/* Greatest common divisor */
+static uint16_t gcd_u16(uint16_t a, uint16_t b)
+{
+	uint16_t r = a % b;
+
+	return r ? gcd_u16(b, r) : b;
+}
+
+/* Returns the next queue in the polling sequence
+ *
+ * http://kb.linuxvirtualserver.org/wiki/Weighted_Round-Robin_Scheduling
+ */
+static int
+wrr_next(struct rte_event_eth_rx_adapter *rx_adapter,
+	 unsigned int n, int *cw,
+	 struct eth_rx_poll_entry *eth_rx_poll, uint16_t max_wt,
+	 uint16_t gcd, int prev)
+{
+	int i = prev;
+	uint16_t w;
+
+	while (1) {
+		uint16_t q;
+		uint8_t d;
+
+		i = (i + 1) % n;
+		if (i == 0) {
+			*cw = *cw - gcd;
+			if (*cw <= 0)
+				*cw = max_wt;
+		}
+
+		q = eth_rx_poll[i].eth_rx_qid;
+		d = eth_rx_poll[i].eth_dev_id;
+		w = rx_adapter->eth_devices[d].rx_queue[q].wt;
+
+		if ((int)w >= *cw)
+			return i;
+	}
+}
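As a standalone illustration of the interleave this walk produces, the same algorithm can be exercised with a flat weight array (the function name and weights below are mine; the adapter itself reads weights from its per-queue state):

```c
#include <stdint.h>

/* Same WRR walk as wrr_next() above, but with weights taken from a
 * plain array instead of the adapter's eth_devices[]/rx_queue[] state.
 * 'cw' is the current weight carried across calls, 'prev' the index
 * returned by the previous call (-1 initially, with *cw == -1).
 */
static int
wrr_next_flat(const uint16_t *wt, unsigned int n, int *cw,
	      uint16_t max_wt, uint16_t gcd, int prev)
{
	int i = prev;

	while (1) {
		i = (i + 1) % n;
		if (i == 0) {
			*cw = *cw - gcd;
			if (*cw <= 0)
				*cw = max_wt;
		}
		if ((int)wt[i] >= *cw)
			return i;
	}
}
```

With weights {3, 1} (so max_wt = 3, gcd = 1) one period of the generated sequence is 0, 0, 0, 1: queue 0 is polled three times for every poll of queue 1, matching the relative servicing weights.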
+
+/* Precalculate WRR polling sequence for all queues in rx_adapter */
+static int
+eth_poll_wrr_calc(struct rte_event_eth_rx_adapter *rx_adapter)
+{
+	uint8_t d;
+	uint16_t q;
+	unsigned int i;
+
+	/* Initialize variables for calculation of wrr schedule */
+	uint16_t max_wrr_pos = 0;
+	unsigned int poll_q = 0;
+	uint16_t max_wt = 0;
+	uint16_t gcd = 0;
+
+	struct eth_rx_poll_entry *rx_poll = NULL;
+	uint32_t *rx_wrr = NULL;
+
+	if (rx_adapter->num_rx_polled) {
+		size_t len = RTE_ALIGN(rx_adapter->num_rx_polled *
+				sizeof(*rx_adapter->eth_rx_poll),
+				RTE_CACHE_LINE_SIZE);
+		rx_poll = rte_zmalloc_socket(rx_adapter->mem_name,
+					     len,
+					     RTE_CACHE_LINE_SIZE,
+					     rx_adapter->socket_id);
+		if (rx_poll == NULL)
+			return -ENOMEM;
+
+		/* Generate array of all queues to poll, the size of this
+		 * array is poll_q
+		 */
+		for (d = 0; d < rte_eth_dev_count(); d++) {
+			uint16_t nb_rx_queues;
+			struct eth_device_info *dev_info =
+					&rx_adapter->eth_devices[d];
+			nb_rx_queues = dev_info->dev->data->nb_rx_queues;
+			if (dev_info->rx_queue == NULL)
+				continue;
+			for (q = 0; q < nb_rx_queues; q++) {
+				struct eth_rx_queue_info *queue_info =
+					&dev_info->rx_queue[q];
+				if (queue_info->queue_enabled == 0)
+					continue;
+
+				uint16_t wt = queue_info->wt;
+				rx_poll[poll_q].eth_dev_id = d;
+				rx_poll[poll_q].eth_rx_qid = q;
+				max_wrr_pos += wt;
+				max_wt = RTE_MAX(max_wt, wt);
+				gcd = (gcd) ? gcd_u16(gcd, wt) : wt;
+				poll_q++;
+			}
+		}
+
+		len = RTE_ALIGN(max_wrr_pos * sizeof(*rx_wrr),
+				RTE_CACHE_LINE_SIZE);
+		rx_wrr = rte_zmalloc_socket(rx_adapter->mem_name,
+					    len,
+					    RTE_CACHE_LINE_SIZE,
+					    rx_adapter->socket_id);
+		if (rx_wrr == NULL) {
+			rte_free(rx_poll);
+			return -ENOMEM;
+		}
+
+		/* Generate polling sequence based on weights */
+		int prev = -1;
+		int cw = -1;
+		for (i = 0; i < max_wrr_pos; i++) {
+			rx_wrr[i] = wrr_next(rx_adapter, poll_q, &cw,
+					     rx_poll, max_wt, gcd, prev);
+			prev = rx_wrr[i];
+		}
+	}
+
+	rte_free(rx_adapter->eth_rx_poll);
+	rte_free(rx_adapter->wrr_sched);
+
+	rx_adapter->eth_rx_poll = rx_poll;
+	rx_adapter->wrr_sched = rx_wrr;
+	rx_adapter->wrr_len = max_wrr_pos;
+
+	return 0;
+}
+
+static inline void
+mtoip(struct rte_mbuf *m, struct ipv4_hdr **ipv4_hdr,
+	struct ipv6_hdr **ipv6_hdr)
+{
+	struct ether_hdr *eth_hdr = rte_pktmbuf_mtod(m, struct ether_hdr *);
+	struct vlan_hdr *vlan_hdr;
+
+	*ipv4_hdr = NULL;
+	*ipv6_hdr = NULL;
+
+	switch (eth_hdr->ether_type) {
+	case RTE_BE16(ETHER_TYPE_IPv4):
+		*ipv4_hdr = (struct ipv4_hdr *)(eth_hdr + 1);
+		break;
+
+	case RTE_BE16(ETHER_TYPE_IPv6):
+		*ipv6_hdr = (struct ipv6_hdr *)(eth_hdr + 1);
+		break;
+
+	case RTE_BE16(ETHER_TYPE_VLAN):
+		vlan_hdr = (struct vlan_hdr *)(eth_hdr + 1);
+		switch (vlan_hdr->eth_proto) {
+		case RTE_BE16(ETHER_TYPE_IPv4):
+			*ipv4_hdr = (struct ipv4_hdr *)(vlan_hdr + 1);
+			break;
+		case RTE_BE16(ETHER_TYPE_IPv6):
+			*ipv6_hdr = (struct ipv6_hdr *)(vlan_hdr + 1);
+			break;
+		default:
+			break;
+		}
+		break;
+
+	default:
+		break;
+	}
+}
+
+/* Calculate RSS hash for IPv4/6 */
+static inline uint32_t
+do_softrss(struct rte_mbuf *m, const uint8_t *rss_key_be)
+{
+	uint32_t input_len;
+	void *tuple;
+	struct rte_ipv4_tuple ipv4_tuple;
+	struct rte_ipv6_tuple ipv6_tuple;
+	struct ipv4_hdr *ipv4_hdr;
+	struct ipv6_hdr *ipv6_hdr;
+
+	mtoip(m, &ipv4_hdr, &ipv6_hdr);
+
+	if (ipv4_hdr) {
+		ipv4_tuple.src_addr = rte_be_to_cpu_32(ipv4_hdr->src_addr);
+		ipv4_tuple.dst_addr = rte_be_to_cpu_32(ipv4_hdr->dst_addr);
+		tuple = &ipv4_tuple;
+		input_len = RTE_THASH_V4_L3_LEN;
+	} else if (ipv6_hdr) {
+		rte_thash_load_v6_addrs(ipv6_hdr,
+					(union rte_thash_tuple *)&ipv6_tuple);
+		tuple = &ipv6_tuple;
+		input_len = RTE_THASH_V6_L3_LEN;
+	} else
+		return 0;
+
+	return rte_softrss_be(tuple, input_len, rss_key_be);
+}
+
+static inline int
+rx_enq_blocked(struct rte_event_eth_rx_adapter *rx_adapter)
+{
+	return !!rx_adapter->enq_block_count;
+}
+
+static inline void
+rx_enq_block_start_ts(struct rte_event_eth_rx_adapter *rx_adapter)
+{
+	if (rx_adapter->rx_enq_block_start_ts)
+		return;
+
+	rx_adapter->enq_block_count++;
+	if (rx_adapter->enq_block_count < BLOCK_CNT_THRESHOLD)
+		return;
+
+	rx_adapter->rx_enq_block_start_ts = rte_get_tsc_cycles();
+}
+
+static inline void
+rx_enq_block_end_ts(struct rte_event_eth_rx_adapter *rx_adapter,
+		    struct rte_event_eth_rx_adapter_stats *stats)
+{
+	if (unlikely(!stats->rx_enq_start_ts))
+		stats->rx_enq_start_ts = rte_get_tsc_cycles();
+
+	if (likely(!rx_enq_blocked(rx_adapter)))
+		return;
+
+	rx_adapter->enq_block_count = 0;
+	if (rx_adapter->rx_enq_block_start_ts) {
+		stats->rx_enq_end_ts = rte_get_tsc_cycles();
+		stats->rx_enq_block_cycles += stats->rx_enq_end_ts -
+		    rx_adapter->rx_enq_block_start_ts;
+		rx_adapter->rx_enq_block_start_ts = 0;
+	}
+}
+
+/* Add event to buffer, free space check is done prior to calling
+ * this function
+ */
+static inline void
+buf_event_enqueue(struct rte_event_eth_rx_adapter *rx_adapter,
+		  struct rte_event *ev)
+{
+	struct rte_eth_event_enqueue_buffer *buf =
+	    &rx_adapter->event_enqueue_buffer;
+	rte_memcpy(&buf->events[buf->count++], ev, sizeof(struct rte_event));
+}
+
+/* Enqueue buffered events to event device */
+static inline uint16_t
+flush_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter)
+{
+	struct rte_eth_event_enqueue_buffer *buf =
+	    &rx_adapter->event_enqueue_buffer;
+	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
+
+	uint16_t n = rte_event_enqueue_burst(rx_adapter->eventdev_id,
+					rx_adapter->event_port_id,
+					buf->events,
+					buf->count);
+	if (n != buf->count) {
+		memmove(buf->events,
+			&buf->events[n],
+			(buf->count - n) * sizeof(struct rte_event));
+		stats->rx_enq_retry++;
+	}
+
+	n ? rx_enq_block_end_ts(rx_adapter, stats) :
+		rx_enq_block_start_ts(rx_adapter);
+
+	buf->count -= n;
+	stats->rx_enq_count += n;
+
+	return n;
+}
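The partial-enqueue handling above, i.e. compacting the unsent tail of the buffer to the front so it is retried on the next flush, can be shown in isolation (a simplified sketch of my own with ints standing in for struct rte_event entries):

```c
#include <stdint.h>
#include <string.h>

/* Keep only the events that were not enqueued, moved to the front of
 * the buffer, the same way flush_event_buffer() uses memmove() when
 * rte_event_enqueue_burst() accepts fewer events than offered.
 * Returns the new buffer count.
 */
static uint16_t
compact_unsent(int *events, uint16_t count, uint16_t sent)
{
	if (sent != count)
		memmove(events, &events[sent],
			(count - sent) * sizeof(*events));
	return count - sent;
}
```

After a burst that sends 3 of 4 buffered events, the one unsent event ends up at index 0 and the count drops to 1; a full send simply leaves the count at 0.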
+
+static inline void
+fill_event_buffer(struct rte_event_eth_rx_adapter *rx_adapter,
+	uint8_t dev_id,
+	uint16_t rx_queue_id,
+	struct rte_mbuf **mbufs,
+	uint16_t num)
+{
+	uint32_t i;
+	struct eth_device_info *eth_device_info =
+					&rx_adapter->eth_devices[dev_id];
+	struct eth_rx_queue_info *eth_rx_queue_info =
+					&eth_device_info->rx_queue[rx_queue_id];
+
+	int32_t qid = eth_rx_queue_info->event_queue_id;
+	uint8_t sched_type = eth_rx_queue_info->sched_type;
+	uint8_t priority = eth_rx_queue_info->priority;
+	uint32_t flow_id;
+	struct rte_event events[BATCH_SIZE];
+	struct rte_mbuf *m = mbufs[0];
+	uint32_t rss_mask;
+	uint32_t rss;
+	int do_rss;
+
+	/* 0xffff ffff if PKT_RX_RSS_HASH is set, otherwise 0 */
+	rss_mask = ~(((m->ol_flags & PKT_RX_RSS_HASH) != 0) - 1);
+	do_rss = !rss_mask && !eth_rx_queue_info->flow_id_mask;
+
+	for (i = 0; i < num; i++) {
+		m = mbufs[i];
+		struct rte_event *ev = &events[i];
+
+		rss = do_rss ?
+			do_softrss(m, rx_adapter->rss_key_be) : m->hash.rss;
+		flow_id =
+		    eth_rx_queue_info->flow_id &
+				eth_rx_queue_info->flow_id_mask;
+		flow_id |= rss & ~eth_rx_queue_info->flow_id_mask;
+
+		ev->flow_id = flow_id;
+		ev->op = RTE_EVENT_OP_NEW;
+		ev->sched_type = sched_type;
+		ev->queue_id = qid;
+		ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER;
+		ev->sub_event_type = 0;
+		ev->priority = priority;
+		ev->mbuf = m;
+
+		buf_event_enqueue(rx_adapter, ev);
+	}
+}
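The flow identifier composition used in the loop above, application flow_id where the mask bits are set and RSS hash elsewhere, reduces to a single expression. A small sketch (the helper name is mine; the mask semantics follow eth_rx_queue_info::flow_id_mask):

```c
#include <stdint.h>

/* flow_id_mask is ~0 when the application supplied a flow id via
 * RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID, and 0 when the flow
 * id should come from the (HW or soft) RSS hash instead.
 */
static uint32_t
compose_flow_id(uint32_t app_flow_id, uint32_t flow_id_mask, uint32_t rss)
{
	return (app_flow_id & flow_id_mask) | (rss & ~flow_id_mask);
}
```

With a mask of ~0 the application's value wins outright; with a mask of 0 the RSS hash passes through untouched.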
+
+/*
+ * Polls receive queues added to the event adapter and enqueues received
+ * packets to the event device.
+ *
+ * The receive code enqueues initially to a temporary buffer; the
+ * temporary buffer is drained any time it holds >= BATCH_SIZE packets.
+ *
+ * If there isn't space available in the temporary buffer, packets from the
+ * Rx queue aren't dequeued from the eth device. This back pressures the
+ * eth device; in virtual device environments this back pressure is relayed
+ * to the hypervisor's switching layer, where adjustments can be made to
+ * deal with it.
+ */
+static inline uint32_t
+eth_rx_poll(struct rte_event_eth_rx_adapter *rx_adapter)
+{
+	uint32_t num_queue;
+	uint16_t n;
+	uint32_t nb_rx = 0;
+	struct rte_mbuf *mbufs[BATCH_SIZE];
+	struct rte_eth_event_enqueue_buffer *buf;
+	uint32_t wrr_pos;
+	uint32_t max_nb_rx;
+
+	wrr_pos = rx_adapter->wrr_pos;
+	max_nb_rx = rx_adapter->max_nb_rx;
+	buf = &rx_adapter->event_enqueue_buffer;
+	struct rte_event_eth_rx_adapter_stats *stats = &rx_adapter->stats;
+
+	/* Iterate through a WRR sequence */
+	for (num_queue = 0; num_queue < rx_adapter->wrr_len; num_queue++) {
+		unsigned int poll_idx = rx_adapter->wrr_sched[wrr_pos];
+		uint16_t qid = rx_adapter->eth_rx_poll[poll_idx].eth_rx_qid;
+		uint8_t d = rx_adapter->eth_rx_poll[poll_idx].eth_dev_id;
+
+		/* Don't do a batch dequeue from the rx queue if there isn't
+		 * enough space in the enqueue buffer.
+		 */
+		if (buf->count >= BATCH_SIZE)
+			flush_event_buffer(rx_adapter);
+		if (BATCH_SIZE > (ETH_EVENT_BUFFER_SIZE - buf->count))
+			break;
+
+		stats->rx_poll_count++;
+		n = rte_eth_rx_burst(d, qid, mbufs, BATCH_SIZE);
+
+		if (n) {
+			stats->rx_packets += n;
+			/* The check before rte_eth_rx_burst() ensures that
+			 * all n mbufs can be buffered
+			 */
+			fill_event_buffer(rx_adapter, d, qid, mbufs, n);
+			nb_rx += n;
+			if (nb_rx > max_nb_rx) {
+				rx_adapter->wrr_pos =
+				    (wrr_pos + 1) % rx_adapter->wrr_len;
+				return nb_rx;
+			}
+		}
+
+		if (++wrr_pos == rx_adapter->wrr_len)
+			wrr_pos = 0;
+	}
+
+	return nb_rx;
+}
+
+static int
+event_eth_rx_adapter_service_func(void *args)
+{
+	struct rte_event_eth_rx_adapter *rx_adapter = args;
+	struct rte_eth_event_enqueue_buffer *buf;
+
+	buf = &rx_adapter->event_enqueue_buffer;
+	if (rte_spinlock_trylock(&rx_adapter->rx_lock) == 0)
+		return 0;
+	if (eth_rx_poll(rx_adapter) == 0 && buf->count)
+		flush_event_buffer(rx_adapter);
+	rte_spinlock_unlock(&rx_adapter->rx_lock);
+	return 0;
+}
+
+static int
+rte_event_eth_rx_adapter_init(void)
+{
+	const char *name = "rte_event_eth_rx_adapter_array";
+	const struct rte_memzone *mz;
+	unsigned int sz;
+
+	sz = sizeof(*event_eth_rx_adapter) *
+	    RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE;
+	sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE);
+
+	mz = rte_memzone_lookup(name);
+	if (mz == NULL) {
+		mz = rte_memzone_reserve_aligned(name, sz, rte_socket_id(), 0,
+						 RTE_CACHE_LINE_SIZE);
+		if (mz == NULL) {
+			RTE_EDEV_LOG_ERR("failed to reserve memzone err = %"
+					PRId32, rte_errno);
+			return -rte_errno;
+		}
+	}
+
+	event_eth_rx_adapter = mz->addr;
+	return 0;
+}
+
+static inline struct rte_event_eth_rx_adapter *
+id_to_rx_adapter(uint8_t id)
+{
+	return event_eth_rx_adapter ?
+		event_eth_rx_adapter[id] : NULL;
+}
+
+static int
+default_conf_cb(uint8_t id, uint8_t dev_id,
+		struct rte_event_eth_rx_adapter_conf *conf, void *arg)
+{
+	int ret;
+	struct rte_eventdev *dev;
+	struct rte_event_dev_config dev_conf;
+	int started;
+	uint8_t port_id;
+	struct rte_event_port_conf *port_conf = arg;
+	struct rte_event_eth_rx_adapter *rx_adapter = id_to_rx_adapter(id);
+
+	dev = &rte_eventdevs[rx_adapter->eventdev_id];
+	dev_conf = dev->data->dev_conf;
+
+	started = dev->data->dev_started;
+	if (started)
+		rte_event_dev_stop(dev_id);
+	port_id = dev_conf.nb_event_ports;
+	dev_conf.nb_event_ports += 1;
+	ret = rte_event_dev_configure(dev_id, &dev_conf);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to configure event dev %u\n",
+						dev_id);
+		if (started)
+			rte_event_dev_start(dev_id);
+		return ret;
+	}
+
+	ret = rte_event_port_setup(dev_id, port_id, port_conf);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to setup event port %u\n",
+					port_id);
+		return ret;
+	}
+
+	conf->event_port_id = port_id;
+	conf->max_nb_rx = 128;
+	if (started)
+		rte_event_dev_start(dev_id);
+	rx_adapter->default_cb_arg = 1;
+	return ret;
+}
+
+static int
+init_service(struct rte_event_eth_rx_adapter *rx_adapter, uint8_t id)
+{
+	int ret;
+	struct rte_service_spec service;
+	struct rte_event_eth_rx_adapter_conf rx_adapter_conf;
+
+	if (rx_adapter->service_inited)
+		return 0;
+
+	memset(&service, 0, sizeof(service));
+	snprintf(service.name, ETH_RX_ADAPTER_SERVICE_NAME_LEN,
+		"rte_event_eth_rx_adapter_%d", id);
+	service.socket_id = rx_adapter->socket_id;
+	service.callback = event_eth_rx_adapter_service_func;
+	service.callback_userdata = rx_adapter;
+	/* Service function handles locking for queue add/del updates */
+	service.capabilities = RTE_SERVICE_CAP_MT_SAFE;
+	ret = rte_service_component_register(&service, &rx_adapter->service_id);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("failed to register service %s err = %" PRId32,
+			service.name, ret);
+		return ret;
+	}
+
+	ret = rx_adapter->conf_cb(id, rx_adapter->eventdev_id,
+		&rx_adapter_conf, rx_adapter->conf_arg);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("configuration callback failed err = %" PRId32,
+			ret);
+		goto err_done;
+	}
+	rx_adapter->event_port_id = rx_adapter_conf.event_port_id;
+	rx_adapter->max_nb_rx = rx_adapter_conf.max_nb_rx;
+	rx_adapter->service_inited = 1;
+	return 0;
+
+err_done:
+	rte_service_component_unregister(rx_adapter->service_id);
+	return ret;
+}
+
+
+static void
+update_queue_info(struct rte_event_eth_rx_adapter *rx_adapter,
+		struct eth_device_info *dev_info,
+		int32_t rx_queue_id,
+		uint8_t add)
+{
+	struct eth_rx_queue_info *queue_info;
+	int enabled;
+	uint16_t i;
+
+	if (dev_info->rx_queue == NULL)
+		return;
+
+	if (rx_queue_id == -1) {
+		for (i = 0; i < dev_info->dev->data->nb_rx_queues; i++)
+			update_queue_info(rx_adapter, dev_info, i, add);
+	} else {
+		queue_info = &dev_info->rx_queue[rx_queue_id];
+		enabled = queue_info->queue_enabled;
+		if (add) {
+			rx_adapter->nb_queues += !enabled;
+			dev_info->nb_dev_queues += !enabled;
+		} else {
+			rx_adapter->nb_queues -= enabled;
+			dev_info->nb_dev_queues -= enabled;
+		}
+		queue_info->queue_enabled = !!add;
+	}
+}
+
+static int
+event_eth_rx_adapter_queue_del(struct rte_event_eth_rx_adapter *rx_adapter,
+			    struct eth_device_info *dev_info,
+			    uint16_t rx_queue_id)
+{
+	struct eth_rx_queue_info *queue_info;
+
+	if (rx_adapter->nb_queues == 0)
+		return 0;
+
+	queue_info = &dev_info->rx_queue[rx_queue_id];
+	rx_adapter->num_rx_polled -= queue_info->queue_enabled;
+	update_queue_info(rx_adapter, dev_info, rx_queue_id, 0);
+	return 0;
+}
+
+static void
+event_eth_rx_adapter_queue_add(struct rte_event_eth_rx_adapter *rx_adapter,
+		struct eth_device_info *dev_info,
+		uint16_t rx_queue_id,
+		const struct rte_event_eth_rx_adapter_queue_conf *conf)
+
+{
+	struct eth_rx_queue_info *queue_info;
+	const struct rte_event *ev = &conf->ev;
+
+	queue_info = &dev_info->rx_queue[rx_queue_id];
+	queue_info->event_queue_id = ev->queue_id;
+	queue_info->sched_type = ev->sched_type;
+	queue_info->priority = ev->priority;
+	queue_info->wt = conf->servicing_weight;
+
+	if (conf->rx_queue_flags &
+			RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID) {
+		queue_info->flow_id = ev->flow_id;
+		queue_info->flow_id_mask = ~0;
+	}
+
+	/* The same queue can be added more than once */
+	rx_adapter->num_rx_polled += !queue_info->queue_enabled;
+	update_queue_info(rx_adapter, dev_info, rx_queue_id, 1);
+}
+
+static int add_rx_queue(struct rte_event_eth_rx_adapter *rx_adapter,
+		uint8_t eth_dev_id,
+		int rx_queue_id,
+		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+	struct eth_device_info *dev_info = &rx_adapter->eth_devices[eth_dev_id];
+	uint32_t i;
+	int ret;
+
+	if (queue_conf->servicing_weight == 0) {
+		struct rte_event_eth_rx_adapter_queue_conf temp_conf;
+
+		struct rte_eth_dev_data *data = dev_info->dev->data;
+		if (data->dev_conf.intr_conf.rxq) {
+			RTE_EDEV_LOG_ERR("Interrupt driven queues"
+					" not supported");
+			return -ENOTSUP;
+		}
+		temp_conf = *queue_conf;
+		/* Rx interrupts unsupported, default servicing weight to 1 */
+		temp_conf.servicing_weight = 1;
+		queue_conf = &temp_conf;
+	}
+
+	if (dev_info->rx_queue == NULL) {
+		dev_info->rx_queue =
+		    rte_zmalloc_socket(rx_adapter->mem_name,
+				       dev_info->dev->data->nb_rx_queues *
+				       sizeof(struct eth_rx_queue_info), 0,
+				       rx_adapter->socket_id);
+		if (dev_info->rx_queue == NULL)
+			return -ENOMEM;
+	}
+
+	if (rx_queue_id == -1) {
+		for (i = 0; i < dev_info->dev->data->nb_rx_queues; i++)
+			event_eth_rx_adapter_queue_add(rx_adapter,
+						dev_info, i,
+						queue_conf);
+	} else {
+		event_eth_rx_adapter_queue_add(rx_adapter, dev_info,
+					  (uint16_t)rx_queue_id,
+					  queue_conf);
+	}
+
+	ret = eth_poll_wrr_calc(rx_adapter);
+	if (ret) {
+		event_eth_rx_adapter_queue_del(rx_adapter,
+					dev_info, rx_queue_id);
+		return ret;
+	}
+
+	return ret;
+}
+
+static int
+rx_adapter_ctrl(uint8_t id, int start)
+{
+	struct rte_event_eth_rx_adapter *rx_adapter;
+	struct rte_eventdev *dev;
+	struct eth_device_info *dev_info;
+	uint32_t i;
+	int use_service = 0;
+	int stop = !start;
+
+	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	rx_adapter = id_to_rx_adapter(id);
+	if (rx_adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[rx_adapter->eventdev_id];
+
+	for (i = 0; i < rte_eth_dev_count(); i++) {
+		dev_info = &rx_adapter->eth_devices[i];
+		/* if starting, check that the dev has queues added */
+		if (start && !dev_info->nb_dev_queues)
+			continue;
+		/* if stopping, check that the dev has been started */
+		if (stop && !dev_info->dev_rx_started)
+			continue;
+		use_service |= !dev_info->internal_event_port;
+		dev_info->dev_rx_started = start;
+		if (dev_info->internal_event_port == 0)
+			continue;
+		if (start)
+			(*dev->dev_ops->eth_rx_adapter_start)(dev,
+						&rte_eth_devices[i]);
+		else
+			(*dev->dev_ops->eth_rx_adapter_stop)(dev,
+						&rte_eth_devices[i]);
+	}
+
+	if (use_service)
+		rte_service_runstate_set(rx_adapter->service_id, start);
+
+	return 0;
+}
+
+int
+rte_event_eth_rx_adapter_create_ext(uint8_t id, uint8_t dev_id,
+				rte_event_eth_rx_adapter_conf_cb conf_cb,
+				void *conf_arg)
+{
+	struct rte_event_eth_rx_adapter *rx_adapter;
+	int ret;
+	int socket_id;
+	uint8_t i;
+	char mem_name[ETH_RX_ADAPTER_MEM_NAME_LEN];
+	const uint8_t default_rss_key[] = {
+		0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
+		0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
+		0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
+		0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
+		0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
+	};
+
+	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	if (conf_cb == NULL)
+		return -EINVAL;
+
+	if (event_eth_rx_adapter == NULL) {
+		ret = rte_event_eth_rx_adapter_init();
+		if (ret)
+			return ret;
+	}
+
+	rx_adapter = id_to_rx_adapter(id);
+	if (rx_adapter != NULL) {
+		RTE_EDEV_LOG_ERR("Eth Rx adapter exists id = %" PRIu8, id);
+		return -EEXIST;
+	}
+
+	socket_id = rte_event_dev_socket_id(dev_id);
+	snprintf(mem_name, ETH_RX_ADAPTER_MEM_NAME_LEN,
+		"rte_event_eth_rx_adapter_%d",
+		id);
+
+	rx_adapter = rte_zmalloc_socket(mem_name, sizeof(*rx_adapter),
+			RTE_CACHE_LINE_SIZE, socket_id);
+	if (rx_adapter == NULL) {
+		RTE_EDEV_LOG_ERR("failed to get mem for rx adapter");
+		return -ENOMEM;
+	}
+
+	rx_adapter->eventdev_id = dev_id;
+	rx_adapter->socket_id = socket_id;
+	rx_adapter->conf_cb = conf_cb;
+	rx_adapter->conf_arg = conf_arg;
+	strcpy(rx_adapter->mem_name, mem_name);
+	rx_adapter->eth_devices = rte_zmalloc_socket(rx_adapter->mem_name,
+					rte_eth_dev_count() *
+					sizeof(struct eth_device_info), 0,
+					socket_id);
+	rte_convert_rss_key((const uint32_t *)default_rss_key,
+			(uint32_t *)rx_adapter->rss_key_be,
+			    RTE_DIM(default_rss_key));
+
+	if (rx_adapter->eth_devices == NULL) {
+		RTE_EDEV_LOG_ERR("failed to get mem for eth devices");
+		rte_free(rx_adapter);
+		return -ENOMEM;
+	}
+	rte_spinlock_init(&rx_adapter->rx_lock);
+	for (i = 0; i < rte_eth_dev_count(); i++)
+		rx_adapter->eth_devices[i].dev = &rte_eth_devices[i];
+
+	event_eth_rx_adapter[id] = rx_adapter;
+	if (conf_cb == default_conf_cb)
+		rx_adapter->default_cb_arg = 1;
+	return 0;
+}
+
+int
+rte_event_eth_rx_adapter_create(uint8_t id, uint8_t dev_id,
+		struct rte_event_port_conf *port_config)
+{
+	struct rte_event_port_conf *pc;
+	int ret;
+
+	if (port_config == NULL)
+		return -EINVAL;
+	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	pc = rte_malloc(NULL, sizeof(*pc), 0);
+	if (pc == NULL)
+		return -ENOMEM;
+	*pc = *port_config;
+	ret = rte_event_eth_rx_adapter_create_ext(id, dev_id,
+					default_conf_cb,
+					pc);
+	if (ret)
+		rte_free(pc);
+	return ret;
+}
+
+int
+rte_event_eth_rx_adapter_free(uint8_t id)
+{
+	struct rte_event_eth_rx_adapter *rx_adapter;
+
+	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	rx_adapter = id_to_rx_adapter(id);
+	if (rx_adapter == NULL)
+		return -EINVAL;
+
+	if (rx_adapter->nb_queues) {
+		RTE_EDEV_LOG_ERR("%" PRIu16 " Rx queues not deleted",
+				rx_adapter->nb_queues);
+		return -EBUSY;
+	}
+
+	if (rx_adapter->default_cb_arg)
+		rte_free(rx_adapter->conf_arg);
+	rte_free(rx_adapter->eth_devices);
+	rte_free(rx_adapter);
+	event_eth_rx_adapter[id] = NULL;
+
+	return 0;
+}
+
+int
+rte_event_eth_rx_adapter_queue_add(uint8_t id,
+		uint8_t eth_dev_id,
+		int32_t rx_queue_id,
+		const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
+{
+	int ret;
+	uint32_t cap;
+	struct rte_event_eth_rx_adapter *rx_adapter;
+	struct rte_eventdev *dev;
+	struct eth_device_info *dev_info;
+	int start_service;
+
+	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+
+	rx_adapter = id_to_rx_adapter(id);
+	if ((rx_adapter == NULL) || (queue_conf == NULL))
+		return -EINVAL;
+
+	dev = &rte_eventdevs[rx_adapter->eventdev_id];
+	ret = rte_event_eth_rx_adapter_caps_get(rx_adapter->eventdev_id,
+						eth_dev_id,
+						&cap);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("Failed to get adapter caps edev %" PRIu8
+			" eth port %" PRIu8, id, eth_dev_id);
+		return ret;
+	}
+
+	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) == 0
+		&& (queue_conf->rx_queue_flags &
+			RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID)) {
+		RTE_EDEV_LOG_ERR("Flow ID override is not supported,"
+				" eth port: %" PRIu8 " adapter id: %" PRIu8,
+				eth_dev_id, id);
+		return -EINVAL;
+	}
+
+	if ((cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) == 0 &&
+		(rx_queue_id != -1)) {
+		RTE_EDEV_LOG_ERR("Rx queues can only be connected to a single "
+			"event queue, adapter id %u eth port %u", id, eth_dev_id);
+		return -EINVAL;
+	}
+
+	if (rx_queue_id != -1 && (uint16_t)rx_queue_id >=
+			rte_eth_devices[eth_dev_id].data->nb_rx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid rx queue_id %" PRIu16,
+			 (uint16_t)rx_queue_id);
+		return -EINVAL;
+	}
+
+	start_service = 0;
+	dev_info = &rx_adapter->eth_devices[eth_dev_id];
+
+	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->eth_rx_adapter_queue_add,
+					-ENOTSUP);
+		if (dev_info->rx_queue == NULL) {
+			dev_info->rx_queue =
+			    rte_zmalloc_socket(rx_adapter->mem_name,
+					dev_info->dev->data->nb_rx_queues *
+					sizeof(struct eth_rx_queue_info), 0,
+					rx_adapter->socket_id);
+			if (dev_info->rx_queue == NULL)
+				return -ENOMEM;
+		}
+
+		ret = (*dev->dev_ops->eth_rx_adapter_queue_add)(dev,
+				&rte_eth_devices[eth_dev_id],
+				rx_queue_id, queue_conf);
+		if (ret == 0) {
+			update_queue_info(rx_adapter,
+					&rx_adapter->eth_devices[eth_dev_id],
+					rx_queue_id,
+					1);
+		}
+	} else {
+		rte_spinlock_lock(&rx_adapter->rx_lock);
+		ret = init_service(rx_adapter, id);
+		if (ret == 0)
+			ret = add_rx_queue(rx_adapter, eth_dev_id, rx_queue_id,
+					queue_conf);
+		rte_spinlock_unlock(&rx_adapter->rx_lock);
+		if (ret == 0)
+			start_service = !!sw_rx_adapter_queue_count(rx_adapter);
+	}
+
+	if (ret)
+		return ret;
+
+	if (start_service)
+		rte_service_component_runstate_set(rx_adapter->service_id, 1);
+
+	return 0;
+}
+
+int
+rte_event_eth_rx_adapter_queue_del(uint8_t id, uint8_t eth_dev_id,
+				int32_t rx_queue_id)
+{
+	int ret = 0;
+	struct rte_eventdev *dev;
+	struct rte_event_eth_rx_adapter *rx_adapter;
+	struct eth_device_info *dev_info;
+	uint32_t cap;
+	uint16_t i;
+
+	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(eth_dev_id, -EINVAL);
+
+	rx_adapter = id_to_rx_adapter(id);
+	if (rx_adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[rx_adapter->eventdev_id];
+	ret = rte_event_eth_rx_adapter_caps_get(rx_adapter->eventdev_id,
+						eth_dev_id,
+						&cap);
+	if (ret)
+		return ret;
+
+	if (rx_queue_id != -1 && (uint16_t)rx_queue_id >=
+		rte_eth_devices[eth_dev_id].data->nb_rx_queues) {
+		RTE_EDEV_LOG_ERR("Invalid rx queue_id %" PRIu16,
+			 (uint16_t)rx_queue_id);
+		return -EINVAL;
+	}
+
+	dev_info = &rx_adapter->eth_devices[eth_dev_id];
+
+	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT) {
+		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->eth_rx_adapter_queue_del,
+				 -ENOTSUP);
+		ret = (*dev->dev_ops->eth_rx_adapter_queue_del)(dev,
+						&rte_eth_devices[eth_dev_id],
+						rx_queue_id);
+		if (ret == 0) {
+			update_queue_info(rx_adapter,
+					&rx_adapter->eth_devices[eth_dev_id],
+					rx_queue_id,
+					0);
+			if (dev_info->nb_dev_queues == 0) {
+				rte_free(dev_info->rx_queue);
+				dev_info->rx_queue = NULL;
+			}
+		}
+	} else {
+		int rc;
+		rte_spinlock_lock(&rx_adapter->rx_lock);
+		if (rx_queue_id == -1) {
+			for (i = 0; i < dev_info->dev->data->nb_rx_queues; i++)
+				event_eth_rx_adapter_queue_del(rx_adapter,
+							dev_info,
+							i);
+		} else {
+			event_eth_rx_adapter_queue_del(rx_adapter,
+						dev_info,
+						(uint16_t)rx_queue_id);
+		}
+
+		rc = eth_poll_wrr_calc(rx_adapter);
+		if (rc)
+			RTE_EDEV_LOG_ERR("WRR recalculation failed %" PRId32,
+					rc);
+
+		if (dev_info->nb_dev_queues == 0) {
+			rte_free(dev_info->rx_queue);
+			dev_info->rx_queue = NULL;
+		}
+
+		rte_spinlock_unlock(&rx_adapter->rx_lock);
+		rte_service_component_runstate_set(rx_adapter->service_id,
+				sw_rx_adapter_queue_count(rx_adapter));
+	}
+
+	return ret;
+}
+
+
+int
+rte_event_eth_rx_adapter_start(uint8_t id)
+{
+	return rx_adapter_ctrl(id, 1);
+}
+
+int
+rte_event_eth_rx_adapter_stop(uint8_t id)
+{
+	return rx_adapter_ctrl(id, 0);
+}
+
+int
+rte_event_eth_rx_adapter_stats_get(uint8_t id,
+			       struct rte_event_eth_rx_adapter_stats *stats)
+{
+	struct rte_event_eth_rx_adapter *rx_adapter;
+	struct rte_event_eth_rx_adapter_stats dev_stats_sum = { 0 };
+	struct rte_event_eth_rx_adapter_stats dev_stats;
+	struct rte_eventdev *dev;
+	struct eth_device_info *dev_info;
+	uint32_t i;
+	int ret;
+
+	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	rx_adapter = id_to_rx_adapter(id);
+	if (rx_adapter == NULL || stats == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[rx_adapter->eventdev_id];
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < rte_eth_dev_count(); i++) {
+		dev_info = &rx_adapter->eth_devices[i];
+		if (dev_info->internal_event_port == 0 ||
+			dev->dev_ops->eth_rx_adapter_stats_get == NULL)
+			continue;
+		ret = (*dev->dev_ops->eth_rx_adapter_stats_get)(dev,
+						&rte_eth_devices[i],
+						&dev_stats);
+		if (ret)
+			continue;
+		dev_stats_sum.rx_packets += dev_stats.rx_packets;
+		dev_stats_sum.rx_enq_count += dev_stats.rx_enq_count;
+	}
+
+	if (rx_adapter->service_inited)
+		*stats = rx_adapter->stats;
+
+	stats->rx_packets += dev_stats_sum.rx_packets;
+	stats->rx_enq_count += dev_stats_sum.rx_enq_count;
+	return 0;
+}
+
+int
+rte_event_eth_rx_adapter_stats_reset(uint8_t id)
+{
+	struct rte_event_eth_rx_adapter *rx_adapter;
+	struct rte_eventdev *dev;
+	struct eth_device_info *dev_info;
+	uint32_t i;
+
+	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	rx_adapter = id_to_rx_adapter(id);
+	if (rx_adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[rx_adapter->eventdev_id];
+	for (i = 0; i < rte_eth_dev_count(); i++) {
+		dev_info = &rx_adapter->eth_devices[i];
+		if (dev_info->internal_event_port == 0 ||
+			dev->dev_ops->eth_rx_adapter_stats_reset == NULL)
+			continue;
+		(*dev->dev_ops->eth_rx_adapter_stats_reset)(dev,
+							&rte_eth_devices[i]);
+	}
+
+	memset(&rx_adapter->stats, 0, sizeof(rx_adapter->stats));
+	return 0;
+}
+
+int
+rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	struct rte_event_eth_rx_adapter *rx_adapter;
+
+	RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	rx_adapter = id_to_rx_adapter(id);
+	if (rx_adapter == NULL || service_id == NULL)
+		return -EINVAL;
+
+	if (rx_adapter->service_inited)
+		*service_id = rx_adapter->service_id;
+
+	return rx_adapter->service_inited ? 0 : -ESRCH;
+}
diff --git a/lib/Makefile b/lib/Makefile
index ccff22c39..7b2173cf5 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -52,7 +52,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev
 DEPDIRS-librte_cryptodev := librte_eal librte_mempool librte_ring librte_mbuf
 DEPDIRS-librte_cryptodev += librte_kvargs
 DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
-DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether
+DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
 DEPDIRS-librte_vhost := librte_eal librte_mempool librte_mbuf librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
index eb1467d56..c404d673f 100644
--- a/lib/librte_eventdev/Makefile
+++ b/lib/librte_eventdev/Makefile
@@ -43,6 +43,7 @@ CFLAGS += $(WERROR_FLAGS)
 # library source files
 SRCS-y += rte_eventdev.c
 SRCS-y += rte_event_ring.c
+SRCS-y += rte_event_eth_rx_adapter.c
 
 # export include files
 SYMLINK-y-include += rte_eventdev.h
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index c181fab95..11a5f21bd 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -55,5 +55,14 @@ DPDK_17.08 {
 DPDK_17.11 {
 	global:
 
+	rte_event_eth_rx_adapter_create_ext;
+	rte_event_eth_rx_adapter_create;
+	rte_event_eth_rx_adapter_free;
+	rte_event_eth_rx_adapter_queue_add;
+	rte_event_eth_rx_adapter_queue_del;
+	rte_event_eth_rx_adapter_start;
+	rte_event_eth_rx_adapter_stop;
+	rte_event_eth_rx_adapter_stats_get;
+	rte_event_eth_rx_adapter_stats_reset;
 	rte_event_eth_rx_adapter_caps_get;
 } DPDK_17.08;
-- 
2.14.1.145.gb3622a4


* [PATCH v5 7/7] eventdev: add tests for eth Rx adapter APIs
  2017-10-06 21:09 [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Nikhil Rao
                   ` (5 preceding siblings ...)
  2017-10-06 21:10 ` [PATCH v5 6/7] eventdev: add eth Rx adapter implementation Nikhil Rao
@ 2017-10-06 21:10 ` Nikhil Rao
  2017-10-09 12:33   ` Jerin Jacob
  2017-10-09 12:42 ` [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Jerin Jacob
  7 siblings, 1 reply; 17+ messages in thread
From: Nikhil Rao @ 2017-10-06 21:10 UTC (permalink / raw)
  To: jerin.jacob, bruce.richardson; +Cc: dev

Add unit tests for rte_event_eth_rx_adapter_xxx() APIs

Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
---
 test/test/test_event_eth_rx_adapter.c | 453 ++++++++++++++++++++++++++++++++++
 MAINTAINERS                           |   1 +
 test/test/Makefile                    |   1 +
 3 files changed, 455 insertions(+)
 create mode 100644 test/test/test_event_eth_rx_adapter.c

diff --git a/test/test/test_event_eth_rx_adapter.c b/test/test/test_event_eth_rx_adapter.c
new file mode 100644
index 000000000..56ed1f85a
--- /dev/null
+++ b/test/test/test_event_eth_rx_adapter.c
@@ -0,0 +1,453 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <string.h>
+#include <rte_common.h>
+#include <rte_mempool.h>
+#include <rte_mbuf.h>
+#include <rte_ethdev.h>
+#include <rte_eventdev.h>
+
+#include <rte_event_eth_rx_adapter.h>
+
+#include "test.h"
+
+#define MAX_NUM_RX_QUEUE	64
+#define NB_MBUFS		(8192 * num_ports * MAX_NUM_RX_QUEUE)
+#define MBUF_CACHE_SIZE		512
+#define MBUF_PRIV_SIZE		0
+#define TEST_INST_ID		0
+#define TEST_DEV_ID		0
+#define TEST_ETHDEV_ID		0
+
+struct event_eth_rx_adapter_test_params {
+	struct rte_mempool *mp;
+	uint16_t rx_rings, tx_rings;
+	uint32_t caps;
+};
+
+static struct event_eth_rx_adapter_test_params default_params;
+
+static inline int
+port_init(uint8_t port, struct rte_mempool *mp)
+{
+	static const struct rte_eth_conf port_conf_default = {
+		.rxmode = {
+			.mq_mode = ETH_MQ_RX_RSS,
+			.max_rx_pkt_len = ETHER_MAX_LEN
+		},
+		.rx_adv_conf = {
+			.rss_conf = {
+				.rss_hf = ETH_RSS_IP |
+					  ETH_RSS_TCP |
+					  ETH_RSS_UDP,
+			}
+		}
+	};
+	const uint16_t rx_ring_size = 512, tx_ring_size = 512;
+	struct rte_eth_conf port_conf = port_conf_default;
+	int retval;
+	uint16_t q;
+	struct rte_eth_dev_info dev_info;
+
+	if (port >= rte_eth_dev_count())
+		return -1;
+
+	retval = rte_eth_dev_configure(port, 0, 0, &port_conf);
+	if (retval != 0)
+		return retval;
+
+	rte_eth_dev_info_get(port, &dev_info);
+
+	default_params.rx_rings = RTE_MIN(dev_info.max_rx_queues,
+					MAX_NUM_RX_QUEUE);
+	default_params.tx_rings = 1;
+
+	/* Configure the Ethernet device. */
+	retval = rte_eth_dev_configure(port, default_params.rx_rings,
+				default_params.tx_rings, &port_conf);
+	if (retval != 0)
+		return retval;
+
+	for (q = 0; q < default_params.rx_rings; q++) {
+		retval = rte_eth_rx_queue_setup(port, q, rx_ring_size,
+				rte_eth_dev_socket_id(port), NULL, mp);
+		if (retval < 0)
+			return retval;
+	}
+
+	/* Allocate and set up 1 TX queue per Ethernet port. */
+	for (q = 0; q < default_params.tx_rings; q++) {
+		retval = rte_eth_tx_queue_setup(port, q, tx_ring_size,
+				rte_eth_dev_socket_id(port), NULL);
+		if (retval < 0)
+			return retval;
+	}
+
+	/* Start the Ethernet port. */
+	retval = rte_eth_dev_start(port);
+	if (retval < 0)
+		return retval;
+
+	/* Display the port MAC address. */
+	struct ether_addr addr;
+	rte_eth_macaddr_get(port, &addr);
+	printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+			   " %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+			(unsigned int)port,
+			addr.addr_bytes[0], addr.addr_bytes[1],
+			addr.addr_bytes[2], addr.addr_bytes[3],
+			addr.addr_bytes[4], addr.addr_bytes[5]);
+
+	/* Enable RX in promiscuous mode for the Ethernet device. */
+	rte_eth_promiscuous_enable(port);
+
+	return 0;
+}
+
+static int
+init_ports(int num_ports)
+{
+	uint8_t portid;
+	int retval;
+
+	default_params.mp = rte_pktmbuf_pool_create("packet_pool",
+						NB_MBUFS,
+						MBUF_CACHE_SIZE,
+						MBUF_PRIV_SIZE,
+						RTE_MBUF_DEFAULT_BUF_SIZE,
+						rte_socket_id());
+	if (!default_params.mp)
+		return -ENOMEM;
+
+	for (portid = 0; portid < num_ports; portid++) {
+		retval = port_init(portid, default_params.mp);
+		if (retval)
+			return retval;
+	}
+
+	return 0;
+}
+
+static int
+testsuite_setup(void)
+{
+	int err;
+	uint8_t count;
+	struct rte_event_dev_info dev_info;
+
+	count = rte_event_dev_count();
+	if (!count) {
+		printf("Failed to find a valid event device,"
+			" testing with event_skeleton device\n");
+		rte_vdev_init("event_skeleton", NULL);
+	}
+
+	struct rte_event_dev_config config = {
+			.nb_event_queues = 1,
+			.nb_event_ports = 1,
+	};
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Failed to get event dev info err %d\n", err);
+	config.nb_event_queue_flows = dev_info.max_event_queue_flows;
+	config.nb_event_port_dequeue_depth =
+			dev_info.max_event_port_dequeue_depth;
+	config.nb_event_port_enqueue_depth =
+			dev_info.max_event_port_enqueue_depth;
+	config.nb_events_limit =
+			dev_info.max_num_events;
+	err = rte_event_dev_configure(TEST_DEV_ID, &config);
+	TEST_ASSERT(err == 0, "Event device initialization failed err %d\n",
+			err);
+
+	/*
+	 * eth devices like octeontx use event device to receive packets
+	 * so rte_eth_dev_start invokes rte_event_dev_start internally, so
+	 * call init_ports after rte_event_dev_configure
+	 */
+	err = init_ports(rte_eth_dev_count());
+	TEST_ASSERT(err == 0, "Port initialization failed err %d\n", err);
+
+	err = rte_event_eth_rx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
+						&default_params.caps);
+	TEST_ASSERT(err == 0, "Failed to get adapter cap err %d\n",
+			err);
+
+	return err;
+}
+
+static void
+testsuite_teardown(void)
+{
+	uint32_t i;
+	for (i = 0; i < rte_eth_dev_count(); i++)
+		rte_eth_dev_stop(i);
+
+	rte_mempool_free(default_params.mp);
+}
+
+static int
+adapter_create(void)
+{
+	int err;
+	struct rte_event_dev_info dev_info;
+	struct rte_event_port_conf rx_p_conf;
+
+	err = rte_event_dev_info_get(TEST_DEV_ID, &dev_info);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	rx_p_conf.new_event_threshold = dev_info.max_num_events;
+	rx_p_conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
+	rx_p_conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
+	err = rte_event_eth_rx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&rx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	return err;
+}
+
+static void
+adapter_free(void)
+{
+	rte_event_eth_rx_adapter_free(TEST_INST_ID);
+}
+
+static int
+adapter_create_free(void)
+{
+	int err;
+
+	struct rte_event_port_conf rx_p_conf = {
+			.dequeue_depth = 8,
+			.enqueue_depth = 8,
+			.new_event_threshold = 1200,
+	};
+
+	err = rte_event_eth_rx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_rx_adapter_create(TEST_INST_ID, TEST_DEV_ID,
+					&rx_p_conf);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_create(TEST_INST_ID,
+					TEST_DEV_ID, &rx_p_conf);
+	TEST_ASSERT(err == -EEXIST, "Expected -EEXIST %d got %d", -EEXIST, err);
+
+	err = rte_event_eth_rx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_free(TEST_INST_ID);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	err = rte_event_eth_rx_adapter_free(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL %d got %d", -EINVAL, err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+adapter_queue_add_del(void)
+{
+	int err;
+	struct rte_event ev;
+	uint32_t cap;
+
+	struct rte_event_eth_rx_adapter_queue_conf queue_config;
+
+	err = rte_event_eth_rx_adapter_caps_get(TEST_DEV_ID, TEST_ETHDEV_ID,
+					 &cap);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	ev.queue_id = 0;
+	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
+	ev.priority = 0;
+
+	queue_config.rx_queue_flags = 0;
+	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) {
+		ev.flow_id = 1;
+		queue_config.rx_queue_flags =
+			RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID;
+	}
+	queue_config.ev = ev;
+	queue_config.servicing_weight = 1;
+
+	err = rte_event_eth_rx_adapter_queue_add(TEST_INST_ID,
+						rte_eth_dev_count(),
+						-1, &queue_config);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	if (cap & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ) {
+		err = rte_event_eth_rx_adapter_queue_add(TEST_INST_ID,
+							TEST_ETHDEV_ID, 0,
+							&queue_config);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_event_eth_rx_adapter_queue_del(TEST_INST_ID,
+							TEST_ETHDEV_ID, 0);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_event_eth_rx_adapter_queue_add(TEST_INST_ID,
+							TEST_ETHDEV_ID,
+							-1,
+							&queue_config);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_event_eth_rx_adapter_queue_del(TEST_INST_ID,
+							TEST_ETHDEV_ID,
+							-1);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	} else {
+		err = rte_event_eth_rx_adapter_queue_add(TEST_INST_ID,
+							TEST_ETHDEV_ID,
+							0,
+							&queue_config);
+		TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+		err = rte_event_eth_rx_adapter_queue_add(TEST_INST_ID,
+							TEST_ETHDEV_ID, -1,
+							&queue_config);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_event_eth_rx_adapter_queue_del(TEST_INST_ID,
+							TEST_ETHDEV_ID, 0);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_event_eth_rx_adapter_queue_del(TEST_INST_ID,
+							TEST_ETHDEV_ID, -1);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+		err = rte_event_eth_rx_adapter_queue_del(TEST_INST_ID,
+							TEST_ETHDEV_ID, -1);
+		TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+	}
+
+	err = rte_event_eth_rx_adapter_queue_add(1, TEST_ETHDEV_ID, -1,
+						&queue_config);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_rx_adapter_queue_del(1, TEST_ETHDEV_ID, -1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+adapter_start_stop(void)
+{
+	int err;
+	struct rte_event ev;
+
+	ev.queue_id = 0;
+	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
+	ev.priority = 0;
+
+	struct rte_event_eth_rx_adapter_queue_conf queue_config;
+
+	queue_config.rx_queue_flags = 0;
+	if (default_params.caps &
+		RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) {
+		ev.flow_id = 1;
+		queue_config.rx_queue_flags =
+			RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID;
+	}
+
+	queue_config.ev = ev;
+	queue_config.servicing_weight = 1;
+
+	err = rte_event_eth_rx_adapter_queue_add(TEST_INST_ID, TEST_ETHDEV_ID,
+					-1, &queue_config);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_queue_del(TEST_INST_ID, TEST_ETHDEV_ID,
+						-1);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_start(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_stop(TEST_INST_ID);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_start(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_rx_adapter_stop(1);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static int
+adapter_stats(void)
+{
+	int err;
+	struct rte_event_eth_rx_adapter_stats stats;
+
+	err = rte_event_eth_rx_adapter_stats_get(TEST_INST_ID, NULL);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	err = rte_event_eth_rx_adapter_stats_get(TEST_INST_ID, &stats);
+	TEST_ASSERT(err == 0, "Expected 0 got %d", err);
+
+	err = rte_event_eth_rx_adapter_stats_get(1, &stats);
+	TEST_ASSERT(err == -EINVAL, "Expected -EINVAL got %d", err);
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite service_tests  = {
+	.suite_name = "rx event eth adapter test suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(NULL, NULL, adapter_create_free),
+		TEST_CASE_ST(adapter_create, adapter_free,
+					adapter_queue_add_del),
+		TEST_CASE_ST(adapter_create, adapter_free, adapter_start_stop),
+		TEST_CASE_ST(adapter_create, adapter_free, adapter_stats),
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
+static int
+test_event_eth_rx_adapter_common(void)
+{
+	return unit_test_suite_runner(&service_tests);
+}
+
+REGISTER_TEST_COMMAND(event_eth_rx_adapter_autotest,
+		test_event_eth_rx_adapter_common);
diff --git a/MAINTAINERS b/MAINTAINERS
index 53fd50e1f..944e64500 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -276,6 +276,7 @@ F: test/test/test_eventdev.c
 Event Ethdev Rx Adapter API - EXPERIMENTAL
 M: Nikhil Rao <nikhil.rao@intel.com>
 F: lib/librte_eventdev/*eth_rx_adapter*
+F: test/test/test_event_eth_rx_adapter.c
 
 Networking Drivers
 ------------------
diff --git a/test/test/Makefile b/test/test/Makefile
index 42d9a49e2..011288219 100644
--- a/test/test/Makefile
+++ b/test/test/Makefile
@@ -204,6 +204,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += test_cryptodev.c
 ifeq ($(CONFIG_RTE_LIBRTE_EVENTDEV),y)
 SRCS-y += test_eventdev.c
 SRCS-y += test_event_ring.c
+SRCS-y += test_event_eth_rx_adapter.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_SW_EVENTDEV) += test_eventdev_sw.c
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX_SSOVF) += test_eventdev_octeontx.c
 endif
-- 
2.14.1.145.gb3622a4

^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v5 1/7] eventdev: add caps API and PMD callback for eth Rx adapter
  2017-10-06 21:09 ` [PATCH v5 1/7] eventdev: add caps API and PMD callback for " Nikhil Rao
@ 2017-10-09 12:03   ` Jerin Jacob
  0 siblings, 0 replies; 17+ messages in thread
From: Jerin Jacob @ 2017-10-09 12:03 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: bruce.richardson, dev

-----Original Message-----
> Date: Sat, 7 Oct 2017 02:39:55 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com
> CC: dev@dpdk.org
> Subject: [PATCH v5 1/7] eventdev: add caps API and PMD callback for eth Rx
>  adapter
> X-Mailer: git-send-email 2.7.4
> 
> The caps API allows the application to retrieve capability information
> needed to configure the ethernet Rx adapter for the eventdev and
> ethdev pair.
> 
> For example, the ethdev/eventdev pairing may be such that all of the
> ethdev Rx queues can only be connected to a single event queue; in
> this case the application is required to pass in -1 as the queue id
> when adding a receive queue to the adapter.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>

Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
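The queue-id convention described in the quoted patch can be sketched as
below. The capability name follows the series, but its bit value and the
helper function are illustrative assumptions, not taken from the DPDK
headers:

```c
#include <stdint.h>

/* Cap name as in the series; the bit value here is an assumption for
 * illustration, not the value from rte_event_eth_rx_adapter.h. */
#define RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ (1u << 1)

/* Without MULTI_EVENTQ, every Rx queue of the port feeds a single event
 * queue, so the application must pass -1 ("all queues") as the Rx queue
 * id when adding a receive queue to the adapter. */
static int32_t rx_queue_id_arg(uint32_t caps, int32_t rx_queue)
{
	if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ))
		return -1;
	return rx_queue;
}
```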


* Re: [PATCH v5 2/7] eventdev: add PMD callbacks for eth Rx adapter
  2017-10-06 21:09 ` [PATCH v5 2/7] eventdev: add PMD callbacks " Nikhil Rao
@ 2017-10-09 12:05   ` Jerin Jacob
  0 siblings, 0 replies; 17+ messages in thread
From: Jerin Jacob @ 2017-10-09 12:05 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: bruce.richardson, dev

-----Original Message-----
> Date: Sat, 7 Oct 2017 02:39:56 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com
> CC: dev@dpdk.org
> Subject: [PATCH v5 2/7] eventdev: add PMD callbacks for eth Rx adapter
> X-Mailer: git-send-email 2.7.4
> 
> The PMD callbacks are used by the rte_event_eth_rx_xxx() APIs to
> configure and control the ethernet receive adapter if packet transfers
> from the ethdev to the eventdev are implemented in hardware.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>

Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>


* Re: [PATCH v5 3/7] eventdev: add eth Rx adapter caps function to SW PMD
  2017-10-06 21:09 ` [PATCH v5 3/7] eventdev: add eth Rx adapter caps function to SW PMD Nikhil Rao
@ 2017-10-09 12:06   ` Jerin Jacob
  0 siblings, 0 replies; 17+ messages in thread
From: Jerin Jacob @ 2017-10-09 12:06 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: bruce.richardson, dev

-----Original Message-----
> Date: Sat, 7 Oct 2017 02:39:57 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com
> CC: dev@dpdk.org
> Subject: [PATCH v5 3/7] eventdev: add eth Rx adapter caps function to SW PMD
> X-Mailer: git-send-email 2.7.4
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>

Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>

> ---
>  lib/librte_eventdev/rte_eventdev_pmd.h |  8 ++++++++
>  drivers/event/sw/sw_evdev.c            | 15 +++++++++++++++
>  2 files changed, 23 insertions(+)
> 
> diff --git a/lib/librte_eventdev/rte_eventdev_pmd.h b/lib/librte_eventdev/rte_eventdev_pmd.h
> index 9f3188fc8..4369d9b8c 100644
> --- a/lib/librte_eventdev/rte_eventdev_pmd.h
> +++ b/lib/librte_eventdev/rte_eventdev_pmd.h
> @@ -83,6 +83,14 @@ extern "C" {
>  	} \
>  } while (0)
>  
> +#define RTE_EVENT_ETH_RX_ADAPTER_SW_CAP \
> +		((RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID) | \
> +			(RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ))
> +
> +/**< Ethernet Rx adapter cap to return If the packet transfers from
> + * the ethdev to eventdev use a SW service function
> + */
> +
>  #define RTE_EVENTDEV_DETACHED  (0)
>  #define RTE_EVENTDEV_ATTACHED  (1)
>  
> diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
> index da6ac30f4..aed8b728f 100644
> --- a/drivers/event/sw/sw_evdev.c
> +++ b/drivers/event/sw/sw_evdev.c
> @@ -437,6 +437,19 @@ sw_dev_configure(const struct rte_eventdev *dev)
>  	return 0;
>  }
>  
> +struct rte_eth_dev;
> +
> +static int
> +sw_eth_rx_adapter_caps_get(const struct rte_eventdev *dev,
> +			const struct rte_eth_dev *eth_dev,
> +			uint32_t *caps)
> +{
> +	RTE_SET_USED(dev);
> +	RTE_SET_USED(eth_dev);
> +	*caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
> +	return 0;
> +}
> +
>  static void
>  sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info)
>  {
> @@ -751,6 +764,8 @@ sw_probe(struct rte_vdev_device *vdev)
>  			.port_link = sw_port_link,
>  			.port_unlink = sw_port_unlink,
>  
> +			.eth_rx_adapter_caps_get = sw_eth_rx_adapter_caps_get,
> +
>  			.xstats_get = sw_xstats_get,
>  			.xstats_get_names = sw_xstats_get_names,
>  			.xstats_get_by_name = sw_xstats_get_by_name,
> -- 
> 2.14.1.145.gb3622a4
> 


* Re: [PATCH v5 4/7] eventdev: add eth Rx adapter API header
  2017-10-06 21:09 ` [PATCH v5 4/7] eventdev: add eth Rx adapter API header Nikhil Rao
@ 2017-10-09 12:27   ` Jerin Jacob
  0 siblings, 0 replies; 17+ messages in thread
From: Jerin Jacob @ 2017-10-09 12:27 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: bruce.richardson, dev

-----Original Message-----
> Date: Sat, 7 Oct 2017 02:39:58 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com
> CC: dev@dpdk.org
> Subject: [PATCH v5 4/7] eventdev: add eth Rx adapter API header
> X-Mailer: git-send-email 2.7.4
> 
> Add common APIs for configuring packet transfer from ethernet Rx
> queues to event devices across HW & SW packet transfer mechanisms.
> A detailed description of the adapter is contained in the header's
> comments.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>
> ---

The EXPERIMENTAL functions carry the following doxygen comment. Add it
to the new Rx adapter functions:

 * @warning                                                                     
 * @b EXPERIMENTAL: this API may change without prior notice   

reference file: lib/librte_eal/common/include/rte_service.h
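Applied to one of the new functions, the requested pattern would look
roughly like this; the stub body and parameter documentation are
placeholders for illustration only:

```c
/**
 * @warning
 * @b EXPERIMENTAL: this API may change without prior notice
 *
 * Start the ethernet Rx event adapter instance.
 *
 * @param id
 *   Adapter identifier.
 *
 * @return
 *   0 on success.
 */
static int rx_adapter_start_stub(unsigned char id)
{
	(void)id;	/* stub: the real function starts the service/PMD */
	return 0;
}
```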

> +#define RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE 32
> +
> +/* struct rte_event_eth_rx_adapter_queue_conf flags definitions */
> +#define RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID	0x1
> +/**< This flag indicates the flow identifier is valid
> + * @see rte_event_eth_rx_adapter_queue_conf::rx_queue_flags
> + */
> +
> +struct rte_event_eth_rx_adapter_conf {

Doxygen comment missing for this structure.

> +	uint8_t event_port_id;
> +	/**< Event port identifier, the adapter enqueues mbuf events to this
> +	 * port.
> +	 */
> +	uint32_t max_nb_rx;
> +	/**< The adapter can return early if it has processed at least
> +	 * max_nb_rx mbufs. This isn't treated as a requirement; batching may
> +	 * cause the adapter to process more than max_nb_rx mbufs.
> +	 */
> +};
> +
> +
> +struct rte_event_eth_rx_adapter_stats {

Doxygen comment missing for this structure.

> +	uint64_t rx_poll_count;
> +	/**< Receive queue poll count */
> +	uint64_t rx_packets;
> +	/**< Received packet count */
> +	uint64_t rx_enq_count;
> +	/**< Eventdev enqueue count */
> +	uint64_t rx_enq_retry;
> +	/**< Eventdev enqueue retry count */
> +	uint64_t rx_enq_start_ts;
> +	/**< Rx enqueue start timestamp */
> +	uint64_t rx_enq_block_cycles;
> +	/**< Cycles for which the service is blocked by the event device,
> +	 * i.e, the service fails to enqueue to the event device.
> +	 */
> +	uint64_t rx_enq_end_ts;
> +	/**< Latest timestamp at which the service is unblocked
> +	 * by the event device. The start, end timestamps and
> +	 * block cycles can be used to compute the percentage of
> +	 * cycles the service is blocked by the event device.
> +	 */
> +};
> +
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +#endif	/* _RTE_EVENT_ETH_RX_ADAPTER_ */
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 8df2a7f2a..53fd50e1f 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -273,6 +273,9 @@ F: lib/librte_eventdev/
>  F: drivers/event/skeleton/
>  F: test/test/test_eventdev.c
>  
> +Event Ethdev Rx Adapter API - EXPERIMENTAL
> +M: Nikhil Rao <nikhil.rao@intel.com>

T: git://dpdk.org/next/dpdk-next-eventdev 

> +F: lib/librte_eventdev/*eth_rx_adapter*
> 


With above changes:
Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com> 
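The blocked-cycles accounting in the quoted stats structure can be turned
into a percentage as sketched below. The field names mirror
rte_event_eth_rx_adapter_stats; the structure and helper here are
illustrative, not the library's own code:

```c
#include <stdint.h>

/* Fields mirror rte_event_eth_rx_adapter_stats in the quoted header. */
struct rx_adapter_stats_sketch {
	uint64_t rx_enq_start_ts;	/* Rx enqueue start timestamp */
	uint64_t rx_enq_end_ts;		/* latest unblocked timestamp */
	uint64_t rx_enq_block_cycles;	/* cycles blocked by the eventdev */
};

/* Percentage of cycles the service spent blocked by the event device
 * over the start..end window, as the structure's comment suggests. */
static uint64_t blocked_pct(const struct rx_adapter_stats_sketch *s)
{
	uint64_t window = s->rx_enq_end_ts - s->rx_enq_start_ts;

	if (window == 0)
		return 0;
	return (s->rx_enq_block_cycles * 100) / window;
}
```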


* Re: [PATCH v5 5/7] eventdev: add event type for eth rx adapter
  2017-10-06 21:09 ` [PATCH v5 5/7] eventdev: add event type for eth rx adapter Nikhil Rao
@ 2017-10-09 12:31   ` Jerin Jacob
  0 siblings, 0 replies; 17+ messages in thread
From: Jerin Jacob @ 2017-10-09 12:31 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: bruce.richardson, dev

-----Original Message-----
> Date: Sat, 7 Oct 2017 02:39:59 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com
> CC: dev@dpdk.org
> Subject: [PATCH v5 5/7] eventdev: add event type for eth rx adapter
> X-Mailer: git-send-email 2.7.4
> 
> Add RTE_EVENT_TYPE_ETH_RX_ADAPTER event type. Certain platforms (e.g.,
> octeontx) need to identify, in the event dequeue function, events
> injected from ethernet hardware into the eventdev so that the DPDK
> mbuf can be populated from the HW descriptor.
> 
> Events injected from ethernet hardware would use an event type of
> RTE_EVENT_TYPE_ETHDEV and events injected from the rx adapter service
> function would use an event type of RTE_EVENT_TYPE_ETH_RX_ADAPTER to
> help the event dequeue function differentiate between these two event
> sources.
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>

Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
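The differentiation the patch describes can be sketched as follows; the
event-source names mirror the series' RTE_EVENT_TYPE_* values, but the
numeric values and the helper are illustrative assumptions:

```c
/* Names mirror RTE_EVENT_TYPE_* from the series; the values are
 * assumed for illustration only. */
enum ev_src {
	EV_SRC_ETHDEV = 2,		/* injected by ethernet hardware */
	EV_SRC_ETH_RX_ADAPTER = 4	/* injected by the SW Rx adapter */
};

/* HW-injected events still carry the NIC descriptor and need the mbuf
 * populated at dequeue time; events from the SW adapter service
 * already carry a fully formed mbuf. */
static int needs_mbuf_fixup(enum ev_src src)
{
	return src == EV_SRC_ETHDEV;
}
```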


* Re: [PATCH v5 7/7] eventdev: add tests for eth Rx adapter APIs
  2017-10-06 21:10 ` [PATCH v5 7/7] eventdev: add tests for eth Rx adapter APIs Nikhil Rao
@ 2017-10-09 12:33   ` Jerin Jacob
  0 siblings, 0 replies; 17+ messages in thread
From: Jerin Jacob @ 2017-10-09 12:33 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: bruce.richardson, dev

-----Original Message-----
> Date: Sat, 7 Oct 2017 02:40:01 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com
> CC: dev@dpdk.org
> Subject: [PATCH v5 7/7] eventdev: add tests for eth Rx adapter APIs
> X-Mailer: git-send-email 2.7.4
> 
> Add unit tests for rte_event_eth_rx_adapter_xxx() APIs
> 
> Signed-off-by: Nikhil Rao <nikhil.rao@intel.com>

Acked-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>


* Re: [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter
  2017-10-06 21:09 [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Nikhil Rao
                   ` (6 preceding siblings ...)
  2017-10-06 21:10 ` [PATCH v5 7/7] eventdev: add tests for eth Rx adapter APIs Nikhil Rao
@ 2017-10-09 12:42 ` Jerin Jacob
  2017-10-09 13:06   ` Nipun Gupta
  7 siblings, 1 reply; 17+ messages in thread
From: Jerin Jacob @ 2017-10-09 12:42 UTC (permalink / raw)
  To: Nikhil Rao; +Cc: bruce.richardson, dev, hemant.agrawal, nipun.gupta

-----Original Message-----
> Date: Sat, 7 Oct 2017 02:39:54 +0530
> From: Nikhil Rao <nikhil.rao@intel.com>
> To: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com
> CC: dev@dpdk.org
> Subject: [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter
> X-Mailer: git-send-email 2.7.4
> 
> Eventdev-based networking applications require a component to dequeue
> packets from NIC Rx queues and inject them into eventdev queues[1]. While
> some platforms (e.g. Cavium Octeontx) do this operation in hardware, other
> platforms use software.
> 
> This patchset introduces an ethernet Rx event adapter that dequeues packets
> from ethernet devices and enqueues them to event devices. This patch is based on
> a previous RFC[2] and supersedes [3], the main difference being that
> this version implements a common abstraction for HW and SW based packet transfers.
> 
> The adapter is designed to work with the EAL service core[4] for SW based
> packet transfers. An eventdev PMD callback is used to determine that SW
> based packet transfer service is required. The application can discover
> and configure the service with a core mask using rte_service APIs.
> 
> The adapter can service multiple ethernet devices and queues. For SW based
> packet transfers each queue is configured with a servicing weight to
> control the relative frequency with which the adapter polls the queue,
> and the event fields to use when constructing packet events. The adapter
> has two modes for programming an event's flow ID: use a static per-queue
> user-specified value or use the RSS hash.
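The two flow-ID modes mentioned above can be sketched like this. The flag
name and its 0x1 value follow the API header quoted later in the thread;
the helper function itself is an illustrative sketch, not adapter code:

```c
#include <stdint.h>

/* Flag name and value as in the series' rte_event_eth_rx_adapter.h. */
#define RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID 0x1

/* Per-queue choice between a static user-specified flow ID and the
 * RSS hash computed over the packet. */
static uint32_t event_flow_id(uint32_t rx_queue_flags,
			      uint32_t static_flow_id, uint32_t rss_hash)
{
	if (rx_queue_flags & RTE_EVENT_ETH_RX_ADAPTER_QUEUE_FLOW_ID_VALID)
		return static_flow_id;
	return rss_hash;
}
```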

Hi Nikhil,

- Please rebase to dpdk-next-eventdev
- There is one check-git-log error. Please fix it:
Wrong headline lowercase:
	eventdev: add event type for eth rx adapter
- You are planning to send the programmer guide, right? Are you
planning to send it now or post-RC1?

- For me it looks OK to pull into next-eventdev, after fixing
http://dpdk.org/ml/archives/dev/2017-October/077915.html and the
existing comments

CC: Hemant Agrawal <hemant.agrawal@nxp.com>
CC: Nipun Gupta <nipun.gupta@nxp.com>

Does anyone have any objection to pulling this into RC1 if Nikhil sends
the next version in time?


* Re: [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter
  2017-10-09 12:42 ` [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Jerin Jacob
@ 2017-10-09 13:06   ` Nipun Gupta
  0 siblings, 0 replies; 17+ messages in thread
From: Nipun Gupta @ 2017-10-09 13:06 UTC (permalink / raw)
  To: Jerin Jacob, Nikhil Rao; +Cc: bruce.richardson, dev, Hemant Agrawal



> -----Original Message-----
> From: Jerin Jacob [mailto:jerin.jacob@caviumnetworks.com]
> Sent: Monday, October 09, 2017 18:13
> To: Nikhil Rao <nikhil.rao@intel.com>
> Cc: bruce.richardson@intel.com; dev@dpdk.org; Hemant Agrawal
> <hemant.agrawal@nxp.com>; Nipun Gupta <nipun.gupta@nxp.com>
> Subject: Re: [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter
> 
> -----Original Message-----
> > Date: Sat, 7 Oct 2017 02:39:54 +0530
> > From: Nikhil Rao <nikhil.rao@intel.com>
> > To: jerin.jacob@caviumnetworks.com, bruce.richardson@intel.com
> > CC: dev@dpdk.org
> > Subject: [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter
> > X-Mailer: git-send-email 2.7.4
> >
> > Eventdev-based networking applications require a component to dequeue
> > packets from NIC Rx queues and inject them into eventdev queues[1]. While
> > some platforms (e.g. Cavium Octeontx) do this operation in hardware, other
> > platforms use software.
> >
> > This patchset introduces an ethernet Rx event adapter that dequeues packets
> > from ethernet devices and enqueues them to event devices. This patch is
> based on
> > a previous RFC[2] and supersedes [3], the main difference being that
> > this version implements a common abstraction for HW and SW based packet
> transfers.
> >
> > The adapter is designed to work with the EAL service core[4] for SW based
> > packet transfers. An eventdev PMD callback is used to determine that SW
> > based packet transfer service is required. The application can discover
> > and configure the service with a core mask using rte_service APIs.
> >
> > The adapter can service multiple ethernet devices and queues. For SW based
> > packet transfers each queue is  configured with a servicing weight to
> > control the relative frequency with which the adapter polls the queue,
> > and the event fields to use when constructing packet events. The adapter
> > has two modes for programming an event's flow ID: use a static per-queue
> > user-specified value or use the RSS hash.
> 
> Hi Nikhil,
> 
> - Please rebase to dpdk-next-eventdev
> - There is one check-git-log error. Please fix it:
> Wrong headline lowercase:
> 	eventdev: add event type for eth rx adapter
> - You are planning to send the programmer guide, right? Are you
> planning to send it now or post-RC1?
> 
> - For me it looks OK to pull into next-eventdev, after fixing
> http://dpdk.org/ml/archives/dev/2017-October/077915.html and the
> existing comments
> 
> CC: Hemant Agrawal <hemant.agrawal@nxp.com>
> CC: Nipun Gupta <nipun.gupta@nxp.com>
> 
> Does anyone have any objection to pulling this into RC1 if Nikhil
> sends the next version in time?

Looks fine with us. Also, we will soon be sending a patch to support
the NXP event adapter implementation based on this.


end of thread, other threads:[~2017-10-09 13:06 UTC | newest]

Thread overview: 17+ messages
2017-10-06 21:09 [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Nikhil Rao
2017-10-06 21:09 ` [PATCH v5 1/7] eventdev: add caps API and PMD callback for " Nikhil Rao
2017-10-09 12:03   ` Jerin Jacob
2017-10-06 21:09 ` [PATCH v5 2/7] eventdev: add PMD callbacks " Nikhil Rao
2017-10-09 12:05   ` Jerin Jacob
2017-10-06 21:09 ` [PATCH v5 3/7] eventdev: add eth Rx adapter caps function to SW PMD Nikhil Rao
2017-10-09 12:06   ` Jerin Jacob
2017-10-06 21:09 ` [PATCH v5 4/7] eventdev: add eth Rx adapter API header Nikhil Rao
2017-10-09 12:27   ` Jerin Jacob
2017-10-06 21:09 ` [PATCH v5 5/7] eventdev: add event type for eth rx adapter Nikhil Rao
2017-10-09 12:31   ` Jerin Jacob
2017-10-06 21:10 ` [PATCH v5 6/7] eventdev: add eth Rx adapter implementation Nikhil Rao
2017-10-06 14:34   ` Pavan Nikhilesh Bhagavatula
2017-10-06 21:10 ` [PATCH v5 7/7] eventdev: add tests for eth Rx adapter APIs Nikhil Rao
2017-10-09 12:33   ` Jerin Jacob
2017-10-09 12:42 ` [PATCH v5 0/7] eventdev: cover letter: eth Rx adapter Jerin Jacob
2017-10-09 13:06   ` Nipun Gupta
